|
FISHMANPET posted:So vCenter and its database, how many VMs should it be split across? Right now it's going to be 4 hosts, but it will probably grow to at least 10, with a lot of VMs. I know it will be too big for SQL Express, so we'll be using full SQL. Should I split them across 2 VMs? Or is a single dual or tri core VM sufficient? And what about the SSO component. Can that live on its own machine, or can it share with vCenter and SQL? A single SQL server will probably be able to handle a large number of VMs (HA on SQL is another thing); the size will depend on the logging of the environment as well as how dynamic it is. VUM is also a consideration. Put SQL on its own VM; it will most likely need 2 vCPUs and a decent amount of RAM. Follow that with the vCenter/SSO components on their own VM, dual vCPUs. VUM I would say on its own VM as well. There is a tool in vCenter that will help you gauge the size of your SQL DB, I just can't seem to put my finger on where it is. How many virtual machines? Are you planning on using most of the features in your license level? (Which license level?) Dilbert As FUCK fucked around with this message at 15:13 on Jan 10, 2013 |
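While I try to remember where that sizing tool lives, here's a toy back-of-the-envelope sketch of how DB growth scales with VM count and stats level. Every constant below is a made-up placeholder, not a VMware-published figure — use the real vCenter database sizing calculator for actual planning:

```python
# Toy vCenter DB size estimate. All constants are illustrative
# assumptions, NOT VMware-published figures.
MB_PER_VM_PER_DAY = {1: 0.1, 2: 0.5, 3: 1.5, 4: 3.0}  # assumed, keyed by stats level

def estimate_db_mb(n_vms, stats_level=1, days=365, base_mb=500):
    """Rough database size in MB after `days` of stats retention."""
    return base_mb + n_vms * MB_PER_VM_PER_DAY[stats_level] * days

print(estimate_db_mb(150, stats_level=2))  # -> 27875.0
```

The point is just that stats level and retention dominate the size far more than raw VM count does, which is why "how dynamic is the environment" matters so much.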
# ? Jan 10, 2013 14:58 |
|
|
|
thebmw posted:In the vSphere client, take a look at the host's Hardware Status tab. Expand Memory and you can see which slots are occupied, and by what. There's no such tab on my system. There's a Configuration tab with memory listed there, but it just says Total/System/Virtual Machines and nothing about which physical slots are actually used. I don't have vCenter, if that makes a difference.
|
# ? Jan 10, 2013 15:09 |
|
Frozen-Solid posted:There's no such tab on my system. There's a Configuration tab with memory listed there, but it just says Total/System/Virtual Machines and nothing about which physical slots are actually used. I don't have vCenter, if that makes a difference. Yeah, you need vCenter for that tab/plugin to be functional.
|
# ? Jan 10, 2013 15:12 |
|
Corvettefisher posted:ESXi has a firewall, you can do virtual firewall appliances, and virtual networking appliances that shape traffic do exist. The extra budget for a firewall is pretty much $0, or as low as possible if I do go the MikroTik route. The SW firewall appliance route makes the most sense to me; yeah, this model has iLO shared with the first NIC, but seeing as that's getting strapped straight to the Internet I'm going to disable that from the get-go. Seems like I can make a tiny VM for pfSense/m0n0wall on one of the SSDs, give it 1 CPU / 1 or 2GB of RAM and call it good.
|
# ? Jan 10, 2013 16:21 |
|
http://www.vmsources.com/resources/doc_download/35-pfsense pfSense has a VA out, tools and all already installed, just an FYI. The latest can be found here; however, I must be blind and missing the link. Other virtual appliances can be found here
|
# ? Jan 10, 2013 16:32 |
|
Corvettefisher posted:Yeah you need vCenter for that tab/plugin to be functional Is there any other way to get physical memory details with VMware? Either through ssh or the vSphere client?
|
# ? Jan 10, 2013 16:33 |
|
Frozen-Solid posted:Is there any other way to get physical memory details with VMware? Either through ssh or the vSphere client? Enable SSH on the box: Configuration > Software > Security Profile > Services > Edit > find SSH > start SSH. SSH into said box, log in with the username/password for root or whatever account you use, type 'esxtop', and examine the memory view by hitting m. Command line arguments and such can be found here. E: Well, that won't actually show it, I misread your question; this will show you detailed information about how the memory is being used. Not aware of how to do it via PowerCLI. Dilbert As FUCK fucked around with this message at 16:49 on Jan 10, 2013 |
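If you'd rather skip the GUI clicking, the SSH part can also be done straight from the host console — a sketch, assuming ESXi 5.x where these vim-cmd services exist (hostname is a placeholder):

```
# On the ESXi console (5.x): enable and start the SSH service
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# Then, from your workstation:
ssh root@esxi-host    # placeholder hostname
esxtop                # press m for the memory view
```

This is a host-only CLI transcript, so treat it as a sketch rather than something to paste blindly into production.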
# ? Jan 10, 2013 16:46 |
|
Corvettefisher posted:http://www.vmsources.com/resources/doc_download/35-pfsense Awesome, looks like that might work. How does disk assignment work? I was planning on leveraging VT-d and just flinging the HBA straight at my ZFS VM when it comes to storage, but I was thinking of having ESXi create VMDK flat files on the SSDs for everything else. 256GB pool split into a production/"dev" split (128GB each), and then the 80GB SSDs hold the installation of Linux/BSD/etc + pfSense. Or am I better off feeding raw devices to ESXi?
|
# ? Jan 10, 2013 16:56 |
|
Corvettefisher posted:E: Well that won't show it actually I misread your question, this will show you detailed information of how the memory is being used. Not aware of how to do it via powercli Turns out it's "smbiosDump | less" And we have a bunch of 2 GB sticks. So that sucks.
|
# ? Jan 10, 2013 16:59 |
|
movax posted:Awesome, looks like that might work. Before I jump to conclusions, exactly how heavily are you looking at this server being loaded? Take a look at this recent study: http://blogs.vmware.com/vsphere/2013/01/vsphere-5-1-vmdk-versus-rdm.html Eager-zeroed VMDKs might prove nearly as effective while maintaining ease of management. What HBA is it? Some HBAs, like the Adaptec 6 series, will do SSD caching on the RAID controller while also utilizing an HDD backend. However, if you are using ZFS that changes a few things.
|
# ? Jan 10, 2013 17:15 |
|
Corvettefisher posted:Before I jump to conclusions exactly how heavy are you looking at this server being loaded. I'd class it as pretty "light" (<100 simultaneous users which will probably never happen). HBA is the M1015 (LSI2008) tied to 4 2TB RE4s. Probably going to throw all the SSDs on the motherboard Intel AHCI controller.
|
# ? Jan 10, 2013 17:21 |
|
For that I would think eager-zeroed disks on VMFS datastores would most likely suffice, given the concurrent connections. I assume it is a LAMP setup + AV? Any other high-stress servers?
|
# ? Jan 10, 2013 17:31 |
|
I have nothing interesting to add. Just wanted to say I'm on day 4/5 of ICM 5.0 with Optimize&Scale next week and my brain is mush. I can't believe I kept myself in the dark about virtualization for so long; my old job really sucked rear end in hindsight. I'm having so much fun with the mundane stuff like HA and DRS and simply learning how freaking cool some of this stuff is.
|
# ? Jan 11, 2013 06:24 |
|
talaena posted:I have nothing interesting to add. Just wanted to say I'm on day 4/5 of ICM 5.0 with Optimize&Scale next week and my brain is mush. I can't believe I kept myself in the dark about virtualization for so long; my old job really sucked rear end in hindsight. I'm having so much fun with the mundane stuff like HA and DRS and simply learning how freaking cool some of this stuff is. VMware is really fun! Any questions feel free to ask, no one knows it all, but drat if we won't try to!
|
# ? Jan 12, 2013 01:23 |
|
Corvettefisher posted:A single SQL will probably be able to handle a large amount of VMs (HA on SQL is another thing), the size will depend on the logging of the environment as well as how dynamic it is; VUM is also a consideration. What do you mean by "HA on SQL is another thing"? Are you meaning that having HA protect the SQL database is a problem? Or something else? And you're saying one VM for SQL, a single machine with both vCenter/SSO, and another VM for VUM? Would you stick Inventory Service with vCenter/SSO as well? We've got Enterprise, but probably the only feature we'll use at that level is DRS (and I had to fight for that; I had to yell at one of my managers for basically thinking our time was worthless, so that Standard was by default cheaper). From the Standard level, a lot of HA and vMotion and SvMotion. As for VMs, I don't really know. Our two independent ESX 4.1 machines (which will be added as a cluster once this vCenter is up) have 30-40 machines between the two of them, or 15-20 machines per host. We've got two more hosts now, and a third is on order, so that's another 45-60 VMs at our previous rate, but I suspect that with test VMs and splitting up services, we'll probably be doubling that pretty quickly, so 30-40 VMs per host, for 120-160 VMs total. Another vCenter question: how much would you trust the security controls in vCenter? We've got a bunch of instructional machines where we'll give students access to their VMs through vCenter (so they'll be logging into vCenter). Should we set up a second vCenter instance, or is it fine if we add those hosts to our production vCenter server as another cluster?
|
# ? Jan 13, 2013 07:53 |
|
Which book does one start with to learn about virtualization? I got Scott Lowe's book; however, it is way too advanced to start out with. Any suggestions? I really want something that's chock-full of labs, as learning by doing is always best.
|
# ? Jan 13, 2013 13:30 |
|
Tab8715 posted:Which book does one start with to learn about virtualization?
|
# ? Jan 13, 2013 23:34 |
|
Is anyone here a Xen expert? I am trying to use the xe CLI to get basic info about various servers. However, despite using the right credentials I continually get an authentication failure. I am running a command like this: code:
XenCenter works fine on the machine I am trying this from, I don't understand what the gently caress is going on. The logs on the target machine show that a connection was established but says nothing about failure. The workstation is a W2k8 server going to various hosts.
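For comparison, the remote xe syntax I've had work looks like this — hostname and password are placeholders, and note that passwords with shell special characters need quoting:

```
# Remote xe invocation against a XenServer host (all values are placeholders)
xe vm-list -s xenhost01 -u root -pw 'p@ss word'      # quote the password
xe host-list -s xenhost01 -u root -pw 'p@ss word' --minimal
```

Also worth checking that -s points at the pool master rather than a slave; pool members generally want management operations to go through the master.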
|
# ? Jan 14, 2013 15:15 |
|
What is the general thought on using DvSwitches in a non-converged vSphere environment? I am working on doing some reconfiguring in my new environment but have never used DvSwitches before (though they seem to like them here, without any real reason). The environment is currently only used for VDI. Hosts will have 8x1Gb pNICs. My current plan is two for management (different cards) running into a dedicated management switch. Two for storage (different cards), each running to its own storage switch. Two for the VM network (different cards), running to their own front-facing switch. One for vMotion, running to a storage switch (isolated by VLAN). Then the last one saved for future use. What advantages do I gain with this setup and DvSwitches? Should I just go with standard vSwitches instead?
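For what it's worth, the standard-vSwitch version of that layout scripts easily enough per host — a sketch using ESXi 5.x esxcli syntax, with switch names, vmnic numbers, and the VLAN ID all placeholders:

```
# Management vSwitch: two uplinks from different physical cards
esxcli network vswitch standard add --vswitch-name=vSwitch-mgmt
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-mgmt --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-mgmt --uplink-name=vmnic4

# Storage vSwitch, with a VLAN-isolated vMotion portgroup hanging off it
esxcli network vswitch standard add --vswitch-name=vSwitch-storage
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-storage --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-storage --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=42
```

The flip side is that this has to be repeated (or scripted) on every host, which is exactly the tedium a dvSwitch makes go away.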
|
# ? Jan 14, 2013 20:24 |
|
I would only use a dvSwitch if I planned on using the features specific to it (inbound traffic shaping, private VLANs, LACP, etc). I think they add unnecessary complexity otherwise.
|
# ? Jan 14, 2013 20:50 |
|
Not to mention, the license/support uplift charges between Enterprise and Enterprise Plus are so not worth it for most organizations.
|
# ? Jan 14, 2013 20:55 |
|
And all of the fun little gotchas. Like they don't work quite right if your vCenter blows up.
|
# ? Jan 14, 2013 21:03 |
|
It is licensed for View so we get all the bells and whistles without the Ent + price gouging. Seems like I should just go with Standard vSwitches.
|
# ? Jan 14, 2013 21:07 |
|
Standard vSwitches will probably be fine unless you've got some giant cluster you're not telling us about. Then go with dvSwitches.
|
# ? Jan 14, 2013 21:50 |
|
Clusters are going to remain pretty small. Two primary sites each with a tier 1 cluster consisting of 4 hosts. I will probably get a tier two cluster at each site about the same size with some older hardware as well.
|
# ? Jan 14, 2013 22:04 |
|
Anyone have tips on getting BackupExec 2012 to back up my VM guests? I have a few host machines all integrated with vCenter and I'm having a rough time trying to get them to back up correctly in BackupExec. I have the proper licensing for the vCenter and Windows agents, but I'm not sure if I need to install the agent on each VM itself, or just the VM hosts? Anyone have a similar setup?
|
# ? Jan 14, 2013 22:30 |
|
The Onion posted:I'm not sure if I need to install the agent on each VM itself Yes, you do
|
# ? Jan 14, 2013 22:36 |
|
Syano posted:Yes, you do It depends on what method he is utilizing. If he is using the VMware Option (or whatever they call it), it will be doing backups the same way that Veeam, PHD Virtual, vRanger, etc. do it and it does not require an agent.
|
# ? Jan 15, 2013 00:00 |
|
The Onion posted:Anyone have tips getting BackupExec 2012 to backup my VM guests? I have a few host machines all integrated with Vcenter and I'm having a rough time trying to get them to back up correctly in BackupExec. I have the proper licensing for vcenter and windows agents, But I'm not sure if I need to install the agent on each VM itself, or just the VM hosts? Anyone have a similar setup? Not really a tip, but uninstalling BackupExec 2012 and using something else would be my advice. Seems like it will work fine for a week or so and then all of a sudden you start getting random failures for no reason at all.
|
# ? Jan 15, 2013 00:33 |
|
Well the other sysadmin here who I believe inherited the environment seems fine with switching to Standard vSwitches. After doing some more poking around today, it looks like all of their traffic in these existing hosts is running over a single gb connection. Joy! Now to rebuild all these hosts properly.
|
# ? Jan 15, 2013 00:38 |
|
three posted:It depends on what method he is utilizing. If he is using the VMware Option (or whatever they call it), it will be doing backups the same way that Veeam, PHD Virtual, vRanger, etc. do it and it does not require an agent. I've never been able to get it to work like it's 'supposed' to without installing the agent on each machine. But yeah, I think the better answer is to back up the VMs themselves with Veeam or PHD Virtual. We still do our application items with BackupExec due to some of its archiving functionality.
|
# ? Jan 15, 2013 01:01 |
|
We started running de-dupes on the OS partition VM volume on our NetApp about a month ago. The initial pass of 1.8TB took about 11 days and brought it down to about 650GB actual usage, so a pretty good dedupe ratio. Since then, he's been trying to run dedupes off and on as the change delta percentage starts creeping up but when they fire off they take another 10-11 days to complete which seems way too long. Other volumes containing CIFS shares and upwards of a TB take about 30 minutes or so (but I suspect the block fingerprinting has very little matching, requiring less of the block by block inspection pass, so a very different beast). Both the vswaps and pagefile (inside the boot volumes) reside there as well and he is under the impression that this would be destroying performance. I'm not that convinced since the vswap should be full of zeros since they've never been used and the pagefiles aren't being encrypted or dumped at reboot so that data should be relatively stable. Ideally I would like to move all the pagefiles from SATA to FC and possibly the vswap while I am at it, but we don't have the capacity to handle it right now until some more budget frees up and frankly I'm not convinced this is the source of our problem. Any thoughts?
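One thing worth ruling out first: whether those long runs are full rescans. In 7-Mode, `sis start -s` re-fingerprints every block in the volume (that's your 11-day initial pass), while a plain `sis start` is incremental and only processes blocks changed since the last run. A sketch of what I'd look at, volume name assumed:

```
sis status -l /vol/vm_os               # progress and last-run details
sis config -s sun-sat@23 /vol/vm_os    # schedule a nightly incremental run
sis start /vol/vm_os                   # incremental pass (note: no -s full rescan)
df -s /vol/vm_os                       # current space savings on the volume
```

If the scheduled/incremental runs still take 10+ days, then it's worth chasing the zero-block vswap/pagefile theory; otherwise the fix may just be dropping the ad hoc full scans.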
|
# ? Jan 15, 2013 23:25 |
|
FISHMANPET posted:What do you mean by "HA on SQL is another thing?" Are you meaning that having HA protect the SQL database is a problem? Or something else? HA is one way to protect a server from extended downtime. What I am trying to convey is that you'll need to cluster SQL appropriately to provide application and service uptime; this, coupled with an anti-affinity rule to keep the VMs on separate hosts, will do you good. If SQL is down, SSO and most of vCenter will be pretty unresponsive (AKA not work). Set up a new cluster and let them play with it; do not let them play with the production vCenter, I don't care who says they are the poo poo at it. Run ESXi nested, VLAN it, and set up vCenter on it. If you want, you can install the root vCenter server with Linked Mode to manage the other vCenter from one console, but I would keep it completely separate. Our lab at the school I help out with does it this way; the ICM and VCAP courses have their own physically separate servers running nested ESXi hosts for the students. Tab8715 posted:Which book does one start with to learn about virtualization? http://www.amazon.com/Administering...ywords=vmware+5 This book takes a theory/lab style approach; it really works well if you are more into HERE IS A LAB, DO IT, HERE ARE THE STEPS. http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Dstripbooks&field-keywords=vmware+press&rh=n%3A283155%2Ck%3Avmware+press VMware also has a really good press lineup. Moey posted:What is the general thought on using DvSwitches on a non-converged vSphere environment? I am working on doing some reconfiguring in my new environment but have never used DvSwitches before (but they seem to like them here without any real reason). The environment is currently only used for VDI. DV switches are awesome in VDI; you'll really see the limitations VSS has in VDI. Centrally managing virtual networks and dynamically changing them is something you'll kick yourself for not having.
What you have looks good. I would throw VM traffic on a VDS for simplicity's sake, and keep storage and vMotion on per-host standard switches. Management could be either, due to the little traffic it takes up. Dilbert As FUCK fucked around with this message at 23:53 on Jan 15, 2013 |
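The anti-affinity piece is scriptable too — a hedged PowerCLI sketch, where the cluster and VM names are placeholders and I'm assuming a PowerCLI version recent enough to have New-DrsRule:

```powershell
# Keep the SQL VM and the vCenter VM on separate hosts (DRS anti-affinity).
# "Prod-Cluster", "sql01", "vcenter01" are placeholder names.
$cluster = Get-Cluster "Prod-Cluster"
$vms = Get-VM "sql01", "vcenter01"
New-DrsRule -Cluster $cluster -Name "separate-sql-vcenter" -KeepTogether:$false -VM $vms
```

With -KeepTogether:$false, DRS will actively keep those VMs on different hosts, so a single host failure can't take out SQL and vCenter at once.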
# ? Jan 15, 2013 23:40 |
|
OK, so I got my ML110 G7 up w/ 32GB of RAM and installed ESXi on a 8GB Corsair Flash Voyager attached to the internal USB port. Currently my only SATA controller is the 6-port Intel PCH (relabeled by HP as the B110i because they are HP, I guess). I think the two SATA ports on the motherboard are the 6Gb/s ports, and the other 4 exposed through a SFF connector are the 3Gb/s scrub ports. What I want to do is: - Pass through 4 2TB SATA HDDs to a VM as directly/rawly as possible - Pass through SSDs hooked up to the other AHCI ports to a different VM as directly/rawly as possible I assume the above conditions mean I can't just pass the entire SATA controller to a given VM as I need the ports doing different things. I was then going to pick up some cheap-rear end HBA (I have a line on a free Adaptec one) that will be used to host a few 80GB SSDs where the OS's will live, probably as VMDK files. This would include that pfSense I mentioned earlier, a CentOS/BSD/something install that gets the 2TB drives for ZFS, and a CentOS to run the actual application (which gets the other SSDs directly to throw the database/scratch on). Everything is tied to each other via internal VMware networking, whose throughput I understand is "pretty good". Am I on the right track here?
|
# ? Jan 15, 2013 23:43 |
|
Because I'm a sucker for passing controllers through instead of individual drives - can you stick a second HBA in there? I picked up a couple of HP-branded LSI 3041 SAS controllers for under $25 each shipped on eBay. Thus, you get to pass your 6Gbps ports straight to a VM, and your 4x 2TB drives can sit on one controller together which gets passed straight to another VM.
|
# ? Jan 16, 2013 00:10 |
|
movax posted:OK, so I got my ML110 G7 up w/ 32GB of RAM and installed ESXi on a 8GB Corsair Flash Voyager attached to the internal USB port. I'm still wondering what draws you to do the pass-through on this server. Given the number of users you have, the systems running, and such, I don't really think the gain will be substantial, unless you plan to see how long it takes until the controller gives up, or you're running simulations where the hypervisor layer + VMFS is a deal breaker. Onboard passthrough can be a bit iffy; in some of my labs I have attempted it and failed. Generally, for the best networking you'll want to use VMXNET3 when possible; it offers many performance improvements over the E1000.
|
# ? Jan 16, 2013 00:12 |
|
Corvettefisher posted:Generally for the best networking you'll want to use the VMXNET3 when possible, it offers many performance improvements over the E1000 Which specifically, and is this true for all OSes?
|
# ? Jan 16, 2013 00:15 |
|
IOwnCalculus posted:Because I'm a sucker for passing controllers through instead of individual drives - can you stick a second HBA in there? I picked up a couple of HP-branded LSI 3041 SAS controllers for under $25 each shipped on eBay. Thus, you get to pass your 6Gbps ports straight to a VM, and your 4x 2TB drives can sit on one controller together which gets passed straight to another VM. I am not aware of any limitations of VMDirectPath with HBAs, other than the host max of 8 PCI/PCIe devices.
|
# ? Jan 16, 2013 00:20 |
|
luminalflux posted:Which specifically, and is this true for all OSes? http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1001805 KB of supported VMXNET3 guests — quote:32- and 64-bit versions of Microsoft Windows 7, XP, 2003, 2003 R2, 2008, 2008 R2, and Server 2012 Other benefits of VMXNET3: less CPU overhead, higher throughput, IP offload, a few other queueing features, and how it presents the virtual networking to the guest VM. The benefit of the E1000 is that in most OSes it is plug and play.
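If your guest is on that list, switching an existing VM is basically a one-line .vmx change (VM powered off; `ethernet0` assumed to be the NIC in question — adjust the index to match yours):

```
ethernet0.virtualDev = "vmxnet3"
```

The guest then needs the vmxnet3 driver, which comes with VMware Tools on Windows; note the guest OS may treat the new adapter as a fresh NIC and drop any static IP config, so have that handy before you swap.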
|
# ? Jan 16, 2013 00:25 |
|
|
|
luminalflux posted:Which specifically, and is this true for all OSes?
|
# ? Jan 16, 2013 00:37 |