|
This is probably a pretty newbie question given the OP, which seems to be heavily based on IT and moving all sorts of servers to VMs. Basically, I have Windows software that I'd like to test simultaneously on a bunch of different flavors of Windows (everything from XP up, 32- and 64-bit, so with Win8 we're up to 7 supported OSes). The software itself is mildly CPU intensive, but not much on the RAM usage. No servers or server OS involved at all -- I just want one box running a whole bunch of single-user Windows logins. My question is: What sort of hardware am I looking at if I want to run that many OSes at once and have them be generally responsive? I know you don't need to dedicate CPU cores to a VM, but if I spread 5 VMs plus a host OS over a quad-core processor, is it going to be horrible usability-wise? Do I need to dedicate RAM to each VM or can I "over-provision" that too? (RAM seems cheap, though.)
|
# ? Jun 13, 2012 04:52 |
|
|
MrChupon posted:My question is: What sort of hardware am I looking at if I want to run that many OSes at once and have them be generally responsive? I know you don't need to dedicate CPU cores to a VM, but if I spread 5 VMs plus a host OS over a quad-core processor, is it going to be horrible usability-wise? Do I need to dedicate RAM to each VM or can I "over-provision" that too? (RAM seems cheap, though.) My guess is that for your testing you will have only one or two VMs actually doing things at any given time; the rest will be idle. Buy as much RAM as you can and a CPU that supports virtualization extensions, and have fun.
|
# ? Jun 13, 2012 05:51 |
|
adorai posted:On desktop virtualization software, you cannot (typically) overprovision RAM, but you can overprovision CPU cores. Depending on load, you can have significantly more cores provisioned than you have physical cores. We are at roughly 4 to 1 in our environment, which, to be fair, is very much an enterprise shop. Thanks for the quick response and primer, that helps a lot. That said, our software involves a lot of wizards where you kick off a process and the machine churns away for 15+ minutes, so while our testers aren't supermen who will always be using 4+ virtual machines at once, it's not that crazy a test case for us to have them kick off a wizard on all 7 OSes, then check back on the first to see if it's crashed, where it is in the log file, and so on. So I don't want to assume that only 1 or 2 VMs will be using CPU with the rest idle. We certainly wouldn't performance test this way, so by "generally responsive" I just mean that I don't want it to be an hourglass-cursor type pain in the rear end for them to do stuff like Alt-Tab between the various VMs, capture screenshots and crop them in MS Paint, copy log files to a share on the company network, log bugs via a web browser on the host OS, etc. Basically what I'm reading from your statement is: the higher the suspected simultaneous load, the less I should overprovision? Makes sense, I suppose.
|
# ? Jun 13, 2012 06:43 |
|
Hitting up the rumor mill: the annual new release of Parallels (Parallels Desktop 8) should come around September. Is there any evidence that it (or the new VMware Fusion) will support DirectX 10 or 11?
|
# ? Jun 13, 2012 12:05 |
|
So I just extended a 1-drive VM using gparted. Expanded the drive size, booted to the live CD, extended the partition, rebooted, let the drive check itself and boom, all was well. I tried using diskpart but it wouldn't let me resize the system partition. Now I need to do the opposite and shrink a drive. I cannot use VMware Tools for this because it says it's disabled for the drive (these were P2V'd to a 4.1 machine because the hardware was dying) and then imported into 5. My virtualized setup is VERY limited; we're using local storage for about 3 machines which are not super critical. So if I shrink the partition with gparted, can I just reduce the hard drive size in the vSphere client and be good to go?
|
# ? Jun 13, 2012 15:35 |
|
LmaoTheKid posted:So I just extended a 1-drive VM using gparted. Expanded the drive size, booted to the live CD, extended the partition, rebooted, let the drive check itself and boom, all was well. I tried using diskpart but it wouldn't let me resize the system partition. The best way to shrink is to use VMware Converter and V2V the VM while shrinking the drive. For extending drives, you can use Dell's ExtPart. It will let you extend system volumes, assuming there is contiguous space to extend into. (Note: always create 1 partition per VMDK.)
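For reference, ExtPart is a single command run inside the guest; the drive letter and size below are placeholders, not values from the thread:

```shell
:: Run inside the Windows guest after growing the VMDK in the vSphere client.
:: Extends C: by 10240 MB into the contiguous free space (example values only).
extpart.exe c: 10240
```

On Server 2003 this gets around diskpart's refusal to extend the system volume; on 2008 R2 and later, diskmgmt.msc can grow the boot partition natively.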
|
# ? Jun 13, 2012 15:59 |
|
LmaoTheKid posted:So I just extended a 1-drive VM using gparted. Expanded the drive size, booted to the live CD, extended the partition, rebooted, let the drive check itself and boom, all was well. I tried using diskpart but it wouldn't let me resize the system partition. What guest OS are you running? Windows 7 and Server 2008 R2 can grow the boot partition through diskmgmt.msc.
|
# ? Jun 13, 2012 16:01 |
|
three posted:The best way to shrink is to just use VMware Converter and v2v the VM while shrinking the drive. Thanks, I'll give this a shot. Misogynist posted:What guest OS are you running? Windows 7 and Server 2008 R2 can grow the boot partition through diskmgmt.msc. Server 2003.
|
# ? Jun 13, 2012 16:07 |
|
Misogynist posted:The very concept of virtual RAM per physical socket doesn't even make sense. I meant vRAM added to the pool per license, which is per socket. Reading the VMware site, it looks like I was correct: the vRAM entitlement per CPU/socket for Essentials is 32GB, or 192GB total if you're using all 6 licenses. edit: Reading the white paper, each VM can have up to 96GB of RAM. doomisland fucked around with this message at 16:40 on Jun 13, 2012 |
# ? Jun 13, 2012 16:36 |
|
Trying to find a secure way to P2V customer equipment to our environment. Right now we have our Virtualcenter server with a 2nd NIC that goes to a dedicated VLAN that can be securely used for P2V operations. The problem is that it doesn't work. I can start the P2V but it fails while making the VM. Machine being P2V'd only has access to this private VLAN and can only see the Virtualcenter server. It has no default gateway or DNS servers. It cannot talk to the ESXi nodes directly in any way. Why does this P2V operation fail? Shouldn't it only have to talk to Virtualcenter?
|
# ? Jun 13, 2012 18:11 |
|
The source machine needs to talk to ESXi on port 902 -- see "TCP/IP and UDP Port Requirements for Conversion" in http://www.vmware.com/pdf/convsa_50_guide.pdf. I guess it makes sense that it would hand off the bulk traffic to ESXi directly, so vCenter doesn't have to relay tons of data.
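If you want to sanity-check the path before retrying the conversion, a quick probe from the source machine will tell you whether 902 is reachable (the host name here is a placeholder):

```shell
:: From the machine being P2V'd; substitute your ESXi host's real address.
telnet esxi01.example.com 902
:: A session that connects (blank screen or a VMware banner) means the port
:: is open; a timeout means the VLAN/firewall is still blocking it.
```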
|
# ? Jun 13, 2012 18:29 |
|
Thanks for the help guys, both VMs now have sane hard drive sizes. V2V took a while, but I don't really care if their BlackBerrys stop working for an hour. I can't wait for the day I can delete that stupid goddamn BES VM.
|
# ? Jun 13, 2012 19:44 |
|
Is there any real concern about virtualizing your domain controllers and going 100% virtualized? I've never done it but I'm kinda wondering if we could just put everything on ESXi - or would I regret that for some untold reason? e: Hell, if you think this is a stupid question, realize that I only recently learned you can P2V a vCenter server into the same group of hosts that it manages - I thought it had to be on the outside looking in, a-doyyyy MC Fruit Stripe fucked around with this message at 20:07 on Jun 13, 2012 |
# ? Jun 13, 2012 20:04 |
|
MC Fruit Stripe posted:Is there any real concern about virtualizing your domain controllers and going 100% virtualized? I've never done it but I'm kinda wondering if we could just put everything on ESXi - or would I regret that for some untold reason? The one thing that I've learned is that if you have a power outage, make sure the DCs are scheduled to be the first ones powered up, and schedule a delay between them booting and the rest of your servers.
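On a standalone host you can also script that boot order through ESXi's autostart manager; I'm going from memory on the argument order, so treat this as a sketch and check the usage output on your build first:

```shell
# On the ESXi shell: list VM IDs, then add the DC to the autostart list.
vim-cmd vmsvc/getallvms
# Example: vmid 1 boots first with a 120-second delay before the next VM
# (run update_autostartentry with no args to confirm the parameter order).
vim-cmd hostsvc/autostartmanager/update_autostartentry 1 powerOn 120 1
```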
|
# ? Jun 13, 2012 20:12 |
|
MC Fruit Stripe posted:Is there any real concern about virtualizing your domain controllers and going 100% virtualized? I've never done it but I'm kinda wondering if we could just put everything on ESXi - or would I regret that for some untold reason? The only concern to note is that you don't want both DC VMs running on the same physical box for failure reasons. As long as they're on separate hosts you're fine. (Unless you have a license for VMware that includes HA, in which case YMMV as to whether or not the HA migration on failure causes your DCs to poo themselves)
|
# ? Jun 13, 2012 20:38 |
|
MC Fruit Stripe posted:Is there any real concern about virtualizing your domain controllers and going 100% virtualized? I've never done it but I'm kinda wondering if we could just put everything on ESXi - or would I regret that for some untold reason? As long as you have some non-AD account on the hosts so you can get AD back up and running without a chicken and egg scenario, you should be fine (as fine as running vCenter in the cluster it manages - I do it, but it can make for a fun puzzle at 3am and should be avoided if you're big enough). I happened to have a brand new physical server I wasn't using any more (1 socket, so not a host candidate) so I threw a DC on there. If I didn't have it, I wouldn't have bought hardware for a physical DC. Also, make sure whichever DC is your top-level NTP server syncs to an outside NTP server, or your whole infrastructure will drift rather quickly.
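Configuring that external sync on the top-level DC is a couple of w32tm commands; the pool hostnames are just examples:

```shell
:: Run on the DC holding the PDC emulator role (example NTP peers).
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
net stop w32time && net start w32time
:: Verify sync status (the /query switch exists on 2008 and later).
w32tm /query /status
```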
|
# ? Jun 13, 2012 20:51 |
|
How useful is an HP ProLiant DL585 G5 for VMs? It's a quad-CPU, quad-core Opteron at 2.3GHz with 72GB of DDR2 RAM, using 4Gb FC and/or quad-bonded GbE, connected to two Dell PowerVault NF500s (dual 2.0GHz quad-core Xeons) with 8GB each and 146GB of U320 in RAID 1. I paid next to nothing for the setup (less than $100 total), so I have them on eBay, but if the hardware is decent enough I'd be interested in keeping them for VM stuff. Note that it's about 3000W of hardware all together, not exactly energy efficient.
|
# ? Jun 13, 2012 20:58 |
|
Awesome, thanks all
|
# ? Jun 13, 2012 21:08 |
|
MC Fruit Stripe posted:Is there any real concern about virtualizing your domain controllers and going 100% virtualized? I've never done it but I'm kinda wondering if we could just put everything on ESXi - or would I regret that for some untold reason? There are certain scenarios where it might be beneficial to have a physical DC hanging around just in case, but you should be OK in an HA environment. I attended a Hyper-V thing put on by Microsoft, and the guy there was telling a story about how he had a Hyper-V cluster set up with both DCs inside the cluster. The cluster went down... and, well, it didn't come back up. So they recommended leaving a DC outside the cluster. We're virtualizing all our DCs... but the odds of a complete AD meltdown are practically nil. Both redundant WAN links would have to go down, and there would have to be a catastrophic SAN failure.
|
# ? Jun 13, 2012 21:14 |
|
skipdogg posted:Both redundant WAN links would have to go down, and there would have to be a catastrophic SAN failure. So in other words, an earthquake or similar regional natural disaster? That even assumes your assets are in two physically separate locations. I have no delusions of grandeur here; if we have a natural disaster where I work, we're up a creek without a paddle. I don't have the budget or man-hours to dedicate to that kind of DR.
|
# ? Jun 13, 2012 21:54 |
|
Well, I'm part of a global company, so we have about 9 different physical locations across the world with 2 DCs each, but as far as a local AD outage goes, yeah, both WAN links and the entire SAN would have to crash. AD would be fine in other locations; an extended outage would require another site grabbing the FSMO roles, but short of the world ending we're covered.
|
# ? Jun 13, 2012 22:57 |
|
Even with the AD VM going down, it's not really the end of the world. Like the others said, you just need to either a) have it boot back up first or b) know what host it is on and make sure you can log in locally to boot it up.
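For option b), the local ESXi shell can find and start the DC without vCenter or AD; the VM ID in the second command is an example taken from the first command's listing:

```shell
# From the ESXi console/SSH: list registered VMs and their IDs...
vim-cmd vmsvc/getallvms
# ...then power on the DC by its vmid (5 here is just an example).
vim-cmd vmsvc/power.on 5
```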
|
# ? Jun 13, 2012 23:46 |
|
MC Fruit Stripe posted:Is there any real concern about virtualizing your domain controllers and going 100% virtualized? I've never done it but I'm kinda wondering if we could just put everything on ESXi - or would I regret that for some untold reason? UPSes: make sure you have them. Snapshots on a DC are a NO-NO; USN high-water marks are a pain. I run a fully VM'ed environment, DCs and all. If it weren't for the fact that I have to get my VCP5 and CCNA before the end of the month, I would post more and update the OP, which it needs. But VCP5 and CCNA before June 30th, plus trying to get centralized storage on EMC equipment, keeps me very busy... feld posted:Trying to find a secure way to P2V customer equipment to our environment. Right now we have our Virtualcenter server with a 2nd NIC that goes to a dedicated VLAN that can be securely used for P2V operations. The problem is that it doesn't work. I can start the P2V but it fails while making the VM. Machine being P2V'd only has access to this private VLAN and can only see the Virtualcenter server. It has no default gateway or DNS servers. It cannot talk to the ESXi nodes directly in any way. MC Fruit Stripe posted:Is there any real concern about virtualizing your domain controllers and going 100% virtualized? I've never done it but I'm kinda wondering if we could just put everything on ESXi - or would I regret that for some untold reason? Make sure you have shared storage and a backup plan. My shared storage was a 30k budget with FCoE, no SAS connectors. The way I have it is Active/Active + hot spare on the arrays, then a local backup device + cloud backups. Yes, I know, hurrr durrr, some people would fuss at me, but I only have a set budget; make the most of what you have. DJ Commie posted:How useful is an HP ProLiant DL585 G5 for VMs? It's a quad-CPU, quad-core Opteron at 2.3GHz with 72GB of DDR2 RAM, using 4Gb FC and/or quad-bonded GbE, connected to two Dell PowerVault NF500s (dual 2.0GHz quad-core Xeons) with 8GB each and 146GB of U320 in RAID 1. I paid next to nothing for the setup (less than $100 total), so I have them on eBay, but if the hardware is decent enough I'd be interested in keeping them for VM stuff. If your hardware and license support it, enable DPM. Also, some dipshit before me ordered 3/5 servers with integrated RAID controllers... some are failing, others are RAID 6 with no write cache enabled... loving poo poo. Also, I gave up drinking, so stupid posts should end now. Dilbert As FUCK fucked around with this message at 03:15 on Jun 14, 2012 |
# ? Jun 14, 2012 02:49 |
|
skipdogg posted:
I know next to nothing about Hyper-V, but doesn't it use Windows clustering, which requires AD? That would require an annoying bootstrap process to get working, and then of course there's what happens should the cluster ever fail... VMware wins again.
|
# ? Jun 14, 2012 16:27 |
|
A virtualization company saying not to virtualize something is ironic. I can't think of anything that VMware says not to virtualize.
|
# ? Jun 14, 2012 16:33 |
|
Is Thinware vBackup any good for basic weekly backups of a few non-critical VMs? As mentioned above, my ragtag VM setup using local storage has been going along fine, but it would be good to have a backup of everything. I've got a poo poo ton of storage in another room on the other side of the office, so I can run backups once on Saturday when no one is using it. Seems pretty decent for the free version, no?
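If the free tier only does one-off runs, a plain Windows scheduled task wrapping whatever CLI the tool ships is enough for a Saturday window; the script path and task name here are placeholders:

```shell
:: Create a weekly task that runs a backup script every Saturday at 2 AM.
schtasks /Create /TN "WeeklyVMBackup" /TR "C:\scripts\vm-backup.cmd" /SC WEEKLY /D SAT /ST 02:00
```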
|
# ? Jun 14, 2012 17:17 |
|
Nukelear v.2 posted:I know next to nothing about HyperV but doesn't it use Windows clustering, which requires AD? That would require an annoying bootstrap process to get working and then of course should the cluster ever fail.. Hyper-V just installs a barebones version of Server 2k8 R2 to run the hypervisor; it doesn't require AD at all. If you've got multiple Hyper-V hosts, though, this becomes a problem, since multi-host management is handled by System Center. I only tested Hyper-V with one physical host before deciding to go with VMware, so I didn't have to worry about managing multiple Hyper-V hosts. That being said, I'd still keep at least one DC on each physical host, and not put all your eggs in one basket with two virtual DCs running on the same Hyper-V host. Digital_Jesus fucked around with this message at 17:24 on Jun 14, 2012 |
# ? Jun 14, 2012 17:19 |
|
Nukelear v.2 posted:I know next to nothing about HyperV but doesn't it use Windows clustering, which requires AD? While I only have a cursory understanding of Hyper-V clustering, this seems to be true. Both DCs were on the Hyper-V cluster; the cluster went down and couldn't come back up because there were no available DCs. So now they leave 1 DC outside of the cluster on a different Hyper-V host. An interesting gotcha. The guy at the MS event also touched on their newer datacenter here in San Antonio, which I found interesting. He claims there are only 4 admins that work there; the rest of the people are facilities and security. It's almost completely automated with System Center 2012, and holds 140,000 servers running an average of 10 VMs each. The datacenter runs at exactly 94 degrees F, and they use over 900,000 gallons a day of grey water to cool the place down. He mentioned many features of System Center 2012 were directly influenced by Microsoft's own issues trying to manage their datacenters. I thought they put it here because our city power company has some of the lowest rates around and we are fairly insulated from most natural disasters; according to this guy, it was the ability to get the grey water from the city that pushed the decision over. He mentioned that hardware failure was under 3% even running at 94 degrees, and the savings in cooling costs outweighed the increased hardware costs by many factors.
|
# ? Jun 14, 2012 17:38 |
|
For something like a DC where you can easily have multiple copies, I'm not sure I see the point in clustering it to begin with. I haven't set up VMs in a cluster configuration on Hyper-V, but my understanding is it's not an all-or-nothing thing; each VM is a separate cluster service. The solution would seem to be just to not cluster that VM; you shouldn't even need another physical host. Also, failover usually involves putting a VM into a paused state, which is a very "bad thing" to do with a DC, so I wouldn't cluster any of them. I know first hand that the underlying VMs don't give a poo poo about the domain allegiance of the physical host. We had an 'incident' where someone un-joined one of our Hyper-V hosts from the domain, but all the VMs happily started up on it after it rebooted. By default, all of the Hyper-V services run under the "Local System" account, so there shouldn't be a problem with starting up standalone VMs if the host can't find a DC right away. bull3964 fucked around with this message at 18:03 on Jun 14, 2012 |
# ? Jun 14, 2012 17:59 |
|
LmaoTheKid posted:Is Thinware vBackup any good for basic weekly backups of a few non critical VMs? I don't have any input regarding Thinware, but Veeam just released a free version of their backup - annoying as poo poo marketing campaign from them, but seems well-suited for one-off or once-in-a-while grabs: http://www.veeam.com/virtual-machine-backup-solution-free.html
|
# ? Jun 14, 2012 18:02 |
|
Are any of you VSPP partners with very small footprints? Maybe a few hosts, couple of SANs, etc? Your typical small MSP environment, basically.
|
# ? Jun 14, 2012 19:40 |
|
Erwin posted:I don't have any input regarding Thinware, but Veeam just released a free version of their backup - annoying as poo poo marketing campaign from them, but seems well-suited for one-off or once-in-a-while grabs: http://www.veeam.com/virtual-machine-backup-solution-free.html Thanks! I might try that second, since Thinware looks like it will let you schedule the backups and the Veeam free version is more of a one-off thing. EDIT: WELP, Thinware kind of sucks. Going to give this Veeam thing a shot. Can't wait for the cold calls, since I had to give my GD info to register. Matt Zerella fucked around with this message at 20:49 on Jun 14, 2012 |
# ? Jun 14, 2012 20:05 |
|
LmaoTheKid posted:Thanks! I might try that second but thinware looks like it will let you schedule the backups and that's more of a one off thing. I asked for SKUs a few weeks ago so I could buy Veeam this week, and I've actually only had one phone call. No emails or anything. Surprised me actually, because I totally forgot about needing to call the guy back until you mentioned this. Veeam rocks in my testing, by the way.
|
# ? Jun 14, 2012 20:56 |
|
vty posted:Veeam rocks in my testing, by the way. How many VMs are you backing up?
|
# ? Jun 14, 2012 21:00 |
|
Misogynist hates Veeam.
|
# ? Jun 14, 2012 21:04 |
|
Misogynist posted:How many VMs are you backing up? Not many, about 25. At what numbers did you have issues? Speed?
|
# ? Jun 14, 2012 21:10 |
|
three posted:A virtualization company saying not to virtualize something is ironic. I went to a Microsoft demo where they mentioned they had a partner that virtualized all their DCs. Their Hyper-V servers were domain-joined, and the power went out one day. I'm guessing you can figure out what happened next.
|
# ? Jun 14, 2012 22:09 |
|
If you virtualize all your DCs, you could consider ensuring they're pinned to one or two boxes (via DRS rules). This helps you quickly locate them and start them up in the event of a datacenter outage. You can do the same with vCenter, really. Now, for this handful of select ESX/ESXi boxes, you probably want to turn off AD authentication and rely on local accounts, too.
|
# ? Jun 14, 2012 23:10 |
|
Noghri_ViR posted:I went to a Microsoft demo where they mentioned they had a partner that virtualized all their DC's. Their Hyper-V servers were domain joined and the power went out one day. I'm guessing you can figure out what happened next. That really shouldn't be a problem. As I mentioned, the Hyper-V services use the "Local System" account as their login, so they shouldn't need a DC to start up the VMs. The only time it's going to be an issue is if the VMs are clustered, but again, that's something you shouldn't really do with DCs. I mean, I haven't actually tried it, but I can't see a reason why non-clustered VMs wouldn't start up if the DC couldn't be contacted. bull3964 fucked around with this message at 00:10 on Jun 15, 2012 |
# ? Jun 14, 2012 23:56 |
|
|
Kachunkachunk posted:If you virtualize all your DCs, you could consider ensuring that it's pinned to one or two boxes (via DRS rules). This helps you quickly locate them and start them up in the event of a datacenter outage. You can do the same with VC, really. Exactly. For this we do the same thing. ESXi servers all have local accounts, but authentication is done through vCenter and AD. Create multiple AD servers and pin them to different machines, and if you're lucky enough to have an offsite backup ESXi cluster, add some more there. Then create a physical DC. We literally only have 2 physical machines: a DC at the backup datacenter and a local DC.
|
# ? Jun 15, 2012 01:41 |