|
On the server side, how do people partition what gets its own VM? Like if you have a license server, web server, and storage server, do each get their own VM? Or is it dependent on load? Just curious, as I could see things getting over-partitioned where every little task gets its own OS VM, which would add overhead. Perhaps this is getting into app virtualization..
|
# ? Jul 7, 2016 22:15 |
|
Really it will depend on your licensing more than anything. We license Datacenter edition, so we can deploy as many VMs as we want on X number of procs. We put every function on its own server, for a few reasons: 1) easier migrations and retirements 2) crashed service A won't affect service B 3) runaway service A won't impact other services 4) more effective firewalling
|
# ? Jul 7, 2016 22:18 |
|
adorai posted:Really it will depend on your licensing more than anything. We license Datacenter edition, so we can deploy as many VMs as we want on X number of procs. We put every function on its own server, for a few reasons: We do the same here for the same reasons. The only real disadvantage is VM sprawl, but automate your patching and you are fine.
|
# ? Jul 7, 2016 22:24 |
|
adorai posted:Really it will depend on your licensing more than anything. We license Datacenter edition, so we can deploy as many VMs as we want on X number of procs. We put every function on its own server, for a few reasons:
|
# ? Jul 7, 2016 22:37 |
|
evil_bunnY posted:Samey. My boss loves to decom physical machines then suddenly realize it ran more than he thought. I've never had that issue (also because I actually write doc, but you know...). I have come across quite a few of those killing off physicals here as well. No one documents poo poo here.
|
# ? Jul 7, 2016 23:10 |
|
evil_bunnY posted:Samey. My boss loves to decom physical machines then suddenly realize it ran more than he thought. I've never had that issue (also because I actually write doc, but you know...). These are the best (worst) surprises, especially on a Friday afternoon.
|
# ? Jul 8, 2016 13:51 |
|
I have been having a problem with LACP link flapping on the uplinks to my vdSwitches, and I think it's due to my switches running on short LACP timeouts while VMware defaults to long. I'd prefer the short timeout, obviously, so I followed this guide: https://ssbkang.com/2014/04/30/advanced-lacp-configuration-using-esxcli/ to configure the timeout to fast, which seems to make everything happy. BUT the change doesn't persist through a host reboot. I'm not sure if there's something I'm missing to commit the change to the switch config, or if the vdSwitch config is coming down at reboot and overwriting my changes, but if anyone has ideas I would appreciate the input.
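One workaround to try while chasing the root cause: ESXi runs /etc/rc.local.d/local.sh at every boot, so re-applying the timeout there makes the setting stick even if the vdSwitch config resets it. This is just a sketch of that persistence mechanism, not a supported fix; the actual esxcli invocation is the one from the guide above, with your own switch and LAG names substituted.

```
# /etc/rc.local.d/local.sh -- ESXi executes this on every boot.
# Re-apply the fast LACP timeout in case the vdSwitch config reset it.
# Replace the line below with the exact esxcli command from the guide
# (this is a placeholder, not the literal syntax):
esxcli network vswitch dvs vmware lacp timeout set ...
```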
|
# ? Jul 11, 2016 21:30 |
|
Wibla posted:These are the best (worst) surprises, especially on a friday afternoon. First rule of change management: never schedule anything on a Friday, aka "no fiddle friday".
|
# ? Jul 12, 2016 08:26 |
|
Trastion posted:Good to know. I planned on keeping things separate. FWIW, I wouldn't do this. If you're building new, build new and don't touch the existing stuff at all. Then once you're validated working on the new environment, power the old one off. This approach hasn't failed me yet, and I've been doing Citrix since the Metaframe days. I am a big proponent of keeping the different pieces separate and treating them in a more modular fashion, that way one service misbehaving won't make another angry. Especially for things like WI/SF, which have an immediate and very visible impact on the end user population if they keel over.
|
# ? Jul 12, 2016 18:27 |
|
Is the HTML5 vSphere fling mature enough for daily use in a non-production lab environment? Opinions solicited, obviously, as it'll be subjective. Hope this gets baked in before long. Don't want to dedicate /yet/ another VM to a two node vSphere management setup.
|
# ? Jul 13, 2016 13:23 |
|
H2SO4 posted:FWIW, I wouldn't do this. If you're building new, build new and don't touch the existing stuff at all. Then once you're validated working on the new environment, power the old one off. This approach hasn't failed me yet and I've been doing Citrix since the Metaframe days. I am a big proponent of keeping the different pieces separate and treating them in a more modular fashion, that way one service misbehaving won't make another angry. Especially for things like WI/SF which have an immediate and very visible impact to the end user population if they keel over. Yeah, everything is having to go on new systems anyway because everything requires a newer OS than the 2003 stuff the old is running on. I am more than fine with this, as I am trying to get rid of all the 2003 boxes. The biggest thing I have run into so far is that I now need a $1000 NetScaler Gateway for external access, when I didn't have that cost before with the Web Interface.
|
# ? Jul 13, 2016 16:23 |
|
I upgraded from Win 10 Home to Pro on my computer for Hyper-V, and apparently it can't coexist with VirtualBox? Any way to solve this without getting rid of Hyper-V?
|
# ? Jul 17, 2016 15:40 |
|
SinineSiil posted:I upgraded from Win 10 Home to Pro on my computer for Hyper-V and apparently it can't coexist with VirtualBox? Run a Hyper-V VM that has a guest OS running VirtualBox (I had the same issue when trying out different hypervisors)
|
# ? Jul 17, 2016 16:20 |
|
priznat posted:Run a Hyper-V VM that has a guest OS running VirtualBox Ohh! I have read about that, does it really work? I have to try it out sometime then. I also found a solution: https://derekgusoff.wordpress.com/2012/09/05/run-hyper-v-and-virtualbox-on-the-same-machine/ Lets me disable and re-enable Hyper-V super easily and quickly.
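For anyone who doesn't want to click through: as I understand it, that method boils down to toggling the boot loader's hypervisor launch type from an elevated prompt, with a reboot after each change. These are real bcdedit flags; the annotations are mine.

```
# Run from an elevated prompt, then reboot:
bcdedit /set hypervisorlaunchtype off    # Hyper-V stays installed but won't load, so VirtualBox works
bcdedit /set hypervisorlaunchtype auto   # switch back: Hyper-V loads again at boot
```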
|
# ? Jul 17, 2016 16:38 |
|
Is it conflicting because they're both trying to address the virtualization instructions on the processor? You might be able to do it if you disabled those features in BIOS and forced the hypervisors to run in "software" with VT-x disabled.
|
# ? Jul 18, 2016 16:42 |
|
BangersInMyKnickers posted:Is it conflicting because they're both trying to address the virtualization instructions on the processor? You might be able to do it if you disabled those features in BIOS and forced the hypervisors to run in "software" with VT-x disabled.
|
# ? Jul 18, 2016 16:52 |
|
Hyper-V doesn't do binary translation/software virt. VirtualBox won't run 64-bit guests without hardware virt (and maybe not at all now, it's been a while).
|
# ? Jul 18, 2016 16:54 |
|
BangersInMyKnickers posted:Is it conflicting because they're both trying to address the virtualization instructions on the processor? You might be able to do it if you disabled those features in BIOS and force the hypervisors to run in "software" with vt-x disabled. Microsoft VBS (virtualization-based security) is running Windows 10 as a VM without you realizing it is a VM. So running VMware or VirtualBox would really be running as a nested VM. Hyper-V could still run, because it is smart enough to basically pass itself through to the root hypervisor. The link to disable Hyper-V on boot means you no longer run nested, so you get the HV extensions in Windows 10 again.
|
# ? Jul 18, 2016 20:56 |
|
Maybe this is more of a storage question than a virtualization one, but is there any reason why you would run VMware host datastores off of an NFS share instead of VMFS iSCSI targets, and wouldn't VMFS be faster anyway?
|
# ? Jul 19, 2016 03:11 |
|
anthonypants posted:Maybe this is more of a storage question than a virtualization one, but is there any reason why you would run VMware host datastores off of an NFS share instead of VMFS iSCSI targets, and wouldn't VMFS be faster anyway?
|
# ? Jul 19, 2016 03:36 |
|
Performance differences between NFS, iSCSI and FC are all minimal. Any of them can perform well if the connectivity is properly configured.
|
# ? Jul 19, 2016 06:27 |
|
Being able to browse to your VM datastores is nice as well. Stupid simple VMware question: If I have two sites and want to use site recovery manager then I need vCenter at each end, don't I?
|
# ? Jul 19, 2016 11:02 |
|
anthonypants posted:Maybe this is more of a storage question than a virtualization one, but is there any reason why you would run VMware host datastores off of an NFS share instead of VMFS iSCSI targets, and wouldn't VMFS be faster anyway? VMware has whitepapers that show no real-world performance/overhead benefit of iSCSI over NFS except in extreme edge-case scenarios, and if that was a concern you should be going FC anyway. NFS allows you to dynamically expand a vol in one step, instead of having to expand the LUN and then grow the VMFS volume inside it. You don't have to worry about VMFS version issues for old vols as your hosts get upgraded. You can browse the contents with any NFS client if that's a thing you want. LACP/etherchannel can handle the path redundancy and aggregation. NFS also allows you to shrink a volume on the fly if you start shuffling things around, and the space is automatically released and available again. If you want to make something smaller on iSCSI you have to completely evacuate the LUN, delete and recreate it at the smaller size, then move data back. If your storage device doesn't support the VAAI extensions, then VMware's recommendation is that you either thick provision everything or only run 15 VMs per LUN due to LUN locking issues as thin disks grow. NFS has no real limit on this since it's a file-level protocol.
|
# ? Jul 19, 2016 15:34 |
|
Thanks Ants posted:Stupid simple VMware question: If I have two sites and want to use site recovery manager then I need vCenter at each end, don't I? Two vCenter instances in linked-mode. Or you can be cheap as hell like us and have identical VLAN/storage IPs/storage vol names on each side, get your data to both sides with storage replication, and then manually import the VMX into your inventory and everything lines up.
|
# ? Jul 19, 2016 15:35 |
|
BangersInMyKnickers posted:Two vCenter instances in linked-mode.
|
# ? Jul 19, 2016 15:45 |
|
Vulture Culture posted:If you can manage this, and your environment is small enough where setting your DR site as primary on all your volumes is easy, it's honestly the better way to go IMO Yeah, two sites with a single NetApp at each side and 120 VMs in total, so only 60ish to bring up in a site failure, and maybe a third of them are dev or unimportant. Once I get going I can have everything back online with minimal data loss inside 6 hours. We're probably going to double in size in the next five years, so I'll get SRM up soonish since that tips my tolerance level for manually bringing things up, but for now it's fine.
|
# ? Jul 19, 2016 15:52 |
|
You can probably use some powershell scripts to automate your current process.
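As a rough illustration of what that could look like with PowerCLI (every name here is hypothetical, and this assumes the replicated .vmx files are already visible on a datastore at the DR side):

```
# Hypothetical PowerCLI sketch: register replicated VMX files and power them on.
# Requires the VMware PowerCLI module; server/host/datastore names are made up.
Connect-VIServer -Server dr-vcenter.example.com

$drHost = Get-VMHost -Name "dr-esx01.example.com"

# Paths to the replicated .vmx files on the DR datastore (examples only)
$vmxPaths = @(
    "[dr-datastore] app01/app01.vmx",
    "[dr-datastore] app02/app02.vmx"
)

foreach ($vmx in $vmxPaths) {
    $vm = New-VM -VMFilePath $vmx -VMHost $drHost   # registers the existing VMX
    Start-VM -VM $vm -Confirm:$false
}
```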
|
# ? Jul 19, 2016 17:15 |
|
adorai posted:You can probably use some powershell scripts to automate your current process. I'm going to make this my out of office reply
|
# ? Jul 19, 2016 20:44 |
|
BangersInMyKnickers posted:Two vCenter instances in linked-mode. Yeah, this is the conclusion I have come to, since it looks like trying to do this in an automated fashion just ends up taking you down the rabbit hole of fixing the next weak link in the chain, and then you're into pushing routing updates to get traffic flowing to your DR site, etc. For the 6 VMs that people want to protect in this way I'm tempted to just use Hyper-V with local storage at each side. Applications that can't cluster in tyool 2016 Thanks Ants fucked around with this message at 20:57 on Jul 19, 2016 |
# ? Jul 19, 2016 20:53 |
|
Look at vSphere Replication, Zerto, or Veeam replication if you just have a small number of high-priority VMs that need to be replicated.
|
# ? Jul 19, 2016 21:02 |
|
That was actually what I was thinking of when I said site recovery manager, sorry. Presumably this still requires a vCenter Server license per site though as all the unscheduled failover documentation talks about instigating it from the replica side (makes sense if the primary is down). Can you do this with 2x Essentials Plus clusters or is there a license limitation that prevents that? I'll throw the question at our VAR and see what they come back with, they can decipher the documentation for me. Thanks for your help. Thanks Ants fucked around with this message at 21:53 on Jul 19, 2016 |
# ? Jul 19, 2016 21:36 |
|
Vulture Culture posted:If you can manage this, and your environment is small enough where setting your DR site as primary on all your volumes is easy, it's honestly the better way to go IMO I assume you're talking about two geographically separate sites in this case? Our company is going through this right now after an audit discovered we had no backups and no DR strategy in place, so now we're scrambling to come up with something. Right now my boss is leaning towards a hardware replication system to take Nimble volumes and use Nimble to Nimble replication (a hosting company owns the other Nimble device) to get everything offsite, and then have them spin up VMs in their own vCloud environment. Would using something like vCloud Air DR & Veeam be an option here, or should we run screaming from anything vCloud Air related?
|
# ? Jul 19, 2016 23:57 |
|
Veeam adds some nice stuff to the failover such as re-addressing networks for different subnets etc. If you're small and don't need auto failover, synchronous replication etc. then co-locating a server with a bunch of local disk and replicating VMs across with Veeam is hard to beat for the money.
|
# ? Jul 20, 2016 00:02 |
|
Thanks Ants posted:Veeam adds some nice stuff to the failover such as re-addressing networks for different subnets etc. I would generally agree with this, however we really need a cloud hosted solution because of the following: 1) Our datacenter is right down the street from our main office. Replicating data for DR purposes there obviously wouldn't work 2) Our second site is located in Texas, however it has no server room to speak of, or equipment that we could replicate to 3) Our third site is located in Ireland, and we can't use it for DR purposes since it's not in the continental United States
|
# ? Jul 20, 2016 00:13 |
|
I meant buy an R730 or similar and then rent a quarter rack of space and connectivity at a convenient location.
|
# ? Jul 20, 2016 00:17 |
|
Veeam partners with colo and cloud providers to allow replication or backup to cloud resources. There are also DR-as-a-service plans where they will turn up your VMs in their cloud and run them there.
|
# ? Jul 20, 2016 01:22 |
|
Thanks Ants posted:That was actually what I was thinking of when I said site recovery manager, sorry. Presumably this still requires a vCenter Server license per site though as all the unscheduled failover documentation talks about instigating it from the replica side (makes sense if the primary is down). Can you do this with 2x Essentials Plus clusters or is there a license limitation that prevents that? vSphere Replication does not require two vCenter servers; however, you would need a way to restore a relatively recent copy of the vCenter server at the remote site so that you could recover the other VMs. You could use vSphere Replication to replicate the vCenter as well if it is virtualized, then manually recover it (possible, but not supported I don't think) and then recover the other VMs through vCenter. Veeam handles this a little better since the replica VM is there and ready to turn on even without vCenter. You could also run your vCenter in the DR site so that it's available during a prod failure.
|
# ? Jul 20, 2016 01:37 |
|
So my boss really would like me to virtualize certain services (DNS, DHCP, etc.) on a server. My question: how do I best go about that on a non-GUI server for non-GUI VMs?
|
# ? Jul 21, 2016 07:49 |
|
What are you using for virt and those services currently?
|
# ? Jul 21, 2016 08:17 |
|
Tab8715 posted:What are you using for virt and those services currently? Nothing yet. I was going to use VirtualBox on Ubuntu Server (not my first choice, but oh well). As for services... well, standard: BIND, dhcpcd, and so on.
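For what it's worth, VirtualBox is fully scriptable from the CLI via VBoxManage, so no GUI is needed anywhere. A rough sketch of spinning up a headless VM (VM name, sizes, paths and the bridged NIC device are made-up examples; check `VBoxManage --help` for your version's exact options):

```
# Create and register a VM (hypothetical name/OS type)
VBoxManage createvm --name dns01 --ostype Ubuntu_64 --register
VBoxManage modifyvm dns01 --memory 1024 --nic1 bridged --bridgeadapter1 eth0

# Give it a disk and an install ISO (paths are examples)
VBoxManage createhd --filename dns01.vdi --size 10240
VBoxManage storagectl dns01 --name SATA --add sata
VBoxManage storageattach dns01 --storagectl SATA --port 0 --device 0 --type hdd --medium dns01.vdi
VBoxManage storageattach dns01 --storagectl SATA --port 1 --device 0 --type dvddrive --medium ubuntu-server.iso

# Boot with no display attached; manage the guest over SSH afterwards
VBoxManage startvm dns01 --type headless
```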
|
# ? Jul 21, 2016 12:17 |