|
Dilbert As gently caress posted:Well VMworld is around the corner. I am interested in how they are going to present it this year. Speaking of which, who is heading to VMworld? If it's not too many of us I would be willing to set up an evening of drinking at someplace cool like Bourbon and Branch.
|
# ? Aug 8, 2013 23:24 |
|
Tequila25 posted:I didn't even think about flash drives. Do you even need a hard drive in the host? I guess maybe for logs? A lot of newer servers have an internal USB connection direct on the motherboard. Pick up an 8GB USB stick for a few bucks and never worry about log files or hard drives. Also, I once had a dev ESX 3 server go for over three months with a dead RAID controller. It booted and then the controller died, killing the local disk volume. It ran just fine until someone physically sat at the console and saw all the SCSI errors scrolling by. Hard drives are overrated. Oh yeah, but proper monitoring isn't...
|
# ? Aug 9, 2013 00:52 |
|
Noghri_ViR posted:Speaking of which, who is heading to VMworld? If it's not too many of us I would be willing to set up an evening of drinking at someplace cool like Bourbon and Branch. I finally booked everything today. Was wondering how many ~*~goons~*~ would be attending.
|
# ? Aug 9, 2013 01:43 |
|
Linux Nazi posted:I finally booked everything today. Was wondering how many ~*~goons~*~ would be attending. Booked today? Was there even a decent hotel left or do you have to stay in San Jose or even worse, Oakland?
|
# ? Aug 9, 2013 16:18 |
|
Noghri_ViR posted:Booked today? Was there even a decent hotel left or do you have to stay in San Jose or even worse, Oakland? Most of Oakland isn't that bad.
|
# ? Aug 9, 2013 21:03 |
|
Noghri_ViR posted:Booked today? Was there even a decent hotel left or do you have to stay in San Jose or even worse, Oakland? Idk, I booked the same hotel as one of my workmates who knows the area. He's probably not to be trusted though.
|
# ? Aug 10, 2013 19:08 |
|
1000101 posted:Most of Oakland isn't that bad. It's the stabbing capital of the US. I think I'd rather stay in Detroit, where I know I have a better chance of getting shot than stabbed.
|
# ? Aug 13, 2013 22:30 |
|
Senior windows guy told me he was reading about 2012 SP1/R2 and one of the changes somehow makes it possible to set up a failover storage cluster under VMware but still have the machines be able to vMotion. I couldn't find anything after some Googling and he's probably lost the link he was reading. Anybody know anything about this?
|
# ? Aug 14, 2013 03:46 |
|
FISHMANPET posted:Senior windows guy told me he was reading about 2012 SP1/R2 and one of the changes somehow makes it possible to set up a failover storage cluster under VMware but still have the machines be able to vMotion. I couldn't find anything after some Googling and he's probably lost the link he was reading. Anybody know anything about this? Pretty sure this is the vMotion+NLB issue. If you vMotion machines in an NLB cluster array (like Exchange CAS servers commonly are) there is an issue with ARP updates that causes them to lose connectivity to each other. Essentially you have to reboot the box, which may or may not be a big deal.
|
# ? Aug 14, 2013 04:17 |
|
It's not a network issue, it's a storage issue. The disk had to be attached with a software initiator in the guest OS, and the way that's required to be done locks the VM to a single host. It's a combination of the Windows storage requirements for clustered file services and the way VMware implements those requirements.
|
# ? Aug 14, 2013 04:36 |
|
Quick networking question.... I'm out of my depth here. What's the best way to set up the iSCSI network for the following config? At this point I'm just looking for failover, no teaming or anything, although I wouldn't be opposed to it. Bringing up a single ESXi host connected over GigE iSCSI to a VNXe SAN. Config: Server has 8 ethernet ports, vmnic0-7. SAN is configured to use 2 ethernet ports on each storage processor (they're mirrored, so the config on eth2 and eth3 on SPA is the same on SPB, and they only act as failover). Right now I have 8 total ethernet connections going to a non-routable iSCSI-only VLAN. From the SAN, eth2 is configured with a .20 IP, and eth3 is configured as .21. 4 ports total, 2 for Storage Processor A and 2 for B for failover. From the host, I have vmnic 0,1,4,5 physically connected to the iSCSI VLAN. Right now only vmnic4 is configured, with a vmkernel port on vSwitch1. I can see the storage and everything looks happy with green check marks. Where I'm getting lost is how to bring the other 3 into the fold as standby/failover connections. Do I just add the additional adapters to the vSwitch? If I do that, the iSCSI initiator yells about something being not compliant.
|
# ? Aug 14, 2013 17:23 |
|
skipdogg posted:Quick networking question.... I'm out of my depth here. My lab is similar to your setup in that each of my hosts has four NICs that I want to set up using multiple pathways: 1 LAN nic, 1 vMotion nic and two iSCSI nics that I have configured for round-robin access to help distribute the load. Each connection type is on a dedicated VLAN, and the vMotion and iSCSI VLANs are non-routable. Here's a pic of my current config from one of my hosts: I have each nic in its own vSwitch. On my iSCSI target I grant access to each NIC path and grant access to each volume group (or however your target calls them) so that each storage volume will then appear twice in vSphere. Then in the iSCSI initiator properties you will add the iSCSI nics you have previously identified. I think the default setting is failover-only, but in my case I have configured the connections as round-robin: right-click on a storage volume and select Manage Paths. In this case I have four connections to a volume group because my iSCSI target has two active/active heads. Two heads times two paths to each head = four paths. In your case, you will have two nics on your SAN times four nics on your ESX host = eight paths. Agrikk fucked around with this message at 18:05 on Aug 14, 2013 |
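The path count in that last paragraph is just a cross product, and it's a handy sanity check when a path goes missing. A quick shell sketch, using the numbers from the two setups discussed above:

```shell
# Paths to a LUN = (array ports presenting the LUN) x (host NICs bound to
# the iSCSI initiator). Values below come from the two setups in this thread.
paths() { echo $(( $1 * $2 )); }

paths 2 2   # two active/active heads x two host NICs = 4 paths (the lab)
paths 2 4   # two SAN NICs x four ESX host NICs = 8 paths (skipdogg's host)
```

If the Manage Paths dialog shows fewer paths than this product, a binding or VLAN problem is usually hiding somewhere.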
# ? Aug 14, 2013 17:56 |
|
drat man, thanks for taking the time to reply with a very informative post. Much appreciated.
|
# ? Aug 14, 2013 18:03 |
|
skipdogg posted:drat man, thanks for taking the time to reply with a very informative post. Much appreciated. Glad to help. Multipathing can be a little tricky and it took me a lot of trial and error to get it working, so I'm happy to help anyone else avoid that pain. Also, I have not done any bonding or anything special on my switch. From what I've read you get the best performance if you let ESXi handle the load balancing/portgrouping and avoid creating a LAG on your switch. All you need to do on the switch side is configure your vlans and set the maximum packet size to 9000. Agrikk fucked around with this message at 18:08 on Aug 14, 2013 |
# ? Aug 14, 2013 18:05 |
|
This is just a single host that will be hosting maybe 2 or 3 very low load VM's and obviously the setup isn't 24/7 production critical. I'll be happy if they can just fail over if anything goes wrong. I haven't even changed MTU to 9000 or anything just yet. Need to check with the guy who manages the switch. I scored some space on a 6513 with Sup720's and 6748 line cards, so switching shouldn't be a limitation. Here's how it's configured right now. Anything else I should look for?
|
# ? Aug 14, 2013 18:17 |
|
Any reason you are not going with multiple VMKs to a VSS? Not saying that won't work, just wondering. skipdogg posted:This is just a single host that will be hosting maybe 2 or 3 very low load VM's and obviously the setup isn't 24/7 production critical. I'll be happy if they can just fail over if anything goes wrong. I haven't even changed MTU to 9000 or anything just yet. Need to check with the guy who manages the switch. I scored some space on a 6513 with Sup720's and 6748 line cards, so switching shouldn't be a limitation. I wouldn't worry about setting the MTU to 9000 right away; make sure your environment is stable first with the default MTU. If you have a VNXe you should have access to a document telling you the best practice for how it should be set up. Dilbert As FUCK fucked around with this message at 18:28 on Aug 14, 2013 |
# ? Aug 14, 2013 18:25 |
|
Uhhh... I have no idea what you just said. VMware class can't start soon enough... edit: The VNXe HA doc was pretty useful, but didn't get into the actual ESXi networking options. Another thing that threw me for a loop was that I'm not using 2 separate iSCSI subnets, which the doc assumed you would be. skipdogg fucked around with this message at 18:53 on Aug 14, 2013 |
# ? Aug 14, 2013 18:26 |
|
skipdogg posted:Uhhh... I have no idea what you just said. VMware class can't start soon enough... Sorry, I was meaning to reply to the other guy. You can load multiple iSCSI VMkernels (VMKs) on virtual standard switches; I was wondering why he was doing a 1:1 instead of multiple VMKs on a VSS to X NICs. IIRC you should use Round Robin for VNXe's; my new place is mostly 3Par so I may be wrong. Dilbert As FUCK fucked around with this message at 18:51 on Aug 14, 2013 |
# ? Aug 14, 2013 18:32 |
|
I do it that way too: multiple vmkernels on one switch, with each vmkernel tied to its own physical NIC by making all other NICs unused in the properties for each vmkernel. I doubt it makes a difference; it just looks tidier.
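For reference, that one-vSwitch layout can be sketched with ESXi 5.x-era esxcli commands. Every name below (vSwitch1, iSCSI-1/2, vmk1/2, vmnic4/5, vmhba33, the IPs) is a placeholder for illustration, not taken from anyone's actual config:

```shell
# Two portgroups on one vSwitch, each pinned to a single active uplink.
esxcli network vswitch standard portgroup add -p iSCSI-1 -v vSwitch1
esxcli network vswitch standard portgroup add -p iSCSI-2 -v vSwitch1
esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic4
esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic5
# (Port binding wants the other uplink *unused*, not standby -- verify the
# teaming override in the client after running these.)

# One VMkernel interface per portgroup:
esxcli network ip interface add -i vmk1 -p iSCSI-1
esxcli network ip interface add -i vmk2 -p iSCSI-2
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.0.0.11 -N 255.255.255.0
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.0.0.12 -N 255.255.255.0

# Bind both VMkernel ports to the software iSCSI adapter:
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```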
|
# ? Aug 14, 2013 18:33 |
|
I found this (and the documents linked from it) very helpful when trying to get my head around multipathing when I first set up an iSCSI SAN for VMware. http://jpaul.me/?p=413
|
# ? Aug 14, 2013 18:44 |
|
sanchez posted:I do it that way too, multiple vmkernels on one switch, with each vmkernel tied to its own physical nic by making all other NIC's unused in the properties for each vmkernel. Thirding this. I like things to look pretty. My current setup is 2x10gb running iscsi/vm network/management then 2x1gb for vMotion. Still have two more on board NICs if I ever need them and room for another PCIe card or two.
|
# ? Aug 14, 2013 21:34 |
|
Also gotta give a shout out to KS for the vCloud class, it is AMAZING. The first day pretty much cleared up many of my misunderstandings (most of it was complete overthinking) of some of the vCloud stuff that has been my hang-up. The class is great; some of the people may have thought this was teaching VCP5:ICM, but yeah, amazing nonetheless.
Dilbert As FUCK fucked around with this message at 01:05 on Aug 15, 2013 |
# ? Aug 15, 2013 01:01 |
|
sanchez posted:I do it that way too, multiple vmkernels on one switch, with each vmkernel tied to its own physical nic by making all other NIC's unused in the properties for each vmkernel. "Tidier" is in the eye of the beholder. While I agree that a single vSwitch for all iSCSI adapters looks tidier, I prefer unique vswitches for each iSCSI connection so that I don't have unused connections in my configuration, making the configuration itself tidier IMO. This way I avoid the potential mistake of "Wait, why is this adapter unused in this vSwitch? I should turn it back on." when it is four in the morning and I'm feeling dingy after some issue that has kept me up all night. But whatev'. I don't think it makes any performance difference at all.
|
# ? Aug 15, 2013 02:36 |
|
Moey posted:Thirding this. I like things to look pretty. If you do a lot of vmotion and have big hosts, get it onto 10 gig! It's a whole lot more pleasant. 256GB hosts empty out in <30 seconds. If you're worried about link saturation, hopefully you're on Ent+ and can use NIOC. I see so many incorrect vmotion configs! Stuff like vmotion traffic going over the VPC/ISL because VLANs and port groups aren't configured optimally. Just make sure you don't join the club. Dilbert As gently caress posted:Also gotta give a shout out to KS for the vCloud class Very glad to hear it worked out.
|
# ? Aug 15, 2013 03:04 |
|
After some google searching, I only found one website that mentioned home network desktop virtualization (http://www.cringely.com/2011/11/24/silence-is-golden/). Here's my situation: I have a pretty powerful desktop that I built a few months ago. I currently run Win7 and a Hackintosh off of it. My fiance is a mac user but doesn't do a lot of heavy lifting with it (maybe some Illustrator and Photoshop, but mostly just ram intensive). Unfortunately, I think it's dying, and I don't want to pay $$$ for a new iMac. I like to game, and also do some photo editing and occasionally video editing. If I run a VMWare server off of the desktop, and use a virtual windows 7 session ON the server machine, does that make it a feasible option for gaming? You can do that, right? Or do I need some sort of thin client instead? Could someone suggest a good resource for a project like this?
|
# ? Aug 15, 2013 04:40 |
|
http://www.sysprobs.com/easily-run-mac-os-x-10-8-mountain-lion-retail-on-pc-with-vmware-image VMware workstation or VirtualBox (free) should accomplish this for you.
|
# ? Aug 15, 2013 05:10 |
|
Dilbert As gently caress posted:IIRC you should use Round Robin for VNXe's my new place is 3Par mostly so I may be wrong. Use round robin and change the number of IOPS before it switches paths to 1. I can't find the EMC whitepaper on this but we're doing it here and it results in a fairly massive performance increase. I have a little script I use to fix the default pathing stuff. It does three things:
1) Change the path selection plugin to round robin
2) Change the number of IOPS the round robin plugin will use before switching to another path to 1
3) Change the default PSP for the VMware ALUA driver to round robin
It affects all LUNs so you may need to edit for taste.
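A sketch of those three steps in esxcli form, assuming ESXi 5.x syntax; the `VMW_SATP_ALUA_CX` SATP name for VNX-family arrays is an assumption, so check `esxcli storage nmp satp list` on your own host:

```shell
# 3) Default new ALUA LUNs to round robin (SATP name is an assumption):
esxcli storage nmp satp set --satp VMW_SATP_ALUA_CX --default-psp VMW_PSP_RR

# 1) + 2) For every existing LUN: round robin, switch paths after 1 I/O.
for dev in $(esxcli storage nmp device list | grep -oE '^naa\.[0-9a-f]+'); do
  esxcli storage nmp device set --device "$dev" --psp VMW_PSP_RR
  esxcli storage nmp psp roundrobin deviceconfig set \
      --device "$dev" --type iops --iops 1
done
```

As the post says, this hits every LUN on the host, so trim the loop if you also have non-EMC storage attached.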
|
# ? Aug 15, 2013 16:31 |
|
Goon Matchmaker posted:1) Change the path selection plugin to round robin Changing paths every single IOP seems pretty aggressive. I would think that would introduce some heavy overhead, but I have never worked with EMC's stuff. The default IOPS before changing paths is 100, correct?
|
# ? Aug 15, 2013 16:55 |
|
Moey posted:Changing paths every single IOP seems pretty aggressive. I would think that would introduce some heavy overhead, but I have never worked with EMC's stuff. I actually think it is 1000
|
# ? Aug 15, 2013 16:56 |
|
Dilbert As gently caress posted:I actually think it is 1000 Maybe I'll setup a test datastore on one of our Nimbles and do some testing with changing this. We have not even began to really stress these things though. I did boot up 150ish VMs at once the other day, didn't have any issues. The unit was hitting around 9k IOPS while it was going on.
|
# ? Aug 15, 2013 17:09 |
|
I think the real thing setting it to 1 is trying to avoid is an IOPS storm, where VM's may grasp for XXX IOPS and not get properly distributed, and additionally to avoid VM failures in the event of a path-down issue, which may affect the stability of VM's.
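A toy shell model of what the IOPS knob changes — two paths, strictly in-order I/O, nothing like ESXi's real scheduler, just to show when the round-robin PSP rotates for a given `--iops` value:

```shell
# simulate LIMIT TOTAL: print which of two paths (0/1) each I/O uses when
# the round-robin policy switches paths after LIMIT I/Os on the current path.
simulate() {
  limit=$1; total=$2; path=0; count=0; out=""; i=0
  while [ "$i" -lt "$total" ]; do
    out="$out$path"
    count=$((count + 1))
    if [ "$count" -ge "$limit" ]; then
      path=$(( (path + 1) % 2 ))   # rotate to the other path
      count=0
    fi
    i=$((i + 1))
  done
  echo "$out"
}

simulate 1 8   # iops=1: 01010101 -- alternate on every I/O
simulate 4 8   # iops=4: 00001111 -- four I/Os per path before switching
```

With a high default like 1000, a small VM's entire burst can land on a single path, which is the "not properly distributed" behavior described above.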
|
# ? Aug 15, 2013 17:24 |
|
I do have a new site I will be deploying within the next month or so, so I can do some good stress testing there before putting it into production.
|
# ? Aug 15, 2013 17:33 |
|
OH gently caress YES, got a VCDX to confirm he is stopping by the ICM class I am helping teach at my local CC in the fall! Not sure what number, but any east coast goon VCDXs visiting the East Coast/Tidewater area from Sept to May, hit me up; if you could do a talk on some stuff I would buy you dinner and drinks.
|
# ? Aug 15, 2013 23:07 |
|
I'm a total idiot when it comes to virtualization, but I've read the OP and the first chapter of the Scott Lowe book. At work I have a domain controller that's run on a virtual server. Is there any reason I wouldn't want it to be run in High Availability mode if cost isn't an issue? I was told by some coworkers that running a virtual server on two physical machines simultaneously would cause problems. Is that actually a problem with HA?
|
# ? Aug 16, 2013 07:40 |
|
HA is what you want. What you probably don't want is fault tolerance.
|
# ? Aug 16, 2013 08:16 |
|
evil_bunnY posted:HA is what you want. What you probably don't want is fault tolerance. Yeah, that's what I thought. HA means there's two physical servers doing the work of one in some sort of lockstep thingy. FT is where the server starts itself back up, which is nice but not perfect.
|
# ? Aug 16, 2013 10:08 |
|
It's the other way around.
|
# ? Aug 16, 2013 12:12 |
|
Does anyone honestly use FT in production? I don't think I have ever heard of a case. The only thing I can think of for it being used would be some Win2K box running a mission critical app that no one understands anymore.
|
# ? Aug 16, 2013 12:29 |
|
Cronus posted:Does anyone honestly use FT in production? I don't think I have ever heard of a case. The only thing I can think of for it being used would be some Win2K box running a mission critical app that no one understands anymore. I don't think so. The 1 vCPU limit pretty much means any guest where FT would be really nice can't use it anyways.
|
# ? Aug 16, 2013 15:53 |
|
|
"Hey guys vSMP FT is coming any day now!!" -- VMware since 2008
|
# ? Aug 16, 2013 16:19 |