Noghri_ViR
Oct 19, 2001

Your party has died.
Please press [ENTER] to continue to the
Las Vegas Bowl

Dilbert As FUCK posted:

Well, VMworld is around the corner. I am interested in how they are going to present it this year.

Speaking of which, who is heading to VMworld? If it's not too many of us I would be willing to set up an evening of drinking at someplace cool like Bourbon and Branch.


Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Tequila25 posted:

I didn't even think about flash drives. Do you even need a hard drive in the host? I guess maybe for logs?

A lot of newer servers have an internal USB connection right on the motherboard. Pick up an 8GB USB stick for a few bucks and never worry about log files or hard drives.
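If you go the USB route, keep in mind ESXi writes its logs to a ramdisk, so they vanish on reboot; pointing syslog at a remote collector covers that. A minimal sketch, assuming a 5.x-era esxcli and a placeholder collector address:

code:
# ship host logs to a remote syslog collector (address is a placeholder)
esxcli system syslog config set --loghost='udp://192.168.1.50:514'
esxcli system syslog reload
# allow outbound syslog through the host firewall
esxcli network firewall ruleset set --ruleset-id syslog --enabled true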


Also, I once had a dev ESX 3 server go for over three months with a dead RAID controller. It booted and then the controller died, killing the local disk volume. It ran just fine until someone physically sat at the console and saw all the SCSI errors scrolling by.

Hard drives are overrated. :)

Oh yeah, but proper monitoring isn't...

Blame Pyrrhus
May 6, 2003

Me reaping: Well this fucking sucks. What the fuck.
Pillbug

Noghri_ViR posted:

Speaking of which, who is heading to VMworld? If it's not too many of us I would be willing to set up an evening of drinking at someplace cool like Bourbon and Branch.

I finally booked everything today. Was wondering how many ~*~goons~*~ would be attending.

Noghri_ViR
Oct 19, 2001

Your party has died.
Please press [ENTER] to continue to the
Las Vegas Bowl

Linux Nazi posted:

I finally booked everything today. Was wondering how many ~*~goons~*~ would be attending.

Booked today? Was there even a decent hotel left, or do you have to stay in San Jose or, even worse, Oakland?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Noghri_ViR posted:

Booked today? Was there even a decent hotel left, or do you have to stay in San Jose or, even worse, Oakland?

Most of Oakland isn't that bad.

Blame Pyrrhus
May 6, 2003

Me reaping: Well this fucking sucks. What the fuck.
Pillbug

Noghri_ViR posted:

Booked today? Was there even a decent hotel left, or do you have to stay in San Jose or, even worse, Oakland?

Idk, I booked the same hotel as one of my workmates who knows the area. He's probably not to be trusted though.

Noghri_ViR
Oct 19, 2001

Your party has died.
Please press [ENTER] to continue to the
Las Vegas Bowl

1000101 posted:

Most of Oakland isn't that bad.

It's the stabbing capital of the US. I think I'd rather stay in Detroit, where I know I have a better chance of getting shot than stabbed.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Senior Windows guy told me he was reading about 2012 SP1/R2, and one of the changes somehow makes it possible to set up a failover storage cluster under VMware but still have the machines be able to vMotion. I couldn't find anything after some Googling and he's probably lost the link he was reading; anybody know anything about this?

Cronus
Mar 9, 2003

Hello beautiful.
This...is gonna get gross.

FISHMANPET posted:

Senior Windows guy told me he was reading about 2012 SP1/R2, and one of the changes somehow makes it possible to set up a failover storage cluster under VMware but still have the machines be able to vMotion. I couldn't find anything after some Googling and he's probably lost the link he was reading; anybody know anything about this?

Pretty sure this is the vMotion+NLB issue. If you vMotion machines in an NLB cluster array (like Exchange CAS servers commonly are), there is an issue with ARP updates that causes them to lose connectivity to each other. Essentially you have to reboot the box, which may or may not be a big deal.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
It's not a network issue, it's a storage issue. The disk has to be attached with a software initiator in the guest OS, and the way that's required to be done locks the VM to a single host. It's a combination of the Windows storage requirements for clustered file services and the way VMware implements those requirements.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Quick networking question... I'm out of my depth here.

What's the best way to set up the iSCSI network for the following config? At this point I'm just looking for failover, no teaming or anything, although I wouldn't be opposed to it.

Bringing up a single ESXi host connected over GigE iSCSI to a VNXe SAN.

Config:

Server has 8 ethernet ports, vmnic0-7.

SAN is configured to use 2 ethernet ports on each storage processor (they're mirrored, so the config on eth2 and eth3 on SPA is the same on SPB, which only acts as failover).

Right now I have 8 total ethernet connections going to a non-routable, iSCSI-only VLAN.

From the SAN, eth2 is configured with the .20 IP and eth3 with .21: 4 ports total, 2 for Storage Processor A and 2 for SPB as failover.

From the host, I have vmnic 0, 1, 4, and 5 physically connected to the iSCSI VLAN. Right now only vmnic4 is configured, with a vmkernel port on vSwitch1. I can see the storage and everything looks happy with green check marks. Where I'm getting lost is how to bring the other three into the fold as standby/failover connections. Do I just add the additional adapters to the vSwitch? If I do that, the iSCSI initiator yells about something being non-compliant.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

skipdogg posted:

Quick networking question... I'm out of my depth here.

What's the best way to set up the iSCSI network for the following config? At this point I'm just looking for failover, no teaming or anything, although I wouldn't be opposed to it.

Bringing up a single ESXi host connected over GigE iSCSI to a VNXe SAN.

Config:

Server has 8 ethernet ports, vmnic0-7.

SAN is configured to use 2 ethernet ports on each storage processor (they're mirrored, so the config on eth2 and eth3 on SPA is the same on SPB, which only acts as failover).

Right now I have 8 total ethernet connections going to a non-routable, iSCSI-only VLAN.

From the SAN, eth2 is configured with the .20 IP and eth3 with .21: 4 ports total, 2 for Storage Processor A and 2 for SPB as failover.

From the host, I have vmnic 0, 1, 4, and 5 physically connected to the iSCSI VLAN. Right now only vmnic4 is configured, with a vmkernel port on vSwitch1. I can see the storage and everything looks happy with green check marks. Where I'm getting lost is how to bring the other three into the fold as standby/failover connections. Do I just add the additional adapters to the vSwitch? If I do that, the iSCSI initiator yells about something being non-compliant.

My lab is similar to your setup in that each of my hosts has four NICs that I want to set up using multiple pathways: 1 LAN NIC, 1 vMotion NIC, and two iSCSI NICs that I have configured for round-robin access to help distribute the load. Each connection type is on a dedicated VLAN, and the vMotion and iSCSI VLANs are non-routable.

Here's a pic of my current config from one of my hosts:



I have each nic in its own vSwitch.

On my iSCSI target I grant access to each NIC path and grant access to each volume group (or whatever your target calls them) so that each storage volume will then appear twice in vSphere.

Then in the iSCSI initiator properties you will add the iSCSI NICs you have previously identified.



I think the default setting is failover-only, but in my case I have configured the connections as round-robin: right-click on a storage volume and select Manage Paths.



In this case I have four connections to a volume group because my iSCSI target has two active/active heads. Two heads, times two paths to each head = four paths.



In your case, you will have two NICs on your SAN times four NICs on your ESX host = eight paths.
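If you'd rather script the port binding than click through it, here's a rough esxcli sketch; vmhba33, vmk1/vmk2, and the portal address are placeholders for your software iSCSI adapter, vmkernel ports, and SAN:

code:
# bind each iSCSI vmkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
# point the initiator at one of the SAN portals, then rescan for paths
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.0.0.20
esxcli storage core adapter rescan --adapter vmhba33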

Agrikk fucked around with this message at 18:05 on Aug 14, 2013

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Damn man, thanks for taking the time to reply with a very informative post. Much appreciated.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

skipdogg posted:

Damn man, thanks for taking the time to reply with a very informative post. Much appreciated.

Glad to help. Multipathing can be a little tricky and it took me a lot of trial and error to get it working, so I'm happy to help anyone else avoid that pain.

Also, I have not done any bonding or anything special on my switch. From what I've read you get the best performance if you let ESXi handle the load balancing/port grouping and avoid creating a LAG on your switch. All you need to do on the switch side is configure your VLANs and set the maximum packet size to 9000.
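If you do turn on jumbo frames later, the host side has to match the switch. A hedged sketch, assuming the 5.x esxcli namespaces and placeholder vSwitch/vmkernel names:

code:
# raise the MTU on the vSwitch carrying iSCSI, then on the vmkernel port
esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000
esxcli network ip interface set --interface-name vmk1 --mtu 9000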

Agrikk fucked around with this message at 18:08 on Aug 14, 2013

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

This is just a single host that will be hosting maybe 2 or 3 very low-load VMs, and obviously the setup isn't 24/7 production critical. I'll be happy if they can just fail over if anything goes wrong. I haven't even changed the MTU to 9000 or anything just yet; need to check with the guy who manages the switch. I scored some space on a 6513 with Sup720s and 6748 line cards, so switching shouldn't be a limitation.

Here's how it's configured right now. Anything else I should look for?


Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Any reason you are not going with multiple VMKs to a VSS? Not saying that won't work, just wondering.

skipdogg posted:

This is just a single host that will be hosting maybe 2 or 3 very low-load VMs, and obviously the setup isn't 24/7 production critical. I'll be happy if they can just fail over if anything goes wrong. I haven't even changed the MTU to 9000 or anything just yet; need to check with the guy who manages the switch. I scored some space on a 6513 with Sup720s and 6748 line cards, so switching shouldn't be a limitation.

Here's how it's configured right now. Anything else I should look for?




I wouldn't worry about setting the MTU to 9000 right away; make sure your environment is stable first with the default MTU.

If you have a VNXe you should have access to a document telling you best practices on how it should be set up.
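One quick way to sanity-check the storage path, with default or jumbo MTU, is vmkping with don't-fragment set; the SAN address here is a placeholder:

code:
# 1472 = 1500 minus IP/ICMP headers; -d forbids fragmentation
vmkping -d -s 1472 10.0.0.20
# after moving to jumbo frames, test with 8972 (9000 minus headers)
vmkping -d -s 8972 10.0.0.20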

Dilbert As FUCK fucked around with this message at 18:28 on Aug 14, 2013

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Uhhh... I have no idea what you just said. VMware class can't start soon enough...

edit:

The VNXe HA doc was pretty useful, but it didn't get into the actual ESXi networking options. Another thing that threw me for a loop was that I'm not using two separate iSCSI subnets, which the doc assumed you would be.

skipdogg fucked around with this message at 18:53 on Aug 14, 2013

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

skipdogg posted:

Uhhh... I have no idea what you just said. VMware class can't start soon enough...

Sorry, I was meaning to reply to the other guy.

You can put multiple iSCSI VMkernel ports (VMKs) on a virtual standard switch; I was wondering why he was doing 1:1 instead of multiple VMKs on one VSS backed by X NICs.


IIRC you should use Round Robin for VNXes; my new place is mostly 3PAR, so I may be wrong.

Dilbert As FUCK fucked around with this message at 18:51 on Aug 14, 2013

sanchez
Feb 26, 2003
I do it that way too, multiple vmkernels on one switch, with each vmkernel tied to its own physical NIC by marking all other NICs unused in the properties for each vmkernel.

I doubt it makes a difference; it just looks tidier.
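That override can be scripted too; a sketch with placeholder portgroup and uplink names, each iSCSI portgroup pinned to exactly one active uplink:

code:
# pin each iSCSI portgroup to a single uplink; the rest stay unused
esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-1 --active-uplinks vmnic4
esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-2 --active-uplinks vmnic5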

Thanks Ants
May 21, 2004

#essereFerrari


I found this (and the documents linked from it) very helpful when trying to get my head around multipathing when I first set up an iSCSI SAN for VMware.

http://jpaul.me/?p=413

Moey
Oct 22, 2010

I LIKE TO MOVE IT

sanchez posted:

I do it that way too, multiple vmkernels on one switch, with each vmkernel tied to its own physical NIC by marking all other NICs unused in the properties for each vmkernel.

I doubt it makes a difference; it just looks tidier.

Thirding this. I like things to look pretty. :)

My current setup is 2x10Gb running iSCSI/VM network/management, then 2x1Gb for vMotion. Still have two more onboard NICs if I ever need them, and room for another PCIe card or two.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Also gotta give a shout-out to KS for the vCloud class, it is AMAZING. The first day pretty much cleared up many of my misunderstandings (most of it was complete overthinking) of some of the vCloud stuff that had been my hang-up. The class is great; some of the people may have thought this was teaching VCP5:ICM, but yeah, amazing nonetheless.

Dilbert As FUCK fucked around with this message at 01:05 on Aug 15, 2013

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

sanchez posted:

I do it that way too, multiple vmkernels on one switch, with each vmkernel tied to its own physical NIC by marking all other NICs unused in the properties for each vmkernel.

I doubt it makes a difference; it just looks tidier.

"Tidier" is in the eye of the beholder. While I agree that a single vSwitch for all iSCSI adapters looks tidier, I prefer unique vswitches for each iSCSI connection so that I don't have unused connections in my configuration, making the configuration itself tidier IMO. This way I avoid the potential mistake of "Wait, why is this adapter unused in this vSwitch? I should turn it back on." when it is four in the morning and I'm feeling dingy after some issue that has kept me up all night.

But whatev'. I don't think it makes any performance difference at all.

KS
Jun 10, 2003
Outrageous Lumpwad

Moey posted:

Thirding this. I like things to look pretty. :)

My current setup is 2x10Gb running iSCSI/VM network/management, then 2x1Gb for vMotion. Still have two more onboard NICs if I ever need them, and room for another PCIe card or two.

If you do a lot of vMotion and have big hosts, get it onto 10 gig! It's a whole lot more pleasant: 256GB hosts empty out in under 30 seconds. If you're worried about link saturation, hopefully you're on Ent+ and can use NIOC.

I see so many incorrect vMotion configs! Stuff like vMotion traffic going over the VPC/ISL because VLANs and port groups aren't configured optimally. Just make sure you don't join the club.
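If you're moving vMotion onto the 10GbE uplinks, tagging the vmkernel port for vMotion can be scripted as well; a sketch assuming ESXi 5.1+ and a placeholder vmk:

code:
# enable vMotion on an existing vmkernel interface (vmk2 is a placeholder)
esxcli network ip interface tag add --interface-name vmk2 --tagname VMotion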

Dilbert As FUCK posted:

Also gotta give a shout out to KS for the vCloud class

Very glad to hear it worked out.

Anti_Social
Jan 1, 2007

My problem is you dancing all the time
After some Google searching, I only found one website that mentioned home-network desktop virtualization (http://www.cringely.com/2011/11/24/silence-is-golden/).

Here's my situation:
I have a pretty powerful desktop that I built a few months ago; I currently run Win7 and a Hackintosh off of it. My fiance is a Mac user but doesn't do a lot of heavy lifting with it (maybe some Illustrator and Photoshop, but mostly just RAM-intensive stuff). Unfortunately, I think it's dying, and I don't want to pay $$$ for a new iMac. I like to game, and also do some photo editing and occasionally video editing.

If I run a VMware server off of the desktop and use a virtual Windows 7 session ON the server machine, does that make it a feasible option for gaming? You can do that, right? Or do I need some sort of thin client instead? Could someone suggest a good resource for a project like this?

Moey
Oct 22, 2010

I LIKE TO MOVE IT
http://www.sysprobs.com/easily-run-mac-os-x-10-8-mountain-lion-retail-on-pc-with-vmware-image

VMware Workstation or VirtualBox (free) should accomplish this for you.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

Dilbert As FUCK posted:

IIRC you should use Round Robin for VNXes; my new place is mostly 3PAR, so I may be wrong.

Use round robin and change the number of IOPS before it switches paths to 1. I can't find the EMC whitepaper on this, but we're doing it here and it results in a fairly massive performance increase.

I have a little script I use to fix the default pathing stuff:

code:
#!/bin/bash
# For each host: set every EMC LUN (naa.600*) to round robin, switch paths
# after every single I/O, and make round robin the default PSP for the ALUA SATP.
for host in <list of hosts>;
do
    echo $host
    # set the path selection policy on each existing LUN to round robin
    ssh root@$host 'for i in `ls /vmfs/devices/disks/ | grep naa.600`; do esxcli storage nmp device set --device $i --psp VMW_PSP_RR; done'
    # switch to the next path after every single I/O instead of the default
    ssh root@$host 'for i in `ls /vmfs/devices/disks/ | grep naa.600`; do esxcli storage nmp psp roundrobin deviceconfig set -d $i --iops 1 --type iops; done'
    # make round robin the default PSP for anything claimed by VMW_SATP_ALUA
    ssh root@$host 'esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR'
done
This will:
1) Change the path selection plugin to round robin
2) Change the number of IOPS the round robin plugin will use before switching to another path to 1
3) Change the default PSP for the VMWare ALUA driver to round robin

It affects all LUNs so you may need to edit for taste.
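To spot-check that the changes took on a host, something like this works; the device ID is a placeholder:

code:
# confirm the PSP and the iops=1 knob on one LUN
esxcli storage nmp device list
esxcli storage nmp psp roundrobin deviceconfig get --device naa.600xxxxxxxxxxxxxxx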

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Goon Matchmaker posted:

1) Change the path selection plugin to round robin
2) Change the number of IOPS the round robin plugin will use before switching to another path to 1
3) Change the default PSP for the VMWare ALUA driver to round robin

It affects all LUNs so you may need to edit for taste.

Changing paths every single IO seems pretty aggressive. I would think that would introduce some heavy overhead, but I have never worked with EMC's stuff.

The default IOPS before changing paths is 100, correct?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Moey posted:

Changing paths every single IO seems pretty aggressive. I would think that would introduce some heavy overhead, but I have never worked with EMC's stuff.

The default IOPS before changing paths is 100, correct?

I actually think it is 1000.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Dilbert As FUCK posted:

I actually think it is 1000.

Maybe I'll set up a test datastore on one of our Nimbles and do some testing with changing this. We haven't even begun to really stress these things, though.

I did boot up 150-ish VMs at once the other day and didn't have any issues. The unit was hitting around 9k IOPS while it was going on.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I think the real thing setting it to 1 is trying to avoid is an IOPS storm, where VMs grasp for XXX IOPS and don't get properly distributed, and additionally to avoid VM failures in the event of a path-down issue, which may affect the stability of VMs.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I do have a new site I will be deploying within the next month or so, so I can do some good stress testing there before putting it into production.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
OH FUCK YES, got a VCDX to confirm he is stopping by the ICM class I am helping teach at my local CC in the fall!


Not sure what his number is, but any goon VCDXs visiting the East Coast/Tidewater area from Sept to May, hit me up; if you could do a talk on some stuff I would buy you dinner and drinks.

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
I'm a total idiot when it comes to virtualization, but I've read the OP and the first chapter of the Scott Lowe book.

At work I have a domain controller that runs on a virtual server.
Is there any reason I wouldn't want it to run in High Availability mode if cost isn't an issue?
I was told by some coworkers that running a virtual server on two physical machines simultaneously would cause problems. Is that actually a problem with HA?

evil_bunnY
Apr 2, 2003

HA is what you want. What you probably don't want is fault tolerance.

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin

evil_bunnY posted:

HA is what you want. What you probably don't want is fault tolerance.

Yeah, that's what I thought. HA means there's two physical servers doing the work of one in some sort of lockstep thingy. FT is where the server starts itself back up, which is nice but not perfect.

evil_bunnY
Apr 2, 2003

It's the other way around.

Cronus
Mar 9, 2003

Hello beautiful.
This...is gonna get gross.
Does anyone honestly use FT in production? I don't think I've ever heard of a case. The only thing I can think of for it being used would be some Win2K box running a mission-critical app that no one understands anymore.

Mierdaan
Sep 14, 2004

Pillbug

Cronus posted:

Does anyone honestly use FT in production? I don't think I have ever heard of a case. The only thing I can think of for it being used would be some Win2K box running a mission critical app that no one understands anymore.

I don't think so. The 1 vCPU limit pretty much means any guest where FT would be really nice can't use it anyway.


three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
"Hey guys vSMP FT is coming any day now!!"
-- VMware since 2008
