evil_bunnY
Apr 2, 2003

Not unless you're patching Hyper-V itself. The parent partition lives on top of the hypervisor.

Syano
Jul 13, 2005

MC Fruit Stripe posted:

In Hyper-V do I want to shut down the VMs when patching the physical server?

You should enter maintenance mode, so yes. If it's a single Hyper-V host, that means shutting them down. If it's a cluster, just migrate the workloads to another host.

^^The poster above me is technically correct, but best practice is to go into maintenance mode.
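
On a 2012 cluster the drain is scriptable too; a rough sketch (the node name is made up, and the -Drain switch only exists in the 2012 version of Suspend-ClusterNode):

code:
Import-Module FailoverClusters
# Cluster case: live-migrate everything off the node before patching it
Suspend-ClusterNode -Name 'HV-HOST1' -Drain
# ...patch and reboot the host...
# Bring the node back into the cluster and optionally pull its roles home
Resume-ClusterNode -Name 'HV-HOST1' -Failback Immediate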

MC Fruit Stripe
Nov 26, 2002

around and around we go
The yes and the no of it was basically the same conversation I had with my coworkers, where we all stood around and went, uh I think you should but I think it doesn't matter but I think it's best practice but I think it's irrelevant.

I think we will, just to be safe.

Syano
Jul 13, 2005

MC Fruit Stripe posted:

The yes and the no of it was basically the same conversation I had with my coworkers, where we all stood around and went, uh I think you should but I think it doesn't matter but I think it's best practice but I think it's irrelevant.

I think we will, just to be safe.

The really weird part is that even though the host Windows installation basically jumps up into 'dom0', so to speak, there's a small chance that whatever patch you install updates a service on both the host install and the hypervisor and causes your machines to shut down anyway. There's no way to know, really, unless you comb through the update notes to see what is actually being updated. The more common scenario, though, is that the person who wrote the update flagged it to require a restart, or it actually does require one, meaning that even though your host OS is riding above the hypervisor, you're still rebooting your hypervisor. Of course the best way to solve all these problems, provided you are lucky enough to be in that position, is to install just the Hyper-V hypervisor (Hyper-V Server) or the Server Core version of Windows.
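
If you just want a quick read on whether the host is going to demand a reboot after patching, something like this works as a rough check (it only looks at the two well-known pending-reboot registry flags, so it's a heuristic, not an official API):

code:
# Pending-reboot check on the Hyper-V host (heuristic: CBS and Windows Update flags only)
$cbs = Test-Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending'
$wu  = Test-Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired'
if ($cbs -or $wu) { 'Host wants a reboot - save or migrate the VMs first' } else { 'No reboot pending' }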

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The Hyper-V virtual switch frequently failing, is that a common issue? Once in a while (anywhere from once an hour to once every six) network communication just cuts out completely for one to two minutes. Nothing gets through between the host, the guests, and the external network.

evil_bunnY
Apr 2, 2003

Combat Pretzel posted:

The Hyper-V virtual switch frequently failing, is that a common issue?
Anything failing on non-R2 Hyper-V is par for the course.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The host is Windows 8, aka Windows Server 2012. You'd think that something like this wouldn't turn up as a regression.

Syano
Jul 13, 2005
It shouldn't be. I've put my hands on more Hyper-V setups than I care to remember and I have never seen this. Are you sure it's the virtual switch that's eating it? Are you doing any sort of NIC teaming on the host?

Nebulis01
Dec 30, 2003
Technical Support Ninny

Combat Pretzel posted:

The Hyper-V virtual switch frequently failing, is that a common issue? Once in a while (anywhere from once an hour to once every six) network communication just cuts out completely for one to two minutes. Nothing gets through between the host, the guests, and the external network.

I had this issue happen to us on a few older Dell PowerEdge 2970 machines that used the onboard Broadcom NICs; a firmware and driver update resolved it.

MC Fruit Stripe
Nov 26, 2002

around and around we go

Combat Pretzel posted:

The Hyper-V virtual switch frequently failing, is that a common issue? Once in a while (anywhere from once an hour to once every six) network communication just cuts out completely for one to two minutes. Nothing gets through between the host, the guests, and the external network.
What in the hell? We just had this exact issue last week, and I wish I was kidding.

My coworker saved the day. There's actually a Microsoft hotfix for it, which we had applied and which hadn't fixed it on its own, but digging a bit more into it, it looks like you then need to go to each VM on the box, reinsert the integration services disk, and let it update the driver. I just shot him an email asking for the link to the article we found that discusses it. It fixed our issue, that's for sure.

In fact, this is the same issue, and the reason I was asking earlier on this page whether we needed to shut down the VMs to patch the box itself. We've come full circle.

e: But then I read more and you're on 8/2012, oh bleh bleh bleh. Ours is 2008 - I'll get you the link but I may be heading in the wrong direction. This post gets worse every time I edit it.

vvv Freakin' seriously. I don't want to play "my platform is better than your platform," but half of our environments are vSphere, the other half Hyper-V - one amazes me, the other just annoys me.

e: Here is the hotfix we applied: http://support.microsoft.com/kb/974909 - there was an article he found that discusses reinserting the integration services disk, but my Google abilities are failing me at the moment.
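
For anyone who has to do the "reinsert the integration services disk" step across a pile of VMs and happens to be on 2012 where the inbox Hyper-V module exists, a rough sketch is below (it assumes each VM already has a single DVD drive, and you still run the update inside each guest; on 2008 you're clicking through Hyper-V Manager instead):

code:
# Mount the host's integration services ISO in every running VM's DVD drive
Get-VM | Where-Object { $_.State -eq 'Running' } | ForEach-Object {
    Set-VMDvdDrive -VMName $_.Name -Path 'C:\Windows\System32\vmguest.iso'
}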

MC Fruit Stripe fucked around with this message at 17:58 on Mar 11, 2013

evil_bunnY
Apr 2, 2003

MC Fruit Stripe posted:

In fact, this is the same issue, and the reason I was asking earlier on this page whether we needed to shut down the VMs to patch the box itself. We've come full circle.
Oh god I'm so glad I don't have to deal with Hyper-V anymore.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I guess I'll be giving VMware Workstation another good look. Hyper-V was handy because it's integrated and I need to keep it running for the Windows Phone 8 emulator.

--edit: That said, the only third party OS VM with integration components is the Linux one, which runs on a 3.5 kernel. Maybe it ain't patched for this. The FreeBSD ones don't have them yet.

Combat Pretzel fucked around with this message at 18:12 on Mar 11, 2013

MC Fruit Stripe
Nov 26, 2002

around and around we go
To follow up, though, here's the article about the issue. I am not sure it applies to Windows 8 / Server 2012, but you can chalk it up to knowledge gained.

http://blog.compnology.com/2011/09/netvsc-error-with-hyper-v-guest.html

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Hyper-V certainly isn't the only hypervisor to have had networking issues.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2019944
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2006277

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Well Mysoginist's thread fell off the cliff of the forums so I guess I will post it here.

The school I help out at was looking for a way to reinvent the lab environment, since the current setup has a number of complications: limited IP addresses; someone always assigning a vmkernel port the same IP as the NetApp box (crashing everything); we couldn't really offer it online with the setup we have; and resetting the environment left residue from previous classes.

To account for this, I talked with a few of the teachers I work with and created a nice little vApp that is fully deployable and addresses almost all the issues (the rest we can compensate for with some minor design changes). It's very similar to AutoLab, but it doesn't use as many PowerShell/batch scripts to create the environment.



[lab vApp diagram omitted] Not shown: two domain controllers, a SQL 2008 R2 server, and the vCenter Server, which sit outside of the vApp for management of the physical hosts and clusters.

I am trying to keep simplicity in mind here while still covering all the course objectives designated by the blueprint and VMware, which the vApp does in its current state; however, I think there are improvements that can be made.

One of the problems I am having is how to integrate this with VMware View while maintaining a level of simplicity. Basically, I want to put a connection server and a security server into the environment and have them link up to the jump box within the vApp for online availability. I realize I could probably just install the agent on the desktops and push the login permissions down every 8-16 weeks; the classes accommodate 25 students at a time, so up to 75 students isn't terrible, but I would rather not have to. I could just do linked clones and refresh the desktop image every 8-16 weeks; the only drawback is that this vApp will most likely be redeployed every semester and the used labs deleted, which means I would still need to go and add the clones back to the vApp's networks.

The more I think about it, I could probably just write some PowerCLI scripts to deploy the vApp (removing the jump box) while adding linked clones to the vApp's network, and use linked clones for all of this. However, I would really like to keep it as simple as possible without using too many scripts to do the work, but the simpler and flatter I make it, the more manual work I do... Or I could be entirely overthinking this whole thing and missing the easiest solution.
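
As a rough PowerCLI sketch of that approach (every name here - vcenter01, Lab-Master, Lab-Base, Lab, Lab-DS, vApp-Lab-Net - is a placeholder, not the real environment):

code:
# Spin up linked clones off a master VM's snapshot and re-attach them
# to the vApp's internal network after a redeploy
Connect-VIServer vcenter01
$master = Get-VM 'Lab-Master'
$snap   = Get-Snapshot -VM $master -Name 'Lab-Base'
1..25 | ForEach-Object {
    $clone = New-VM -Name "LabDesktop$_" -VM $master -LinkedClone -ReferenceSnapshot $snap `
                    -ResourcePool (Get-ResourcePool 'Lab') -Datastore (Get-Datastore 'Lab-DS')
    Get-NetworkAdapter -VM $clone | Set-NetworkAdapter -NetworkName 'vApp-Lab-Net' -Confirm:$false
}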

So yeah, I am trying to find a nice middle ground, but I'm open to suggestions.

Dilbert As FUCK fucked around with this message at 16:01 on Mar 12, 2013

Docjowles
Apr 9, 2009

Corvettefisher posted:

someone always assigning a vmkernel port the same IP as the NetApp box (crashing everything)

Haha, this is great. I actually kind of like this as an opportunity to learn by example. "Grats, you just took down the entire production cluster and your company has ground to a halt. NOW do you see why storage should get its own network, shitlords?"

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Docjowles posted:

Haha, this is great. I actually kind of like this as an opportunity to learn by example. "Grats, you just took down the entire production cluster and your company has ground to a halt. NOW do you see why storage should get its own network, shitlords?"

Yeah, the switches we currently have are unmanaged switches... It always sucks when class ends early. By utilizing the router and virtual NAS we localize it to a vApp and can fix it relatively easily.

My teacher likes to call it a "resume generating event"

Dilbert As FUCK fucked around with this message at 16:17 on Mar 12, 2013

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Corvettefisher posted:

Well Mysoginist's thread fell off the cliff of the forums so I guess I will post it here.
Nobody here works on cool poo poo outside the Cavern of COBOL :(

My most recent project belongs there as well, really.

Crossbar
Jun 16, 2002
Chronic Lurker
I need to P2V a domain controller onto a Hyper-V 2012 host. Is there a way to do this with free or cheap tools?

I'm assuming the best way to do this is offline, but I'm having trouble finding a way to do it that doesn't involve VMM.

Syano
Jul 13, 2005

Crossbar posted:

I need to P2V a domain controller onto a Hyper-V 2012 host. Is there a way to do this with free or cheap tools?

I'm assuming the best way to do this is offline, but I'm having trouble finding a way to do it that doesn't involve VMM.

Don't do it. Just build a second virtual DC and transfer your FSMO roles. It will probably actually be faster than a P2V.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
On the bright side, you'll be able to P2V your domain controllers when they're Windows 2012!

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Crossbar posted:

I need to P2V a domain controller onto a Hyper-V 2012 host. Is there a way to do this with free or cheap tools?

I'm assuming the best way to do this is offline, but I'm having trouble finding a way to do it that doesn't involve VMM.


The only way I know of is using a legacy cold-clone boot ISO from VMware to do the P2V, but for it to work you have to power off ALL the domain controllers, bring them all up together, and NEVER EVER touch the physical domain controllers again.


I can't say I know of any situation where a P2V of a DC is recommended. Just do a normal domain upgrade to a new server.

evil_bunnY
Apr 2, 2003

If your hosts/storage are decent it'll be faster too.

whaam
Mar 18, 2008
I'm hearing mixed things on this, but is it possible to attain 4Gb of bandwidth between shared storage and a host using four 1Gb NICs and LACP on stackable switches? My vendor says yes; some forums say no.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

whaam posted:

I'm hearing mixed things on this, but is it possible to attain 4Gb of bandwidth between shared storage and a host using four 1Gb NICs and LACP on stackable switches? My vendor says yes; some forums say no.
It is, but not in a single stream. With round-robin MPIO and multiple iSCSI VMKs you can get ~4Gbps, though obviously there will be some loss. If you're using NFS you will not get above 1Gbps.
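
Rough PowerCLI sketch of the MPIO piece (vCenter and host names are placeholders, and you'd also bind one vmkernel port per NIC to the software iSCSI adapter, which isn't shown):

code:
# Flip the path selection policy to round robin for the disks on one host
# (in practice you'd filter this down to just the array's LUNs)
Connect-VIServer vcenter01
Get-ScsiLun -VmHost (Get-VMHost 'esx01') -LunType disk |
    Set-ScsiLun -MultipathPolicy RoundRobin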

whaam
Mar 18, 2008

adorai posted:

It is, but not in a single stream. With round-robin MPIO and multiple iSCSI VMKs you can get ~4Gbps, though obviously there will be some loss. If you're using NFS you will not get above 1Gbps.

We are using NFS, so it looks like we got some bad info from our vendor. That's a raw deal, because I doubt we will get the IO we need on 1Gbit.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

whaam posted:

We are using NFS, so it looks like we got some bad info from our vendor. That's a raw deal, because I doubt we will get the IO we need on 1Gbit.
Do you need more than 1Gbps on a single datastore? You can always assign multiple IPs to your storage; that way four different VMs could theoretically each get 1Gbps on NFS.

whaam
Mar 18, 2008

adorai posted:

Do you need more than 1Gbps on a single datastore? You can always assign multiple IPs to your storage; that way four different VMs could theoretically each get 1Gbps on NFS.

We have an IO-heavy SQL application we are installing that, according to the software company, needs more than that kind of speed.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

whaam posted:

We have an IO-heavy SQL application we are installing that, according to the software company, needs more than that kind of speed.
Can you use iSCSI inside the guest? That's how we get around this issue.
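
For a Windows guest, the in-guest setup is roughly this on Server 2012 (portal IPs are placeholders, and the array side still needs multiple interfaces/LUNs to actually spread the load):

code:
# In-guest iSCSI with MPIO on a Server 2012 guest
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI   # claim iSCSI disks for MPIO
Set-Service msiscsi -StartupType Automatic
Start-Service msiscsi
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.10
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.11
Get-IscsiTarget | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true -IsMultipathEnabled $true
}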

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Also keep in mind that if you put your app server and the SQL server on the same host using VMXNET3, they can talk at 10Gb/s. Granted, it will still have to go to disk for a good amount of things, but anything in SQL's memory will be very fast. It will also help lighten the burden on your network.
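
Rough PowerCLI sketch of the adapter swap (VM names are placeholders; the VMs generally need to be powered off, and the guests need VMware Tools for the vmxnet3 driver):

code:
# Swap the network adapters on the SQL and app VMs to VMXNET3
Get-VM 'SQL01','APP01' | Get-NetworkAdapter |
    Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false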

ragzilla
Sep 9, 2005
don't ask me, i only work here


adorai posted:

Can you use iSCSI inside the guest? That's how we get around this issue.

You'd still get limited by the src/dst load balancing schemes unless the storage had multiple IPs and you had multiple LUNs.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

ragzilla posted:

You'd still get limited by the src/dst load balancing schemes unless the storage had multiple IPs and you had multiple LUNs.
Well, yes. In the described scenario, you can put four interfaces on the storage backend and get 4Gbps to your backend storage with iSCSI and MPIO. You cannot say the same with NFS-backed storage; no matter what you do, you will still only get 1Gbps from that guest OS to its database.

edit: I suppose you could create multiple VMDKs on multiple datastores mapped to different IPs and RAID them in the guest to get >1Gbps via NFS, but that seems a little over the top.

ragzilla
Sep 9, 2005
don't ask me, i only work here


adorai posted:

Well, yes. In the described scenario, you can put four interfaces on the storage backend and get 4Gbps to your backend storage with iSCSI and MPIO. You cannot say the same with NFS-backed storage; no matter what you do, you will still only get 1Gbps from that guest OS to its database.

edit: I suppose you could create multiple VMDKs on multiple datastores mapped to different IPs and RAID them in the guest to get >1Gbps via NFS, but that seems a little over the top.

Moral of this story, just run 10GbE.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

ragzilla posted:

Moral of this story, just run 10GbE.
No doubt. Of course, in two years we'll be snickering at the poor bastards who don't have 40GbE.

whaam
Mar 18, 2008

ragzilla posted:

Moral of this story, just run 10GbE.

I would have loved to; unfortunately, we ran out of budget for the project. In the end we should have done the research, but the sales engineers from our usually rock-solid vendor assured us that 4x1Gb was possible over NFS, which is clearly incorrect; I think they were thinking of MPIO and iSCSI. We are scrambling to buy 10Gb gear now, as there aren't really any other options, aside from maybe using iSCSI instead, but NetApp really runs best on NFS and I think a lot of their features don't work over iSCSI.

The idea of running a VMDK RAID across four different datastores is interesting, but it sounds like a lot of poo poo and would likely cause massive headaches when moving to different hosts in the event of a loss.

I think we will get two 10Gb switches, two 10Gb NICs (one for the host where SQL lives and one for a second host in case the SQL guest needs to move), and two 10Gb modules for the NetApp controllers. The sad thing is our environment is so small that, aside from this one IO-heavy SQL server, all this infrastructure is overkill. In hindsight it may have been better to build the SQL server as a physical box on RAID 10 or something and virtualize the application servers.

evil_bunnY
Apr 2, 2003

You want two NICs per host at the least (one to each storage switch).

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

whaam posted:

I would have loved to; unfortunately, we ran out of budget for the project. In the end we should have done the research, but the sales engineers from our usually rock-solid vendor assured us that 4x1Gb was possible over NFS, which is clearly incorrect; I think they were thinking of MPIO and iSCSI. We are scrambling to buy 10Gb gear now, as there aren't really any other options, aside from maybe using iSCSI instead, but NetApp really runs best on NFS and I think a lot of their features don't work over iSCSI.

The idea of running a VMDK RAID across four different datastores is interesting, but it sounds like a lot of poo poo and would likely cause massive headaches when moving to different hosts in the event of a loss.

I think we will get two 10Gb switches, two 10Gb NICs (one for the host where SQL lives and one for a second host in case the SQL guest needs to move), and two 10Gb modules for the NetApp controllers. The sad thing is our environment is so small that, aside from this one IO-heavy SQL server, all this infrastructure is overkill. In hindsight it may have been better to build the SQL server as a physical box on RAID 10 or something and virtualize the application servers.

Yeah, you should be getting two dual-port 10Gb NICs and two 10Gb switches going to the 10Gb ports on your NetApp device.

Remember to use a VM affinity rule to keep that app server and SQL server together, as well as VMXNET3.
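
If DRS is on, the affinity rule is roughly a one-liner in PowerCLI (cluster and VM names are placeholders):

code:
# Keep the app and SQL VMs on the same host so vmxnet3 traffic stays in memory
New-DrsRule -Cluster (Get-Cluster 'Prod') -Name 'Keep-App-With-SQL' `
    -KeepTogether $true -VM (Get-VM 'APP01','SQL01')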

Also, just a question: have you done any load simulations of the app server and SQL server? Some companies love, love, love to completely over-spec their product's requirements, when in production it will never come anywhere close to using them.

Dilbert As FUCK fucked around with this message at 15:44 on Mar 14, 2013

Syano
Jul 13, 2005

whaam posted:

I would have loved to; unfortunately, we ran out of budget for the project. In the end we should have done the research, but the sales engineers from our usually rock-solid vendor assured us that 4x1Gb was possible over NFS, which is clearly incorrect; I think they were thinking of MPIO and iSCSI. We are scrambling to buy 10Gb gear now, as there aren't really any other options, aside from maybe using iSCSI instead, but NetApp really runs best on NFS and I think a lot of their features don't work over iSCSI.

The idea of running a VMDK RAID across four different datastores is interesting, but it sounds like a lot of poo poo and would likely cause massive headaches when moving to different hosts in the event of a loss.

I think we will get two 10Gb switches, two 10Gb NICs (one for the host where SQL lives and one for a second host in case the SQL guest needs to move), and two 10Gb modules for the NetApp controllers. The sad thing is our environment is so small that, aside from this one IO-heavy SQL server, all this infrastructure is overkill. In hindsight it may have been better to build the SQL server as a physical box on RAID 10 or something and virtualize the application servers.

Have you pulled any hard IO numbers from your physical environment to see if you even need 10GbE? Sometimes app vendors scream about wanting this and that in a virtual environment to ensure their app gets enough horsepower, and a lot of the time it's total overkill. I mean, you very well could need it, but before I broke the budget getting it I would run at least some basic perfmon captures.
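
Something like this is the kind of basic capture I mean, run on the physical SQL box over a busy window (the counters and output path are just examples):

code:
# Sample disk throughput and IOPS every 5 seconds for an hour, then open the .blg in perfmon
$counters = '\PhysicalDisk(_Total)\Disk Bytes/sec',
            '\PhysicalDisk(_Total)\Disk Transfers/sec'
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 720 |
    Export-Counter -Path 'C:\temp\sql_io.blg'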

Kachunkachunk
Jun 6, 2011
Indeed you want to pull some good I/O numbers before you spend the money on the move to 10Gb, yeah.

I found that dual-port 10Gb CNAs themselves are fairly affordable now, but I have yet to find an affordable 10Gb switch for prosumer or home lab setups... let alone transceivers and cables at a good price. I just run direct-connect between two nodes for now, which was fairly cheap in the end. I think it was $300 CAD for two Brocade dual-port 10Gb CNAs and two direct-connect active twinax Cisco cables.

evil_bunnY
Apr 2, 2003

Kachunkachunk posted:

I found that dual-port 10Gb CNAs themselves are fairly affordable now, but I have yet to find an affordable 10Gb switch for prosumer or home lab setups... let alone transceivers and cables at a good price.
You can twinax to the switch.
