|
Not unless you're patching Hyper-V itself. The parent partition lives on top of the hypervisor.
|
# ? Mar 8, 2013 20:20 |
|
MC Fruit Stripe posted:In Hyper-V do I want to shut down the VMs when patching the physical server? You should enter maintenance mode, so yes. If it's a single Hyper-V host, that means shutting them down. If it's a cluster, just migrate the workloads to another host. ^^Poster above me is technically correct, but best practice is to go maintenance mode
|
# ? Mar 8, 2013 20:24 |
|
The yes and the no of it was basically the same conversation I had with my coworkers, where we all stood around and went, uh I think you should but I think it doesn't matter but I think it's best practice but I think it's irrelevant. I think we will, just to be safe.
|
# ? Mar 8, 2013 20:37 |
|
MC Fruit Stripe posted:The yes and the no of it was basically the same conversation I had with my coworkers, where we all stood around and went, uh I think you should but I think it doesn't matter but I think it's best practice but I think it's irrelevant. The really weird part is that even though the host Windows installation basically jumps up into 'dom0' so to speak, there's a small chance that whatever patch you install is going to update a service on the host install and on the hypervisor and cause your machines to shut down anyway, or something. There's no way to know, really, unless you comb through the update notes to see what is actually updating. The more common scenario, though, is that the person who wrote the update flagged it to require a restart, or it actually does require a restart, meaning even though your host OS is riding above the hypervisor, you're still rebooting your hypervisor. Of course the best way to solve all these problems, provided you are lucky enough to be in this position, is to install just the Hyper-V hypervisor or the Server Core version of Windows.
|
# ? Mar 8, 2013 20:44 |
|
The Hyper-V virtual switch frequently failing, is that a common issue? Once in a while (varying between once an hour to once every six) network communication just cuts out completely for 1-2 minutes. Nothing goes between the host, the guests and the external network.
|
# ? Mar 11, 2013 10:25 |
|
Combat Pretzel posted:The Hyper-V virtual switch frequently failing, is that a common issue?
|
# ? Mar 11, 2013 11:06 |
|
The host is Windows 8, aka Windows Server 2012. You'd think that something like this wouldn't turn up as a regression.
|
# ? Mar 11, 2013 13:08 |
|
It shouldn't be. I've put my hands on more Hyper-V setups than I care to remember and I have never seen this. Are you sure it's the virtual switch that's eating it? Are you doing any sort of NIC teaming on the host?
|
# ? Mar 11, 2013 13:41 |
|
Combat Pretzel posted:The Hyper-V virtual switch frequently failing, is that a common issue? Once in a while (varying between once an hour to once every six) network communication just cuts out completely for 1-2 minutes. Nothing goes between the host, the guests and the external network. We had this issue on a few older Dell PowerEdge 2970 machines that used the onboard Broadcom NICs; a firmware and driver update resolved it.
|
# ? Mar 11, 2013 17:21 |
|
Combat Pretzel posted:The Hyper-V virtual switch frequently failing, is that a common issue? Once in a while (varying between once an hour to once every six) network communication just cuts out completely for 1-2 minutes. Nothing goes between the host, the guests and the external network. My coworker saved the day. There's actually a Microsoft hotfix for it, which we had applied and which hadn't fixed it, but digging a bit more into it, it looks like you then need to go to each VM on the box, reinsert the Integration Services disk, and let it update the driver. I just shot him an email asking for the link to the article we found which discusses it. It fixed our issue, that's for sure. In fact, this is the same issue and reason that I was asking if we needed to shut down the VMs to patch the box itself earlier on this page. We've come full circle. e: But then I read more and you're on 8/2012, oh bleh bleh bleh. Ours is 2008 - I'll get you the link, but I may be heading in the wrong direction. This post gets worse every time I edit it. vvv Freakin' seriously. I don't want to play my-platform-is-better-than-your-platform, but half of our environments are vSphere, the other half Hyper-V - one amazes me, the other just annoys me. e: Here is the hotfix we applied: http://support.microsoft.com/kb/974909 - there was an article he found which discusses reinserting the Integration Services disk, but my google abilities are failing me at the moment. MC Fruit Stripe fucked around with this message at 17:58 on Mar 11, 2013 |
# ? Mar 11, 2013 17:45 |
|
MC Fruit Stripe posted:In fact, this is the same issue and reason that I was asking if we needed to shut down the VMs to patch the box itself earlier on this page. We've come full circle.
|
# ? Mar 11, 2013 17:47 |
|
I guess I'll be giving VMware Workstation another good look. Hyper-V was handy because it's integrated and I need to keep it running for the Windows Phone 8 emulator. --edit: That said, the only third party OS VM with integration components is the Linux one, which runs on a 3.5 kernel. Maybe it ain't patched for this. The FreeBSD ones don't have them yet. Combat Pretzel fucked around with this message at 18:12 on Mar 11, 2013 |
# ? Mar 11, 2013 18:08 |
|
To follow up though, here's the article about the issue. I'm not sure this applies to Windows 8 / Server 2012, but you can chalk it up to knowledge gained. http://blog.compnology.com/2011/09/netvsc-error-with-hyper-v-guest.html
|
# ? Mar 11, 2013 18:33 |
|
Hyper-V certainly isn't the only hypervisor to have networking issues before. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2019944 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2006277
|
# ? Mar 12, 2013 14:22 |
|
Well Mysoginist's thread fell off the cliff of the forums, so I guess I will post it here. The school I help out at was looking for a way to reinvent the lab environment, as the current setup has many complications: first, limited IP addresses; second, someone always puts a VMkernel on the same IP as the NetApp box (crashing everything); third, we couldn't really offer it online with the setup we have; and last, resetting the environment left residue from previous classes. To account for this I talked with a few of the teachers I work with and created a nice little vApp that is fully deployable and addresses almost all the issues (some we can compensate for with minor design changes). It's very similar to AutoLab, but it really doesn't use as many PowerShell/batch scripts to create the environment. Not shown: 2 domain controllers, SQL 2008 R2 server, and vCenter server, outside of the vApp for management of physical hosts and clusters. I am trying to keep simplicity in mind here and still cover all the course objectives as designated by the blueprint and VMware, which the vApp does in its current state, though I think there are improvements to be made. One of the problems I am having is how to integrate this with VMware View while maintaining a level of simplicity. Basically, I want to put a Connection and Security Server into the environment and have it link up to the jump box within the vApp for online availability. I realize I could probably just install the agent on desktops and push the login permissions down every 8-16 weeks; the classes have accommodation for 25 students at a time, so up to 75 students isn't terrible, but I would rather not have to. I could just do linked clones and refresh the desktop image every 8-16 weeks; the only drawback is that this vApp will most likely be redeployed every semester and the used labs deleted, which means I would still need to go and add them back to the vApp's networks.
The more I think about it, I probably could just write some PowerCLI scripts to deploy the vApp (removing the jump box) while adding linked clones to the vApp's network, and use linked clones for all this. However, I would really like to keep it as simple as possible without utilizing too many scripts to do the work, but the simpler and flatter I make it, the more manual work I do... Or I could entirely be overthinking this whole thing and missing the easiest solution. So yeah, I am trying to find a nice middle ground but am open to suggestions. Dilbert As FUCK fucked around with this message at 16:01 on Mar 12, 2013 |
# ? Mar 12, 2013 15:55 |
|
Corvettefisher posted:secondly someone always puts a vmkernel to the same IP as the NetApp box(crashing everything) Haha, this is great. I actually kind of like this as an opportunity to learn by example. "Grats, you just took down the entire production cluster and your company has ground to a halt. NOW do you see why storage should get its own network, shitlords?"
|
# ? Mar 12, 2013 16:09 |
|
Docjowles posted:Haha, this is great. I actually kind of like this as an opportunity to learn by example. "Grats, you just took down the entire production cluster and your company has ground to a halt. NOW do you see why storage should get its own network, shitlords?" Yeah, the switches we currently have are unmanaged switches... It always sucks when class ends early. Utilizing the router and virtual NAS, we localize it to a vApp and can fix it relatively easily. My teacher likes to call it a "resume generating event" Dilbert As FUCK fucked around with this message at 16:17 on Mar 12, 2013 |
# ? Mar 12, 2013 16:12 |
|
Corvettefisher posted:Well Mysoginist's thread fell off the cliff of the forums so I guess I will post it here. My most recent project belongs there as well, really.
|
# ? Mar 12, 2013 16:33 |
|
I need to P2V a domain controller onto a Hyper-V 2012 host. Is there a way to do this with free or cheap tools? I'm assuming the best way to do this is offline, but I'm having trouble finding a way to do it that doesn't involve VMM.
|
# ? Mar 13, 2013 22:08 |
|
Crossbar posted:I need to P2V a domain controller onto a hyper-v 2012 host. Is there a way to do this with free or cheap tools? Don't do it. Just build a second virtual DC and transfer your FSMO roles. It will probably actually be faster than a P2V.
|
# ? Mar 13, 2013 22:27 |
|
On the bright side, you'll be able to P2V your domain controllers when they're Windows 2012!
|
# ? Mar 13, 2013 23:06 |
|
Crossbar posted:I need to P2V a domain controller onto a hyper-v 2012 host. Is there a way to do this with free or cheap tools? The only way I know is using a legacy cold boot ISO from VMware and P2V, but for it to work you have to power off ALL domain controllers, bring them up all together, and NEVER EVER touch the physical domain controllers again. I can't say I know of any time it is recommended to do a P2V of a DC. Just do a normal domain upgrade to a new server.
|
# ? Mar 13, 2013 23:11 |
|
If your hosts/storage are decent it'll be faster too.
|
# ? Mar 13, 2013 23:18 |
|
Hearing mixed things on this, but is it possible to attain 4Gbps of bandwidth between shared storage and a host using four 1Gb NICs and LACP on stackable switches? My vendor says yes, some forums say no.
|
# ? Mar 14, 2013 00:27 |
|
whaam posted:Hearing mixed things on this but is it possible to attain 4Gb bandwidth between shared storage and a host using 4 1gb nics and LACP on stackable switches? My vendor says yes, some forums say no.
|
# ? Mar 14, 2013 00:50 |
|
adorai posted:It is, but not in a single stream. With round robin MPIO and multiple iscsi VMKs you can get ~4gbps, though obviously there will be some loss. If using NFS you will not get above 1gbps. We are using NFS; looks like we got some bad info from our vendor. That's a raw deal, because I doubt we will get the IO we need on 1Gbit.
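The reason one NFS stream can't exceed a single link is that LACP-style teaming hashes each flow (e.g. by source/destination IP) onto exactly one member link; it never splits a flow across links. A rough Python sketch of that behavior — the XOR hash here is illustrative, not vSphere's or any switch's exact algorithm, and the IPs are made up:

```python
def uplink_for_flow(src_ip: str, dst_ip: str, n_links: int) -> int:
    """Pick a member link for a flow by hashing the IP pair.
    Illustrative only -- real teaming hashes differ in detail, but share
    the property that one src/dst pair always maps to the same one link."""
    def ip_to_int(ip):
        a, b, c, d = (int(x) for x in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d
    return (ip_to_int(src_ip) ^ ip_to_int(dst_ip)) % n_links

# One NFS datastore = one src/dst pair = the same link, every time:
host, filer = "10.0.0.11", "10.0.0.50"
links = {uplink_for_flow(host, filer, 4) for _ in range(1000)}
assert len(links) == 1   # the whole session rides one 1Gb link

# Multiple iSCSI sessions (or multiple filer IPs) hash to different
# links, which is why MPIO round robin can approach 4Gbps aggregate:
targets = [f"10.0.0.{50 + i}" for i in range(4)]
spread = {uplink_for_flow(host, t, 4) for t in targets}
print(f"distinct links used: {len(spread)}")
```

This is why the vendor's "4x1Gb" claim is true for aggregate traffic across many flows but false for a single NFS datastore mount.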
|
# ? Mar 14, 2013 01:05 |
|
whaam posted:We are using NFS, looks like we got some bad info from our vendor. Thats a raw deal because I doubt we will get the IO we need on 1Gbit.
|
# ? Mar 14, 2013 01:13 |
|
adorai posted:Do you need more than 1gbps on a single datastore? You can always assign multiple IPs to your storage, that way 4 different VMs could theoretically each get 1gbps on NFS. We have an IO-heavy SQL application we are installing that, according to the software company, needs more than that kind of speed.
|
# ? Mar 14, 2013 01:25 |
|
whaam posted:We have an IO heavy SQL application we are installing that according to the software company needs more than that kind of speed.
|
# ? Mar 14, 2013 01:28 |
|
Also keep in mind that if you put your app and the SQL server on the same host, using VMXNET3 they can talk at 10Gb/s. Granted, it will still have to go to disk for a good amount of things, but anything in SQL's memory will be very fast. It will also help you lighten the burden on your network.
|
# ? Mar 14, 2013 02:12 |
|
adorai posted:Can you use iSCSI inside the guest? That's how we get around this issue. You'd still get limited by the src/dst load balancing schemes unless the storage had multiple IPs and you had multiple LUNs.
|
# ? Mar 14, 2013 03:00 |
|
ragzilla posted:You'd still get limited by the src/dst load balancing schemes unless the storage had multiple IPs and you had multiple LUNs. edit: I suppose you could create multiple VMDKs on multiple datastores which are mapped with different IPs, and raid them in the guest to get >1gbps via NFS, but that seems a little over the top.
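The in-guest RAID idea works because a stripe alternates chunks across the member VMDKs, and each VMDK sits on a datastore reachable via a different filer IP (hence a different uplink). A toy sketch of the stripe math, assuming a plain RAID-0 layout with a 64 KiB chunk size (both numbers are illustrative):

```python
STRIPE_SIZE = 64 * 1024   # 64 KiB chunks, a common RAID-0 default
N_DISKS = 4               # one VMDK per NFS datastore / filer IP

def disk_for_offset(byte_offset: int) -> int:
    """RAID-0 member selection: chunk index modulo member count."""
    return (byte_offset // STRIPE_SIZE) % N_DISKS

# A 1 MiB sequential read touches every member, so with each datastore
# pinned to its own 1Gb uplink the four links are driven in parallel:
touched = {disk_for_offset(off) for off in range(0, 1024 * 1024, STRIPE_SIZE)}
print(sorted(touched))
```

The catch, as noted above, is operational: the guest now depends on four datastores staying mapped and healthy, which is a lot of moving parts for ~4Gbps.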
|
# ? Mar 14, 2013 04:04 |
|
adorai posted:Well, yes. In the described scenario, you can put 4 interfaces on the storage backend and get 4gbps to your backend storage with iSCSI and MPIO. You cannot say the same with NFS backed storage; no matter what you do, you will still only get 1gbps from that guest OS to its database. Moral of this story, just run 10GbE.
|
# ? Mar 14, 2013 04:40 |
|
ragzilla posted:Moral of this story, just run 10GbE.
|
# ? Mar 14, 2013 05:42 |
|
ragzilla posted:Moral of this story, just run 10GbE. I would have loved to; unfortunately we ran out of budget for the project. In the end we should have done the research, but the sales engineers from our usually rock-solid vendor assured us that 4x1Gb was possible over NFS, which is now clearly incorrect; I think they were thinking of MPIO and iSCSI. We are scrambling to buy 10Gb gear now as there aren't really any other options, aside from maybe using iSCSI instead, but NetApp really runs best on NFS and I think a lot of their features don't work on iSCSI. The idea of running VMDK RAID on 4 different datastores is interesting but sounds like a lot of poo poo and would likely cause massive headaches with moving to different hosts in the event of a loss. Think we will get 2 10Gb switches, 2 10Gb NICs (one for the host where SQL lives and one for a second host in case the SQL guest needs to move) and 2 10Gb modules for the NetApp controllers. The sad thing is our environment is so small that, aside from this one IO-heavy SQL server, all this infrastructure is overkill. In hindsight it may have been better to build the SQL server as a physical server on RAID10 or something and virtualize the application servers.
|
# ? Mar 14, 2013 14:12 |
|
You want 2 nics per host at the least (1 to each storage switch).
|
# ? Mar 14, 2013 15:21 |
|
whaam posted:I would have loved to, unfortunately we ran out of budget for the project. In the end we should have done the research but the sales engineers from our usual rock solid vendor assured us that 4x1Gb was possible over NFS which is clearly now incorrect, I think they were thinking of MPIO and iSCSI. We are scrambling to buy 10Gb gear now as there isn't any other options really, aside from maybe using iSCSI instead but netapp really runs best on NFS and I think a lot of their features don't work on iSCSI. Yeah, you should be getting 2 dual-port 10Gb NICs and 2 10Gb switches going to the 10Gb ports on your NetApp device. Remember to use some VM affinity with that app server and SQL server, as well as VMXNET3. Also, just a question: have you done any simulations of the app server and SQL server? Some companies love love love to completely over-spec product requirements, when in production they will never utilize anywhere close to it. Dilbert As FUCK fucked around with this message at 15:44 on Mar 14, 2013 |
# ? Mar 14, 2013 15:42 |
|
whaam posted:I would have loved to, unfortunately we ran out of budget for the project. In the end we should have done the research but the sales engineers from our usual rock solid vendor assured us that 4x1Gb was possible over NFS which is clearly now incorrect, I think they were thinking of MPIO and iSCSI. We are scrambling to buy 10Gb gear now as there isn't any other options really, aside from maybe using iSCSI instead but netapp really runs best on NFS and I think a lot of their features don't work on iSCSI. Have you pulled any hard IO numbers from your physical environment to see if you even need 10GbE? Sometimes app vendors scream about wanting this and that in a virtual environment to ensure their app gets enough horsepower, and a lot of times it's total overkill. I mean, you very well could need it, but before I broke the budget getting it I would run at least some basic perfmon.
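Turning perfmon counters into a link-sizing answer is simple arithmetic. A quick sketch — the 90 MB/s peak is a made-up example figure, not a measurement from this thread, and the calculation ignores NFS/TCP protocol overhead, which adds a few percent:

```python
def link_utilization(disk_bytes_per_sec: float, link_gbps: float) -> float:
    """Fraction of a storage link consumed by the measured disk traffic.
    Feed it a peak from perfmon's "PhysicalDisk \\ Disk Bytes/sec"."""
    link_bytes_per_sec = link_gbps * 1e9 / 8   # bits -> bytes
    return disk_bytes_per_sec / link_bytes_per_sec

# Example: suppose perfmon shows the SQL box peaking at 90 MB/s:
peak = 90e6
for gbps in (1, 4, 10):
    print(f"{gbps:>2} Gb link: {link_utilization(peak, gbps):.0%} busy")
```

If the real peak sits comfortably under ~125 MB/s, a single 1Gb link already covers it and the 10GbE spend is about latency and headroom, not raw throughput.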
|
# ? Mar 14, 2013 16:21 |
|
Indeed you want to pull some good I/O numbers before you spend the money on the move to 10Gb, yeah. I found that dual-port 10Gb CNAs themselves are fairly affordable now but I have yet to find an affordable 10Gb switch for prosumer or home lab setups... let alone finding transceivers and cables at a good price. I just run direct-connect between two nodes for now, which was fairly cheap in the end. I think it was $300 CAD for two Brocade dual-port 10Gb CNAs and two direct-connect active twinax Cisco cables.
|
# ? Mar 14, 2013 16:36 |
|
Kachunkachunk posted:I found that dual-port 10Gb CNAs themselves are fairly affordable now but I have yet to find an affordable 10Gb switch for prosumer or home lab setups... let alone finding transceivers and cables at a good price.
|
# ? Mar 14, 2013 16:37 |