movax
Aug 30, 2008

Anyone know a USB Ethernet adapter that does work OOB on Hyper-V Server 2019? My DeskMini has an I219-V which, as far as I can tell, has been intentionally segmented from "official" Server 2019 support. I'm pretty sure I know how to get past this by just grabbing the driver INF, and could probably do it in like 5 minutes if I had the regular Windows GUI... which of course I don't. Was thinking that with at least one working Ethernet link, I can use any of the remote management tools Windows has to at least fix that so I can get started installing VMs.

Or, could I toss a pfSense ISO onto a USB stick, manually create / configure / pass-thru the I219 via the command line, get that up and running, connect Hyper-V to a virtual switch port, and then finally get the full remote admin tools?

e: I kind of XY'd this — I put Hyper-V Server 2019 (goddamned near impossible to find) on my mini-PC and its Intel NIC is not "officially" supported. How can I get the drivers properly installed?

SlowBloke
Aug 14, 2017

movax posted:

Anyone know a USB Ethernet adapter that does work OOB on Hyper-V Server 2019? My DeskMini has an I219-V which, as far as I can tell, has been intentionally segmented from "official" Server 2019 support. I'm pretty sure I know how to get past this by just grabbing the driver INF, and could probably do it in like 5 minutes if I had the regular Windows GUI... which of course I don't. Was thinking that with at least one working Ethernet link, I can use any of the remote management tools Windows has to at least fix that so I can get started installing VMs.

Or, could I toss a pfSense ISO onto a USB stick, manually create / configure / pass-thru the I219 via the command line, get that up and running, connect Hyper-V to a virtual switch port, and then finally get the full remote admin tools?

e: I kind of XY'd this — I put Hyper-V Server 2019 (goddamned near impossible to find) on my mini-PC and its Intel NIC is not "officially" supported. How can I get the drivers properly installed?

There are two major brands of USB NICs, Realtek and ASIX. Both of those have their drivers in Windows Update and have INFs readily available. Also, there is no driver gating AFAIK on Intel NICs (some edge cases require using the I21x-LM rather than the I21x-V driver); Server 2019 should require a 25.x driver build to run.

Also, if you need to install a driver you don't need any GUI, just a USB stick with the INF files and this one-liner: http://jaredheinrichs.com/how-to-install-network-driver-in-hyper-v-core-or-microsoft-hyper-v-server.html
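For reference, the one-liner in that link boils down to pnputil (drive letter and INF name here are illustrative, adjust for your stick):

# stage and install every INF in the folder (Win10 / Server 2016+ syntax)
pnputil /add-driver D:\intel-nic\*.inf /install
# older builds only take the legacy switches
pnputil -i -a D:\intel-nic\e1d68x64.inf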


movax
Aug 30, 2008

SlowBloke posted:

There are two major brands of USB NICs, Realtek and ASIX. Both of those have their drivers in Windows Update and have INFs readily available. Also, there is no driver gating AFAIK on Intel NICs (some edge cases require using the I21x-LM rather than the I21x-V driver); Server 2019 should require a 25.x driver build to run.

Also, if you need to install a driver you don't need any GUI, just a USB stick with the INF files and this one-liner: http://jaredheinrichs.com/how-to-install-network-driver-in-hyper-v-core-or-microsoft-hyper-v-server.html

I really like the ASIX chipsets, they're in all of my USB Ethernet dongles.

The stock PROSet installer refused to run on the machine ("no compatible adapters found"), and I don't know why that one-liner INF install didn't show up in any of my searching — I will give that a try, thanks.

SlowBloke
Aug 14, 2017

movax posted:

I really like the ASIX chipsets, they're in all of my USB Ethernet dongles.

The stock PROSet installer refused to run on the machine ("no compatible adapters found"), and I don't know why that one-liner INF install didn't show up in any of my searching — I will give that a try, thanks.

A quick search on Stack Overflow shows some users with your issue doing this:

1. Extract the contents of the driver package (latest is 25.2) on a GUI-equipped machine
2. Copy the contents of the extracted folder to a USB stick and connect it to the server
3. Point the driver installation wizard (or, in your case, the CLI) to PRO1000\Winx64\NDIS68
4. Force installation of the I219-LM driver

Also, a Device Manager equivalent (devcon.exe) can be installed using this guide: https://social.technet.microsoft.com/wiki/contents/articles/182.how-to-obtain-the-current-version-of-device-console-utility-devcon-exe.aspx
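If the wizard route fails, forcing the LM INF can also be done from the CLI once devcon.exe is on the box. A rough sketch (the hardware ID below is what an I219-V usually reports, so verify it with the first command before trusting it):

# list hardware IDs for network-class devices
devcon hwids =net
# force the named INF onto the device (VEN_8086&DEV_15B8 is typical for an I219-V)
devcon update C:\Drivers\PRO1000\Winx64\NDIS68\e1d68x64.inf "PCI\VEN_8086&DEV_15B8"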


Pablo Bluth
Sep 7, 2007

KVM is so much nicer than VirtualBox. I spent ages wondering why I couldn't get a Windows guest VM to run at more than a tiny resolution, only to eventually discover that the host-side Maximum Guest Screen Size setting of "automatic" was the problem (whereas the Linux VMs were working fine). I'll be glad when I finally upgrade my Linux box to the point of it being good enough to take over as my VM playground.


movax
Aug 30, 2008

SlowBloke posted:

there are two major brands of usb nics, realtek and asix. Both of those have their drivers in windows update and have inf readily available. Also there is no driver gating AFAIK on intel nics(some edge cases require to use the 21xLM rather than 21xV driver), server 2019 should require a driver 25.x build to run.

also if you need to install a driver you don't need any gui, just a usb stick with the inf files and this oneliner http://jaredheinrichs.com/how-to-install-network-driver-in-hyper-v-core-or-microsoft-hyper-v-server.html

So it looks like there might be some kind of driver signing issue... the I219-V driver is in 'e1d68x64.inf', and running the pnputil command gives 'Failed to install the driver: No more data is available.'

Working out how to check logs (man, command-line-only Windows is weird) to see what the actual root cause is.
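(If anyone else lands here: PnP driver installs get logged to setupapi.dev.log, so tailing that from PowerShell should surface the actual error. The tail length is arbitrary:)

Get-Content C:\Windows\INF\setupapi.dev.log -Tail 80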

SlowBloke
Aug 14, 2017

movax posted:

So it looks like there might be some kind of driver signing issue... the I219-V driver is in 'e1d68x64.inf', and running the pnputil command gives 'Failed to install the driver: No more data is available.'

Working out how to check logs (man, command-line-only Windows is weird) to see what the actual root cause is.

Force Windows to use the I219-LM drivers if it still cries about driver signing, and use CoreConfig if pnputil still throws a shitfit about it.

Dr. Poz
Sep 8, 2003

I have a handful of public-facing services in Docker on a default bridge and behind a Traefik reverse proxy. Two of them need to be able to connect to a service on the host network. I'd put this service in Docker if I could, but I'm not able to right now. What are my options?
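One thing I do know: containers on the default bridge can usually reach the host at the bridge gateway (172.17.0.1 unless you've changed it), and newer Docker (20.10+) adds a host-gateway alias, something like:

# container/image names are made up; the alias makes the host reachable by name
docker run -d --name myapp --add-host=host.docker.internal:host-gateway myapp:latest
# inside the container, the host service is then reachable at host.docker.internal:<port>

...but I'm not sure that's the clean way to do it.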


Nohearum
Nov 2, 2013
After many years in Linux land I finally have a Windows 10 desktop again. Work restrictions are going to make dual-booting a pain, so I've decided to give a Linux VM a chance. I've been trying to set up a local Fedora 33 VM through Hyper-V, but the performance from a network/graphics perspective seems to be subpar. Even basic things like scrolling through a text editor have significant lag, and a wget download will max out at like half my regular connection speed. The quick-setup option in Hyper-V has an Ubuntu 20.04 image that seems to run much better (network performance is fixed and the graphics performance is a bit better), but I feel like I'm missing something. I'm assigning 4 cores and 8 GB of memory to the VM.

Has anyone had luck setting up a local Linux guest through Hyper-V recently? Are there additional packages that I need to install for hardware acceleration or something?

Bob Morales
Aug 18, 2006



Nohearum posted:

After many years in Linux land I finally have a Windows 10 desktop again. Work restrictions are going to make dual-booting a pain, so I've decided to give a Linux VM a chance. I've been trying to set up a local Fedora 33 VM through Hyper-V, but the performance from a network/graphics perspective seems to be subpar. Even basic things like scrolling through a text editor have significant lag, and a wget download will max out at like half my regular connection speed. The quick-setup option in Hyper-V has an Ubuntu 20.04 image that seems to run much better (network performance is fixed and the graphics performance is a bit better), but I feel like I'm missing something. I'm assigning 4 cores and 8 GB of memory to the VM.

Has anyone had luck setting up a local Linux guest through Hyper-V recently? Are there additional packages that I need to install for hardware acceleration or something?

Try 1 or 2 cores and see how it runs

Nohearum
Nov 2, 2013

Bob Morales posted:

Try 1 or 2 cores and see how it runs

Still lots of UI lag. I downloaded the VMware trial and that seems to run much better. Hyper-V sounded like a KVM/QEMU equivalent, but I guess it's not quite there for Linux guests yet.

TheFace
Oct 4, 2004

Nohearum posted:

After many years in Linux land I finally have a Windows 10 desktop again. Work restrictions are going to make dual-booting a pain, so I've decided to give a Linux VM a chance. I've been trying to set up a local Fedora 33 VM through Hyper-V, but the performance from a network/graphics perspective seems to be subpar. Even basic things like scrolling through a text editor have significant lag, and a wget download will max out at like half my regular connection speed. The quick-setup option in Hyper-V has an Ubuntu 20.04 image that seems to run much better (network performance is fixed and the graphics performance is a bit better), but I feel like I'm missing something. I'm assigning 4 cores and 8 GB of memory to the VM.

Has anyone had luck setting up a local Linux guest through Hyper-V recently? Are there additional packages that I need to install for hardware acceleration or something?

I don't think Red Hat workstation versions are officially supported under Hyper-V, though I don't know what difference that would make.

CommieGIR
Aug 22, 2006


TheFace posted:

I don't think Red Hat workstation versions are officially supported under Hyper-V, though I don't know what difference that would make.

I mean, RH has official guides on how to install on Hyper-V:

https://developers.redhat.com/rhel8/install-rhel8-hyperv

Nohearum
Nov 2, 2013
Some follow-up on my previous post. I had a bit more luck with Ubuntu in Hyper-V (I'm 90% satisfied with the performance), but it involved the following:

  • Install from the Ubuntu 20.04 ISO (not the prebuilt images shown in the Hyper-V Quick Create menu)
  • Run install.sh from this fork of the Microsoft linux-vm-tools that supports 20.04 (not sure why Microsoft hasn't pulled this in yet): https://github.com/Hinara/linux-vm-tools
  • Enable enhanced session mode from an admin PowerShell: Set-VM -VMName namehere -EnhancedSessionTransportType HvSocket
  • Reboot the Windows machine

If you are successful you will be greeted with an xrdp login and a slider to specify the screen resolution for the VM
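For copy-paste purposes, the PowerShell step with a quick sanity check looks like this (the VM name is a placeholder):

# run from an elevated PowerShell on the Windows host
Set-VM -VMName ubuntu2004 -EnhancedSessionTransportType HvSocket
# confirm the transport type took
Get-VM -VMName ubuntu2004 | Format-List Name, EnhancedSessionTransportType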

Maneki Neko
Oct 27, 2000

Nohearum posted:

Some follow-up on my previous post. I had a bit more luck with Ubuntu in Hyper-V (I'm 90% satisfied with the performance), but it involved the following:

  • Install from the Ubuntu 20.04 ISO (not the prebuilt images shown in the Hyper-V Quick Create menu)
  • Run install.sh from this fork of the Microsoft linux-vm-tools that supports 20.04 (not sure why Microsoft hasn't pulled this in yet): https://github.com/Hinara/linux-vm-tools
  • Enable enhanced session mode from an admin PowerShell: Set-VM -VMName namehere -EnhancedSessionTransportType HvSocket
  • Reboot the Windows machine

If you are successful you will be greeted with an xrdp login and a slider to specify the screen resolution for the VM

Just to be "that guy" have you looked at WSL2 as a comparison point?

SlowBloke
Aug 14, 2017

Maneki Neko posted:

Just to be "that guy" have you looked at WSL2 as a comparison point?

Given that he enabled the enhanced session, he presumably wants a GUI; with WSL that's not on by default, so maybe that's the reason for Hyper-V (but there are plenty of workarounds, like this one for instance: https://www.nextofwindows.com/how-to-enable-wsl2-ubuntu-gui-and-use-rdp-to-remote)

Nohearum
Nov 2, 2013

Maneki Neko posted:

Just to be "that guy" have you looked at WSL2 as a comparison point?


SlowBloke posted:

Given that he enabled the enhanced session, he presumably wants a GUI; with WSL that's not on by default, so maybe that's the reason for Hyper-V (but there are plenty of workarounds, like this one for instance: https://www.nextofwindows.com/how-to-enable-wsl2-ubuntu-gui-and-use-rdp-to-remote)

I use WSL2 a decent amount for terminal-based stuff, but the graphical performance wasn't great when I tried it earlier this year. I'm planning on revisiting WSL2 once Microsoft releases their built-in GUI support (supposedly planned for release by holiday 2020 per this article: https://devblogs.microsoft.com/commandline/the-windows-subsystem-for-linux-build-2020-summary/#wsl-gui). It looks pretty promising with Wayland support included.


lol internet.
Sep 4, 2007
Just learning about distributed switches in VMware, and wondering what everyone is doing as best practice/standard practice. (I work with a Hyper-V environment.)

Are you using a distributed switch config for all three networks: management, vMotion, and VM port groups? If not, how do you configure distributed switches in your environment?

I assume you team management, iSCSI, and VM port group NICs. Does anyone bother teaming vMotion?

When moving a VM port group from a standard switch to a distributed switch, I assume there is a brief interruption on the VM?

Generally, would you have vMotion, management, and port groups on separate switches? Assuming you had something like 6 NICs on the hosts.


SlowBloke
Aug 14, 2017

lol internet. posted:

Just learning about distributed switches in VMware, and wondering what everyone is doing as best practice/standard practice. (I work with a Hyper-V environment.)

Are you using a distributed switch config for all three networks: management, vMotion, and VM port groups? If not, how do you configure distributed switches in your environment?

I assume you team management, iSCSI, and VM port group NICs. Does anyone bother teaming vMotion?

When moving a VM port group from a standard switch to a distributed switch, I assume there is a brief interruption on the VM?

Generally, would you have vMotion, management, and port groups on separate switches? Assuming you had something like 6 NICs on the hosts.

IMHO with vDS it's beneficial to have one single switch running the show; lots of switches add more complexity with minimal advantages.

You can do a hot standard-to-distributed migration with a couple of heartbeat/ping losses.
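If you want to script the cutover, a hedged PowerCLI sketch (all names are illustrative, and you'd move one uplink at a time so both switches stay connected during the migration):

# join the host to the vDS, then migrate one physical uplink
$vds = Get-VDSwitch 'dvs-prod'
$esx = Get-VMHost 'esx01.example.com'
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $esx
$nic = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name vmnic1
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $nic -Confirm:$false
# then flip the VM port groups over
$pg = Get-VDPortgroup -VDSwitch $vds -Name 'VM-Traffic'
Get-VM -Location $esx | Get-NetworkAdapter | Set-NetworkAdapter -Portgroup $pg -Confirm:$false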

TheFace
Oct 4, 2004

lol internet. posted:

Just learning about distributed switches in VMware, and wondering what everyone is doing as best practice/standard practice. (I work with a Hyper-V environment.)

Are you using a distributed switch config for all three networks: management, vMotion, and VM port groups? If not, how do you configure distributed switches in your environment?

I assume you team management, iSCSI, and VM port group NICs. Does anyone bother teaming vMotion?

When moving a VM port group from a standard switch to a distributed switch, I assume there is a brief interruption on the VM?

Generally, would you have vMotion, management, and port groups on separate switches? Assuming you had something like 6 NICs on the hosts.

Don't team iSCSI. With iSCSI, on a server where you want to use two NICs for iSCSI, you'd have two port groups and two VMkernel ports, one NIC assigned to each ONLY, and then use MPIO with the software iSCSI HBA.

By best practice you should team all other traffic (for redundancy and resiliency's sake). How you distribute the NICs is your choice, but for ease of management I tend to do one dVS and distribute NICs by port group.

In most of my environments this means the following:
iSCSI-A: 1 NIC assigned, 1 vmk port, on the iSCSI VLAN
iSCSI-B: 1 NIC assigned, 1 vmk port, on the iSCSI VLAN
Assign both to the software HBA and configure the MPIO policy

Management, vMotion, and VM traffic tend to share two NICs, unless VM traffic is intensive, in which case management and vMotion share two NICs and VM traffic shares two other NICs.

If vSAN is involved, it should always have its own 2 NICs.
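In PowerCLI terms, the binding-plus-MPIO part of that is roughly the following (host, HBA, and vmk names are illustrative; this is a sketch, not a drop-in script):

Connect-VIServer vcenter.example.com
$esx = Get-VMHost esx01.example.com
# the vmk-to-software-HBA port binding is an esxcli step
$cli = Get-EsxCli -VMHost $esx -V2
$cli.iscsi.networkportal.add.Invoke(@{adapter = 'vmhba64'; nic = 'vmk1'})
$cli.iscsi.networkportal.add.Invoke(@{adapter = 'vmhba64'; nic = 'vmk2'})
# then round-robin MPIO across the iSCSI LUNs
Get-ScsiLun -VmHost $esx -LunType disk | Set-ScsiLun -MultipathPolicy RoundRobin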

some kinda jackal
Feb 25, 2003

Did VMware change something with ESXi 7's ISO? I can't seem to make a bootable image to save my life. I used to just dd it to a USB drive but that doesn't seem to work now, and balenaEtcher says something about the ISO not being bootable.

Just trying to do a temp ESXi install on a spare OptiPlex 9020 to test a theory, but I'm kind of at a loss right now.

e: Oh nevermind, it's an EFI boot image.


Saukkis
May 16, 2003

How much benefit would UEFI and Secure Boot provide on ESXi 6.7 with Linux virtual machines? I doubt we would get proper Secure Boot working with these machines, and they won't have large boot disks, so that's the two biggest benefits out. And I haven't found other benefits that might be worth any extra hassle with PXE or CD-image boots.

SlowBloke
Aug 14, 2017

Saukkis posted:

How much benefit would UEFI and Secure Boot provide on ESXi 6.7 with Linux virtual machines? I doubt we would get proper Secure Boot working with these machines, and they won't have large boot disks, so that's the two biggest benefits out. And I haven't found other benefits that might be worth any extra hassle with PXE or CD-image boots.

https://wiki.ubuntu.com/UEFI/SecureBoot

Ubuntu 20.04 has a mostly working stack, so using Secure Boot means that nobody has tinkered with the core OS payload. If you use older builds there is little to no point, as the feature wasn't complete.

Woof Blitzer
Dec 29, 2012


CommieGIR posted:

I mean, RH has official guides on how to install on Hyper-V:

https://developers.redhat.com/rhel8/install-rhel8-hyperv

Wow I just got the .iso for 8, what a lucky coincidence you posted this. Thanks!

BangersInMyKnickers
Nov 3, 2004


Saukkis posted:

How much benefit would UEFI and Secure Boot provide on ESXi 6.7 with Linux virtual machines? I doubt we would get proper Secure Boot working with these machines, and they won't have large boot disks, so that's the two biggest benefits out. And I haven't found other benefits that might be worth any extra hassle with PXE or CD-image boots.

Secure Boot on a Linux guest will enforce kernel driver signing, so it's mostly a rootkit and unsigned-code mitigation.
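If you do decide it's worth it, flipping a powered-off VM to EFI with Secure Boot can be scripted; a hedged PowerCLI sketch against the vSphere API (the VM name is illustrative, and the guest has to actually be able to boot via EFI for this to end well):

$vm = Get-VM 'rhel8-test'
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.Firmware = 'efi'                                  # Secure Boot requires EFI firmware
$spec.BootOptions = New-Object VMware.Vim.VirtualMachineBootOptions
$spec.BootOptions.EfiSecureBootEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)                     # VM must be powered off first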

Dans Macabre
Apr 24, 2004


haaaay, it's my first time doing VirtualBox headless and I'm deploying my first guest VM!

I followed this tutorial and everything worked, as in: no errors, and listing the running VMs shows the VM running. However, I didn't get the "VRDE server is listening on..." message, and trying to connect to ip:port using mstsc from another computer on the LAN gets no answer.

What did I miss? What's the best website for troubleshooting VirtualBox?
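For context, the tutorial's flow boils down to something like this (VM name and port are placeholders), and one thing I should probably double-check is that VRDE lives in the Oracle Extension Pack rather than the base install:

VBoxManage list extpacks                  # should show "Oracle VM VirtualBox Extension Pack"
VBoxManage modifyvm "testvm" --vrde on --vrdeport 3389 --vrdeaddress 0.0.0.0
VBoxManage startvm "testvm" --type headless
VBoxManage showvminfo "testvm"            # look for the VRDE lines to confirm it's listening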

Saukkis
May 16, 2003

What is the most convenient way to automatically power a VM back on after it has powered off in VMware? We just raised the EVC mode on our cluster and now have to power off hundreds of VMs. Currently I'm running a PowerShell script that checks every hour which servers have a maintenance window, waits for them to power off, and starts them right after. At least the script also fixes the guest OS type and other stuff, so it's not completely useless work.

For a long time I've wished the scheduled tasks had an "after power off" trigger. I have never used scheduled tasks, and that's probably the only feature that would get me to use them.

Matt Zerella
Oct 7, 2002


NevergirlsOFFICIAL posted:

haaaay, it's my first time doing VirtualBox headless and I'm deploying my first guest VM!

I followed this tutorial and everything worked, as in: no errors, and listing the running VMs shows the VM running. However, I didn't get the "VRDE server is listening on..." message, and trying to connect to ip:port using mstsc from another computer on the LAN gets no answer.

What did I miss? What's the best website for troubleshooting VirtualBox?

Is there any reason you aren't using Vagrant for this?

Dans Macabre
Apr 24, 2004


Matt Zerella posted:

Is there any reason you aren't using Vagrant for this?

Yeah, the main reason is I didn't know about it until just now.

Dans Macabre
Apr 24, 2004


so this is just like Docker but for VMs?
cool

Matt Zerella
Oct 7, 2002


NevergirlsOFFICIAL posted:

so this is just like Docker but for VMs?
cool

Yeah, it's a nice, fast way to spin up VMs.
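The whole flow is basically three commands (the box name is just an example):

vagrant init hashicorp/bionic64    # writes a Vagrantfile pointing at the box
vagrant up                         # downloads the box and boots it (VirtualBox provider by default)
vagrant ssh                        # shell into the running VM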

SlowBloke
Aug 14, 2017

Saukkis posted:

What is the most convenient way to automatically power a VM back on after it has powered off in VMware? We just raised the EVC mode on our cluster and now have to power off hundreds of VMs. Currently I'm running a PowerShell script that checks every hour which servers have a maintenance window, waits for them to power off, and starts them right after. At least the script also fixes the guest OS type and other stuff, so it's not completely useless work.

For a long time I've wished the scheduled tasks had an "after power off" trigger. I have never used scheduled tasks, and that's probably the only feature that would get me to use them.

Why not use tags? You could mark which VM has a certain service window, which VMs have been shut down already, etc.

Dans Macabre
Apr 24, 2004


Matt Zerella posted:

Yeah, it's a nice, fast way to spin up VMs.

Turns out Vagrant gave me different errors :-/

I gave up and just went to the data center team to make it someone else's problem

Saukkis
May 16, 2003


SlowBloke posted:

Why not use tags? You could mark which VM has a certain service window, which VMs have been shut down already, etc.

I'm not sure that would help with the scheduling. In the simplest case I want vCenter to autostart VMs right after they have powered off, without needing to run a custom script on a separate management server. Tags might be useful to determine whether a VM should be autostarted or not. Scheduled tasks sound like the right tool, but they lack a convenient trigger. "Run once at $time" is close, but we can't know the precise time in advance: a VM starts running 'yum update' at 12:01, but we don't know if it finishes at 12:03 or 12:23.

I would think this would be a common desire; everyone occasionally needs to power a VM back on soon after it powers off: EVC upgrades, Meltdown/Spectre, E1000-to-VMXNET3 conversions, guest OS fixes. After Spectre we scripted the power-off for our VMs, but we didn't yet have a script for powering on, so every hour someone was staring at vCenter, waiting for a new VM to show up on the "powered off" list, while being careful not to start any of the VMs that were supposed to stay powered off.

ihafarm
Aug 12, 2004

Saukkis posted:

I'm not sure that would help with the scheduling. In the simplest case I want vCenter to autostart VMs right after they have powered off, without needing to run a custom script on a separate management server. Tags might be useful to determine whether a VM should be autostarted or not. Scheduled tasks sound like the right tool, but they lack a convenient trigger. "Run once at $time" is close, but we can't know the precise time in advance: a VM starts running 'yum update' at 12:01, but we don't know if it finishes at 12:03 or 12:23.

I would think this would be a common desire; everyone occasionally needs to power a VM back on soon after it powers off: EVC upgrades, Meltdown/Spectre, E1000-to-VMXNET3 conversions, guest OS fixes. After Spectre we scripted the power-off for our VMs, but we didn't yet have a script for powering on, so every hour someone was staring at vCenter, waiting for a new VM to show up on the "powered off" list, while being careful not to start any of the VMs that were supposed to stay powered off.

Why are you shutting down VMs after updates, or is this also related to moving them into the EVC cluster?

SlowBloke
Aug 14, 2017

Saukkis posted:

I'm not sure that would help with the scheduling. In the simplest case I want vCenter to autostart VMs right after they have powered off, without needing to run a custom script on a separate management server. Tags might be useful to determine whether a VM should be autostarted or not. Scheduled tasks sound like the right tool, but they lack a convenient trigger. "Run once at $time" is close, but we can't know the precise time in advance: a VM starts running 'yum update' at 12:01, but we don't know if it finishes at 12:03 or 12:23.

I would think this would be a common desire; everyone occasionally needs to power a VM back on soon after it powers off: EVC upgrades, Meltdown/Spectre, E1000-to-VMXNET3 conversions, guest OS fixes. After Spectre we scripted the power-off for our VMs, but we didn't yet have a script for powering on, so every hour someone was staring at vCenter, waiting for a new VM to show up on the "powered off" list, while being careful not to start any of the VMs that were supposed to stay powered off.

What I mean is to run a PowerCLI script on an always-on host every hour, with a mapping of service-window tags to execution times along with a done/todo tag: Get-VM * -> filter on the todo tag -> check which VMs have a service-window tag coherent with the script execution time -> gracefully shut down the VMs that are in the service window -> wait 2 minutes -> power up every shut-down VM that has the service-window tag -> update the done tag on the VMs that were processed.

Scheduled tasks haven't got enough intelligence to do what you are asking.

SlowBloke
Aug 14, 2017

ihafarm posted:

Why are you shutting down VMs after updates, or is this also related to moving them into the EVC cluster?

EVC changes require a full shutdown to be committed; simple restarts won't have an effect.

Saukkis
May 16, 2003


ihafarm posted:

Why are you shutting down VMs after updates, or is this also related to moving them into the EVC cluster?

We decommissioned our oldest hosts, so we could then upgrade the EVC mode from Sandy Bridge to Haswell. But that is only one of many operations that require shutting VMs down. This round we also have a large number of RHEL8 servers that are listed as RHEL7 in VMware, and we can now fix those after upgrading to 6.7.

SlowBloke posted:

What I mean is to run a PowerCLI script on an always-on host every hour, with a mapping of service-window tags to execution times along with a done/todo tag: Get-VM * -> filter on the todo tag -> check which VMs have a service-window tag coherent with the script execution time -> gracefully shut down the VMs that are in the service window -> wait 2 minutes -> power up every shut-down VM that has the service-window tag -> update the done tag on the VMs that were processed.

Scheduled tasks haven't got enough intelligence to do what you are asking.

Oh yeah, that's basically what we do. Every time the hour changes, the script checks a website listing the servers that use that window, checks which of them are VMs, waits for them to power off, does any planned operations, and then starts them up within seconds. But it feels cumbersome compared to KVM's autostart setting.

SlowBloke
Aug 14, 2017

Saukkis posted:

We decommissioned our oldest hosts, so we could then upgrade the EVC mode from Sandy Bridge to Haswell. But that is only one of many operations that require shutting VMs down. This round we also have a large number of RHEL8 servers that are listed as RHEL7 in VMware, and we can now fix those after upgrading to 6.7.


Oh yeah, that's basically what we do. Every time the hour changes, the script checks a website listing the servers that use that window, checks which of them are VMs, waits for them to power off, does any planned operations, and then starts them up within seconds. But it feels cumbersome compared to KVM's autostart setting.

The official way is using vRealize Automation; it's just that I'm not skilled enough with it to provide insight on how to execute this.


Nitrousoxide
May 30, 2011


Is KVM VMM (Virtual Machine Manager) and Looking Glass the go-to right now for doing GPU pass-through for something like a Windows VM in a Linux environment, to get (near) bare-metal performance? I'm switching over to Linux for my main OS, but there are still a couple of work apps that don't work (well) in Linux, and the Windows experience is not super great using its virtual video card.

  • Reply