SlowBloke
Aug 14, 2017

bobfather posted:

Veeam Endpoint does, I believe.

Endpoint works for clients/physical servers, not VM hosts. https://hyperv.veeam.com/free-hyper-v-backup/ does Hyper-V backup, but features are limited (no scheduling, for instance).

SlowBloke
Aug 14, 2017

bobfather posted:

I think he's just looking to backup his Hyper-V guests.

As a free solution Endpoint Backup would work fine for him. He'd just have to be willing to install it on all his Windows guests.

Nothing stops him from installing Veeam Backup Free on the Hyper-V hosts. If it's a homelab, it's certainly less hassle than multiple Veeam Endpoint installs (I wouldn't do it in a prod environment).

SlowBloke
Aug 14, 2017

stevewm posted:

Maybe someone here can check my results...

Setting up a new 2-host Hyper-V cluster. Each host will have 2 processors with 8 cores each.

There will be 6 Windows server VMs, and I want the ability to move them between hosts as needed.

Everything I can find says I need 48 core licenses for 6 VMs of 2016 on a single host of those specifications. And if I want to run those VMs on a second host, I need to license them all over again, thus 96 cores.

Am I right on this?

Depending on your budget you may be better off with Windows Server Datacenter core licenses, but in any case, if you want to do live migration you need Software Assurance too.

SlowBloke
Aug 14, 2017

BangersInMyKnickers posted:

I'm not sure if it still works this way, but with some low-density configurations it was better to license with Enterprise, which entitled you to 4 guest VMs per license.

If we look at Microsoft's official page (https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing), the Standard license now covers only two VMs, so up to three sets of Standard licenses would be required for each host (unless live migration is disabled). Depending on his VM layout it may be cheaper to just license Windows Server Datacenter.

SlowBloke
Aug 14, 2017

stevewm posted:

From what I have been told, this is only true if each host is not fully licensed for the max amount of VMs you will ever run.

When using Standard, I have been told that if you license each host for the total number of VMs you will ever run (in this case 6) then you can move VMs around as much as needed. This is also what I have managed to understand from reading the documentation from MS. Since host #1 would be licensed for 6 VMs, and host #2 for 6 VMs, but we will never run more than 6 Windows VMs total, then we would be OK. The SA benefit would only apply if each host was only licensed for 4 VMs each, and I needed to move all the VMs to one host. Without SA you can do this once every 90 days, unless it is a hardware failure.

I was going with Standard because I cannot see any situation where we will ever run more than 6 Windows VMs, given what this cluster will be used for. Standard should be cheaper from what I can see. The break-even point for Datacenter is commonly stated as 13 VMs.

Since everyone I ask seems to have different interpretations on this (including some VARs!) , I figured I would ask.

Hmm, our VARs and Microsoft itself stated that you can migrate the VMs once every 90 days or have SA, making it kind of kludgy unless you lock the VMs in place, but as you said, Microsoft's position on the topic is very fluid. The latest virtualization licensing guide can be found here: http://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/WindowsServer2016VirtualTech_VLBrief.pdf if you want to check the latest gold standard.
Dunno about your licensing status (business/gov/edu/etc.), but our Datacenter licenses cost about as much as two sets of Standard licenses, so three sets of Standard might be more expensive. Check at least a couple of quotes before committing to Standard.

SlowBloke
Aug 14, 2017

stevewm posted:

I've seen that document... And I think a key line is this:


Which is what I am basing my interpretation on.

I double-checked the document (I won't deny going by memory rather than reading it before posting :shobon:), and it looks like SA is no longer required for live migration. If the VM count is not going to increase, your license BOM looks fine. I checked one of our Microsoft resellers and the license breakpoint is currently six (after six sets of Standard licenses it's cheaper to run Datacenter), if you wanted an exact number for future reference.

SlowBloke
Aug 14, 2017

Boris Galerkin posted:

I have a Fedora/Linux host running on a ThinkPad with a Windows 10 guest for Excel/Office. I’ve been using Excel more to the point where I just need to keep the Windows VM spun up all the time.

What options/settings can I change to make this VM experience as smooth as possible? This is running on a quad core Haswell i7. I’ve given it 2 cores so that the host has two dedicated cores still. I’ve also given it 8GB of RAM and the 3D video RAM is maxed out and 2D/3D acceleration are turned on. The most graphics intensive app I’d run is PowerPoint so do I really need those enabled?

I'm also not averse to switching from VirtualBox to something else if the performance gain is trivially easy. Or also just switching from Windows 10 to 8.1 if that works out better too.

If it's a Windows 8 or 10 host, I would suggest moving to the integrated Hyper-V instance, which is quite a bit faster than VirtualBox for Windows guests.
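
If you go that route, a minimal sketch of turning it on from an elevated PowerShell prompt (assumes a Pro/Enterprise edition; the adapter name is a placeholder):

code:
# Enable the Hyper-V role and management tools (needs a reboot afterwards)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# After the reboot: external switch bound to the physical NIC so the guest gets network access
New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true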

SlowBloke
Aug 14, 2017

Saukkis posted:

In VMware vSphere, is there a way to schedule a virtual machine to start automatically after it has been powered off from the guest OS? My coworker has interpreted that powering off is necessary for the VM to get access to the new CPU flags and the Spectre mitigations to become effective. It would be most convenient to schedule startup and then during the normal update cycle power off the VMs instead of rebooting them, and not having to go start them up manually.

You could do some gymnastics with PowerCLI to shut down a list of VMs, wait for completion, and then power the same list back on.
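
Something like this as a rough sketch (VM names and the vCenter address are placeholders, so treat it as a starting point rather than gospel):

code:
Connect-VIServer -Server vcenter.example.local

# the VMs that need a full power-off rather than a reboot
$vms = Get-VM -Name "app01", "app02", "db01"

# ask the guest OS for a clean shutdown
$vms | Shutdown-VMGuest -Confirm:$false

# wait until every one of them reports PoweredOff
while (Get-VM -Name $vms.Name | Where-Object { $_.PowerState -ne "PoweredOff" }) {
    Start-Sleep -Seconds 30
}

# power the same list back on
Get-VM -Name $vms.Name | Start-VM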

SlowBloke
Aug 14, 2017

Moey posted:

This is what I was thinking as well. I know of nothing in vCenter that will power a VM back on. Horizon View can do it.

If you hate yourself VERY strongly you could create a vSphere Orchestrator workflow to shut down a VM, wait for the VM status to change, and turn it back on, applied to a cluster or VM folder(s), but it would take an awful lot of time to set up compared to a handful of lines of PowerShell.

SlowBloke
Aug 14, 2017

Spring Heeled Jack posted:

Thanks for the feedback guys, I also have calls with Starwind, Tegile, and CDW (our usual VAR) on a few of the storage products they are offering. Our HPE hosts are only a year old (compared to ~7 for the SAN), otherwise we could probably swing throwing more money at a hyperconverged solution.

My manager's concern with getting another SAN is the upside-down pyramid we get with all of our data sitting on this one appliance, no matter how much redundancy is built into the unit itself. It's been a while since I've looked at the current technologies, but it seems there are some solutions out there to alleviate this without breaking the bank.

If you guys are evaluating StarWind, add DataCore to the mix; they are not cheap, but my experience with them has been flawless.

SlowBloke
Aug 14, 2017

Wicaeed posted:

Anyone remember that PowerShell PowerCLI script you can use to gather a whole bunch of information (HBAs, vmkernel IPs, etc.) across your entire vCenter environment?

My brain is failing me right now.

Maybe you are thinking about vcheck -> http://www.virtu-al.net/vcheck-pluginsheaders/vcheck/

SlowBloke
Aug 14, 2017

Happiness Commando posted:

We're running 6.5 U2 across two sites with external PSCs. Now that 6.7 U1 has been announced, it's time to start thinking of upgrading.

I'm pretty sure I want to do the equivalent of a greenfield deployment and stand up new 6.7 U1 VCSAs and then migrate my hosts from the old VCSAs to the new ones.

Granted it's a little early, but do any of you experienced folk think that's a poor decision? It seems to me like it would add just a bit extra redundancy/safety during the move and I don't see any downsides...

Two things that you need to keep in mind:
- 6.5 and 6.7 have different host support (VCSA 6.7 drops ESXi 5.5 support)
- unless your DB data is hosed or you have no idea what custom settings someone else may have modified, there is little to gain from a full wipe rather than upgrading the VCSA payload via the VAMI.

If you are afraid of data loss, just do a full snapshot-based backup of each site's VCSA before upgrading (you need to upgrade the PSC before the vCenter services node).
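
A quick PowerCLI way to grab those snapshots in one go (the VM name patterns are placeholders; with external PSCs, snapshot the PSC and vCenter nodes together):

code:
# memory-less snapshots of every VCSA/PSC node before touching the upgrade
Get-VM -Name "vcsa-*", "psc-*" | New-Snapshot -Name "pre-6.7-upgrade" -Memory:$false -Quiesce:$false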

SlowBloke fucked around with this message at 22:25 on Sep 6, 2018

SlowBloke
Aug 14, 2017

Happiness Commando posted:

Do you know if there's a good and easy way to transition from external back to embedded PSCs as part of the upgrade? They seem like an extra added infra annoyance that I don't want to keep around if I don't have to

The migration wizard or a VAMI update would keep them separate anyway. If you want to move from an external PSC to an embedded one, I'm afraid starting from scratch would be required; all the VMware docs I've found explain how to move from embedded to external, but not vice versa :(

SlowBloke
Aug 14, 2017

YOLOsubmarine posted:

VCenter 6.7u1 has a tool to migrate external to embedded called the VCenter Server Convergence Tool.

https://www.virtualizationhowto.com/2018/08/vmware-vsphere-platinum-and-vsphere-6-7-update-1-released-new-features/

Good to know; the VMware docs are stuck at 6.7 RTM, so there is no mention of that tool there.

SlowBloke
Aug 14, 2017

Goonerousity posted:

This is exactly what I needed, thank you. I’ll stick to windows 10 and full screen virtual machines when I need to write c

Also, is there a free version of VMware's virtualization software? I remember reading through Google there's like a VMware Workstation, but I got devastatingly lost on VMware's website.

https://www.vmware.com/products/workstation-player/workstation-player-evaluation.html

This is the free type-2 hypervisor (run it on a Windows or Linux computer).

https://www.vmware.com/products/vsphere-hypervisor.html

This is the free type-1 hypervisor (run it as the OS).

SlowBloke
Aug 14, 2017

Sad Panda posted:

Is there a particular idiots guide to ESXi that people recommend? Maybe something to tell me which of the 1000 options are useful to toggle to something other than the default?

For some background, I own a Mac and got a new computer with ESXI that has three Win 10 VMs on it. The immediate questions...

1. Is there a better way of editing the Datastore? I made my first VM, and to create the 2nd + 3rd I used what looks like Baby's First File Manager (datastore -> datastore browser), which doesn't even include options to rename. Am I supposed to just be using the CLI?

2. Secondly, being on a Mac it seems I'm meant to use VMware Remote Console. Is there a way to make it take in all the commands? I ask because at the moment if I press the keyboard shortcut to close a tab, it often closes VMware Remote Console, and that's going to be very frustrating.

3. How do I make the web client default to opening in VMWare Remote Console? If I double click it likes to open them in the browser window itself which I don't want.

1. You are not supposed to handle the VMs using the datastore browser but rather the host pane; the datastore browser is for removing data or, in your case, importing ISOs.

2. Keyboard macros on the remote console are Windows-centric, sadly. You just need to use the remote console to get into the VM and enable Remote Desktop or VNC; using the remote console for day-to-day interactive usage is far from optimal.

3. On the vSphere Web Client virtual machine Summary page, click the gear icon in the lower right corner of the console thumbnail -> select Change Default Console -> select VMware Remote Console and click OK.

SlowBloke
Aug 14, 2017

Sad Panda posted:

OK, I guess I should have cloned them in a different way. The friend who introduced me to ESXi said that the easiest way to do it is simply to load up datastore browser copy + paste the folder each time you want a clone. I did that, change the VM name in the settings and then ensured they had a different MAC.

What advantage does Remote Desktop/VNC have over the VMWare Remote Console? I'm so new to this stuff that it's marginally overwhelming. My only prior remoting experience is that I've got RealVNC installed on my Pi so I can access that from the network or via the Cloud.

Doing a "ghetto template" like you did was a sure fire way to get the same sid on Windows machines(very bad on old windows, no longer a thing). Windows RDP don't have the same issues VMware remote console has with keyboards for one, you should treat VMRC as a maintainance tool instead of a everyday interface. If you want to learn the ins and outs of vSphere I'd suggest you source a copy of Mastering VMware vSphere 6 by Nick Marshall with supervision from Scott Lowe(Mastering VMware vSphere 6.5 is by another group, I have no idea if it's good or not) which explains pretty much the basics of VMware vSphere and ESXi.

SlowBloke fucked around with this message at 21:46 on Sep 24, 2018

SlowBloke
Aug 14, 2017

snackcakes posted:

Can anyone with a good grasp on resource pools explain this to me? I found the question on a Reddit thread but everyone on the thread argued with each other so I had no idea whose explanation of the answer was correct. Resource pools seem so simple in theory but they murdered me when I took the VCP



Expandable memory reservation means that if you run out of reserved memory in the pool, it can reserve more than the amount you set up, taking it from the resource pool higher in the hierarchy (until resources are exhausted, or up to a threshold you can configure if you set up HA admission limits). You can power on VMs as big as 8 GB before running out of RAM in your exam sample.
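
To put made-up numbers on it: if the pool itself only has 4 GB of unreserved capacity left but its parent still has 6 GB free, an expandable pool can admit a VM with an 8 GB reservation by borrowing from the parent, while a non-expandable pool would refuse to power it on.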

SlowBloke fucked around with this message at 07:12 on Nov 6, 2018

SlowBloke
Aug 14, 2017

anthonypants posted:

So why isn't C true?

Hmm, all three scenarios are technically true. A gives you a little leeway in case there are other resources required within the resource pool tree (the powered-off VM in TestDev), but IMHO I would pick any of A, B, or C if I ran into this question on a test.

SlowBloke
Aug 14, 2017

Agrikk posted:

Is there a way I can wipe a disk from within ESXi?

I have shoved a previously used disk into an ESXi 6.5 box for more storage, but it turns out that I've previously used the disk for an ESXi installation so there are old partitions on it.

code:
partedUtil getptbl /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039 gpt
77825 255 63 1250263728

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
2 1843200 10229759 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
3 10229760 1250263694 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
Any attempt to partedUtil delete fails with

code:
Error: Read-only file system during write on /dev/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039
Unable to delete partition 3 from device /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039
Goddamn ESX and Linux for recognizing old partitions and making them goddamn bulletproof.


I tried

code:
dd if=/dev/null of=/vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039 bs=512 count=1
and received
code:
dd: can't open '/vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039': Function not implemented

I used diskpart on a Windows workstation to clean the partition table on ESXi disks without major issues.

Open an admin CMD shell:

- diskpart
- list disk
- select disk N (where N is the number of the ESXi disk)
- clean

It won't guarantee secure data removal/wiping, but the partition layout will be wiped clean.
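
If you'd rather stay in PowerShell than drive diskpart interactively, Clear-Disk does the same job; a sketch, with the disk number as a placeholder (check Get-Disk output first):

code:
# find the right disk number, then blow away its partition table
Get-Disk
Clear-Disk -Number 2 -RemoveData -Confirm:$false   # add -RemoveOEM if it complains about OEM partitions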

SlowBloke
Aug 14, 2017
I never considered FC to be interesting until I started working with it. Unlike iSCSI, it either works perfectly or everything is hosed. The native multipathing is a nice extra.

SlowBloke
Aug 14, 2017

Zorak of Michigan posted:

Are other VMware shops using DHCP, with or without reserved addresses, for the vMotion and NFS interfaces on their ESXi hosts? I have been pushing for this at work, on the theory that doing IPAM for those addresses is boring. People, including our TAM, are pushing back because Change is Bad. I can't believe that using DHCP is problematic in 2019, but that's the impression I get.

I think it all depends on where the DHCP server is located relative to the cluster. Is the DHCP host contained in the cluster? If the answer is no, I see minimal issues; if yes, no loving way.

SlowBloke fucked around with this message at 18:06 on Aug 15, 2019

SlowBloke
Aug 14, 2017

kiwid posted:

Anyone know why I can't add a VM to the same port group that a VMkernel port is on using ESXi 6.7 free?

I know in our vCenter 6.7 I can do this.


edit: err maybe I can't. I seem to be confused.

There is nothing stopping you from creating a new port group with the same VLAN ID and a different name from the problematic one and associating the VM with that.

SlowBloke
Aug 14, 2017

TooLShack posted:

So I'm about to buy some VMware licenses, do you guys just buy them directly from VMware, or is there a cheaper reseller option?

EU/US? Commercial/Academic/Government?

SlowBloke
Aug 14, 2017

BangersInMyKnickers posted:

In-place upgrades from 2008r2 to 2012r2 went very smoothly the few times I had to run them. Since you can snapshot, might be a viable low-effort route to buy time and stay inside the support matrix until you can get the hypervisor upgraded

NEVER do in-place upgrades on domain controllers; all the FSMO roles will get hosed up. Upgrade the 2012r2 VM host (if you cannot allow downtime there is this option: https://docs.microsoft.com/en-us/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade) and then make new DCs with a modern OS.
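
Once the new DCs are up, checking and moving the roles is quick; a sketch with the AD PowerShell module (the target DC name is made up):

code:
# see who currently holds the five FSMO roles
netdom query fsmo

# move them all to the new DC (name is hypothetical)
Move-ADDirectoryServerOperationMasterRole -Identity "NEWDC01" -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster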

SlowBloke
Aug 14, 2017

Empress Brosephine posted:

Are there any books or video series you folks recommend for learning virtualization? The OP has one, but I'm not sure if it's dated or not.

I've always considered Lowe's "Mastering vSphere" series of books to be a good starting point. The last one was written by another author, but it is still decent.

SlowBloke
Aug 14, 2017

NewFatMike posted:

I was gifted a GRID K2 at my new job, and I'm not sure what I want to do with it. Maybe set up a VDI for remote access? Anyone have any suggestions for fun stuff to do with it?

GRID K2s are the last Nvidia cards that don't require licensing for vGPU. Those cards are nice for homelabs with ESXi 6.5 (the last supported version for the K2), as in that case you just need to insert the card, install a VIB, and you get hardware 3D acceleration on that host.

SlowBloke fucked around with this message at 08:42 on Jan 17, 2020

SlowBloke
Aug 14, 2017

Martytoof posted:

Oops, that reminds me -- I've been running ESXi off redundant SD's on my 620 in my homelab for like .. two years now, and haven't sent logging off-machine yet. I feel like I'm just playing with fire at this point.

I mean granted it doesn't see a lot of use, but I expect that it still logs a non-insignificant amount of random garbage that I never really look at. I should just feed logs to the void.

Depending on the SD size it might keep a minimal part of the logs and discard the rest. Same with small USB sticks.
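
If you do want to ship the logs somewhere, a rough PowerCLI sketch (host name, syslog target, and datastore path are placeholders; the outgoing syslog firewall rule may also need enabling):

code:
# ship logs to a remote syslog box instead of the SD card
Get-VMHost "esxi01.example.local" | Get-AdvancedSetting -Name "Syslog.global.logHost" |
    Set-AdvancedSetting -Value "tcp://syslog.example.local:514" -Confirm:$false

# or keep them local but on a real datastore instead of the SD card
Get-VMHost "esxi01.example.local" | Get-AdvancedSetting -Name "Syslog.global.logDir" |
    Set-AdvancedSetting -Value "[datastore1] logs/esxi01" -Confirm:$false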

SlowBloke
Aug 14, 2017
We have started using high-endurance SD/microSD cards from SanDisk to cover embedded hypervisor cases; conventional SD cards would get fried every two to three years.

SlowBloke
Aug 14, 2017

Wicaeed posted:

I have somewhat stupidly volunteered myself for a VMware upgrade Project of our aged vCenter 6.0 installation.

The advisor recommendations are saying we should install the 6.5.0 GA version of vCenter, but I don't see any mention of vCenter 6.7.

We do have some older hosts that can only go to 6.0.0 U2 version of VMware, however these should be compatible with vCenter 6.7 according to the VMware docs.

Am I missing anything super obvious as to why 6.7 wouldn't be showing as a recommended upgrade for us?

I do have a VMW support ticket created as well, just figured SA may have a quicker turnaround than VMware support nowadays...

There is no major issue going from 6.0 to 6.7 (unless you went external PSC or you are running the vCenter install on Windows); there are issues going directly from 5.5 to 6.7 (you need an intermediate 6.5 step to keep host compatibility). The current VMware upgrade path can be found at https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#upgrade&solution=2 (insert vCenter in the text field).
Also, never do an upgrade with a GA build; always go at least Update 1. The current VCSA 6.7 build is Update 3g.

SlowBloke
Aug 14, 2017

movax posted:

Anyone know a USB Ethernet adapter that does work OOB on Hyper-V Server 2019? My DeskMini has a I219V which as far as I can tell, has been intentionally segmented from "official" Server 2019 support. I'm pretty sure I know how to get past this by just grabbing the driver INF and could probably do it in like 5 minutes if I had the regular Windows GUI...which of course I don't. Was thinking that with at least one working Ethernet link, I can use any of the remote management tools Windows has to at least fix that so I can get started installing VMs.

Or, could I toss pfSense ISO onto a USB stick, manually create / configure / pass-thru the I219 via command-line, get that up and running, connect Hyper-V to a virtual switch port, and then finally get the full remote admin tools?

e: I kind of XY'd this — I put Hyper-V Server 2019 (goddamned near impossible to find) on my mini-PC and its Intel NIC is not "officially" supported. How can I get the drivers properly installed?

There are two major brands of USB NICs, Realtek and ASIX. Both of those have their drivers in Windows Update and have INFs readily available. Also, there is no driver gating AFAIK on Intel NICs (some edge cases require using the 21xLM rather than the 21xV driver); Server 2019 should need a 25.x driver build to run.

Also, if you need to install a driver you don't need any GUI, just a USB stick with the INF files and this one-liner: http://jaredheinrichs.com/how-to-install-network-driver-in-hyper-v-core-or-microsoft-hyper-v-server.html
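
From memory that boils down to pnputil; a sketch assuming the INFs were copied to a stick that shows up as D: (the path is made up):

code:
# stage and install every driver INF from the stick (older builds use: pnputil -i -a D:\nic\*.inf)
pnputil /add-driver D:\nic\*.inf /install

# then confirm the adapter showed up
Get-NetAdapter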

SlowBloke fucked around with this message at 21:46 on Aug 27, 2020

SlowBloke
Aug 14, 2017

movax posted:

I really like the ASIX chipsets, they're in all of my USB Ethernet dongles.

The stock PROSet installer refused to run on the machine ("no compatible adapters found"), and I don't know why that single line INF didn't show up in any of my searching — I will give that a try, thanks.

A quick search on Stack Overflow shows some users with your issue doing this:

1. Extract the contents of the driver package (latest is 25.2) on a GUI-equipped machine
2. Copy the contents of the extracted folder to a USB stick and connect it to the server
3. Point the driver installation wizard (or in your case the CLI) to PRO1000\Winx64\NDIS68
4. Force installation of the 219LM driver

Also, a Device Manager equivalent (devcon.exe) can be installed using this guide: https://social.technet.microsoft.com/wiki/contents/articles/182.how-to-obtain-the-current-version-of-device-console-utility-devcon-exe.aspx

SlowBloke fucked around with this message at 22:40 on Aug 27, 2020

SlowBloke
Aug 14, 2017

movax posted:

So it looks like there might be some kind of driver signing issue... I219-V driver is in 'e1d68x64.inf' and running the pnputil command gives 'Failed to install the driver : No more data is available.'

Working out how to check logs (man, cmdline only Windows is weird) to see what the actual root cause is.

Force Windows to use the 219LM drivers if it still cries about driver signing; use coreconfig if pnputil still throws a shitfit about it.

SlowBloke
Aug 14, 2017

Maneki Neko posted:

Just to be "that guy" have you looked at WSL2 as a comparison point?

Given that he added the enhanced console, he would require a GUI; with WSL it's not on by default, so maybe that's the reason for Hyper-V (but there are plenty of workarounds, like this one for instance: https://www.nextofwindows.com/how-to-enable-wsl2-ubuntu-gui-and-use-rdp-to-remote)

SlowBloke
Aug 14, 2017

lol internet. posted:

Just learning about distributed switches in VMware. Just wondering what everyone is doing as best practice/standard practice. (I work with a Hyper-V environment.)

Are you using distributed switch config for all 3 management network, vmotion, and vm port group networks? If not, how do you configure distributed switches in your environment?

I assume team Management, iSCSI, and VM port group. Does anyone bother teaming vmotion?

When moving a VM port group from standard switch to distributed switch, I assume there is a brief interruption on the VM?

Generally would you have vmotion, management, and port groups on separate switches? Assuming you had something like 6 NICs on the hosts.

IMHO with vDS it's beneficial to have one single switch running the show; lots of switches add more complexity with minimal advantages.

You can do a hot standard-to-distributed migration with only a couple of heartbeat/ping losses.
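
For the VM-facing side of that migration, a rough PowerCLI sketch (port group names are placeholders):

code:
# re-point every NIC on the old standard port group at the new distributed one
$dvpg = Get-VDPortgroup -Name "Prod-dvPG"
Get-VM | Get-NetworkAdapter | Where-Object { $_.NetworkName -eq "Prod-PG" } |
    Set-NetworkAdapter -Portgroup $dvpg -Confirm:$false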

SlowBloke
Aug 14, 2017

Saukkis posted:

How much benefit would UEFI and Secure Boot provide on ESXi 6.7 with Linux virtual machines? I doubt we would get proper Secure Boot working with these machines, and they won't have large boot disks, so that's the two biggest benefits out. And I haven't found other benefits that might be worth any extra hassle with PXE or CD-image boots.

https://wiki.ubuntu.com/UEFI/SecureBoot

Ubuntu 20.04 has a mostly working stack, so using Secure Boot means that nobody has tampered with the core OS payload. If you use older builds there is little to no point, as the feature wasn't complete.

SlowBloke
Aug 14, 2017

Saukkis posted:

What is the most convenient way to automatically power on a VM after it has powered off in VMware? We just raised the EVC mode on our cluster and now have to power off hundreds of VMs. Currently I'm running a Powershell script that checks every hour which servers have a maintenance window and then waits for them to power off and starts them right after. At least the script also fixes guest OS and other stuff, so it's not completely useless work.

For a long time I've wished the scheduled tasks had a trigger "after power off". I have never used scheduled tasks and that's probably the only feature that would get me to use them.

Why not use tags? You could mark which VMs have a certain service window, which VMs have been shut down already, etc.

SlowBloke
Aug 14, 2017

Saukkis posted:

I'm not sure if that would help with the scheduling. In the simplest way I want vCenter to autostart VMs right after they have powered off, without needing to run a custom script on a separate management server. Tags might be useful to determine whether VM should be autostarted or not. Scheduled tasks sound like the right tool, but it's lacking convenient trigger. Run once at $time is close, but we can't know the precise time in advance. A VM starts running 'yum update' at 12:01, but we don't know if it finishes at 12:03 or 12:23.

I would think this should be a common desire, everyone needs the option to power on a VM soon after it powers off. EVC upgrades, Meltdown/Spectre, E1000->VMXNET3 conversions, guest OS fixes. After Spectre we scripted the power off for our VMs, but we didn't yet have a script for powering on, so every hour someone was staring at vCenter, waiting for a new VM to show up on the "powered off" list. And be careful not to start any of the VMs that were supposed to be powered off.

What I mean is to run a PowerCLI script on an always-on host every hour, with a mapping of service-window tags to execution times along with a done/todo tag: Get-VM * -> filter on the todo tag -> check which VMs have a service-window tag coherent with the script execution time -> gracefully shut down the VMs that are in the service window -> wait 2 minutes -> power up every shut-down VM that has the service-window tag -> update the done tag on the VMs that have been processed (rough sketch below).

Scheduled tasks haven't got enough intelligence to do what you are asking.
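
Something along these lines (tag names, the window naming scheme, and the vCenter address are all made up; error handling omitted):

code:
Connect-VIServer -Server vcenter.example.local

# e.g. the tag "win-0300" marks VMs whose service window is the 03:00 run
$window = "win-{0:HH}00" -f (Get-Date)

# VMs still tagged "todo" whose service-window tag matches this hour
$targets = Get-VM | Where-Object {
    $tags = ($_ | Get-TagAssignment).Tag.Name
    $tags -contains "todo" -and $tags -contains $window
}

foreach ($vm in $targets) {
    $vm | Shutdown-VMGuest -Confirm:$false
    while ((Get-VM -Id $vm.Id).PowerState -ne "PoweredOff") { Start-Sleep -Seconds 30 }
    Start-Sleep -Seconds 120                       # the two minute pause
    Get-VM -Id $vm.Id | Start-VM | Out-Null

    # flip todo -> done so the next run skips it
    Get-TagAssignment -Entity $vm | Where-Object { $_.Tag.Name -eq "todo" } |
        Remove-TagAssignment -Confirm:$false
    New-TagAssignment -Entity $vm -Tag (Get-Tag -Name "done") | Out-Null
}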

SlowBloke
Aug 14, 2017

ihafarm posted:

Why are you shutting down vms after updates, or is this also related to moving them into the evc cluster?

EVC changes require a full shutdown to be committed; simple restarts won't have an effect.

SlowBloke
Aug 14, 2017

Saukkis posted:

We decommissioned our oldest hosts and we could then upgrade the EVC mode from Sandy Bridge to Haswell. But that is only one of many operations that require shutting them down. This round we also have a large number of RHEL8 servers that are listed as RHEL7 in VMware and we can now fix those after upgrading to 6.7.


Oh yeah, that's basically what we do. Every time the hour changes the script checks a website listing the servers that use that window, checks which of them are VMs, waits for them to power off, does any planned operations and then starts them up within seconds. But it feels cumbersome compared to KVM's autostart setting.

The official way is using vRealize Automation; it's just that I'm not skilled enough with it to provide insight on how to execute it.
