|
Anyone know a USB Ethernet adapter that works OOB on Hyper-V Server 2019? My DeskMini has an I219-V which, as far as I can tell, has been intentionally segmented out of "official" Server 2019 support. I'm pretty sure I know how to get past this by just grabbing the driver INF and could probably do it in like 5 minutes if I had the regular Windows GUI... which of course I don't. I was thinking that with at least one working Ethernet link, I could use any of the remote management tools Windows has to at least fix that so I can get started installing VMs. Or, could I toss a pfSense ISO onto a USB stick, manually create / configure / pass-thru the I219 via the command line, get that up and running, connect Hyper-V to a virtual switch port, and then finally get the full remote admin tools? e: I kind of XY'd this — I put Hyper-V Server 2019 (goddamned near impossible to find) on my mini-PC and its Intel NIC is not "officially" supported. How can I get the drivers properly installed?
|
# ? Aug 27, 2020 17:44 |
|
|
|
movax posted:Anyone know a USB Ethernet adapter that works OOB on Hyper-V Server 2019? My DeskMini has an I219-V which, as far as I can tell, has been intentionally segmented out of "official" Server 2019 support. I'm pretty sure I know how to get past this by just grabbing the driver INF and could probably do it in like 5 minutes if I had the regular Windows GUI... which of course I don't. I was thinking that with at least one working Ethernet link, I could use any of the remote management tools Windows has to at least fix that so I can get started installing VMs. There are two major brands of USB NICs, Realtek and ASIX. Both have their drivers in Windows Update and have INFs readily available. Also, there is no driver gating AFAIK on Intel NICs (some edge cases require using the 21x LM rather than the 21x V driver); Server 2019 should just need a 25.x driver build. Also, if you need to install a driver you don't need any GUI, just a USB stick with the INF files and this one-liner: http://jaredheinrichs.com/how-to-install-network-driver-in-hyper-v-core-or-microsoft-hyper-v-server.html SlowBloke fucked around with this message at 21:46 on Aug 27, 2020 |
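For reference, the gist of that one-liner (paths are examples here; adjust to wherever you copied the extracted INF/SYS/CAT files):

```bat
:: Run from the Hyper-V Server console. E:\nicdriver is a placeholder for
:: the USB stick folder holding the extracted Intel driver files.
pnputil /add-driver E:\nicdriver\*.inf /install
:: On older builds the legacy syntax is:
:: pnputil -i -a E:\nicdriver\*.inf
```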
# ? Aug 27, 2020 21:39 |
|
SlowBloke posted:There are two major brands of USB NICs, Realtek and ASIX. Both have their drivers in Windows Update and have INFs readily available. Also, there is no driver gating AFAIK on Intel NICs (some edge cases require using the 21x LM rather than the 21x V driver); Server 2019 should just need a 25.x driver build. I really like the ASIX chipsets, they're in all of my USB Ethernet dongles. The stock PROSet installer refused to run on the machine ("no compatible adapters found"), and I don't know why that pnputil one-liner didn't show up in any of my searching — I will give that a try, thanks.
|
# ? Aug 27, 2020 22:11 |
|
movax posted:I really like the ASIX chipsets, they're in all of my USB Ethernet dongles. A quick search on Stack Overflow shows some users with your issue doing this:
1. Extract the contents of the driver package (latest is 25.2) on a GUI-equipped machine
2. Copy the contents of the extracted folder to a USB stick and connect it to the server
3. Point the driver installation wizard (or, in your case, the CLI) to PRO1000\Winx64\NDIS68
4. Force installation of the 219 LM driver
Also, a device manager equivalent can be installed using this guide https://social.technet.microsoft.com/wiki/contents/articles/182.how-to-obtain-the-current-version-of-device-console-utility-devcon-exe.aspx SlowBloke fucked around with this message at 22:40 on Aug 27, 2020 |
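If the wizard/pnputil route stalls, devcon can force the match. A rough sketch (the path and the PCI ID below are examples; get the real hardware ID from devcon itself before forcing anything):

```bat
:: List hardware IDs for network-class devices, then force the extracted
:: driver onto the matching ID. DEV_15B8 is the usual I219-V ID, but verify.
devcon hwids =net
devcon update D:\PRO1000\Winx64\NDIS68\e1d68x64.inf "PCI\VEN_8086&DEV_15B8"
devcon rescan
```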
# ? Aug 27, 2020 22:29 |
|
KVM is so much nicer than Virtualbox. Spent ages wondering why I couldn't get a Windows client VM to do more than a tiny resolution to eventually discover the host Maximum Guest Screen Size setting of automatic was the problem (whereas the Linux VMs were working fine). I'll be glad when I finally upgrade my linux box to the point of it being good enough to take over as my VM playground.
Pablo Bluth fucked around with this message at 13:50 on Sep 18, 2020 |
# ? Sep 18, 2020 13:27 |
|
SlowBloke posted:There are two major brands of USB NICs, Realtek and ASIX. Both have their drivers in Windows Update and have INFs readily available. Also, there is no driver gating AFAIK on Intel NICs (some edge cases require using the 21x LM rather than the 21x V driver); Server 2019 should just need a 25.x driver build. So it looks like there might be some kind of driver signing issue... the I219-V driver is in 'e1d68x64.inf' and running the pnputil command gives 'Failed to install the driver : No more data is available.' Working out how to check the logs (man, cmdline-only Windows is weird) to see what the actual root cause is.
|
# ? Sep 18, 2020 21:47 |
|
movax posted:So it looks like there might be some kind of driver signing issue... the I219-V driver is in 'e1d68x64.inf' and running the pnputil command gives 'Failed to install the driver : No more data is available.' Force Windows to use the 219 LM drivers; if it still cries about driver signing and pnputil still throws a shitfit about it, use CoreConfig.
|
# ? Sep 21, 2020 18:44 |
|
I have a handful of public facing services in docker on a default bridge and behind a Traefik reverse proxy. Two of them need to be able to connect to a service on the host network. I'd put this service in docker if I could, but I'm not able to right now. What are my options?
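One hedged option, assuming Docker 20.10+ (the image and container names below are made up): map the host gateway to a stable hostname so bridge containers can reach the host service.

```shell
# host-gateway resolves to the host's bridge IP from inside the container
docker run -d --name web --add-host=host.docker.internal:host-gateway myimage
# The container can then reach the host service at host.docker.internal:<port>,
# provided that service listens on 0.0.0.0 (or at least the docker0 address).
```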
Dr. Poz fucked around with this message at 04:32 on Oct 24, 2020 |
# ? Oct 24, 2020 02:40 |
|
After many years in Linux land I finally have a Windows 10 desktop again. Work restrictions are going to make dual-booting a pain, so I've decided to give a Linux VM a chance. I've been trying to set up a local Fedora 33 VM through Hyper-V but the performance from a network/graphics perspective seems to be subpar. Even basic things like scrolling through a text editor have significant lag, and a wget download will max out at like half my regular connection speed. The quick-setup option in Hyper-V has an Ubuntu 20.04 image that seems to run much better (network performance is fixed and the graphics performance is a bit better), but I feel like I'm missing something. I'm assigning 4 cores and 8 GB of memory to the VM. Has anyone had luck setting up a local Linux guest through Hyper-V recently? Are there additional packages that I need to install for hardware acceleration or something?
|
# ? Nov 30, 2020 06:22 |
|
Nohearum posted:After many years in Linux land I finally have a Windows 10 desktop again. Work restrictions are going to make dual-booting a pain, so I've decided to give a Linux VM a chance. I've been trying to set up a local Fedora 33 VM through Hyper-V but the performance from a network/graphics perspective seems to be subpar. Even basic things like scrolling through a text editor have significant lag, and a wget download will max out at like half my regular connection speed. The quick-setup option in Hyper-V has an Ubuntu 20.04 image that seems to run much better (network performance is fixed and the graphics performance is a bit better), but I feel like I'm missing something. I'm assigning 4 cores and 8 GB of memory to the VM. Try 1 or 2 cores and see how it runs.
|
# ? Nov 30, 2020 14:09 |
|
Bob Morales posted:Try 1 or 2 cores and see how it runs Still lots of UI lag. I downloaded the VMware trial and that seems to run much better. Hyper-V sounded like a KVM/QEMU equivalent, but I guess it's not quite there for Linux guests yet.
|
# ? Dec 1, 2020 03:52 |
|
Nohearum posted:After many years in Linux land I finally have a Windows 10 desktop again. Work restrictions are going to make dual-booting a pain, so I've decided to give a Linux VM a chance. I've been trying to set up a local Fedora 33 VM through Hyper-V but the performance from a network/graphics perspective seems to be subpar. Even basic things like scrolling through a text editor have significant lag, and a wget download will max out at like half my regular connection speed. The quick-setup option in Hyper-V has an Ubuntu 20.04 image that seems to run much better (network performance is fixed and the graphics performance is a bit better), but I feel like I'm missing something. I'm assigning 4 cores and 8 GB of memory to the VM. I don't think Red Hat workstation versions are officially supported under Hyper-V, though I don't know what difference that would make.
|
# ? Dec 4, 2020 22:57 |
|
TheFace posted:I don't think Red Hat workstation versions are officially supported under Hyper-V, though I don't know what difference that would make. I mean, RH has official guides on how to install on Hyper-V: https://developers.redhat.com/rhel8/install-rhel8-hyperv
|
# ? Dec 5, 2020 00:42 |
|
Some follow-up on my previous post. I had a bit more luck with Ubuntu in Hyper-V (I'm 90% satisfied with the performance), but it involved the following:
If you are successful you will be greeted with an xrdp login and a slider to specify the screen resolution for the VM
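For the curious, a typical way to get that xrdp enhanced session on Ubuntu 20.04 (a sketch using Microsoft's linux-vm-tools repo — not necessarily the exact steps above, and the script path may differ by release):

```shell
# Inside the Ubuntu guest
sudo apt update && sudo apt install -y git
git clone https://github.com/microsoft/linux-vm-tools
sudo bash linux-vm-tools/ubuntu/20.04/install.sh   # installs and configures xrdp
# Then on the Windows host, in an elevated PowerShell ("Ubuntu 20.04" is a
# placeholder VM name):
#   Set-VM -VMName "Ubuntu 20.04" -EnhancedSessionTransportType HvSocket
```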
|
# ? Dec 5, 2020 03:18 |
|
Nohearum posted:Some follow-up on my previous post. I had a bit more luck with Ubuntu in Hyper-V (I'm 90% satisfied with the performance), but it involved the following: Just to be "that guy" have you looked at WSL2 as a comparison point?
|
# ? Dec 5, 2020 20:19 |
|
Maneki Neko posted:Just to be "that guy" have you looked at WSL2 as a comparison point? Given that he added the enhanced session, he presumably needs a GUI; with WSL it's not on by default, so maybe that's the reason for Hyper-V (but there are plenty of workarounds, like this one for instance: https://www.nextofwindows.com/how-to-enable-wsl2-ubuntu-gui-and-use-rdp-to-remote)
|
# ? Dec 6, 2020 17:42 |
|
Maneki Neko posted:Just to be "that guy" have you looked at WSL2 as a comparison point? SlowBloke posted:Given that he added the enhanced session, he presumably needs a GUI; with WSL it's not on by default, so maybe that's the reason for Hyper-V (but there are plenty of workarounds, like this one for instance: https://www.nextofwindows.com/how-to-enable-wsl2-ubuntu-gui-and-use-rdp-to-remote) I use WSL2 a decent amount for terminal-based stuff, but the graphical performance wasn't great when I tried it earlier this year. I'm planning on revisiting WSL2 once Microsoft releases their built-in GUI support (supposedly planned for release by holiday 2020 per this article: https://devblogs.microsoft.com/commandline/the-windows-subsystem-for-linux-build-2020-summary/#wsl-gui). It looks pretty promising with Wayland support included. Nohearum fucked around with this message at 03:58 on Dec 7, 2020 |
# ? Dec 7, 2020 03:55 |
|
Just learning about distributed switches in VMware. Just wondering what everyone is doing as best practice/standard practice. (I work with a Hyper-V environment.) Are you using a distributed switch config for all three networks: management, vMotion, and VM port groups? If not, how do you configure distributed switches in your environment? I assume you team management, iSCSI, and VM port group NICs. Does anyone bother teaming vMotion? When moving a VM port group from a standard switch to a distributed switch, I assume there is a brief interruption on the VM? Generally, would you have vMotion, management, and port groups on separate switches? Assuming you had something like 6 NICs on the hosts. lol internet. fucked around with this message at 18:25 on Dec 13, 2020 |
# ? Dec 13, 2020 18:22 |
|
lol internet. posted:Just learning about distributed switches in VMware. Just wondering what everyone is doing as best practice/standard practice. (I work with a Hyper-V environment.) IMHO with vDS it's beneficial to have one single switch running the show; lots of switches add more complexity with minimal advantages. You can do a hot standard-to-distributed migration with a couple of heartbeat/ping losses.
|
# ? Dec 13, 2020 20:54 |
|
lol internet. posted:Just learning about distributed switches in VMware. Just wondering what everyone is doing as best practice/standard practice. (I work with a Hyper-V environment.) Don't team iSCSI. With iSCSI, on a server where you want to use two NICs for iSCSI, you'd have two port groups and two VMK ports, one NIC assigned to each ONLY, and then use MPIO with the software iSCSI HBA. By best practice you should team all other traffic (for redundancy and resiliency's sake). How you distribute the NICs is your choice, but for ease of management I tend to do one dVS and distribute NICs by port group. In most of my environments this means the following:
iSCSI-A: 1 NIC assigned, 1 vmk port, on the iSCSI VLAN
iSCSI-B: 1 NIC assigned, 1 vmk port, on the iSCSI VLAN
Assign both to the software HBA and configure the MPIO policy
Management, vMotion, and VM traffic tend to share two NICs
Unless VM traffic is intensive, in which case management and vMotion share two NICs and VM traffic shares two other NICs
If vSAN is involved it should always have its own 2 NICs
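On the CLI side, the port-binding part of that looks roughly like this (adapter, vmk, and device names are placeholders; check yours first):

```shell
esxcli iscsi adapter list                           # find the software HBA, e.g. vmhba64
esxcli iscsi networkportal add -A vmhba64 -n vmk1   # bind the iSCSI-A vmk port
esxcli iscsi networkportal add -A vmhba64 -n vmk2   # bind the iSCSI-B vmk port
# Round-robin path policy per device (naa.xxxx is a placeholder device ID):
esxcli storage nmp device set -d naa.xxxx --psp VMW_PSP_RR
```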
|
# ? Dec 18, 2020 19:28 |
|
Did VMware change something with ESXi 7's ISO? I can't seem to make a bootable image to save my life. I used to just dd to a USB drive but that doesn't seem to work now, and balenaEtcher says something about the ISO not being bootable. Just trying to do a temp ESXi install on a spare Optiplex 9020 to test a theory, but I'm kind of at a loss right now. e: Oh, never mind, it's an EFI boot image. some kinda jackal fucked around with this message at 18:35 on Jan 6, 2021 |
# ? Jan 6, 2021 18:30 |
|
How much benefit would UEFI and Secure Boot provide on ESXi 6.7 with Linux virtual machines? I doubt we would get proper Secure Boot working with these machines, and they won't have large boot disks, so that's the two biggest benefits out. And I haven't found other benefits that might be worth any extra hassle with PXE or CD-image boots.
|
# ? Jan 27, 2021 14:42 |
|
Saukkis posted:How much benefit would UEFI and Secure Boot provide on ESXi 6.7 with Linux virtual machines? I doubt we would get proper Secure Boot working with these machines, and they won't have large boot disks, so that's the two biggest benefits out. And I haven't found other benefits that might be worth any extra hassle with PXE or CD-image boots. https://wiki.ubuntu.com/UEFI/SecureBoot Ubuntu 20.04 has a mostly working stack, so using Secure Boot means that nobody has tinkered with the core OS payload. If you use older builds there is little to no point, as the feature wasn't complete.
|
# ? Jan 27, 2021 18:59 |
|
CommieGIR posted:I mean, RH has official guides on how to install on HyperV Wow I just got the .iso for 8, what a lucky coincidence you posted this. Thanks!
|
# ? Jan 29, 2021 03:56 |
|
Saukkis posted:How much benefit would UEFI and Secure Boot provide on ESXi 6.7 with Linux virtual machines? I doubt we would get proper Secure Boot working with these machines, and they won't have large boot disks, so that's the two biggest benefits out. And I haven't found other benefits that might be worth any extra hassle with PXE or CD-image boots. Secure Boot on a Linux guest will enforce kernel driver signing, so it's mostly a rootkit and unsigned-code mitigation.
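A quick way to sanity-check that from inside the guest (a sketch; the exact error strings vary by distro and kernel):

```shell
mokutil --sb-state            # reports whether Secure Boot is enabled
# With Secure Boot on, loading an unsigned module should fail with a
# signature/lockdown error rather than succeed:
sudo insmod ./some_unsigned_module.ko
```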
|
# ? Jan 29, 2021 22:33 |
|
haaaay, this is my first time doing VirtualBox headless and I'm deploying my first guest VM! I followed this tutorial and everything worked, as in: no errors, and listing the running VMs shows the VM running. However, I didn't get the "VRDE server is listening on..." message, and trying to connect to ip:port using mstsc from another computer on the LAN gets no answer. What did I miss? What's the best website for troubleshooting VirtualBox?
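One common gotcha worth checking: the VRDE server comes from the Oracle Extension Pack, and without it VirtualBox just silently skips the "VRDE server is listening" part. A sketch ("myvm" is a placeholder for your VM name):

```shell
VBoxManage list extpacks                    # should list the Oracle Extension Pack
VBoxManage showvminfo myvm | grep -i vrde   # is VRDE actually enabled/active?
VBoxManage modifyvm myvm --vrde on --vrdeport 3389   # VM must be powered off
VBoxManage startvm myvm --type headless
```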
|
# ? Feb 3, 2021 15:30 |
|
What is the most convenient way to automatically power on a VM after it has powered off in VMware? We just raised the EVC mode on our cluster and now have to power off hundreds of VMs. Currently I'm running a PowerShell script that checks every hour which servers have a maintenance window, then waits for them to power off and starts them right after. At least the script also fixes the guest OS and other stuff, so it's not completely useless work. For a long time I've wished the scheduled tasks had an "after power off" trigger. I have never used scheduled tasks, and that's probably the only feature that would get me to use them.
|
# ? Feb 3, 2021 15:54 |
|
NevergirlsOFFICIAL posted:haaaay I'm on my first time doing virtualbox headless and I'm deploying my first guest VM! Is there any reason you aren't using Vagrant for this?
|
# ? Feb 3, 2021 15:56 |
|
Matt Zerella posted:Is there any reason you aren't using Vagrant for this? Yeah, the main reason is I didn't know about it until just now.
|
# ? Feb 3, 2021 16:04 |
|
so this is just like docker but for vms? cool
|
# ? Feb 3, 2021 16:06 |
|
NevergirlsOFFICIAL posted:so this is just like docker but for vms? Yeah, it's a nice, fast way to spin up VMs.
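The basic loop, for reference (the box name is just an example from the public catalog):

```shell
vagrant init ubuntu/focal64   # writes a Vagrantfile into the current directory
vagrant up                    # downloads the box and boots it headless
vagrant ssh                   # shell into the guest
vagrant destroy -f            # tear it all down when finished
```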
|
# ? Feb 3, 2021 16:13 |
|
Saukkis posted:What is the most convenient way to automatically power on a VM after it has powered off in VMware? We just raised the EVC mode on our cluster and now have to power off hundreds of VMs. Currently I'm running a PowerShell script that checks every hour which servers have a maintenance window, then waits for them to power off and starts them right after. At least the script also fixes the guest OS and other stuff, so it's not completely useless work. Why not use tags? You could mark which VM has a certain service window, which VM has been shut down already, etc...
|
# ? Feb 3, 2021 22:04 |
|
Matt Zerella posted:Yeah its a nice fast way to spin up VMs. Turns out Vagrant gave me different errors :-/ I gave up and just went to the data center team to make it someone else's problem
|
# ? Feb 4, 2021 13:43 |
|
SlowBloke posted:Why not use tags? You could mark which VM has a certain service window, which VM has been shut down already, etc... I'm not sure if that would help with the scheduling. In the simplest terms, I want vCenter to autostart VMs right after they have powered off, without needing to run a custom script on a separate management server. Tags might be useful to determine whether a VM should be autostarted or not. Scheduled tasks sound like the right tool, but they're lacking a convenient trigger. "Run once at $time" is close, but we can't know the precise time in advance. A VM starts running 'yum update' at 12:01, but we don't know if it finishes at 12:03 or 12:23. I would think this would be a common desire; everyone needs the option to power on a VM soon after it powers off. EVC upgrades, Meltdown/Spectre, E1000->VMXNET3 conversions, guest OS fixes. After Spectre we scripted the power-off for our VMs, but we didn't yet have a script for powering on, so every hour someone was staring at vCenter, waiting for a new VM to show up on the "powered off" list. And you had to be careful not to start any of the VMs that were supposed to stay powered off.
|
# ? Feb 4, 2021 15:48 |
|
Saukkis posted:I'm not sure if that would help with the scheduling. In the simplest terms, I want vCenter to autostart VMs right after they have powered off, without needing to run a custom script on a separate management server. Tags might be useful to determine whether a VM should be autostarted or not. Scheduled tasks sound like the right tool, but they're lacking a convenient trigger. "Run once at $time" is close, but we can't know the precise time in advance. A VM starts running 'yum update' at 12:01, but we don't know if it finishes at 12:03 or 12:23. Why are you shutting down VMs after updates, or is this also related to moving them into the EVC cluster?
|
# ? Feb 4, 2021 17:29 |
|
Saukkis posted:I'm not sure if that would help with the scheduling. In the simplest terms, I want vCenter to autostart VMs right after they have powered off, without needing to run a custom script on a separate management server. Tags might be useful to determine whether a VM should be autostarted or not. Scheduled tasks sound like the right tool, but they're lacking a convenient trigger. "Run once at $time" is close, but we can't know the precise time in advance. A VM starts running 'yum update' at 12:01, but we don't know if it finishes at 12:03 or 12:23. What I mean is to run a PowerCLI script on an always-on host every hour, with a mapping of service-window tag to execution time along with a done/todo tag. Get-VM * -> filter on the todo tag -> check which VMs have a service-window tag that matches the script execution time -> gracefully shut down the VMs that are in the service window -> wait 2 minutes -> power up every shut-down VM that has the service-window tag -> update the done tag on the VMs that were processed. Scheduled tasks haven't got enough intelligence to do what you are asking.
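In PowerCLI that loop might look something like this (a sketch only; the tag and category names are made up, and the tag-flipping at the end is left out):

```powershell
Connect-VIServer vcenter.example.com
# VMs still tagged "todo" that are in this hour's service window
$todo = Get-VM -Tag "Window-0100" | Where-Object {
    (Get-TagAssignment -Entity $_ -Category "MaintState").Tag.Name -eq "todo"
}
foreach ($vm in $todo) {
    Shutdown-VMGuest -VM $vm -Confirm:$false            # graceful guest shutdown
    while ((Get-VM -Name $vm.Name).PowerState -ne "PoweredOff") {
        Start-Sleep -Seconds 10                          # wait for full power-off
    }
    Start-VM -VM $vm                                     # cold boot commits the EVC change
}
```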
|
# ? Feb 4, 2021 17:29 |
|
ihafarm posted:Why are you shutting down VMs after updates, or is this also related to moving them into the EVC cluster? EVC changes require a full shutdown to be committed; simple restarts won't have an effect.
|
# ? Feb 4, 2021 17:30 |
|
ihafarm posted:Why are you shutting down VMs after updates, or is this also related to moving them into the EVC cluster? We decommissioned our oldest hosts, and we could then upgrade the EVC mode from Sandy Bridge to Haswell. But that is only one of many operations that require shutting them down. This round we also have a large number of RHEL 8 servers that are listed as RHEL 7 in VMware, and we can now fix those after upgrading to 6.7. SlowBloke posted:What I mean is to run a PowerCLI script on an always-on host every hour, with a mapping of service-window tag to execution time along with a done/todo tag. Get-VM * -> filter on the todo tag -> check which VMs have a service-window tag that matches the script execution time -> gracefully shut down the VMs that are in the service window -> wait 2 minutes -> power up every shut-down VM that has the service-window tag -> update the done tag on the VMs that were processed. Oh yeah, that's basically what we do. Every time the hour changes, the script checks a website listing the servers that use that window, checks which of them are VMs, waits for them to power off, does any planned operations, and then starts them up within seconds. But it feels cumbersome compared to KVM's autostart setting.
|
# ? Feb 4, 2021 18:04 |
|
Saukkis posted:We decommissioned our oldest hosts, and we could then upgrade the EVC mode from Sandy Bridge to Haswell. But that is only one of many operations that require shutting them down. This round we also have a large number of RHEL 8 servers that are listed as RHEL 7 in VMware, and we can now fix those after upgrading to 6.7. The official way is using vRealize Automation; it's just that I'm not skilled enough with it to provide insight on how to execute it.
|
# ? Feb 4, 2021 18:12 |
|
|
Is KVM VMM and Looking Glass the go-to right now for doing pass-through GPU for something like a Windows VM in a Linux environment to get (near) bare-metal performance? I'm switching over to Linux for my main OS, but there's still a couple of work apps that don't work (well) in Linux, and the Windows experience is not super great using its virtual video card.
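Before going down that road, it's worth a quick check that the IOMMU side is even viable on the host (a sketch; the group layout decides what can be passed through cleanly):

```shell
# Confirm the IOMMU is on (intel_iommu=on / amd_iommu=on on the kernel cmdline)
dmesg | grep -i -e DMAR -e IOMMU
# List IOMMU groups; ideally the GPU sits in a group without unrelated devices
find /sys/kernel/iommu_groups/ -type l
```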
|
|
# ? Feb 8, 2021 16:27 |