|
It only took a full year of observing nightly 1000+ms storage latencies for our VMware team to take my suggestion of turning SIOC on seriously. Great job, guys.
|
# ? Apr 5, 2019 14:07 |
|
Did VMware fix their poo poo where LACP connections default to slow PDU rates and the only way to fix it is via a startup script or scheduled esxcli job against the vdswitch after startup?
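For anyone searching later, the startup-script workaround being described is roughly the following, dropped into each host's local.sh so it reapplies after reboot. This is a sketch, not gospel: "MyVDS" and the LAG ID are placeholder names, and the exact flag names have shifted between ESXi releases, so verify against `esxcli network vswitch dvs vmware lacp timeout set --help` on your build.

```sh
# /etc/rc.local.d/local.sh on each ESXi host -- reapply fast LACP timers
# after boot, since the vDS setting reverts to slow PDUs.
# "MyVDS" and lag-id 1 are examples; substitute your own.
# timeout 1 = fast (1s PDUs), 0 = slow (30s PDUs) -- check --help on your build.
esxcli network vswitch dvs vmware lacp timeout set --vds-name MyVDS --lag-id 1 --timeout 1
```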
|
# ? Apr 5, 2019 16:20 |
|
BangersInMyKnickers posted:Did VMware fix their poo poo where LACP connections default to slow PDU rates and the only way to fix it is via a startup script or scheduled esxcli job against the vdswitch after startup? VMware really wants you to manage load balancing and redundancy exclusively through the vSwitch, so I doubt they'll be doing much to improve switch-based LAG functionality.
|
# ? Apr 6, 2019 02:00 |
|
I had to do some searching for a PC supporting PCI and the HP Zstation 280 was the only thing I could find that was presently available for sale from a big name.
|
# ? Apr 6, 2019 02:57 |
|
Wibla posted:Sigh, I need a compact esxi host with room for 4-6 3.5" drives and 16ish GB ram, but the microserver gen8 is out of production and the gen10 is apparently garbage? Find a SuperMicro system; you can get a 1U with dual Xeons or AMD Opterons for fairly cheap. I run a quad AMD Opteron system for my virtual lab, running XenServer and pfSense for virtual switch routing/VLANs.
|
# ? Apr 6, 2019 03:23 |
|
Wibla posted:Sigh, I need a compact esxi host with room for 4-6 3.5" drives and 16ish GB ram, but the microserver gen8 is out of production and the gen10 is apparently garbage? We’ve been looking for this and haven’t found it, if you do stumble across something good, post it.
|
# ? Apr 6, 2019 05:34 |
|
Maneki Neko posted:We’ve been looking for this and haven’t found it, if you do stumble across something good, post it. https://www.supermicro.com/products/system/midtower/5028/SYS-5028D-TN4T.cfm
|
# ? Apr 6, 2019 05:44 |
|
I have a two-host VMware ESXi 6.7 setup with shared storage, and I want to set up an EVC cluster using a VCSA that lives on these hosts. Is there a better way of moving the appliance into the cluster that I don't know about? I would think you move all VMs off host 1, move it into the cluster, then shut down the VCSA, remove it from inventory on host 2, and add it to inventory on host 1. I've done this in the past with no problem. Now when I boot the VCSA back up, something goes horribly wrong: I can no longer log into the vCenter web client. Management is fine. Shell is fine. I did some reading and it sounds like database corruption. This wouldn't be a big deal because I could just rebuild the appliance and start over, except the same thing happens again.
|
# ? Apr 6, 2019 13:52 |
|
CommieGIR posted:Find a SuperMicro system, you can get a 1U with a dual Xeon or AMD Opteron for fairly cheap. Yeah, I already have a colocated DL360 G7 with dual X5675 CPUs and 120GB RAM running most of the stuff that requires heavy lifting, and I'm keeping that system as-is. I'm looking for something smaller that can do double duty as VM host + firewall + storage appliance in my apartment, preferably while being quiet. That Xeon config is expensive as all hell, but the chassis is interesting; one of these would probably do the job: https://www.supermicro.com/products/system/midtower/5029/SYS-5029S-TN2.cfm Wibla fucked around with this message at 14:08 on Apr 6, 2019 |
# ? Apr 6, 2019 14:05 |
|
YOLOsubmarine posted:VMware really wants you to manage load balancing and redundancy exclusively through the vSwitch so I doubt they’ll be doing much to improve switch based LAG functionality. Yeah, but we're looking at a solution for an Imperva WAF that needs it. I think the upstream switches are Cisco; maybe EtherChannel will work better.
|
# ? Apr 6, 2019 20:47 |
|
Wibla posted:Yeah I already have a colocated DL360 G7 with dual X5675 CPUs and 120GB ram running most of the stuff that requires heavy lifting, and I'm keeping that system as-is. Yeah, the Gen8 Microservers were a screaming deal for what you got; we weren't paying much more than this bare Supermicro for fully populated Microservers with 3 years of HP on-site support and full iLO. The processor was weak, but did what we needed.
|
# ? Apr 7, 2019 04:35 |
|
snackcakes posted:I have a two VMWare ESXi 6.7 host setup with shared storage and I want to setup an EVC cluster using a VCSA that lives on these hosts. Is there a better way of moving the appliance into the cluster that I don't know about?
1. Build Host #1
2. Install vCenter Appliance
3. Build Host #2 - add to vCenter management.
4. Create Cluster
5. Move Host #2 into cluster.
6. Live migrate vCenter appliance onto Host #2
7. Add Host #1 to cluster.
Just built a farm 2 weeks ago starting fresh on 6.7 and that worked fine for me. Basically, just don't shut down the appliance. If your other VMs are offline there's no real risk of the host in the cluster and the one not yet migrated sharing a backend storage connection, since they aren't really accessing the same disk simultaneously. E: I find the new requirement that a host has to go into maintenance mode to join a cluster annoying, but I'm sure someone way smarter than me had a good reason for that change over at VMware. Digital_Jesus fucked around with this message at 04:42 on Apr 7, 2019 |
# ? Apr 7, 2019 04:38 |
|
Digital_Jesus posted:E: I find the new requirement that a host has to go into maintenance mode to join a cluster annoying but I'm sure someone way smarter than me had a good reason for that change over at VMware. Ive seen some odd stuff moving hosts into and out of clusters without maintenance mode. NSX in particular seems to trip up.
|
# ? Apr 7, 2019 06:10 |
|
Digital_Jesus posted:Build Host #1 You are my hero. I think where I went wrong is that I created the cluster and configured HA and EVC before moving hosts into it. I would get an error when I tried to migrate the appliance in. If I move hosts into the cluster before configuring HA and EVC I have no issues.
|
# ? Apr 7, 2019 16:14 |
|
I have an ESXi server with a variety of Windows 10 VMs on it, and a question about the different ways of connecting to the VMs and how they work. If it matters, I am using a Mac, and therefore the Mac version of each of these programs to access the VMs. Some background on my situation: I have a Python script that continuously looks for images in an area of the screen and reacts to them. I can start that script either on the VM itself or remotely, and it goes on its merry way in 2 of 3 situations.
1. If I have the VM open in the ESXi console, Microsoft Remote Desktop, or VNC Viewer - it works perfectly. It goes for hours on end looking and clicking where it is meant to.
2. If I have the VM open in VNC Viewer and then close the VNC Viewer window, it keeps going too.
3. If I have the VM open in Remote Desktop and then close the Remote Desktop window, it stops. The script throws an error as it is unable to take a screenshot.
I've not actually tested what happens if I have the VM open in the ESXi browser console window and close it. My question is: why does this happen? My guess is that there's a fundamental difference between the way that Remote Desktop and VNC Viewer react to closing the window, but given you're all much more knowledgeable than me in the field, I'd love some advice.
|
# ? Apr 8, 2019 09:58 |
|
Sad Panda posted:My question is why does this happen? My guess is that there's a fundamental difference between the way that Remote Desktop and VNC Viewer react to closing the window, but given you're all much more knowledgable than me in the field I'd love some advice. You're correct. I believe Remote Desktop essentially uses a virtual graphics card/monitor, and when you disconnect your session by closing that window those virtual devices are removed.
|
# ? Apr 8, 2019 17:18 |
|
When you created your Windows VM, ESXi attached a virtual monitor. This virtual monitor doesn't ever display anything, but by having it attached the Windows OS will do its part and compose images to be displayed to this fake monitor. Both the ESXi viewer and VNC Viewer use the framebuffer used for the fake monitor to compose their own images for you to see what's going on. This fake monitor is never detached from the OS, so even when you close VNC Viewer the images are still being composed so your script can still monitor for changes. RDP doesn't use any pre-existing framebuffers. As H2SO4 said, it creates its own virtual devices when you initiate the session, then removes them once you end the session. When the RDP monitor is removed, Windows stops creating video for that video device, so your script can't run anymore.
|
# ? Apr 8, 2019 17:53 |
|
Sad Panda posted:My question is why does this happen? My guess is that there's a fundamental difference between the way that Remote Desktop and VNC Viewer react to closing the window, but given you're all much more knowledgable than me in the field I'd love some advice. Actuarial Fables posted:When you created your Windows VM, ESXi attached a virtual monitor. This virtual monitor doesn't ever display anything, but by having it attached the Windows OS will do its part and compose images to be displayed to this fake monitor. Both the ESXi viewer and VNC Viewer use the framebuffer used for the fake monitor to compose their own images for you to see what's going on. This fake monitor is never detached from the OS, so even when you close VNC Viewer the images are still being composed so your script can still monitor for changes.
|
# ? Apr 10, 2019 10:57 |
|
Digital_Jesus posted:Theres no real advantage of having a gpu attached to your vdi instance unless youre gonna do something that requires a gpu. Depends on the display protocol. There's a definite advantage to using a GPU with Citrix ICA/HDX, Horizon Blast, or even PCoIP (anything that uses H.264 encoding on the VM side for the protocol), as the protocol encoding can be done on the GPU instead of the CPU, and the GPU is much more efficient at it. Also, for general use, Windows 10 leverages a GPU more than Windows 7 ever did, though the benefit there can probably be written off. How much the encoding matters depends on resolution, really, but I know my no-GPU Win10 Horizon VM eats a ton of CPU for just the Blast protocol at 4K with a relatively active screen. Same VM with even just a 1GB vGPU profile (1GB is required for server-side encoding) and CPU usage is next to nothing for Blast, and the GPU is also barely taxed. That said, what someone else said about using GRID for vGPU is true: if you actually want to slice up the GPU between multiple VMs you'll have to go GRID (AMD has a solution now too, and Intel does as well, but I'm pretty sure Intel's only works in XenServer and maybe KVM at this point). You can still pass an entire GPU through to a VM without a GRID card. If you go GRID (or AMD's solution), even if you can snag a cheap card off eBay, there's a pretty significant licensing cost now.
|
# ? Apr 11, 2019 21:07 |
|
Sure, but none of that applies to some dude asking about building a Win10 VM on his home test server, which is who I was replying to at the time. E: That's all good info BTW. Digital_Jesus fucked around with this message at 15:34 on Apr 12, 2019 |
# ? Apr 12, 2019 15:31 |
|
Digital_Jesus posted:Sure but none of that applies to some dude asking about building a Win10 VM on his home test server, which is who I was replying to at the time. You are not passing a GPU through your whitebox home ESXi server to your Plex VM for hardware transcoding?
|
# ? Apr 12, 2019 20:02 |
|
Digital_Jesus posted:Sure but none of that applies to some dude asking about building a Win10 VM on his home test server, which is who I was replying to at the time. It's all good information, and whether I'm doing a good thing or a stupid thing, it's all good fun messing around with this stuff. I'm currently creating a Win10 VM in ESXi with a GTX 1060 6GB passed through to it, and I've given it fairly minimal hardware at the moment (2 cores and 8GB RAM). I've got a long HDMI cable going right across the room from the ESXi box, and once it's set up I'm going to try removing the HDMI and playing using NVIDIA Gamestream on a Linux PC that's next to the TV, using my LAN as a transport for the game to a 'dumb' PC connected to the TV. If this works out well it may influence what I buy for my ESXi host in future. I fancy building a Ryzen system for ESXi with loads of cores and making a gaming VM on it, alongside more work-oriented VMs. The benefits I could see: my ESXi host is in an old Fractal R5 case and can be kept away from the TV, so even if you have a GPU that makes noise under load you can effectively be gaming in a quiet room, and you can have a living room without a beast of a gaming PC sat next to your TV. I think I was a little off when I asked about VDI, which is more workload oriented.
|
# ? Apr 14, 2019 17:18 |
|
TheFace posted:Depends on the display protocol. There's a definite advantage to using a GPU with Citrix ICA/HDX, Horizon Blast or even PCoIP (anything that uses h.264 encoding VM side for the protocol) as the protocol encoding can be done in GPU instead of CPU, and GPU is much more efficient at it. Also just general use Windows 10 leverages a GPU more than Windows 7 ever did though the benefit there can probably be written off. If you have an older GRID card (K1/K2) you don't have any licensing to deal with. They only added that on the newer cards.
|
# ? Apr 15, 2019 01:15 |
|
TheFace posted:If you go GRID (or AMD's solution) even if you can snag a cheap card off Ebay there's a pretty significant licensing cost now. How does the licensing server work? Does it block the card if you don't have a licence? And what does 'significant' actually mean? I tried finding out, but Google failed me, I guess. Just curious, because while my current setup is home-based with a couple of cheap graphics cards, if I scale it up massively then it might need a GRID solution one day. Sad Panda fucked around with this message at 16:00 on Apr 15, 2019 |
# ? Apr 15, 2019 15:44 |
|
Sad Panda posted:How does the licensing server work? Does it block the card if you don't have a licence? And what does 'significant' actually mean? I tried finding it but was unable to google I guess. For GRID it is licensed per VM, basically, and the driver software has to be configured to point to the licensing server to check in. If there is no license it basically just turns off the GPU for that VM (the VM will still work, it just won't get the benefit of the GPU; it'll also complain about licensing on login). There is a grace period on licensing, so if you can borrow a card to slap in, you can test without any licensing investment. I'm not surprised you didn't find much via Google... their documentation for setup of the GRID stuff is pretty poo poo. I've only seen demos of AMD's stuff, but it looks like it's also per VM and controlled by a plugin to vCenter. H2SO4 posted:If you have an older GRID card (K1/K2) you don't have any licensing to deal with. They only added that on the newer cards. I haven't dealt with K1 or K2 cards in a while so I didn't know if they'd added licensing to them; since it's configured VM-side by the driver software I figured they probably did. Good to know.
|
# ? Apr 15, 2019 17:27 |
|
I've been messing around with vSAN 6.7 and I'm having a hard time determining the memory requirements for the witness host for a vSAN stretched cluster. Can anyone point me at, or tell me, a good rule of thumb for memory requirements? I currently have two ESXi 6.7 hosts with 16 cores and 96GB each, booting off of thumb drives, and they use a FreeNAS server for their shared iSCSI storage. The FreeNAS box has identical hardware otherwise, but has 32GB RAM and a bunch of SSDs configured in passthrough mode. I'm thinking about reconfiguring the hardware so that each server has an identical amount of SSD disk and setting up vSAN using the old iSCSI box as a witness server. I run about two dozen VMs currently. What processes on the witness node would require RAM? Will 32 gigs be enough, considering that each ESXi host will have approximately three times that amount?
|
# ? Apr 15, 2019 19:56 |
|
TheFace posted:
Nah, I'm sure they wanted to but that would be a legal issue. I finally got an R720 that could accept my K2 and it worked just fine with ESXi 6.5.
|
# ? Apr 15, 2019 21:18 |
|
Agrikk posted:I've been messing around with vSAN v6.7 and I'm having a hard time determining the memory requirements for the witness host for a vSAN Stretched cluster. Can anyone point me or tell me a good rule of thumb for memory requirements? 32GB is enough, good lord. You need to go spend $10 ($5?) on the Yellow-Bricks vSAN Deep Dive ebook. Read the intro chapter and the installation/requirements chapter. Seriously, there's so much insight in just those two short chapters you never knew you needed until you... well, know you need it. It will take you an hour at most.
|
# ? Apr 16, 2019 17:28 |
|
I figured that for a cluster quorum box it should be sufficient. But the FreeNAS box that is currently running had so many problems under my workload with my original setup (Opteron 6000 8-core and 16GB RAM) that I'm leery of all hardware-related bottlenecks. Thanks for the tip on the Deep Dive book. It looks interesting.
|
# ? Apr 16, 2019 20:03 |
|
I am running ESXi 6.7 and have a pair of GeForce 710s plugged into my motherboard. The Asus is connected to 0000:1b:00.0. The MSI is connected to 0000:1c:00.0. I set up VM1 for 1b, and it works just fine. I set up VM2 for 1c, followed the same procedure, and get... quote:
If I turn off VM1 and try to attach 1b to VM2 it works just fine. What could be causing this? That graphics card was connected to a different VM before I wiped them all. Sad Panda fucked around with this message at 23:38 on Apr 17, 2019 |
# ? Apr 17, 2019 23:26 |
|
Some motherboards can only engage one x16 PCIe slot at a time, and you have to change a BIOS setting to split it into two x8 slots that run simultaneously.
|
# ? Apr 18, 2019 01:00 |
|
BangersInMyKnickers posted:Some number of motherboards can only engage one 16x PCIe slot at a time and you have to change something in bios to have it split it up in to 2 8x slots that run at the same time I have an MSI B450-A Pro (https://www.msi.com/Motherboard/B450-A-PRO.html) and it doesn't seem to have that issue. I changed the one thing I could see in the BIOS about it, but that doesn't seem to have resolved things. To ensure it wasn't down to the VM itself, I tried creating a new VM using default options and adding the PCIe card. Same error. My current suspicion is that the graphics card has somehow been damaged (although it still works to output the POST messages), so I've ordered a new Asus card identical to the other one to see if that resolves the issue. If not, my next idea is to wipe ESXi and start from scratch. I'll need to work out how to back up my VMs, but here's hoping the new card fixes it. edit : Shut down the machine, moved the Asus card into the MSI slot. Booted up, same issue with just that one. Seems like ESXi might be broken? Or there's an issue with the physical slot itself on the motherboard. edit 2 : Tried using the built-in Reset System Configuration but the problem persists. My next option is to wipe the whole thing and see if I can get it to work from scratch. The problem is I've no idea how to back up my existing VMs. Any advice would be great, as nothing that I read makes sense. To confirm my setup: I have a single datastore (my 500GB SSD) and access to an external USB drive. While I can plug that in and see it on passthrough in a VM, I don't see any way to back up VMs to the USB. I see things about making OVF files, but those options all seem to be related to the vSphere Client, and I only have the ESXi 6.7 hypervisor installed. I tried downloading and setting up the trial version of vSphere, but that's just making my head hurt even more and isn't connected to my ESXi setup, so it sees none of my VMs. Sad Panda fucked around with this message at 18:13 on Apr 18, 2019 |
# ? Apr 18, 2019 11:21 |
|
Just to post it all in one place, as opposed to that mess: I have ESXi 6.7 and am trying to pass through a GPU to a fresh VM. I get the following... quote:State I have VMkernel.Boot.disableACSCheck set to True, but that doesn't seem to help. The motherboard (an MSI B450-A Pro) has two PCIe slots: the x16 that the GPU is plugged into now, and one other. If I plug the GPU into the x16 slot it gives that error; if I plug it into the other one, it works. This makes me suspect it's either the motherboard or ESXi. The card shows my POST/BIOS/ESXi output when it is plugged in, which is why I think an ESXi problem is more likely. I have tried using Reset System Configuration to go back to defaults, but it doesn't seem to help at all, and I can't find anything useful about how to fix it online. If there are no ideas, then my next step is to wipe the whole server and start from scratch. However, I've only got a single datastore (my 500GB SSD), which currently houses my VMs, and I'm obviously reluctant to lose them all. I've tried to work out how to back them up, but I'm at a loss. I've got an external USB drive that I can plug in; I can connect it to a VM and it will act as a passthrough device, but that doesn't seem like it is going to work to back up the VM. I set up a trial version of vSphere Server to try to help, but as an absolute newbie it is just making my head hurt. I tried the obvious 'click VM and select export', but it has been sitting there for the last 15 minutes running and not doing anything, so... I'm confused. Sad Panda fucked around with this message at 18:46 on Apr 18, 2019 |
# ? Apr 18, 2019 18:23 |
|
You can do a reinstall of ESXi but preserve the datastore, even if it's all on the same disk. As for backing up the VMs: VMs are just a bunch of files, so mount the USB drive and copy the VM files out (with the VMs shut down). But again, you don't have to do that, because you can preserve the datastore. TheFace fucked around with this message at 19:44 on Apr 18, 2019 |
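A minimal sketch of that file-copy approach from the ESXi shell, with the caveat that the datastore and VM names here are examples, not anything from this thread:

```sh
# On the ESXi host shell (SSH enabled). Paths/IDs are examples -- substitute
# your own datastore and VM names. Power the VM off before copying.
vim-cmd vmsvc/getallvms                          # list VM IDs and paths
vim-cmd vmsvc/power.off 5                        # power off VM ID 5 (example)
cp -r /vmfs/volumes/datastore1/MyVM /vmfs/volumes/usb-backup/MyVM
# To restore later, copy the directory back and re-register the .vmx:
vim-cmd solo/registervm /vmfs/volumes/datastore1/MyVM/MyVM.vmx
```

One caveat: a plain `cp` can inflate thin-provisioned disks to full size; `vmkfstools -i src.vmdk dst.vmdk -d thin` clones a disk while keeping it thin, if space on the target is tight.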
# ? Apr 18, 2019 19:36 |
|
TheFace posted:You can do a reinstall of ESXi but preserve the datastore, even if it's all on the same disk Managed that through a painfully slow NFS setup to an external USB drive. I did a re-install and a fresh install, both while preserving the datastore. Still nothing. The GPU works just fine in one slot and in the other throws: quote:Errors Module 'DevicePowerOn' power on failed. Failed to register the device pciPassthru0 for 28:0.0 due to unavailable hardware or software support.
|
# ? Apr 19, 2019 21:38 |
|
Sad Panda posted:Managed that through a painfully slow NFS setup to an external USB. I did a re-install and a fresh install both while preserving the datastore. Still nothing. GPU works just fine in one slot and throws You mentioned having 2 GPUs, does the other one work in the other slot when you flip them? Wondering if your PCI-e slot is bad?
|
# ? Apr 19, 2019 21:47 |
|
So I picked up an ASRock B450 Pro4 and a Ryzen 1700 this morning to replace my old ESXi box. I found I was using ESXi more and more for homelab stuff and tinkering, and I wanted those beautiful 8 cores of the Ryzen on a budget. My old i5 just wasn't cutting it anymore for running more than a couple of Windows VMs and a few Linuxes. I stripped the Intel system out of my Fractal R5 case tonight (the old board is also an ASRock, a Z270 Pro4) and rebuilt using the Ryzen mobo and CPU. It was my first Ryzen build and went smoothly. Now, the ESXi 6.7 installer doesn't recognise the NIC on my B450 motherboard. It's plugged into exactly the same port on my router as the old Intel board, so it may be a driver package thing: my new board (which is pretty much the exact equivalent of the Intel one) must have a LAN chipset whose driver doesn't come bundled with ESXi 6.7. It's after 10pm and I can't be bothered to sort this out tonight. Any advice for rolling my own network driver into ESXi?
|
# ? Apr 19, 2019 22:13 |
|
TheFace posted:You mentioned having 2 GPUs, does the other one work in the other slot when you flip them? Wondering if your PCI-e slot is bad? Neither GPU works in that slot. I can't think why the slot would be bad, but it's starting to feel like the realistic option. The odd thing, though: if the GPU is in that PCI-e slot, a VM with it attached can't boot up in ESXi, yet the card can still output enough for me to see the BIOS screen and the ESXi installer/console (my mobo has no onboard graphics). If it makes any sense to anyone, here's the vmkernel.log from the most recent attempt. quote:2019-04-19T22:10:51.783Z cpu1:2099330 opID=edc4a7fc)World: 11943: VC opID esxui-e0f6-1d30 maps to vmkernel opID edc4a7fc
|
# ? Apr 19, 2019 22:23 |
|
Issue resolved. The solution? passthru.map had somehow got nonsense data in it that wasn't being reset by a fresh install. It was storing the device ID as ffff instead of what it should have been, and the reset method was set to bridge instead of d3d0. I don't know what caused it, but it's fixed and I have both of my GPUs passing through again, as hoped for. Thank you for the help!
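For anyone who lands here with the same symptom: the file lives at /etc/vmware/passthru.map, and its columns are vendor-id, device-id, reset method, and fptShareable. A sketch of what the fix described above looks like (the vendor ID 10de is NVIDIA; note that ffff in the device-id column is, as far as I know, a legitimate wildcard for "any device from this vendor" rather than corruption by itself, so the meaningful change is the reset method):

```
# /etc/vmware/passthru.map
# columns: vendor-id  device-id  resetMethod  fptShareable
# The entry that broke passthrough looked roughly like:
#   10de  ffff  bridge  false
# Changing the reset method to d3d0 (PCI power-state reset) fixed it:
10de  ffff  d3d0  default
```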
|
# ? Apr 20, 2019 23:21 |
|
Agrikk posted:I've been messing around with vSAN v6.7 and I'm having a hard time determining the memory requirements for the witness host for a vSAN Stretched cluster. Can anyone point me or tell me a good rule of thumb for memory requirements? You can find the specs of the three appliance sizes here: https://www.virtualizationhowto.com/2018/01/install-and-configure-vmware-vsan-witness-appliance-in-vmware-workstation/ You'd likely be a medium, so you just need to be able to run a 16GB VM. There are no extra hypervisor processes on the witness like there are on the vSAN storage nodes, so no additional memory is needed beyond what the virtual appliance requires.
|
# ? Apr 20, 2019 23:40 |