e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Perplx posted:

It's cleaner if you do one discrete video card and USB controller per VM. For a while I booted Fedora on the Intel iGPU, a Windows 10 VM on a 2080, and an OSX VM on an RX 570, and I could use all 3 at once. It's also possible to share your 2080 Ti with multiple VMs, but you'd lose video out; you'd have to use a bunch of Steam Links or something for that.

Now I use Proxmox, because it's basically built for edge cases like this.

this is so absurd I love it
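(Side note, since a few people will want to replicate this: each device you pass through has to sit in its own IOMMU group, or you hand over the whole group. A quick way to eyeball the grouping on a Linux host is to walk sysfs; this is just a sketch using standard kernel paths, nothing specific to the setup above.)

code:

#!/usr/bin/env python3
"""List IOMMU groups and the PCI devices inside each (Linux sysfs)."""
import os

GROUPS = "/sys/kernel/iommu_groups"  # only populated when the IOMMU is enabled

for group in sorted(os.listdir(GROUPS), key=int):
    print(f"IOMMU group {group}:")
    devdir = os.path.join(GROUPS, group, "devices")
    for addr in sorted(os.listdir(devdir)):
        base = f"/sys/bus/pci/devices/{addr}"
        with open(f"{base}/vendor") as f:
            vendor = f.read().strip()
        with open(f"{base}/device") as f:
            device = f.read().strip()
        print(f"  {addr} [{vendor}:{device}]")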

PremiumSupport
Aug 17, 2015
Currently having an odd issue with an ESXi 5.5 machine. It's been running fine for years, but suddenly over the last week heartbeats to Datastore 2 have been timing out, causing the VMs to lock up and the host to need a reboot. My google-fu has failed me, as all the results I find refer to SAN setups and tell me to check my networking. Datastore 2 is an internal SATA RAID 10 array on an LSI controller. I am getting no warnings from the RAID health monitors and no other abnormal indication of an issue.

Am I on the right track in thinking that my RAID controller needs replacing?

I know it's well out of support, but replacing it requires funds, and since we're a not-for-profit, funds don't come easily.

KS
Jun 10, 2003
Outrageous Lumpwad
It can be a lot of things, including a single disk that's on its way out. Some disks retry for a long time instead of throwing immediate read errors the way enterprise disks do.

Any unusual read or write activity correlated with it?
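(You can check that directly: smartctl will report the SCT Error Recovery Control timers, and a drive that doesn't support ERC can retry a bad sector internally for tens of seconds, which is exactly the kind of stall that trips datastore heartbeats. A rough sketch, assuming smartctl is installed; the device names are stand-ins.)

code:

#!/usr/bin/env python3
"""Dump SCT Error Recovery Control (TLER) settings for a list of drives."""
import subprocess

# Placeholder device names; behind an LSI/megaraid controller you would
# address array members with "-d megaraid,N" instead of a bare /dev node.
DRIVES = ["/dev/sda", "/dev/sdb"]

for dev in DRIVES:
    result = subprocess.run(["smartctl", "-l", "scterc", dev],
                            capture_output=True, text=True)
    print(f"=== {dev} ===")
    print(result.stdout.strip())  # drives without ERC report it as unsupported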

Wibla
Feb 16, 2011

And all the logs look fine otherwise?

How old is the server? Is the BBU on the RAID card OK, if equipped?

Also, unrelated: I switched from ESXi to Proxmox for my small home setup, and have absolutely no regrets so far :sun:

PremiumSupport
Aug 17, 2015
I see nothing out of the ordinary in the logs: everything is running fine, then the heartbeat gets a delayed response a couple of times, then fails to get a response at all. The RAID monitor indicates everything is normal with the array, and I'm not seeing any unusual read/write activity that correlates with it. In fact, it usually goes out in periods where there's little to no disk load on the server. The most common time is just after 10pm local.

Edit to add: there's no BBU on this RAID controller card; I don't have the optional add-on.
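(Given how tightly it clusters around 10pm, it might be worth pulling just that window out of vmkernel.log and seeing what fires first; RAID controllers often kick off scheduled background jobs like patrol reads. A throwaway sketch, assuming the usual ISO-8601 UTC timestamp at the start of each log line.)

code:

#!/usr/bin/env python3
"""Print vmkernel.log lines whose local time falls inside a suspect window."""
from datetime import datetime, timezone

LOGFILE = "vmkernel.log"       # copied off the host; the path is an assumption
START_HOUR, END_HOUR = 22, 23  # the "just after 10pm local" window

with open(LOGFILE, errors="replace") as fh:
    for line in fh:
        stamp = line.split(maxsplit=1)[0] if line.strip() else ""
        try:
            # Lines start like: 2021-04-12T04:47:44.940Z cpu0:2097441)...
            ts = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S.%fZ")
        except ValueError:
            continue  # continuation lines, boot banners, etc.
        local = ts.replace(tzinfo=timezone.utc).astimezone()
        if START_HOUR <= local.hour < END_HOUR:
            print(line.rstrip())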

PremiumSupport fucked around with this message at 16:45 on Aug 13, 2021

lol internet.
Sep 4, 2007
the internet makes you stupid
Moving from Hyper-V to VMware.

Does VMware still have a converter? Can I do a live conversion, then schedule a shutdown of the old server and power on the new one?

I assume I'm hosed for VMs that have shared storage in a cluster.

Ffycchi
Jun 4, 2014

Sigh...challenge accepted...shitty photoshop incoming.

lol internet. posted:

Moving from Hyper-V to VMware.

Does VMware still have a converter? Can I do a live conversion, then schedule a shutdown of the old server and power on the new one?

I assume I'm hosed for VMs that have shared storage in a cluster.

Yeah, they still have VMware vCenter Converter.

As for the cluster bit, I'm sure there's a way, but don't ask me how; I'd have to be familiar with the environment.

SlowBloke
Aug 14, 2017

lol internet. posted:

Moving from Hyper-V to VMware.

Does VMware still have a converter? Can I do a live conversion, then schedule a shutdown of the old server and power on the new one?

I assume I'm hosed for VMs that have shared storage in a cluster.

https://www.vmware.com/it/products/converter.html

You can set the old VM to be shut down once the new VM is validated as OK.

For the shared storage, the agent should be able to fetch the blocks and move them to a new virtual disk. If you planned to simply expose a LUN as a disk, that's not really viable (you can, but I would strongly advise against it).
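(If you'd rather handle that cutover by hand after validating the new VM, the shutdown side is easy to script with pyVmomi; the hostname, credentials, and VM name here are all placeholders.)

code:

#!/usr/bin/env python3
"""Power off the old source VM once the converted copy checks out (pyVmomi)."""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; verify certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    old = next(vm for vm in view.view if vm.name == "old-server")
    if old.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        # Hard power-off; old.ShutdownGuest() is the graceful option if Tools is up
        old.PowerOffVM_Task()
finally:
    Disconnect(si)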

SlowBloke fucked around with this message at 08:37 on Aug 14, 2021

movax
Aug 30, 2008

Why does VMware make it such a pain in the rear end to download free ESXi?! "Dear user, content not available." :getout:

I noticed that the release notes for basically everything after 7.0 carry an erratum that amounts to "lol, no workaround for a race condition on USB". Does this basically break USB booting? Seems like kind of a big deal.

quote:

NEW: If you use a USB as a boot device for ESXi 7.0 Update 2a, ESXi hosts might become unresponsive and you see host not-responding and boot bank is not found alerts
USB devices have a small queue depth and due to a race condition in the ESXi storage stack, some I/O operations might not get to the device. Such I/Os queue in the ESXi storage stack and ultimately time out. As a result, ESXi hosts become unresponsive.
In the vSphere Client, you see alerts such as Alert: /bootbank not to be found at path '/bootbank' and Host not-responding.
In vmkernel logs, you see errors such as:
2021-04-12T04:47:44.940Z cpu0:2097441)ScsiPath: 8058: Cancelled Cmd(0x45b92ea3fd40) 0xa0, cmdId.initiator=0x4538c859b8f8 CmdSN 0x0 from world 0 to path "vmhba32:C0:T0:L0". Cmd count Active:0 Queued:1.
2021-04-12T04:48:50.527Z cpu2:2097440)ScsiDeviceIO: 4315: Cmd(0x45b92ea76d40) 0x28, cmdId.initiator=0x4305f74cc780 CmdSN 0x1279 from world 2099370 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Cancelled from path layer. Cmd count Active:1
2021-04-12T04:48:50.527Z cpu2:2097440)Queued:4

Workaround: None.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
ESXi seems like it's pushing away hobbyists and homelab users more and more lately.

Wibla
Feb 16, 2011

I installed Proxmox because I got pissed at the VMware bullshit, and I like it a lot more than I ever liked ESXi post-5.5.

(I say that as one of the people who've been bitten by the auto-boot bullshit in 6.5, and who has had to deal with just loving getting the ISOs without jumping through a thousand hoops as a homelab user.)

GrandMaster
Aug 15, 2004
laidback

movax posted:

Why does VMware make it such a pain in the rear end to download free ESXi?! "Dear user, content not available." :getout:

I noticed that the release notes for basically everything after 7.0 carry an erratum that amounts to "lol, no workaround for a race condition on USB". Does this basically break USB booting? Seems like kind of a big deal.

Have been dealing with this one at work (SD card on HPE blades). The SD drops out, the ESX host stops responding, and it's been hanging VMs on the host too. Reboot the host and everything comes back, but it usually dies again a few days later.

It's been fixed: I tested a beta build in our test env and it hasn't reoccurred in over a month now. The GA release is supposed to be on the 25th (U3?), but it's already been delayed once, so fingers crossed they make the date.

A similar issue happened back in the 6.5 days until it was patched, but back then I'd only ever lost management agents; it never hung the VMs.

SlowBloke
Aug 14, 2017

CommieGIR posted:

ESXi seems like it's pushing away hobbyists and homelab users more and more lately.

ESXi stopped being hobbyist-friendly at 5.5; everything since has been a race to gently caress up non-standard setups. With William Lam using NUCs as test beds, those are your best bet if you want to homelab; building your own server is currently an exercise in frustration IMHO.

Wibla
Feb 16, 2011

SlowBloke posted:

ESXi stopped being hobbyist-friendly at 5.5; everything since has been a race to gently caress up non-standard setups. With William Lam using NUCs as test beds, those are your best bet if you want to homelab; building your own server is currently an exercise in frustration IMHO.

Or just install proxmox.

some kinda jackal
Feb 25, 2003

 
 
Even older affordable enterprise equipment starting to fall off the official support lists is going to get weird. I don't really plan to move up from 6.7u3 any time soon. Not that I'm worried there'll be any major incompatibility, but I guess the further along we go, the more likely it becomes. Admittedly, with this kind of hardware the biggest issue is lack of vendor support if anything goes wrong, which is a non-issue for all but a vanishingly small subset of homelabbers.

Proxmox seemed to do the job, but I just couldn't adapt to the "Proxmox way" of doing things. Honestly, if I were coming in fresh it would probably be fine, but at this point I want to spend as little time as possible learning underlying infrastructure or getting used to a new way of doing things, and just do things. So muscle memory is my enemy here, I guess, until something forces me off VMware's platform.

Tev
Aug 13, 2008

movax posted:

Why does VMware make it such a pain in the rear end to download free ESXi?! "Dear user, content not available." :getout:

I noticed that the release notes for basically everything after 7.0 carry an erratum that amounts to "lol, no workaround for a race condition on USB". Does this basically break USB booting? Seems like kind of a big deal.

This issue was crippling my lab hosts, but the vmtools workaround explained here has me back running stable.

https://vninja.net/2021/05/18/esxi-7.0-u2a-killing-usb-and-sd-drives/
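(For the click-averse: if memory serves, the workaround there is the /UserVars/ToolsRamdisk advanced setting, i.e. esxcli system settings advanced set -o /UserVars/ToolsRamdisk -i 1, which moves the VMware Tools repository into a RAM disk so it stops hammering the boot media.)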

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

SlowBloke posted:

ESXi stopped being hobbyist-friendly at 5.5; everything since has been a race to gently caress up non-standard setups. With William Lam using NUCs as test beds, those are your best bet if you want to homelab; building your own server is currently an exercise in frustration IMHO.

Just use XCP-NG or Proxmox.

And no, NUCs are very expensive; just purchase Dell/HP USFF machines for half the cost and upgrade them.

As it is, I'll stick with XCP-NG. They haven't cut legacy support and are unlikely to do so, and it works happily on my Dell blade servers with Opterons and Xeons. You can get out the door with a Dell R720 for the same price as a NUC.

CommieGIR fucked around with this message at 17:41 on Aug 19, 2021

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

CommieGIR posted:

Just use XCP-NG or Proxmox.

And no, NUCs are very expensive; just purchase Dell/HP USFF machines for half the cost and upgrade them.

As it is, I'll stick with XCP-NG. They haven't cut legacy support and are unlikely to do so, and it works happily on my Dell blade servers with Opterons and Xeons. You can get out the door with a Dell R720 for the same price as a NUC.

Alternatively, the little NUC I bought (https://simplynuc.com/ruby/) sips power.

I'm not saying you should equivalence class them, but there are benefits especially in the realm of power:performance.

If electricity is cheap where you are, then never mind.
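(To put rough numbers on that, here's a back-of-envelope comparison; the wattages loosely match figures quoted in this thread, and the $0.15/kWh rate is an assumption.)

code:

#!/usr/bin/env python3
"""Back-of-envelope yearly electricity cost for an always-on box."""

RATE = 0.15  # $/kWh, an assumed rate; substitute your own

def yearly_cost(watts: float) -> float:
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * RATE

for name, watts in [("NUC-class box", 15),
                    ("idle 2U server", 170),
                    ("loaded 2U server", 400)]:
    print(f"{name:>16}: {watts:4d} W ~ ${yearly_cost(watts):,.0f}/yr")

That works out to roughly $20, $220, and $530 a year respectively, which is where the power:performance argument comes from.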

Thanks Ants
May 21, 2004

#essereFerrari


NUCs are nice, but most of them have mobile CPUs in them, which limits their usefulness.

If you can put up with something slightly larger, grabbing a Dell/HP/Lenovo SFF system (OptiPlex Micro, ProDesk Mini, ThinkCentre Tiny) off eBay is a good way to go: they're often only 8 months old despite selling at second-user prices, still have warranty left, and have a desktop CPU inside.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

rufius posted:

Alternatively, the little NUC I bought (https://simplynuc.com/ruby/) sips power.

I'm not saying you should equivalence class them, but there are benefits especially in the realm of power:performance.

If electricity is cheap where you are, then never mind.

Okay, that's Ryzen-powered, so I do want that. drat you, rufius.
For perspective: my current workstation is an Asus ROG laptop with a Ryzen 7 4800H (8c/16t), and I use it mostly for virtualization when I'm not playing games.

Looks like it's got both SODIMM slots available too, so yeah, you should be able to cram 64GB of DDR4 into that.

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

CommieGIR posted:

Okay, that's Ryzen-powered, so I do want that. drat you, rufius.
For perspective: my current workstation is an Asus ROG laptop with a Ryzen 7 4800H (8c/16t), and I use it mostly for virtualization when I'm not playing games.

Looks like it's got both SODIMM slots available too, so yeah, you should be able to cram 64GB of DDR4 into that.

I know... I bought one because the desktop I have (i7-9700K) draws a lot more power than the NUC and is actually slower than that Ryzen 7.

That little NUC powers my WireGuard VM, a basic Kubernetes cluster running some smaller deployments, my Prom/Grafana server, and a couple other test servers.

I loaded it up with 64GB of RAM and it's been great.

Also - because I'm a heretic, it runs Windows Server because I prefer Hyper-V (I worked at MSFT for a long time - I'm used to Hyper-V).

BlankSystemDaemon
Mar 13, 2009



You can get used Ivy-Bridge Xeon-based servers with ECC memory for the same price as NUCs, if not cheaper.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

rufius posted:

Alternatively, the little NUC I bought (https://simplynuc.com/ruby/) sips power.

I'm not saying you should equivalence class them, but there are benefits especially in the realm of power:performance.

If electricity is cheap where you are, then never mind.

As someone who somehow got on their spam marketing lists despite never interacting with them or any related area, and who has been unable to get off, gently caress them.

Clark Nova
Jul 18, 2004

BlankSystemDaemon posted:

You can get used Ivy-Bridge Xeon-based servers with ECC memory for the same price as NUCs, if not cheaper.


I'm pretty sure people are passing on these due to noise and power consumption. I've been waiting a while for a really good deal on something tiny with two NICs or a PCIe slot, with no luck :argh:

Clark Nova fucked around with this message at 18:07 on Aug 19, 2021

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Clark Nova posted:

I'm pretty sure people are passing on these due to noise and power consumption. I've been waiting a while for a really good deal on something tiny with two NICs or a PCIe slot, with no luck :argh:

The Ivy Bridge ones actually don't consume much for a 1U/2U; I have one that idled at about 300 watts.

rufius posted:

Also - because I'm a heretic, it runs Windows Server because I prefer Hyper-V (I worked at MSFT for a long time - I'm used to Hyper-V).

I use Hyper-V on my workstation because it just works and I don't really have to install anything. Works great.

BlankSystemDaemon
Mar 13, 2009



Clark Nova posted:

I'm pretty sure people are passing on these due to noise and power consumption. I've been waiting a while for a really good deal on something tiny with two NICs or a PCIe slot, with no luck :argh:
I have an HPE ProLiant DL380p Gen8 with 2x Xeon E5-2667 v2 at 3.3GHz (boosting to 4GHz on a single core, and configured properly, with the CPUs going to C2 state when idle, because I don't wanna lose cache coherency), 264GB of memory, and 8x 10k RPM 300GB drives, along with a couple of HP-branded LSI SAS HBAs and an HP-branded Intel X520-DA2 10G SFP+ NIC.
It idles at ~160W, and while it's not exactly quiet, a single closed wooden door between me and it is enough to muffle it so I can't hear it sitting 2m from the door. Mind you, this is only achievable because of the HP-branded SAS controllers and NIC; otherwise the iLO will automatically turn the fans up to 40%.

BlankSystemDaemon fucked around with this message at 21:42 on Aug 19, 2021

SlowBloke
Aug 14, 2017

Clark Nova posted:

I'm pretty sure people are passing on these due to noise and power consumption. I've been waiting a while for a really good deal on something tiny with two NICs or a PCIe slot, with no luck :argh:

The current tall NUCs have two NICs if you buy the 2.5G+USB riser:

https://williamlam.com/2021/01/esxi-on-11th-gen-intel-nuc-panther-canyon-tiger-canyon.html

Wibla
Feb 16, 2011

There's also minisforum; they have some neat small form factor machines, like this: https://store.minisforum.com/products/minisforum-hm80?variant=39960884117665

some kinda jackal
Feb 25, 2003

 
 

CommieGIR posted:

The Ivy Bridge ones actually don't consume much for a 1U/2U; I have one that idled at about 300 watts.

Agreed. My R620 has two 20-thread E5-2660 Xeons, 128 gigs of RAM, and six or eight spinning 10k 2.5” drives, and with idle VMs it's currently sitting at 168W. I'm pretty sure I have it in ultra-conservative power usage mode, but it's never felt slow or lacking. My workloads are all idle right now, so I'm sure it bounces up, but I'll take that 168W idle any day.

I'm wondering whether switching to SSDs would have more than a negligible effect on the idle wattage. Those motors have to account for a bit of that, right? :haw:

some kinda jackal fucked around with this message at 21:42 on Aug 19, 2021

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Martytoof posted:

Agreed. My R620 has two 20-thread E5-2660 Xeons, 128 gigs of RAM, and six or eight spinning 10k 2.5” drives, and with idle VMs it's currently sitting at 168W. I'm pretty sure I have it in ultra-conservative power usage mode, but it's never felt slow or lacking. My workloads are all idle right now, so I'm sure it bounces up, but I'll take that 168W idle any day.

I'm wondering whether switching to SSDs would have more than a negligible effect on the idle wattage. Those motors have to account for a bit of that, right? :haw:

My main NAS is an R720 with 2x 12-core Xeons, 128GB of RAM, plus spinning disks; under load it maybe hits ~350-400 watts.

It hosts the storage for my M1000e Bladecenter.

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

My main NAS is an R720 with 2x 12-core Xeons, 128GB of RAM, plus spinning disks; under load it maybe hits ~350-400 watts.

It hosts the storage for my M1000e Bladecenter.
That's not under full load, right? I've seen my DL380p Gen8 hit over 1000W at full load using both PSUs, but that was with a GPU that I'm no longer using.

Wibla
Feb 16, 2011

My DL360p Gen8 with dual E5-2650 v0s, 256GB of RAM, 1TB of NVMe, 2x 240GB SSDs, and 2x 3000GB 10k RPM drives uses about 170W on average. The G7 it replaced used well over 250W with the same workload, but it had 2x 240GB SSDs and 6x 300GB SAS drives, along with some pretty thirsty X5675 CPUs.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
I switched to Unraid this week to try something different from Proxmox and I can’t decide which I like better. I need some amalgamation of the two. Someone make an Unmox.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

BlankSystemDaemon posted:

That's not under full load, right? I've seen my DL380p Gen8 hit over 1000W at full load using both PSUs, but that was with a GPU that I'm no longer using.

Nope, like 400 watts if I'm doing intensive IO

Internet Explorer
Jun 1, 2005





I have a Synology NAS that I run some containers on. I just really don't need a home lab these days.

Do I need to hand in my nerd card? Am I getting old?

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Internet Explorer posted:

I have a Synology NAS that I run some containers on. I just really don't need a home lab these days.

Do I need to hand in my nerd card? Am I getting old?

Nah. I just have personal issues.

Pikehead
Dec 3, 2006

Looking for WMDs, PM if you have A+ grade stuff
Fun Shoe

GrandMaster posted:

Have been dealing with this one at work (SD card on HPE blades). The SD drops out, the ESX host stops responding, and it's been hanging VMs on the host too. Reboot the host and everything comes back, but it usually dies again a few days later.

It's been fixed: I tested a beta build in our test env and it hasn't reoccurred in over a month now. The GA release is supposed to be on the 25th (U3?), but it's already been delayed once, so fingers crossed they make the date.

A similar issue happened back in the 6.5 days until it was patched, but back then I'd only ever lost management agents; it never hung the VMs.

VMware has been flagging for at least two years that SD cards and USB flash drives aren't fit for booting ESXi from.

From all the KBs etc. I've read, they're just sick and tired of dealing with the corruption and low performance from write IO that the cards and USB devices can't handle.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Pikehead posted:

VMware has been flagging for at least two years that SD cards and USB flash drives aren't fit for booting ESXi from.

From all the KBs etc. I've read, they're just sick and tired of dealing with the corruption and low performance from write IO that the cards and USB devices can't handle.

Except it's a new issue. ESXi ran fine from both kinds of device for years; something they changed broke it.

Pile Of Garbage
May 28, 2007



Internet Explorer posted:

I have a Synology NAS that I run some containers on. I just really don't need a home lab these days.

Do I need to hand in my nerd card? Am I getting old?

I just got a new big beefy QNAP NAS and am probably going to transition to just running containers on it. I've got an IBM x3550 M2 with dual Xeon X5570 CPUs, 128GB of RAM, and four 10k SAS HDDs in RAID 5 that I've been running ESXi 6.5 on for a while now. It's pretty power-hungry, and I can't upgrade to any newer version of ESXi because they dropped support for those CPUs. Also, I think the RAID controller has started failing; storage is getting flaky. Time to put it out to pasture.

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

Nah. I just have personal issues.
Heck, :same:
