|
Perplx posted:It's cleaner if you do one discrete video card and USB controller per VM. For a while I booted Fedora on an Intel iGPU, a Windows 10 VM on a 2080, and an OS X VM on an RX 570, and I could use all three at once. It's also possible to share your 2080 Ti with multiple VMs, but you'd lose video out; you'd have to use a bunch of Steam Links or something for that. This is so absurd, I love it
|
# ? Jul 22, 2021 10:11 |
|
|
|
Currently having an odd issue with an ESXi 5.5 machine. It's been running fine for years, but suddenly over the last week heartbeats to Datastore 2 have been timing out, causing the VMs to lock up and the host to need a reboot. My google-fu has failed me, as all the results I find refer to SAN setups and tell me to check my networking. Datastore 2 is an internal SATA RAID 10 storage array on an LSI controller. I am getting no warnings from the RAID health monitors and no other abnormal indication of an issue. Am I on the right track in thinking that my RAID controller needs replacing? I know it's well out of support, but replacing it requires funds, and being a not-for-profit, funds don't come easily.
|
# ? Aug 12, 2021 18:16 |
|
It can be a lot of things, including a single disk that is on its way out. Some disks do long timeouts instead of throwing immediate read errors like enterprise disks do. Any unusual read or write activity correlated with it?
|
# ? Aug 13, 2021 01:08 |
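The timeout mismatch behind that "disk on its way out" failure mode can be sketched numerically. This is a toy model, not any controller's actual firmware logic; the 8-second controller limit and the recovery times are illustrative assumptions:

```python
# Toy model of why a consumer disk can hang an array while an enterprise
# disk just reports an error: the RAID controller gives up on a slow read
# before the consumer drive finishes its deep error-recovery retries.

def controller_drops_drive(drive_recovery_s: float,
                           controller_timeout_s: float = 8.0) -> bool:
    """True if the controller times out (and may mark the drive failed)
    while the drive is still retrying a marginal sector."""
    return drive_recovery_s > controller_timeout_s

# Enterprise drive with error recovery (TLER/ERC) capped at 7 s:
# it reports the bad sector quickly and the array carries on.
print(controller_drops_drive(7.0))    # False

# Desktop drive that retries a marginal sector for up to two minutes:
# the controller stalls, heartbeats time out, VMs lock up.
print(controller_drops_drive(120.0))  # True
```

On drives that support it, `smartctl -l scterc` will show whether an error-recovery time cap is configured.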
|
And all the logs look fine otherwise? How old is the server? Is the BBU on the raid card OK, if equipped? Also unrelated: switched from ESXi to proxmox for my small home setup, absolutely no regrets doing so so far
|
# ? Aug 13, 2021 01:24 |
|
I see nothing out of the ordinary in the logs: everything is running fine, then the heartbeat starts getting a delayed response a couple of times, then fails to get a response at all. The RAID monitor indicates everything is normal with the array, and I'm not seeing any unusual read/write activity that correlates with it. In fact, it usually goes out in periods where there is little to no disk load on the server. The most common time is just after 10pm local. Edit to add: There's no BBU on this RAID controller card and I do not have the optional add-on. PremiumSupport fucked around with this message at 16:45 on Aug 13, 2021 |
# ? Aug 13, 2021 16:41 |
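One way to chase that just-after-10pm pattern is to bucket the heartbeat errors in the host logs by hour of day. A minimal sketch, assuming vmkernel.log-style lines with a leading ISO timestamp; the sample lines here are invented, not real ESXi output:

```python
from collections import Counter
import re

def timeout_hours(log_lines):
    """Count log lines mentioning heartbeat timeouts, bucketed by hour of day."""
    hours = Counter()
    for line in log_lines:
        if "heartbeat" not in line.lower():
            continue
        # Extract the hour from a leading YYYY-MM-DDTHH:... timestamp
        m = re.match(r"\d{4}-\d{2}-\d{2}T(\d{2}):", line)
        if m:
            hours[int(m.group(1))] += 1
    return hours

sample = [
    "2021-08-10T22:04:13.511Z cpu2: HBX: Timeout on heartbeat to Datastore 2",
    "2021-08-11T22:07:41.002Z cpu0: HBX: Timeout on heartbeat to Datastore 2",
    "2021-08-12T03:15:09.330Z cpu1: ScsiDeviceIO: normal completion",
]
print(timeout_hours(sample))  # Counter({22: 2})
```

If the failures really do cluster in one hour, the next step is finding what else runs then: backups, RAID patrol reads/scrubs, cron jobs on the guests.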
|
Moving from Hyper-V to VMware. Does VMware still have a converter? Can I do a live conversion, then schedule a shutdown of the old server and power on the new one? I assume I'm hosed for VMs that have shared storage in a cluster.
|
# ? Aug 13, 2021 18:23 |
|
lol internet. posted:Moving from Hyper-V to VMware. Yeah, they still have VMware vCenter Converter. As for the cluster bit, I'm sure there is a way, but don't ask me how; I'd have to be familiar with the environment.
|
# ? Aug 13, 2021 19:09 |
|
lol internet. posted:Moving from Hyper-V to VMware. https://www.vmware.com/it/products/converter.html You can set the VM to be shut down once the new VM is validated as OK. For the shared storage, the agent should be able to fetch the blocks and move them to a new virtual disk. If you planned to simply expose a LUN as a disk, it's not really viable (you can, but I would strongly advise against it). SlowBloke fucked around with this message at 08:37 on Aug 14, 2021 |
# ? Aug 14, 2021 06:48 |
|
Why does VMware make it such a pain in the rear end to download free ESXi?! "Dear user, content not available." I noticed in the release notes for basically everything after 7.0, they have an erratum that is basically "lol, no workaround for a race condition on USB". Does this basically break USB booting? Seems like kind of a big deal. quote:NEW: If you use a USB as a boot device for ESXi 7.0 Update 2a, ESXi hosts might become unresponsive and you see host not-responding and boot bank is not found alerts
|
# ? Aug 19, 2021 04:07 |
|
ESXi seems like it's pushing away hobbyists and homelab users more and more lately.
|
# ? Aug 19, 2021 13:48 |
|
I installed Proxmox because I got pissed at the VMware bullshit, and I like it a lot more than I ever did ESXi post-5.5. (I say that as one of the people who was bitten by the auto-boot bullshit in 6.5, and who also had to deal with just loving getting the ISOs without jumping through a thousand hoops as a homelab user.)
|
# ? Aug 19, 2021 14:11 |
|
movax posted:Why does VMware make it such a pain in the rear end to download free ESXi?! "Dear user, content not available." Have been dealing with this one at work (SD card on HPE blades). The SD drops out, the ESXi host stops responding, and it's been hanging VMs on the host too. Reboot the host and all comes back, but it usually dies again a few days later. It has been fixed: I tested a beta build in our test env and it hasn't recurred in over a month now. The GA release is supposed to be on the 25th (U3?), but it's already been delayed once, so fingers crossed they make the date. A similar issue happened back in the 6.5 days until it was patched, but I'd only ever lost management agents then; it never hung the VMs.
|
# ? Aug 19, 2021 14:25 |
|
CommieGIR posted:ESXi seems like it's pushing away hobbyists and homelab users more and more lately. ESXi stopped being hobbyist-friendly at 5.5; everything since has been a race to gently caress up non-standard setups. With William Lam using NUCs as test beds, those are your best bet if you want to homelab; building your own server is currently an exercise in frustration IMHO.
|
# ? Aug 19, 2021 14:44 |
|
SlowBloke posted:ESXi stopped being hobbyist-friendly at 5.5; everything since has been a race to gently caress up non-standard setups. With William Lam using NUCs as test beds, those are your best bet if you want to homelab; building your own server is currently an exercise in frustration IMHO. Or just install Proxmox.
|
# ? Aug 19, 2021 14:44 |
|
Even affordable older enterprise equipment starting to fall off the official support lists is going to get weird. I don't plan to move up from 6.7u3 any time soon. Not that I'm worried there'll be any major incompatibility, but I guess the further along we go, the more likely it becomes. Admittedly, with this kind of hardware the biggest issue is lack of vendor support if anything goes wrong, which is a non-issue for all but a vanishingly small subset of homelabbers. Proxmox seemed to do the job, but I just couldn't adapt to the "proxmox way" of doing things. Honestly, if I were coming in fresh it would probably be fine, but at this point I want to spend as little time as possible learning the underlying infrastructure or getting used to a new way of doing things, and just do things. So muscle memory is my enemy here, I guess, until something forces me off of VMware's platform.
|
# ? Aug 19, 2021 14:47 |
|
movax posted:Why does VMware make it such a pain in the rear end to download free ESXi?! "Dear user, content not available." This issue was crippling my lab hosts, but the vmtools workaround explained here has me back running stable: https://vninja.net/2021/05/18/esxi-7.0-u2a-killing-usb-and-sd-drives/
|
# ? Aug 19, 2021 14:59 |
|
SlowBloke posted:ESXi stopped being hobbyist-friendly at 5.5; everything since has been a race to gently caress up non-standard setups. With William Lam using NUCs as test beds, those are your best bet if you want to homelab; building your own server is currently an exercise in frustration IMHO. Just use XCP-ng or Proxmox. And no, NUCs are very expensive; just purchase Dell/HP USFF machines for half the cost and upgrade them. As it is, I'll stick with XCP-ng: they haven't cut legacy support and are unlikely to do so, and it works happily on my Dell blade servers with Opterons and Xeons. You can get out the door with a Dell R720 for the same price as a NUC. CommieGIR fucked around with this message at 17:41 on Aug 19, 2021 |
# ? Aug 19, 2021 16:10 |
|
CommieGIR posted:Just use XCP-ng or Proxmox. Alternatively, the little NUC I bought (https://simplynuc.com/ruby/) sips power. I'm not saying you should put them in the same equivalence class, but there are benefits, especially in power-to-performance. If electricity is cheap where you are, then never mind.
|
# ? Aug 19, 2021 16:57 |
|
NUCs are nice, but most of them have mobile CPUs in them, which limits their usefulness. If you can put up with something slightly larger, grabbing a Dell/HP/Lenovo SFF system (OptiPlex Micro, ProDesk Mini, ThinkCentre Tiny) off eBay is a good way to go: something only 8 months old despite being sold at second-hand prices, with warranty left and a desktop CPU in it.
|
# ? Aug 19, 2021 17:09 |
|
rufius posted:Alternatively, the little NUC I bought (https://simplynuc.com/ruby/) sips power. Okay, that's Ryzen-powered, so I do want that. drat you, rufius. For perspective: my current workstation is an Asus ROG laptop with a Ryzen 7 4800H (8c/16t), and I use it mostly for virtualization when I'm not playing games. Looks like it's got both SODIMM slots available too, so yeah, you should be able to cram 64GB of DDR4 into that too.
|
# ? Aug 19, 2021 17:39 |
|
CommieGIR posted:Okay, that's Ryzen-powered, so I do want that. drat you, rufius. I know... I bought one because the desktop I have (i7-9700K) uses a lot more power than the NUC and actually has less grunt than that Ryzen 7. That little NUC powers my WireGuard VM, a basic Kubernetes cluster running some smaller deployments, my Prom/Grafana server, and a couple of other test servers. I loaded it up with 64GB of RAM and it's been great. Also, because I'm a heretic, it runs Windows Server, because I prefer Hyper-V (I worked at MSFT for a long time; I'm used to Hyper-V).
|
# ? Aug 19, 2021 17:44 |
You can get used Ivy-Bridge Xeon-based servers with ECC memory for the same price as NUCs, if not cheaper.
|
|
# ? Aug 19, 2021 17:45 |
|
rufius posted:Alternatively, the little NUC I bought (https://simplynuc.com/ruby/) sips power. As someone who somehow got on their marketing spam lists despite never interacting with them or any related area, and has been unable to get off them: gently caress them.
|
# ? Aug 19, 2021 17:48 |
|
BlankSystemDaemon posted:You can get used Ivy-Bridge Xeon-based servers with ECC memory for the same price as NUCs, if not cheaper. I'm pretty sure people are passing on these due to noise and power consumption. I've been waiting a while for a really good deal on something tiny with two NICs or a PCIe slot, with no luck. Clark Nova fucked around with this message at 18:07 on Aug 19, 2021 |
# ? Aug 19, 2021 17:56 |
|
Clark Nova posted:I'm pretty sure people are passing on these due to noise and power consumption. I've been waiting a while for a really good deal on something tiny with two NICs or a PCIe slot, with no luck The Ivy Bridge machines actually don't consume much for a 1U/2U; I have one that idles at about 300 watts. rufius posted:Also - because I'm a heretic, it runs Windows Server because I prefer Hyper-V (I worked at MSFT for a long time - I'm used to Hyper-V). I use Hyper-V on my workstation because it just works and I don't really have to install anything. Works great.
|
# ? Aug 19, 2021 18:01 |
Clark Nova posted:I'm pretty sure people are passing on these due to noise and power consumption. I've been waiting a while for a really good deal on something tiny with two NICs or a PCIe slot, with no luck It idles at ~160W, and while it's not exactly quiet, a single closed wooden door between me and it is enough to muffle it so I can't hear it when sitting 2m from the door. Mind you, this is only achievable because of the HP-branded SAS controllers and NIC, as otherwise the iLO configuration will automatically turn the fans up to 40%. BlankSystemDaemon fucked around with this message at 21:42 on Aug 19, 2021 |
|
# ? Aug 19, 2021 18:26 |
|
Clark Nova posted:I'm pretty sure people are passing on these due to noise and power consumption. I've been waiting a while for a really good deal on something tiny with two NICs or a PCIe slot, with no luck The current tall NUCs have two NICs if you buy the 2.5G + USB riser: https://williamlam.com/2021/01/esxi-on-11th-gen-intel-nuc-panther-canyon-tiger-canyon.html
|
# ? Aug 19, 2021 20:03 |
|
There's also minisforum, they have some neat small form factor machines, like this: https://store.minisforum.com/products/minisforum-hm80?variant=39960884117665
|
# ? Aug 19, 2021 21:22 |
|
CommieGIR posted:The Ivy Bridge machines actually don't consume much for a 1U/2U; I have one that idles at about 300 watts. Agreed. My R620 with idle VMs, two 20-thread E5-2660 Xeons, 128 gigs of RAM, and six or eight spinning 10k 2.5” drives is currently idling at 168W. I'm pretty sure I have it in its most conservative power usage mode, but it's never felt slow or lacking. My workloads are all idle right now, so I'm sure it bounces up, but I'll take that 168W idle any day. I'm wondering whether switching to SSDs would have a noticeable effect on the idle wattage. Those motors have to account for a bit of that, right? some kinda jackal fucked around with this message at 21:42 on Aug 19, 2021 |
# ? Aug 19, 2021 21:24 |
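For scale, the idle wattages being traded in this thread translate to running costs roughly like this. A simple sketch; the $0.15/kWh rate is an assumed figure, so substitute your local tariff:

```python
def annual_cost(idle_watts: float, usd_per_kwh: float = 0.15) -> float:
    """Approximate yearly electricity cost of a box idling 24/7."""
    kwh_per_year = idle_watts / 1000 * 24 * 365
    return kwh_per_year * usd_per_kwh

# NUC-class box vs. an idle R620 vs. a 1U/2U Ivy Bridge server
for watts in (15, 168, 300):
    print(f"{watts:>4} W idle ~ ${annual_cost(watts):.0f}/yr")
```

At the assumed rate, the 300 W box costs roughly $375 a year more to leave idling than the 15 W one, which is the trade-off against the NUC's higher purchase price.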
|
Martytoof posted:Agreed. My R620 with idle VMs, two 20-threaded E5-2660 Xeons, 128 gigs of ram, six or eight spinning 10k 2.5” drives and it is currently idling at 168W. My main NAS is an R720 with 2 x 12-core Xeons and 128GB of RAM plus spinning disks, and under load it maybe hits ~350-400 watts. It hosts the storage for my M1000e blade center.
|
# ? Aug 19, 2021 21:31 |
CommieGIR posted:My main NAS is an R720 with 2 x 12 core Xeons and 128GB of RAM + Spinning disks and under load maybe hits ~350-400 watts. That's not under full load, right? I've seen my DL380p Gen8 hit over 1000W during full load using both PSUs, but that was with a GPU that I'm no longer using.
|
|
# ? Aug 19, 2021 21:43 |
|
My DL360p Gen8 with dual E5-2650 v0 CPUs, 256GB RAM, a 1TB NVMe drive, 2x240GB SSDs, and 2x3000GB 10k RPM drives uses about 170W on average. The G7 it replaced used well over 250W with the same workload, but it had 2x240GB SSDs and 6x300GB SAS drives, along with some pretty thirsty X5675 CPUs.
|
# ? Aug 19, 2021 21:47 |
|
I switched to Unraid this week to try something different from Proxmox and I can’t decide which I like better. I need some amalgamation of the two. Someone make an Unmox.
|
# ? Aug 19, 2021 21:50 |
|
BlankSystemDaemon posted:That's not under full load, right? I've seen my DL380p Gen8 hit over 1000W during full load using both PSUs, but that was with a GPU that I'm no longer using. Nope, like 400 watts if I'm doing intensive IO
|
# ? Aug 19, 2021 22:07 |
|
I have a Synology NAS that I run some containers on. I just really don't need a home lab these days. Do I need to hand in my nerd card? Am I getting old?
|
# ? Aug 19, 2021 23:15 |
|
Internet Explorer posted:I have a Synology NAS that I run some containers on. I just really don't need a home lab these days. Nah. I just have personal issues.
|
# ? Aug 19, 2021 23:17 |
|
GrandMaster posted:Have been dealing with this one at work (SD card on HPE blades). The SD drops out, the ESXi host stops responding, and it's been hanging VMs on the host too. Reboot the host and all comes back, but it usually dies again a few days later. VMware have been flagging for at least two years that SD cards and USB flash aren't fit for booting ESXi from. From all the KBs etc. I've read, they're just sick and tired of dealing with the corruption and low performance from write IO that the cards and USB devices can't handle.
|
# ? Aug 20, 2021 03:26 |
|
Pikehead posted:VMware have been flagging for at least two years that SD cards and USB flash aren't fit for booting ESXi from. Except it's a new issue: ESXi has run fine from both devices for years, so something they changed broke it.
|
# ? Aug 20, 2021 03:37 |
|
Internet Explorer posted:I have a Synology NAS that I run some containers on. I just really don't need a home lab these days. I just got a big beefy new QNAP NAS and am probably going to transition to just running containers on it. I've got an IBM x3550 M2 with dual Xeon X5570 CPUs, 128GB RAM, and four 10k SAS HDDs in RAID 5 that I've been running ESXi 6.5 on for a while now. It's pretty power-hungry, and I can't upgrade to any newer version of ESXi because they dropped support for those CPUs. Also, I think the RAID controller has started failing; storage is getting flaky. Time to put it out to pasture.
|
# ? Aug 20, 2021 04:24 |
|
|
CommieGIR posted:Nah. I just have personal issues.
|
|
# ? Aug 20, 2021 10:25 |