|
Potato Salad posted:I just don't see a big migration to Hyper-V without a Microsoft vCenter.
Microsoft "vCenter" already exists, it's called System Center Virtual Machine Manager and it's absolute poo poo.
|
# ? Jan 28, 2024 19:49 |
|
Potato Salad posted:I just don't see a big migration to Hyper-V without a Microsoft vCenter.
There's always Azure Stack HCI
|
# ? Jan 28, 2024 19:54 |
|
There's always taking up carpentry or plumbing or something
|
# ? Jan 28, 2024 19:55 |
|
SlowBloke posted:Microsoft "vCenter" already exists, it's called System Center Virtual Machine Manager and it's absolute poo poo.
yeah vmm doesn't count
|
# ? Jan 28, 2024 20:06 |
|
IBM sez “Come back to momma…” and is waving you towards a mainframe
|
# ? Jan 28, 2024 20:10 |
|
fresh_cheese posted:IBM sez “Come back to momma…” and is waving you towards a mainframe
"By the way we're renting out the rest of it to """the cloud"""" "

SlowBloke posted:Microsoft "vCenter" already exists, it's called System Center Virtual Machine Manager and it's absolute poo poo.
It seems like HyperV just gets worse and worse as time goes on.
|
# ? Jan 29, 2024 01:18 |
|
j3rkstore posted:There's always Azure Stack HCI
I'd rather pay for vmware than deal with that poo poo. Hell, I just sunk an 18-month IT project at work where they tried to push ASHCI on us for OT / SCADA poo poo. It did not survive the IEC 62443-based risk assessment.
|
# ? Jan 29, 2024 01:22 |
Is there any virtualization on x86 or its derivatives, other than VMware's, that doesn't use hardware-accelerated virtualization with SLAT (Intel VT-x with EPT, AMD-V with RVI)? I know VMware can also use it, but their original software did it all in software and somehow managed very low overhead.

EDIT: Oh, right - XenServer is a thing. I forgot orz
|
|
# ? Jan 29, 2024 18:34 |
|
Just make sure you go for XCP-NG and not the actual Citrix Xenserver. Save yourself the headache of 'why can't I use that feature?'
|
# ? Jan 29, 2024 18:47 |
|
Internet Explorer posted:Finally, XenServer's time to shine!
|
# ? Jan 29, 2024 18:49 |
CommieGIR posted:Just make sure you go for XCP-NG and not the actual Citrix Xenserver. Save yourself the headache of 'why can't I use that feature?'
I'm very happy with bhyve(8).
|
|
# ? Jan 29, 2024 18:55 |
|
BlankSystemDaemon posted:Is there any virtualization on x86 or its derivatives, other than that of VMware, that doesn't use hardware-accelerated virtualization with SLAT (aka AMD Vi/Intel VT-x)?
VirtualBox runs 32-bit guests without hardware extensions
|
# ? Jan 29, 2024 19:35 |
|
BlankSystemDaemon posted:Is there any virtualization on x86 or its derivatives, other than that of VMware, that doesn't use hardware-accelerated virtualization with SLAT (aka AMD Vi/Intel VT-x)?
|
# ? Jan 29, 2024 21:11 |
|
BlankSystemDaemon posted:Is there any virtualization on x86 or its derivatives, other than that of VMware, that doesn't use hardware-accelerated virtualization with SLAT (aka AMD Vi/Intel VT-x)?
VMware ripped out the binary translator and software MMU a few years ago. I think official support for it has dropped off, or will very soon.

That said, last Friday was my last day there. I noped my way out without another job even lined up.
|
# ? Jan 29, 2024 23:48 |
SamDabbers posted:VirtualBox runs 32 bit guests without hardware extensions
Finding a Nehalem CPU and board combo probably isn't even that difficult, since it's not old enough to be retro, yet old enough to have been retired from almost every production deployment - but

ExcessBLarg! posted:Qemu used to have a Linux kernel module, KQemu, that provided ring 3 (userspace) virtualization before KVM effectively replaced it.
Must've been pretty interesting code to have a kernel module run in userspace.

DevNull posted:VMware ripped out the binary translator and software MMU a few years ago. I think official support for it has dropped off, or will very soon.
On the one hand, getting the gently caress out was definitely the right move - but it does feel a bit like walking a knife-edge without another job lined up.
|
|
# ? Jan 30, 2024 08:37 |
|
fresh_cheese posted:OVirt is still there, has it started withering yet since redhat killed RHV?
As far as I can tell, yes, it has already begun to wither, if not gone on life support. I spent a long time with my homelab on oVirt, but I hit a wall with the reliability of the hosted HA engine setup - as well as other aspects like the Cluster/Ceph integration - and I wasn't confident they'd be fixed any time soon.

I switched to Proxmox + Ceph and I haven't looked back. I generally haven't had too much of a problem with the HA/clustering aspects, but I've only got 5-6 nodes in the cluster at any one time. PCIe passthrough is also significantly easier to manage, although it still needs some finicky one-off setup on the host to clear the devices for passthrough. I'd recommend Proxmox for homelab use for sure.
|
# ? Jan 30, 2024 14:00 |
Proxmox is great, if only because it makes backing up your VMs and transitioning to new hardware (add a node to the cluster, stop the VMs on the old hardware, transfer them, start them up on the new hardware) trivially easy compared to bare-metal servers.
|
|
# ? Jan 30, 2024 14:30 |
|
BlankSystemDaemon posted:Was it part of the kernel? Because the obsolete documentation I can find seems to indicate it was a loadable module people would compile on their own.
|
# ? Jan 30, 2024 16:04 |
|
This is a weird question, but I'm looking to replace my aging home VM host. It's running on a very old platform (Intel i7-7700), and the main guest VMs are split into several, where two are basically fully loaded often and there's a bunch of lighter supplementary ones. Currently I have the underlying CPU cores pinned so that the heftiest guest gets two cores (4 threads), the second most used gets 1 pinned core, and the last floater core does everything else. This has worked great, and it's been under a ton of constant load for like a year now with no complaints - until now. I need more memory. Way more memory.

So I'm looking at the newest Intel and AMD consumer-level CPUs and they all look good, but I guess the question is: how do e-cores play into using CPU pinning? Obviously the upgrade from something this old is going to be huge, but I guess in this scenario I could just allocate the VMs cores and not care about pinning, letting the host (Debian+KVM) and CPU do the scheduling between e- and p-cores?
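For what it's worth, the pinning described above maps onto libvirt's `<cputune>` stanza; here's a minimal sketch, assuming a hybrid Intel part where the P-core SMT threads enumerate first (all host CPU IDs here are placeholders - check `lscpu -e` on the actual box, since enumeration varies by CPU and BIOS):

```xml
<!-- Hypothetical libvirt domain fragment: give the heavy guest 4 vCPUs
     pinned to two P-cores (host threads 0-3 assumed to be P-core SMT
     siblings). -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
  <!-- Push QEMU's own emulation/I/O threads onto the E-cores so they
       don't steal time from the pinned vCPUs. -->
  <emulatorpin cpuset='8-15'/>
</cputune>
```

Leaving the lighter guests unpinned, as suggested, lets the kernel's hybrid-aware scheduler float them between E- and P-cores on its own.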
|
# ? Jan 31, 2024 17:34 |
|
BlankSystemDaemon posted:On the one hand, getting the gently caress out was definitely the right move - but it does feel a bit like walking a knife-edge without another job lined up.
I managed to get 5 months' pay as severance out of the deal. I also have a decent amount in savings and can go on my wife's health insurance, so it wasn't as scary as it could have been.
|
# ? Feb 1, 2024 02:12 |
|
End of an era. As far back as I can remember on these forums I recall you talking about cool VMware stuff. I wish you the absolute best.
|
# ? Feb 1, 2024 02:26 |
|
Nitrousoxide posted:Proxmox is great, if only to make backing up your vms and transitioning to new hardware (add node to cluster, stop vm's on old hardware, transfer, start up vms on new hardware) just trivially easy compared to bare metal servers.
I’m starting to gently caress around with proxmox (good enough to run my kid’s palworld server and a unifi controller at least) and I had a problem where I hosed up the config of a VM and it just sat in BIOS looping looking for a bootable disk, which I had rudely not provided. Neither “shutdown” nor “stop” worked to kill the VM, so I had to go kill the qemu process myself in the shell. Otherwise it’s been pretty good.

Trying to figure out how best to do cloud-init in a way that keeps the image updated so I don’t have to apt-get upgrade every time I clone a new VM.

It’s also a little weird to me that it doesn’t provide a way to just slam a docker/podman application container into it. I can create a VM for all my docker stuff, but that gives me less-flexible allocation of CPUs and memory, and it means that I have to use another storage abstraction layer rather than managing the docker container volumes alongside the VM disks and LXC resources. I’m sure there’s a good reason for it, of course.
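One hedged way to handle the "keep the cloud-init image updated" problem is to refresh the template's disk periodically with libguestfs before cloning. A sketch, assuming `libguestfs-tools` is installed on the host; VM ID 9000, the disk path, and the VM name are all placeholders:

```shell
# Refresh a Proxmox cloud-init template so new clones boot with current packages.
qm stop 9000                                     # the template source must be offline
virt-customize \
    -a /var/lib/vz/images/9000/vm-9000-disk-0.qcow2 \
    --update                                     # runs apt-get upgrade (or dnf) inside the image
qm template 9000                                 # (re-)mark it as a template
qm clone 9000 123 --name fresh-vm --full         # clones no longer need an immediate upgrade
```

The disk path depends on the storage backend (this example assumes plain `local` directory storage); on LVM or ZFS the `-a` argument would point at the block device instead.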
|
# ? Feb 1, 2024 02:34 |
|
Subjunctive posted:
I think you're looking at it backwards. You have more flexibility if you run docker in a VM because you can allocate resources and manage scaling. If you install docker on the base hypervisor then it will just use as many resources as it wants. I'm sure someone will respond with some elaborate method for managing resource allocation in docker, but there's a bunch of other reasons to put it in a VM, like additional security segmentation and portability.

Edit: I put Plex in an LXC for performance reasons but it sucks because PBS only backs up VMs.

RVWinkle fucked around with this message at 03:49 on Feb 1, 2024 |
# ? Feb 1, 2024 03:43 |
Subjunctive posted:I’m starting to gently caress around with proxmox (good enough to run my kid’s palworld server and a unifi controller at least) and I had a problem where I hosed up the config of a VM and it just sat in BIOS looping looking for a bootable disk, which I had rudely not provided. Neither “shutdown” nor “stop” worked to kill the VM, so I had to go kill the qemu process myself in the shell. Otherwise it’s been pretty good.
You could just install docker at the proxmox level with apt install docker. It's just Debian with the proxmox sauce slathered on top.

That said, I have kept my proxmox install totally stock with absolutely no additional packages installed. Anything I want to install I do through an LXC or VM on there. Like you, I have a VM with Docker on it (and two VMs with Podman on them) which I use to manage the containers.
|
|
# ? Feb 1, 2024 03:46 |
|
RVWinkle posted:I think you're looking at it backwards. You have more flexibility if you run docker in a vm because you can allocate resources and manage scaling. If you install docker on the base hypervisor then it will just use as much resources as it wants.
I'll respond and say that CPU and memory limits with cgroups via docker are incredibly easy and have been mature for ages; otherwise it'd be pretty useless to try to schedule low-latency heterogeneous services on clusters. Security I'll give you, but the point of linux containers is that any modern container runtime can run them with no fuss - I don't see what portability advantages a VM gives you.

I also wish that Proxmox had first-class Docker or preferably Podman instead of LXC. LXC containers seem to combine some of the bad parts of both VMs and OCI containers together.
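To make the "incredibly easy" claim concrete, a couple of illustrative commands - the image name and the limits are made up, but the flags themselves are long-standing Docker CLI options that map directly onto cgroup controllers:

```shell
# Cap a container at creation time; Docker wires these straight into cgroups.
docker run -d --name media --cpus="2" --memory="16g" nginx:alpine

# Resize later without recreating the container or rebooting anything:
docker update --cpus="4" --memory="24g" media
```

`docker update` changes the limits on the running container, which is the contrast with the stub-VM setup discussed below: no maintenance window needed for the neighbors.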
|
# ? Feb 1, 2024 03:52 |
|
RVWinkle posted:I think you're looking at it backwards. You have more flexibility if you run docker in a vm because you can allocate resources and manage scaling. If you install docker on the base hypervisor then it will just use as much resources as it wants.
No, I have less flexibility, because I need to assign a resource pool to “all things that are managed by docker”, which is not a meaningful thing, and then subdivide it between containers, rather than just giving each container the restrictions I want via cgroups and then letting proxmox do its job of mediating those things. If I want to expand a container from 16GB to 24GB of RAM or 2 cores to 8, it might be easy to do within the docker container, but whoops, I’ve allocated all the RAM/cores associated with the stub VM, and now I need to take a maintenance window on all the containers in my docker VM to reboot it with more resources - when it should only affect that one container. (Does dynamically increasing the balloon stuff work? Last time I tried that I was running xen in userspace under gdb, so maybe I should re-evaluate it.)

Nitrousoxide posted:I have kept my proxmox install totally stock with absolutely no additional packages installed.
Not even tailscale? You goddamned animal.
|
# ? Feb 1, 2024 03:59 |
|
RVWinkle posted:Edit: I put Plex on a LXC for performance reasons but it sucks because PBS only backs up vms.
I have multiple LXCs being backed up by PBS.

Twerk from Home posted:I also wish that Proxmox had first class Docker or preferably Podman instead of LXC. LXC containers seem to combine some of the bad parts of both VMs and OCI containers together.
Eh, I also would like some native Docker container option in Proxmox VE, but I don't think OCI container support would totally supplant LXC. I run a Galera cluster across my Proxmox hosts using LXCs managed by Ansible. Yeah, I could probably architect something similar in Docker, but it would take a lot more effort than I care to put in, and would be harder to manage for questionable gains. LXC has definitely got some fucky parts (privilege, id mapping, etc etc) but as a low-fat alternative to a VM, it's pretty good.

Cenodoxus fucked around with this message at 04:17 on Feb 1, 2024 |
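For reference, a container backup to PBS is the same `vzdump` invocation as a VM backup; a sketch assuming CT ID 101 and a PBS storage entry named `pbs-store` (both placeholders for whatever is configured on the cluster):

```shell
# Back up LXC container 101 to a Proxmox Backup Server storage target.
vzdump 101 --storage pbs-store --mode snapshot
```

The same job can be scheduled from the GUI under Datacenter > Backup, which is where it's easy to miss that CTs are selectable alongside VMs.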
# ? Feb 1, 2024 03:59 |
Subjunctive posted:Not even tailscale? You goddamned animal.
Why would I have tailscale on the proxmox host? I'm not trying to build a cluster with an offsite server. All my hardware is on the same local subnet. I do have a wireguard VPN, but it's served by a container in one of my VMs, and it's for my other devices (phone/laptop/steamdeck) to connect back into my home network.
|
|
# ? Feb 1, 2024 04:12 |
|
Nitrousoxide posted:Why would I have tailscale on the proxmox host? I'm not trying to build a cluster with an offsite server. All my hardware is in the same local subnet.
So that you can hit the proxmox UI and API as proxmox.thingwithtail-thingwithscale.ts.net from any device you own, wherever you are, as directly as is possible in your then-current network configuration.

All my containers have tailscale, all my VMs have tailscale, all my PCs have tailscale, my Steam Deck has tailscale, my 3D printer RPi has tailscale, my phone has tailscale, etc. All my devices are directly meshed, punch through even the most horrifying hotel firewalls, and it just frickin’ works. If I could install tailscale on my wife’s vibrator I would do it just in case. Brad needs to finish with the WOL stuff so that I can stream games from my desktop to a hotel room more simply, though.

E: I don’t have to copy authorized_keys around, or my private keys, because tailscale ssh takes care of that on the basis of my authenticated tailscale connection! I can let my kid’s friend connect to our LAN minecraft server without opening it up to the entire world. I have zero ports forwarded from my router, so I can’t gently caress that up and end up with someone blowing open an RCE in pihole or octoprint or whatever.

E2: I use the same DNS name/IP address to access everything, without concern for whether I’m on my home network or not, and it transfers at wire speed even once I make stupid upgrades to my home network.

Subjunctive fucked around with this message at 04:26 on Feb 1, 2024 |
# ? Feb 1, 2024 04:17 |
Subjunctive posted:So that you can hit the proxmox UI and API as proxmox.thingwithtail-thingwithscale.ts.net from any device you own, wherever you are, as directly as is possible in your then-current network configuration.
I am running a reverse proxy on my network which can resolve proxmox.internal.(domain-name).tld (node 1) and proxmox2.internal.(domain-name).tld (node 2) as long as I'm connected to my VPN. I don't want to use tailscale because when I'm connected to my VPN I want all my traffic to go through it. I don't want the split tunneling that it normally does. I don't see any advantage to rebuilding my vpn setup to be reliant on a 3rd party service when I already have a perfectly fine self-hosted version which is already utilizing wireguard.
|
|
# ? Feb 1, 2024 04:24 |
|
yeah I’ve thought about running my own control plane with headscale, but it doesn’t seem worth it while they’re doing it so well
|
# ? Feb 1, 2024 04:28 |
|
Trying to get some advice from the thread about whether there's an equivalent KVM option to what I'm currently running in my home network.

I started out running ESXi for a few years until VMware dropped support for my old MegaRAID card, in version 7 if I'm not mistaken. Then I migrated to XCP-NG, primarily because I'd used XenCenter in the past and enjoyed using it. Also, because all my VMs are Rocky Linux, it made sense to use XCP-NG because it's also Enterprise Linux-based and I can sort of centralize configuration management around EL. I manage the infrastructure with a combination of Foreman+Ansible for provisioning+configuration management and Xen Orchestra (XOA) for VM backup/management.

While I've been satisfied with XCP-NG, it is getting a bit long in the tooth; it's CentOS 7-based, doesn't support IPv6 for management, and hasn't received a decent-sized feature-based release in quite a while.

What I'd LIKE is something like Proxmox running on EL (Rocky Linux preferred) with the features of Xen Orchestra (VM backup required as a minimum). I suppose the cherry on top would be provisioning support in Foreman with a plugin. Is there a hypervisor out there that meets all those requirements?
|
# ? Feb 10, 2024 20:24 |
|
Proxmox supports backups and even has a dedicated backup server that you can run. I've been running both XCP-ng and Proxmox over the last few months. I still prefer XCP-ng, especially since it supports importing OVAs, but Proxmox is really good in its own ways.
|
# ? Feb 10, 2024 20:45 |
|
RVWinkle posted:I think you're looking at it backwards. You have more flexibility if you run docker in a vm because you can allocate resources and manage scaling. If you install docker on the base hypervisor then it will just use as much resources as it wants.
PBS does back up LXC containers, though?
|
# ? Feb 11, 2024 23:37 |
|
Is this the gently caress Broadcom thread? I'm not at all looking forward to the Hyper-V and SCVMM that is without doubt in my future. Broadcom wants $FUCKYOU more each month for VMware. We have Datacenter licences for Windows already, but we'd rather pay for VMware on top and use that as our hypervisor than use Hyper-V - but that's inevitably coming to an end with these stupid price increases.

I don't really get the play here - if all SMBs switch to alternatives, it will not only hurt stable development of VMware (fewer customers = fewer testers hitting corner cases, resulting in a less battle-tested product), but also future adoption in general - if everyone starts out with Hyper-V, Proxmox or XCP-NG or maybe something else, then in future they'll go with what they're comfortable with. It's why Windows is so widespread; Microsoft got it into classrooms around the world as best it could. VMware is also learnt by many today, but that could change overnight, and probably will. Within a short time VMware will be a dead product and company. I guess they're hoping they can just coast with a few giant customers, but that's a dangerous strategy to intentionally pursue, as you become totally reliant on them.

vv Yeah, it's honestly not a bad idea. IT can be an absolute shitshow at times. Or maybe it's all the time, and I'm getting less tolerant of the bullshit.

HalloKitty fucked around with this message at 17:04 on Feb 12, 2024 |
# ? Feb 12, 2024 16:55 |
|
Retrain as an electrician or something
|
# ? Feb 12, 2024 16:58 |
|
HalloKitty posted:Is this the gently caress Broadcom thread?
The play is that it is not a growth opportunity anymore. They’ve captured all the market they’re going to, and it is not easy to switch. It is now legacy enterprise software, and the pattern around that is well established.
|
# ? Feb 12, 2024 17:09 |
They are happy to burn their long-term prospects for higher revenue/profit this quarter and the next. Long-term profitability doesn't factor much into shareholder value, so there's little desire to optimize around it.
|
|
# ? Feb 12, 2024 17:42 |
|
Get your bonus, get your third home, take the golden handshake, repeat at the next firm you end up on the board of
|
# ? Feb 12, 2024 17:44 |
|
HalloKitty posted:I don't really get the play here - if all SMBs switch to alternatives, it will not only hurt stable development of VMware (few customers = fewer testers hitting corner cases resulting in a less battle tested product), but also future adoption in general - if everyone starts out with Hyper-V, Proxmox or XCP-NG or maybe something else, then in future they'll go with what they're comfortable with.
Hock Tan said over and over that their plan was to stop dealing with all the smaller businesses and focus only on the top customers. That is how they operate with hardware, and they are applying the same formula to VMware.
|
# ? Feb 12, 2024 17:50 |