|
The licenses will be valid, but you won't be able to renew support on them.
|
# ? Dec 16, 2023 18:00 |
|
I'm really upset that the justice department complaints against the acquisition did not end up surviving political intervention. What American consumers are positively served by the acquisition? None, except for a couple of giant shareholders. What American businesses are positively served by the acquisition? None.
|
# ? Dec 16, 2023 18:01 |
Well, I'm happy with proxmox at least for my homelab. I don't know how different it is to vmware in scalability, but it's served my one or two node purpose just fine.
|
|
# ? Dec 16, 2023 18:10 |
|
I presume there's a point where it makes sense to employ a few XCP-ng experts who can also contribute to the codebase rather than keep paying out for VMware licensing. Do larger companies run the numbers on this and have a point planned for making the change, or do they just consider VMware a sunk cost and keep paying it?
|
# ? Dec 16, 2023 18:14 |
|
Potato Salad posted:I'm really upset that the justice department complaints against the acquisition did not end up surviving political intervention It's 2023; a merger/acquisition being blocked is extremely unlikely.
|
# ? Dec 16, 2023 19:10 |
|
For us, it would be cheaper to hire people to maintain Proxmox / XCP-NG rather than keep paying for VMware.
|
# ? Dec 17, 2023 13:44 |
It'd be nice if folks learned the lesson that having any one solution is the wrong way, and instead some focus on working on kvm, while others improve bhyve (found in FreeBSD and Illumos distributions, among others) while still others work on Xen. With enough work, and someone working on interoperability, it'd be possible to have a fleet of three (or more?) hypervisor solutions, all being able to work together. BlankSystemDaemon fucked around with this message at 14:48 on Dec 17, 2023 |
|
# ? Dec 17, 2023 14:45 |
|
Wibla posted:For us, it would be cheaper to hire people to maintain Proxmox / XCP-NG rather than keep paying for VMware. I’ll start the
|
# ? Dec 17, 2023 14:46 |
|
BlankSystemDaemon posted:It'd be nice if folks learned the lesson that having any one solution is the wrong way, and instead some focus on working on kvm, while others improve bhyve (found in FreeBSD and Illumos distributions, among others) while still others work on Xen. KVM, bhyve, and Xen are all actively developed though? The closest thing we have for a unified management plane is libvirt. There's also virtio for PV devices but only KVM and bhyve implement it AFAIK. Xen and VMware have little incentive to adopt it unfortunately.
|
# ? Dec 17, 2023 16:09 |
|
If you go too far down that path you end up at OpenStack nova-compute - the least common denominator of hypervisor management. It can manage almost any hypervisor as long as you don't want it to do anything cool or good.
|
# ? Dec 17, 2023 16:23 |
|
fresh_cheese posted:don't want it to do anything cool or good. You already said OpenStack, no need to repeat yourself.
|
# ? Dec 17, 2023 16:34 |
|
BlankSystemDaemon posted:It'd be nice if folks learned the lesson that having any one solution is the wrong way, and instead some focus on working on kvm, while others improve bhyve (found in FreeBSD and Illumos distributions, among others) while still others work on Xen. Puppet and Chef support multi-hypervisor setups and you can script it.
|
# ? Dec 17, 2023 16:41 |
|
CommieGIR posted:Puppet and Chef support multi-hypervisor setups and you can script it. Did you just tell me to go gently caress myself?
|
# ? Dec 17, 2023 16:42 |
|
in a well actually posted:Did you just tell me to go gently caress myself? Broadcom did.
|
# ? Dec 17, 2023 18:17 |
John DiFool posted:I used to get hard freezes on my host when shutting down a Windows VM under KVM et al when passing a consumer NVIDIA GPU via VFIO. I fixed those freezes by setting the card to MSI mode in the guest. This was many, many years ago though. Could still be worth checking out on your setup. Yeah, I've seen this on the internet, along with a related very old QEMU bug, but unless it's a regression, that's not it. What I think it actually is: something to do with binding/rebinding an audio device for snd_hda_intel after the VM shuts down. I also have a different problem entirely: my work currently has a parametric math model that we run fairly often, and it takes 5-45 minutes to run depending on model size, parameters, what laptop is used, etc. I figured a good option to explore would be something like EC2, since we really need something capable of running 21 separate threads at the same time, but it seems like the only instances with that many cores are the massive ones with a crap ton of RAM, networking, etc. Are there any cloud providers that offer more fine-grained options? Another option I was thinking of: since the parameters are completely independent, I could just spin up a tiny instance for each one separately, but I don't know anywhere near enough about this to judge the feasibility.
|
|
# ? Dec 19, 2023 03:59 |
Any reason you can't do the threads in separate docker or podman containers? They would have substantially lower overhead than a full VM.
|
|
# ? Dec 19, 2023 04:08 |
|
Watermelon Daiquiri posted:yeah, ive seen this on the internet and a related old old qemu bug but unless it's a regression it's not it. Instead what I think it is is it has something to do with binding/rebinding some audio source for snd hda intel after the vm shuts down. I posted in the other thread, but it would help if you could think about each independent task individually, and then use any of the many job schedulers that are designed to fit a lot of smaller tasks onto bigger nodes efficiently. Or just use AWS Fargate, which gives you fine-grained control over how much vCPU / RAM each individual task gets. You're going to get better, more cost-efficient job throughput with a job scheduler, though.
|
# ? Dec 19, 2023 04:11 |
Twerk from Home posted:I posted in the other thread, but it would help if you could think about each independent task individually, and then use any of the many job schedulers that are designed to fit a lot of smaller tasks onto bigger nodes efficiently. Yeah, Docker Swarm or Kubernetes (K3s or K8s) would be your main starting point for a widely used on-prem or cloud scheduler, unless you use some company's special-sauce scheduler. If it's a really simple project, you could even do something as simple as a bash script and cron job that checks a network-shared directory for folders with data needing processing, and spins up a docker compose stack with some env variables when someone drops a new one in there.
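A minimal sketch of that drop-folder idea, with the processing step stubbed out so it stands alone (all paths and names here are illustrative, and the real version would call docker compose where the stub is):

```shell
#!/bin/sh
# Cron-driven drop-folder watcher: process each new job directory once, then archive it.
# process_job is a stand-in for the real work, e.g.:
#   JOB_DIR="$1" docker compose run --rm model
INBOX="${INBOX:-inbox}"
DONE="${DONE:-done}"

process_job() {
    # Stub: real version would launch the container with env vars set.
    echo "processed $(basename "$1")" > "$1/result.txt"
}

scan_inbox() {
    mkdir -p "$INBOX" "$DONE"
    for job in "$INBOX"/*/ ; do
        [ -d "$job" ] || continue        # glob matched nothing, inbox is empty
        process_job "$job"
        mv "$job" "$DONE/"               # archive so the next cron run skips it
    done
}

scan_inbox
```

A crontab line like `*/5 * * * * /usr/local/bin/watch-inbox.sh` would then poll the share every five minutes.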
|
|
# ? Dec 19, 2023 04:27 |
|
Nitrousoxide posted:Yeah, Docker Swarm or Kubernetes (K3s or K8s) would be your main starting point for a widely used on-prem or cloud scheduler. Unless you use some companies special sauce scheduler. Whats your 2c on kubernetes distributions that one might run on their own, physical hardware without wanting to pay VMWare licenses? Is K3s the winner for that? Canonical is out there marketing Mikrok8s wherever they can: https://microk8s.io/compare . Including in Ubuntu's default MOTD for a few years now.
|
# ? Dec 19, 2023 04:30 |
Twerk from Home posted:Whats your 2c on kubernetes distributions that one might run on their own, physical hardware without wanting to pay VMWare licenses? Is K3s the winner for that? Canonical is out there marketing Mikrok8s wherever they can: https://microk8s.io/compare . Including in Ubuntu's default MOTD for a few years now. I'm personally a fan of CoreOS and its Ignition file setup as a super lightweight platform for either simple container workflows or as part of a swarm of nodes. It's super easy to deploy new instances, and they keep themselves updated. If you want to orchestrate their updates, they can be set up to do that too, so that you never have too many going down at once for an update. But their updates are super quick anyway because they're just rebooting into an already-prepared image. Red Hat has documentation on how to get it spun up on pretty much all the virtualization platforms (QEMU, for example). You can use an Ignition config that bakes in all the setup scripts you'd need to run to get the nodes up and running. There are probably some GitHub repos out there with Ignition files that have K3s or K8s already set up that you could tweak for your needs.
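For concreteness, a Butane config (the human-friendly YAML that the `butane` tool transpiles into Ignition JSON) might look something like this; the SSH key and the k3s install unit are purely illustrative placeholders, not a tested recipe:

```yaml
# Butane config, Fedora CoreOS flavor. Convert with: butane config.bu > config.ign
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...   # placeholder key
systemd:
  units:
    - name: k3s-install.service   # illustrative first-boot unit
      enabled: true
      contents: |
        [Unit]
        Description=Fetch and run the k3s installer on first boot
        After=network-online.target
        Wants=network-online.target
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/curl -sfL https://get.k3s.io -o /tmp/k3s.sh
        ExecStart=/usr/bin/sh /tmp/k3s.sh
        RemainAfterExit=yes
        [Install]
        WantedBy=multi-user.target
```

The point being: everything the node needs is declared up front, so reprovisioning is just booting the image with the same Ignition file.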
|
|
# ? Dec 19, 2023 04:45 |
I'm honestly trying to keep it as simple as possible given the limited resources we have. This is so far outside my wheelhouse it isn't even funny lol, but right now we're dealing with running the models on lovely thinkpads, which take ages and ages to compute. Ideally I'd set up a Threadripper server which could run the models for us, so it could calculate all 21 models simultaneously and the person running things would just ssh in. The input to the model is a <40kb text file and the output is similarly sized (though there's a hundred megabytes of working data we still keep around for troubleshooting purposes), so it wouldn't even be hard to transfer data around. However, my boss is incredibly allergic to capital purchases (which he should see his doctor about) and very much prefers the $2-an-hour compute costs (since each time we need to run the models it'll take ~10 minutes tops, and we only run maybe 20-30 times a week). Something like EC2 would serve us well when external people want to run models too. We won't exactly be giving them an in to our internal network.
|
|
# ? Dec 19, 2023 05:11 |
|
Maybe just put Tailscale on the various computers that travel and get that Threadripper going?
|
# ? Dec 19, 2023 05:12 |
unfortunately there's that deadly capital allergy
|
|
# ? Dec 19, 2023 05:16 |
|
Yeah, it really sounds like you should look at AWS Fargate: https://aws.amazon.com/fargate You can use it via AWS Batch or just directly, but it lets you size the compute to each task and only pay on demand without having to actually deal with a VM yourself. Go ahead and launch 21 right-sized tasks at once; that way, when one task finishes you can stop paying for the resources it's using, rather than having unused cycles on a bigger instance. Also, you get to call it "Fartgate". I take it that each of these things is single-threaded but wants a lot of RAM?
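To make "size the compute to each task" concrete: with Fargate, the ECS task definition pins CPU and memory per task (e.g. `"cpu": "1024"` is 1 vCPU, paired with 2-8 GB of memory). A rough sketch, where the family, image URI, and command are all illustrative placeholders:

```json
{
  "family": "model-run",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "2048",
  "containerDefinitions": [
    {
      "name": "model",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/model:latest",
      "essential": true,
      "command": ["./run_model", "params/01.txt"]
    }
  ]
}
```

You'd register that once and then launch 21 tasks from it, overriding the command per parameter file.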
|
# ? Dec 19, 2023 05:36 |
|
Nitrousoxide posted:If it's a really simple project you could even do something as simple as a bash script and cron job that checks a network shared directory for a folder with data needing processing and spins up a docker compose with some env variables when someone drops a new one in there. so that's how new, weirdly architected message queuing systems are born
|
# ? Dec 19, 2023 05:54 |
Twerk from Home posted:Yeah, it really sounds like you should look at AWS Fargate: https://aws.amazon.com/fargate Nope, only like 500MB total; it's just a shittily optimized fd mesh. But that Fargate looks interesting. I'm just an EE who got roped into trying to improve the run time of the models because I'm a computer nerd, and I have so much other stuff to do that I'll likely just say getting everything set up will be a massive project.
|
|
# ? Dec 19, 2023 14:45 |
|
Watermelon Daiquiri posted:unfortunately there's that deadly capital allergy I’m sure Dell will lease you something quite happily!
|
# ? Dec 19, 2023 15:15 |
|
Watermelon Daiquiri posted:nope, only like 500mb total, it's just a shittily optimized fd mesh but that fargate looks interesting. I'm just an EE who's been roped into trying to improve the run time of the models we have since I'm a computer nerd and I have so much other stuff to do, so I'll likely just say that getting everything set up will be a massive project. Aw jeez. Have you tried running it on your laptops with GNU parallel or something? You don't need cloud for this, especially if the point is to get faster turnaround time for a batch. A $600 Dell desktop will have 24 threads and 32GB of RAM, letting you easily run 21 of these simultaneously. Hell, if your laptops are somewhat modern they probably have more than 12 threads, so they can chew through them much faster than running sequentially. If you don't want to deal with the full complexity of AWS and want a cheap hourly rental machine that you can log into, do what you need to, then destroy and stop paying for, that workload would also fit well onto a single Virtual Private Server from any of the much cheaper vendors: https://www.vultr.com/pricing/#cloud-compute/ Vultr has a 32 vCPU / 64GB RAM CPU-optimized option for 95 cents per hour.
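The local fan-out really can be a few lines. A sketch with `xargs -P` (the solver is stubbed inline with an `echo` so this runs on its own; swap that for the real model binary, and the `params/` layout is just an assumed convention):

```shell
#!/bin/sh
# Fan 21 independent model runs across up to 8 local workers with xargs -P.
mkdir -p params out
# Fake 21 parameter files so the sketch is self-contained.
for i in $(seq 1 21); do echo "p=$i" > "params/$i.txt"; done

# One worker per file, 8 at a time; the inline echo stands in for the solver.
ls params/*.txt | xargs -P 8 -I{} sh -c \
    'f={}; echo "result for $f" > "out/$(basename "$f" .txt).out"'

ls out | wc -l   # 21 output files, one per parameter set
```

With GNU parallel installed, the fan-out line becomes roughly `parallel -j8 ./run_model ::: params/*.txt` instead.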
|
# ? Dec 19, 2023 16:44 |
There are 21 parameters at the moment, so if I run it on a 16-thread laptop I have, it takes 5-20 minutes depending on the complexity (since some workers will need to chew through two jobs each). And given that that laptop is a 35W TDP CPU, it tops out at 3.6GHz even with an external fan blowing on it. I do like the VPS idea, though. The only reason I was considering AWS is that we already use it for a web licensing app, and we could integrate that into the product for future customers so they don't need to worry about loving with a big computation machine while getting improved runtimes.
|
|
# ? Dec 19, 2023 18:34 |
|
Cloud - "I don't wanna own, I wanna rent, and I want a landlord who is looking to charge every cent every time I flush the toilet or turn on the lights" Cloud has its uses, but realistically a colo with a rented VM, or running it yourself on surplus hardware, would cost you less.
|
# ? Dec 23, 2023 21:15 |
|
largely comes down to whether your business has means and appetite for infrastructure and expanded security payroll imo
|
# ? Dec 23, 2023 21:24 |
|
well, at least in azure's case there's a lot of extra value for cloud too. this is the virtualization thread, so we focus a lot on compute and storage infrastructure. Internal, B2B, and C2B IAM on Azure is pretty slick. having your business productivity/collaboration all in one place too is one hell of a value add for doing virt in Azure. it's more than just a pure capital versus operational expenses and labor consideration
|
# ? Dec 23, 2023 21:27 |
CommieGIR posted:Cloud - "I don't wanna own, I wanna rent and I want a landlord is who looking to charge every cent every time I flush the toilet or turn on the lights" Unfortunately, storage is the one thing you can't elastify easily, and usually taking full advantage of the elasticity also means you have to fully buy into the vendor lock-in, meaning you're gonna have a bad time when you try to move away. So the end-result is that for the vast majority, the butt ends up being more expensive.
|
|
# ? Dec 23, 2023 21:34 |
|
Potato Salad posted:largely comes down to whether your business has means and appetite for infrastructure and expanded security payroll imo In most cloud cases, security is on you too. The Shared Responsibility model tends to leave that on your side of the fence unless you pay specifically for security through your cloud vendor. Seriously, having seen the absolute bullshit that cloud engineers and software devs do in the cloud - woe betide anyone not letting some sort of security guy look over their code, app, and deployments. "What do you mean I can't just open my resources to the internet via Terraform or use up a whole /24 block with 2-3 microservices?!" Have fun! Potato Salad posted:well, at least in azure's case there's a lot of extra value for cloud too. this is the virtualization thread, so we focus a lot on compute and storage infrastructure. Internal, B2B, and C2B IAM on Azure is pretty slick. having your business productivity/collaboration all in one place too is one hell of a value add for doing virt in Azure. Yes, but in most cases it ends up being more expensive in cloud than on-prem, even including labor costs for dedicated engineers - and in most cases you need dedicated cloud engineers to handle this stuff anyway. And you can do the B2B and C2B IAM linked with your on-prem services anyway. I have yet to see a true datacenter-to-cloud migration or digital transformation not turn into a bill that makes a colo with new hardware look more appealing. There are very few cases where even serverless and container-based workloads don't slowly turn into a massive bloat that makes a monolith blush and turns into massive costs. And yeah, the hostage-taking with data exfil costs is hilarious. You get locked in quick.
Cloud has very good use cases - the problem is it's still the new shiny that every C-suite tech exec has decided is the future of datacenters, doing lovely lift-and-shifts and letting devs/engineers pull startup-style 'move fast and break things' digital transformations that massively increase risk to the biz. Keep your VMs themselves out of the cloud if you can avoid it; focus on serverless and containers for cloud workloads unless you like big bills. CommieGIR fucked around with this message at 22:04 on Dec 23, 2023 |
# ? Dec 23, 2023 21:44 |
|
I’m glad we’re not still running our own DCs for everything, because we would have to have a lot of hardware idle during most of the year in order to handle Black Friday/Cyber Monday.
|
# ? Dec 24, 2023 02:47 |
|
Subjunctive posted:I’m glad we’re not still running our own DCs for everything, because we would have to have a lot of hardware idle during most of the year in order to handle Black Friday/Cyber Monday. bitcoin fixes this, you just mine for the rest of the year is an argument that has probably been made unironically at some point
|
# ? Dec 24, 2023 03:20 |
|
Subjunctive posted:I’m glad we’re not still running our own DCs for everything, because we would have to have a lot of hardware idle during most of the year in order to handle Black Friday/Cyber Monday. Yes, rapid scaling is where cloud shines, but it's not really a worthwhile reason to move everything to cloud - and again, most of that stuff would be containerized or serverless loads. But even then, if you're a large enough organization that Black Friday induces that kind of load, you're likely also one that can handle it without a lot of external cloud providers. And then you get the bill. CommieGIR fucked around with this message at 03:38 on Dec 24, 2023 |
# ? Dec 24, 2023 03:24 |
|
How's Proxmox for home use on a single node? I've got the free ESXi license for stuff at home (I use it at work, so I figured I might as well use what I know at home, too), but, uh, with the Broadcom acquisition, I'm feeling like I need an escape plan.
|
# ? Jan 5, 2024 22:36 |
|
you could use proxmox at home but you won't be learning any skills that are useful for your career imo
|
# ? Jan 5, 2024 22:44 |
|
|
Potato Salad posted:you could use proxmox at home but you won't be learning any skills that are useful for your career imo At this point in the post-acquisition stage, getting ESXi skills today would be like getting Novell groupware knowledge in the early 2000s. Broadcom is going to kill ESXi, no ifs, no buts; it's just a matter of time.
|
# ? Jan 5, 2024 23:06 |