Zorak of Michigan
Jun 10, 2006

The licenses will be valid, but you won't be able to renew support on them.


Potato Salad
Oct 23, 2014

nobody cares


I'm really upset that the justice department complaints against the acquisition did not end up surviving political intervention

What American consumers are positively served by the acquisition? None, except for a couple giant shareholders. What American businesses are positively served by the acquisition? None.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Well, I'm happy with Proxmox, at least for my homelab. I don't know how it compares to VMware in scalability, but it's served my one- or two-node purpose just fine.

Thanks Ants
May 21, 2004

#essereFerrari


I presume there's a point where it makes sense to employ a few XCP-ng experts who can also contribute to the codebase rather than keep paying out for VMware licensing. Do larger companies run the numbers on this and have a point planned for making the change, or do they just consider VMware a sunk cost and keep paying?

Mr. Crow
May 22, 2008

Snap City mayor for life

Potato Salad posted:

I'm really upset that the justice department complaints against the acquisition did not end up surviving political intervention

What American consumers are positively served by the acquisition? None, except for a couple giant shareholders. What American businesses are positively served by the acquisition? None.

It's 2023; a merger or acquisition being blocked is extremely unlikely.

Wibla
Feb 16, 2011

For us, it would be cheaper to hire people to maintain Proxmox / XCP-NG rather than keep paying for VMware.

BlankSystemDaemon
Mar 13, 2009



It'd be nice if folks learned the lesson that having any one solution is the wrong way, and instead some focus on working on kvm, while others improve bhyve (found in FreeBSD and Illumos distributions, among others) while still others work on Xen.

With enough work, and someone working on interoperability, it'd be possible to have a fleet of three (or more?) hypervisor solutions, all being able to work together.

BlankSystemDaemon fucked around with this message at 14:48 on Dec 17, 2023

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Wibla posted:

For us, it would be cheaper to hire people to maintain Proxmox / XCP-NG rather than keep paying for VMware.

I’ll start the wikifoundation!

SamDabbers
May 26, 2003



BlankSystemDaemon posted:

It'd be nice if folks learned the lesson that having any one solution is the wrong way, and instead some focus on working on kvm, while others improve bhyve (found in FreeBSD and Illumos distributions, among others) while still others work on Xen.

With enough work, and someone working on interoperability, it'd be possible to have a fleet of three (or more?) hypervisor solutions, all being able to work together.

KVM, bhyve, and Xen are all actively developed, though? The closest thing we have to a unified management plane is libvirt. There's also virtio for PV devices, but only KVM and bhyve implement it AFAIK. Xen and VMware have little incentive to adopt it, unfortunately.

fresh_cheese
Jul 2, 2014

MY KPI IS HOW MANY VP NUTS I SUCK IN A FISCAL YEAR AND MY LAST THREE OFFICE CHAIRS COMMITTED SUICIDE
If you go too far down that path you end up at OpenStack's nova-compute - the least common denominator of hypervisor management.

It can manage almost any hypervisor, as long as you don't want it to do anything cool or good.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

fresh_cheese posted:

don't want it to do anything cool or good.

You already said openstack, no need to repeat yourself.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

BlankSystemDaemon posted:

It'd be nice if folks learned the lesson that having any one solution is the wrong way, and instead some focus on working on kvm, while others improve bhyve (found in FreeBSD and Illumos distributions, among others) while still others work on Xen.

With enough work, and someone working on interoperability, it'd be possible to have a fleet of three (or more?) hypervisor solutions, all being able to work together.

Puppet and Chef support multi-hypervisor setups and you can script it.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

CommieGIR posted:

Puppet and Chef support multi-hypervisor setups and you can script it.

Did you just tell me to go gently caress myself?

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

in a well actually posted:

Did you just tell me to go gently caress myself?

Broadcom did.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.

John DiFool posted:

I used to get hard freezes on my host when shutting down a Windows VM under KVM et al when passing a consumer NVIDIA GPU via VFIO. I fixed those freezes by setting the card to MSI mode in the guest. This was many, many years ago though. Could still be worth checking out on your setup.

This is the best explanation of MSI mode that I know of: https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts-msi-tool.378044/ and it has a link to a tool that will switch your GPU to MSI mode if it can. Though this discussion is 10 years old at this point and one would hope that MSI mode is enabled by default on recent NVIDIA drivers.

No guarantee this is related to your issue though, especially since you don't seem to run into the issue at VM shutdown time.

yeah, I've seen this on the internet and a related old, old QEMU bug, but unless it's a regression, that's not it. What I think it is: something to do with binding/rebinding the audio device for snd_hda_intel after the VM shuts down.

I also have a different problem entirely: my work currently has a parametric math model that we run fairly often, and it takes 5-45 minutes to run depending on model size, parameters, which laptop is used, etc. I figured a good option to explore would be something like EC2, since we really need something capable of running 21 separate threads at the same time, but it seems like the only instances with that many threads are the massive ones with a crap ton of RAM, networking, etc. Are there any cloud providers who offer more fine-grained options? Another option I was thinking of: since the parameters are completely independent, I could just spin up a tiny instance for each one separately, but I don't know anywhere near enough about this to know the feasibility.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Any reason you can't do the threads in separate docker or podman containers? They would have substantially lower overhead than a full VM.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Watermelon Daiquiri posted:

yeah, I've seen this on the internet and a related old, old QEMU bug, but unless it's a regression, that's not it. What I think it is: something to do with binding/rebinding the audio device for snd_hda_intel after the VM shuts down.

I also have a different problem entirely: my work currently has a parametric math model that we run fairly often, and it takes 5-45 minutes to run depending on model size, parameters, which laptop is used, etc. I figured a good option to explore would be something like EC2, since we really need something capable of running 21 separate threads at the same time, but it seems like the only instances with that many threads are the massive ones with a crap ton of RAM, networking, etc. Are there any cloud providers who offer more fine-grained options? Another option I was thinking of: since the parameters are completely independent, I could just spin up a tiny instance for each one separately, but I don't know anywhere near enough about this to know the feasibility.

I posted in the other thread, but it would help if you could think about each independent task individually, and then use any of the many job schedulers that are designed to fit a lot of smaller tasks onto bigger nodes efficiently.

Or just use AWS Fargate, which gives you fine-grained control over how much vCPU / RAM each individual task gets. You're going to get better, more cost-efficient job throughput with a job scheduler, though.
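To make the scheduler idea concrete, here's a toy sketch of the first-fit-decreasing placement that batch schedulers do under the hood. The task sizes and node capacity are made up; a real scheduler also handles RAM, retries, and queueing, but the core packing trick looks like this:

```python
# First-fit decreasing: pack independent tasks (here, vCPU demands)
# onto fixed-size nodes - the core trick behind batch schedulers.

def pack_tasks(task_cpus, node_cpus):
    """Return a list of nodes, each a list of task demands."""
    nodes = []  # each entry: [remaining_capacity, [placed tasks]]
    for demand in sorted(task_cpus, reverse=True):
        for node in nodes:
            if node[0] >= demand:      # fits on an existing node
                node[0] -= demand
                node[1].append(demand)
                break
        else:                          # no node had room: open a new one
            nodes.append([node_cpus - demand, [demand]])
    return [n[1] for n in nodes]

if __name__ == "__main__":
    # 21 single-vCPU model runs pack onto two 16-vCPU nodes
    # instead of one oversized instance.
    placement = pack_tasks([1] * 21, 16)
    print(len(placement), [len(p) for p in placement])
```

The point is that the scheduler, not you, decides how many nodes to keep alive for a given batch, which is where the cost efficiency comes from.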

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Twerk from Home posted:

I posted in the other thread, but it would help if you could think about each independent task individually, and then use any of the many job schedulers that are designed to fit a lot of smaller tasks onto bigger nodes efficiently.

Or just use AWS Fargate, which gives you fine-grained control over how much vCPU / RAM each individual task gets. You're going to get better, more cost-efficient job throughput with a job scheduler, though.

Yeah, Docker Swarm or Kubernetes (K3s or K8s) would be your main starting point for a widely used on-prem or cloud scheduler, unless you use some company's special-sauce scheduler.

If it's a really simple project, you could even get away with a bash script and a cron job that checks a network-shared directory for a folder with data needing processing, and spins up a docker compose stack with some env variables when someone drops a new one in there.
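That watcher could look something like this (sketched in Python rather than bash; the inbox path, the `.processed` marker, and the compose invocation are all made up for illustration):

```python
import os
import subprocess
from pathlib import Path

INBOX = Path("/mnt/shared/inbox")  # network-shared drop directory (assumed)
MARKER = ".processed"              # sentinel file so each job runs only once

def launch(job_dir):
    # Hand the job folder to a compose stack via an env variable.
    subprocess.run(
        ["docker", "compose", "up", "--detach"],
        env={**os.environ, "JOB_DIR": str(job_dir)},
        check=True,
    )

def scan_once(inbox=INBOX, launcher=launch):
    """Run from cron; processes each new job folder exactly once."""
    started = []
    for job_dir in sorted(p for p in inbox.iterdir() if p.is_dir()):
        marker = job_dir / MARKER
        if marker.exists():
            continue  # already handled on a previous cron tick
        launcher(job_dir)
        marker.touch()
        started.append(job_dir.name)
    return started
```

The marker-file approach is crude but cron-safe: re-running the scan is idempotent, so overlapping ticks don't double-launch a job.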

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Nitrousoxide posted:

Yeah, Docker Swarm or Kubernetes (K3s or K8s) would be your main starting point for a widely used on-prem or cloud scheduler, unless you use some company's special-sauce scheduler.

If it's a really simple project, you could even get away with a bash script and a cron job that checks a network-shared directory for a folder with data needing processing, and spins up a docker compose stack with some env variables when someone drops a new one in there.

What's your 2c on Kubernetes distributions that one might run on their own physical hardware, without wanting to pay VMware licenses? Is K3s the winner for that? Canonical is out there marketing MicroK8s wherever they can: https://microk8s.io/compare, including in Ubuntu's default MOTD for a few years now.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Twerk from Home posted:

What's your 2c on Kubernetes distributions that one might run on their own physical hardware, without wanting to pay VMware licenses? Is K3s the winner for that? Canonical is out there marketing MicroK8s wherever they can: https://microk8s.io/compare, including in Ubuntu's default MOTD for a few years now.

I'm personally a fan of CoreOS and its Ignition file setup as a super lightweight platform for either simple container workflows or as part of a swarm of nodes. It's super easy to deploy new instances, and they keep themselves updated. If you want to orchestrate their updates, they can be set up to do that too, so you never have too many going down at once for an update. But updates are super quick anyway, because they're just rebooting into an already-prepared image.

Red Hat has some documentation on how to get it spun up on pretty much all the virtualization platforms (QEMU as an example)

You can use an ignition config that bakes in all the setup scripts that you'd need to run to get them up and running.

There's probably some github repos out there with ignition files that have K3s or K8s already setup that you could tweak for your needs.
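For a rough idea of what that looks like, here's a minimal Butane file (Butane is the human-friendly YAML that the `butane` tool transpiles into Ignition JSON). The user, the placeholder key, and the K3s bootstrap unit are illustrative, not from a real deployment:

```yaml
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...  # your public key here
systemd:
  units:
    - name: k3s-bootstrap.service
      enabled: true
      contents: |
        [Unit]
        Description=Fetch the K3s installer on first boot (placeholder)
        After=network-online.target
        Wants=network-online.target

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/curl -sfL https://get.k3s.io -o /tmp/k3s-install.sh

        [Install]
        WantedBy=multi-user.target
```

Everything the node needs (users, keys, units, files) is declared up front, which is why reprovisioning a CoreOS box is closer to "reboot into a new image" than "configure a pet server".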

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
I'm honestly trying to keep it as simple as possible given the limited resources we have. This is so far outside my wheelhouse it isn't even funny lol, but right now we are dealing with running the models on lovely thinkpads, which takes ages and ages. Ideally I'd set up a Threadripper server which could run the models for us, so it can calculate all 21 models simultaneously and the person running things would just ssh in. The input to the model is a <40kb text file and the output is similarly sized (though there's a hundred megabytes of working data we keep around for troubleshooting purposes), so it wouldn't even be hard to transfer data around. However, my boss is incredibly allergic to capital purchases (which he should see his doctor about) and very much prefers the $2-an-hour compute costs (since each time we need to run the models it'll take ~10 minutes tops, and we only run maybe 20-30 times a week).


Something like EC2 would serve us well when external people want to run models too. We won't exactly be giving them an in to our internal network.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Maybe just put Tailscale on the various computers that travel and get that Threadripper going?

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
unfortunately there's that deadly capital allergy :(

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Yeah, it really sounds like you should look at AWS Fargate: https://aws.amazon.com/fargate

You can use it via AWS Batch or just directly, but it lets you size the compute to each task and only pay on demand without having to actually deal with a VM yourself. Go ahead and launch 21 right-sized instances at once, that way when 1 task finishes you can stop paying for the resources it's using rather than having unused cycles on a bigger instance. Also, you get to call it "Fartgate".

I take it that each of these things is single threaded, but wants a lot of RAM?

Potato Salad
Oct 23, 2014

nobody cares


Nitrousoxide posted:

If it's a really simple project you could even do something as simple as a bash script and cron job that checks a network shared directory for a folder with data needing processing and spins up a docker compose with some env variables when someone drops a new one in there.

so that's how new, weirdly architected message queuing systems are born

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.

Twerk from Home posted:

Yeah, it really sounds like you should look at AWS Fargate: https://aws.amazon.com/fargate

You can use it via AWS Batch or just directly, but it lets you size the compute to each task and only pay on demand without having to actually deal with a VM yourself. Go ahead and launch 21 right-sized instances at once, that way when 1 task finishes you can stop paying for the resources it's using rather than having unused cycles on a bigger instance. Also, you get to call it "Fartgate".

I take it that each of these things is single threaded, but wants a lot of RAM?

nope, only like 500MB total; it's just a shittily optimized FD mesh, but that Fargate looks interesting. I'm just an EE who's been roped into trying to improve the run time of the models we have because I'm a computer nerd, and I have so much other stuff to do, so I'll likely just say that getting everything set up would be a massive project.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Watermelon Daiquiri posted:

unfortunately there's that deadly capital allergy :(

I’m sure Dell will lease you something quite happily!

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Watermelon Daiquiri posted:

nope, only like 500MB total; it's just a shittily optimized FD mesh, but that Fargate looks interesting. I'm just an EE who's been roped into trying to improve the run time of the models we have because I'm a computer nerd, and I have so much other stuff to do, so I'll likely just say that getting everything set up would be a massive project.

Aw jeez. Have you tried running it on your laptops with GNU parallel or something? You don't need cloud for this, especially if the point of this is to get faster turnaround time for a batch. A $600 Dell desktop will have 24 threads and 32GB of RAM, letting you easily run 21 of these simultaneously. Hell, if your laptops are somewhat modern they probably have more than 12 threads so they can chew through them much faster than running sequentially.
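For the "one beefy box" route, the whole orchestration layer really can be a few lines. Here's a sketch with a worker pool; the `./solver` command and parameter files are stand-ins for the real model binary:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_one(cmd):
    """Run one solver invocation; each model run is independent."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode

def run_all(commands, workers=21):
    # Threads are fine here: the actual work happens in child
    # processes, so the pool is just juggling 21 concurrent
    # subprocesses, GNU-parallel style.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_one, commands))

# e.g. run_all([["./solver", f] for f in sorted(glob("params/*.txt"))])
```

With `workers` capped at the machine's thread count you get the same behavior as `parallel -j N ./solver ::: params/*.txt`, just with somewhere to hang logging and retries later.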

If you don't want to deal with the full complexity of AWS and wanted a cheap hourly rental machine that you can log into and do what you need to, then destroy and stop paying for, that workload also would fit well onto a single Virtual Private Server from any of the much cheaper vendors: https://www.vultr.com/pricing/#cloud-compute/

Vultr has a 32vCPU / 64GB RAM CPU-optimized option for 95 cents per hour.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
There are 21 parameters at the moment, so if I run it on the 16-thread laptop I have, it takes 5-20 minutes depending on the complexity (since some workers will need to chew through two jobs each). And that laptop is a 35W TDP CPU that tops out at 3.6GHz even with an external fan blowing. I do like the VPS idea, though.

The only reason I was considering AWS is that we already use it for a web licensing app, and we could integrate this into the product for future customers so they don't need to worry about loving with a big computation machine while getting improved runtimes.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Cloud - "I don't wanna own, I wanna rent, and I want a landlord who is looking to charge every cent every time I flush the toilet or turn on the lights"

Cloud has its uses, but realistically a colo with a rented VM, or running it yourself on surplus hardware, would cost you less.

Potato Salad
Oct 23, 2014

nobody cares


largely comes down to whether your business has means and appetite for infrastructure and expanded security payroll imo

Potato Salad
Oct 23, 2014

nobody cares


well, at least in azure's case there's a lot of extra value for cloud too. this is the virtualization thread, so we focus a lot on compute and storage infrastructure. Internal, B2B, and C2B IAM on Azure is pretty slick. having your business productivity/collaboration all in one place too is one hell of a value add for doing virt in Azure.

it's more than just a pure capital versus operational expenses and labor consideration

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

Cloud - "I don't wanna own, I wanna rent, and I want a landlord who is looking to charge every cent every time I flush the toilet or turn on the lights"

Cloud has its uses, but realistically a colo with a rented VM, or running it yourself on surplus hardware, would cost you less.
The one advantage that the butt has is if you've got a very spiky workload or if you're at a very particular point in the start-up curve and you set up everything to take advantage of the elasticity.
Unfortunately, storage is the one thing you can't elastify easily, and usually taking full advantage of the elasticity also means you have to fully buy into the vendor lock-in, meaning you're gonna have a bad time when you try to move away.

So the end-result is that for the vast majority, the butt ends up being more expensive.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Potato Salad posted:

largely comes down to whether your business has means and appetite for infrastructure and expanded security payroll imo

In most cloud cases, security is on you too. The Shared Responsibility model tends to leave that on your side of the fence unless you pay specifically for security through your cloud vendor. Seriously, having seen the absolute bullshit that cloud engineers and software devs do in the cloud: woe betide anyone not letting some sort of security guy look over their code, app, and deployments.

"What do you mean I can't just open my resources to the internet via terraform, or use up a whole /24 block with 2-3 microservices?!"

Have fun!

Potato Salad posted:

well, at least in azure's case there's a lot of extra value for cloud too. this is the virtualization thread, so we focus a lot on compute and storage infrastructure. Internal, B2B, and C2B IAM on Azure is pretty slick. having your business productivity/collaboration all in one place too is one hell of a value add for doing virt in Azure.

it's more than just a pure capital versus operational expenses and labor consideration

Yes, but in most cases it ends up being more expensive to do so in cloud versus on-prem, even including the labor costs of dedicated engineers; in most cases you need dedicated cloud engineers to handle this stuff anyway.

And you can do the B2B and C2B IAM linked with your on-prem services anyway.

I have yet to see a true datacenter-to-cloud migration or digital transformation not turn into a bill that makes a colo with new hardware look appealing. There are very few cases where even serverless and container-based workloads don't slowly bloat into something that would make a monolith blush, along with massive costs.

And yeah, the hostage-taking with data egress costs is hilarious. You get locked in quick.

Cloud has very good use cases - the problem is it's still the new shiny that every C-suite tech exec has decided is the future of datacenters, doing lovely lift-and-shifts and letting devs/engineers pull startup-style "move fast and break things" digital transformations that massively increase risk to the biz.

Keep your VMs themselves out of the cloud if you can avoid it; focus on serverless and containers for cloud workloads unless you like big bills.

CommieGIR fucked around with this message at 22:04 on Dec 23, 2023

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I’m glad we’re not still running our own DCs for everything, because we would have to have a lot of hardware idle during most of the year in order to handle Black Friday/Cyber Monday.

repiv
Aug 13, 2009

Subjunctive posted:

I’m glad we’re not still running our own DCs for everything, because we would have to have a lot of hardware idle during most of the year in order to handle Black Friday/Cyber Monday.

bitcoin fixes this, you just mine for the rest of the year

is an argument that has probably been made unironically at some point

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Subjunctive posted:

I’m glad we’re not still running our own DCs for everything, because we would have to have a lot of hardware idle during most of the year in order to handle Black Friday/Cyber Monday.

Yes, rapid scaling is where cloud shines, but it's not really a worthwhile reason to move everything to cloud - and again, most of that stuff would be containerized or serverless loads. But even then, if you are a large enough organization that Black Friday induces that kind of load, you are likely also one that doesn't need a lot of external cloud providers and can handle the load yourself.

And then you get the bill.

CommieGIR fucked around with this message at 03:38 on Dec 24, 2023

Kreeblah
May 17, 2004

INSERT QUACK TO CONTINUE


Taco Defender
How's Proxmox for home use on a single node? I've got the free ESXi license for stuff at home (I use it at work, so I figured I might as well use what I know at home, too), but, uh, with the Broadcom acquisition, I'm feeling like I need an escape plan.

Potato Salad
Oct 23, 2014

nobody cares


you could use proxmox at home but you won't be learning any skills that are useful for your career imo


SlowBloke
Aug 14, 2017

Potato Salad posted:

you could use proxmox at home but you won't be learning any skills that are useful for your career imo

At this point in the post-acquisition stage, getting ESXi skills today would be like getting Novell groupware knowledge in the early 2000s. Broadcom is going to kill ESXi, no ifs, no buts; it's just a matter of time.
