Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I’ve got a small Ubuntu home server running Home Assistant and a Unifi Controller under a creaky qemu/kvm setup, plus some other under-maintained services. I also have a Steam Deck which is shockingly compatible with the things I’ve been playing. I have virtualization questions about both arenas!

For the home server, I think I’d like to get rid of the mini tower and move to a NUCish form factor, and beef it up so I can play with some more modern homelab/clustering things. My light reading has led me to a Simply NUC Ruby r8 on which I will stick proxmox and then figure out how I want containers and VMs to interplay. And then stub my face on k3s, probably. Is this a sane path to pursue?

For gaming, the time for my Zen 4/Lovelace upgrade is coming and I’m seriously thinking of giving Linux gaming a shot for the first time since “Civ: Call to Power”. If it doesn’t work out for everything, I will probably want to do GPU+USB passthrough to Windows 11 or similar. Can I plausibly do that with a single NVIDIA GPU? I’ve heard tell, but I don’t know how reliable it is.

What’s the state of the art for doing things like clipboard in/out of VMs? I’ve only used the VMware stuff for that, and I’d rather not entangle myself with their stack if I can avoid it.

Thanks for any guidance you can provide!


Subjunctive
Sep 12, 2006

✨sparkle and shine✨

SlowBloke posted:

Topic 2: I would suggest going bare-metal Win11 plus WSL2 rather than doing weird stuff with PCIe passthrough. WSL2 has CUDA/OpenCL access now, along with GUI apps, so you will not miss anything.

Can I run stuff like k3s and systemd bits and so forth on Arch with WSL2, ideally in a way that doesn’t require reading a handful of “how to X on WSL2” gists every time I want to do something new? I’m hoping to keep the Linux environments as similar as reasonable between desktop, homelab, and Steam Deck, because I’m a bear of little brain.
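
(For what it’s worth, my reading says newer WSL builds can boot systemd natively via a wsl.conf flag, which would get systemd-dependent things like k3s most of the way there. Untested by me; a minimal sketch:)

```bash
# inside the WSL distro: enable systemd at boot (needs WSL 0.67.6 or later),
# then restart the distro with `wsl --shutdown` from the Windows side
cat >> /etc/wsl.conf <<'EOF'
[boot]
systemd=true
EOF
```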

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

in a well actually posted:

Don’t buy from SimplyNUC; I got on their mailing list a decade ago and they’ve been sending me spam constantly ever since, and none of their unsubscribe links work. They also change domains and senders around, so it’s annoying to block.

gently caress those guys.

Oh, I was going to use your email address anyway.

Better players in that space?

(Thanks for the tip. If I end up buying from them after all then I’ll use a throwaway.)

freeasinbeer posted:

Any games with anticheat are likely to freak out about virtualization, so basically no recent shooters.

Which really sucks.

As for the NUC solution: I uhh have like 6 of them and then another 6-8 Raspberry Pis, and I put 99% of my apps on my little x86 router using docker compose. There are a number of low-power Intel x86 NUC-like or smaller things you can get with 4-6 2.5GbE ports, or even one with 10G SFP+ ports.

I like K8s/k3s and use it all day professionally, but docker compose is just a bit easier for a single node “thing”.

I’m too slow for shooters anyway.

I want to play with k[83]s to better understand the poo poo people keep talking about at work, but I have generally enjoyed docker-compose in the past.

Do you have a recommendation for specific “low-power Intel x86 NUC-like or smaller things”, by chance?

SlowBloke posted:

WSL2 on supported distros is CLI-only until you start apps, and the Wayland part is set up during install. Arch requires heavy tinkering to run in that scenario; we use Ubuntu on our fleet and it's pretty much painless.

Hmm, that’s too bad. I really do want Arch, I think. (I have Ubuntu running on my current desktop under WSL2, mostly to avoid having to learn PowerShell.)

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I’m sort of itching to try Proxmox vGPU stuff on my desktop, because 3950X+3090 should be able to handle more than one game at a time, but I might wait a few months until this machine isn’t my daily driver any more.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Actuarial Fables posted:

Does that work with consumer cards now? I've got a 3700X and a 1080 in one of my hosts, could be fun to get that configured.

I’m told it works up to Turing, yeah.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

A friend of mine (he worked on Space/TimeWarp and general GPU insanity at Oculus when I was there) brought his company out of stealth today: Juice, which does IP-based transparent remoting of GPU resources at high speed.

https://www.juicelabs.co/

Binaries available for Windows, Linux (Ubuntu), and Mac; works inside VMs; no client program modifications required. I’ve seen some demo videos of it before and it is pretty friggin’ nuts. Could really change the game for GPU-passthrough sorts of applications. I’m travelling right now so I haven’t installed it yet myself.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Pile Of Garbage posted:

Not really sure how this would work for any non-async workloads. Might be good for transcode offload from one's Plex server?

It apparently works fine for gaming over a ~150Mbit network link! I’m travelling right now, but I’m going to see if I can use it to virtualize my 4090 to serve a few Minecraft clients on underpowered machines.

The engineers are actively answering questions in their discord, fwiw.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Pile Of Garbage posted:

Just noticed you're selling this in other threads; I don't think anything you say re: the tech can be taken as neutral. Further still, this sounds cooked; I'd advise caution.

I am not neutral, I think the tech is very cool from what I’ve seen of it in demos. I don’t know what you mean by “cooked”, I admit. I’ll have personal experience with it when I get home this weekend.

By “selling this in other threads” do you mean “also posted about it in one other thread, the one about GPUs that often discusses cloud gaming”?

E: To be more explicit, I stand to gain exactly nothing if this is successful commercially, and lose exactly nothing if it fails. I just think it’s exciting tech, and useful to me personally.

Subjunctive fucked around with this message at 05:09 on Nov 12, 2022

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Yeah, that’s fair. There’s going to be some stuff at Supercomputing (the conference) soon, I think, but that’ll probably be non-gaming stuff.

As of a couple of hours ago there was a public server instance up and running but I’m on a train and don’t want to gently caress my data plan for the month by trying it out. I’ll give it a try locally if nothing else when I get a chance to kick the kids off the computers this weekend.



Not sure if it’s still up. I asked about benchmarks but I think they’re travelling for the conference so we’ll see.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Probably depends on the devices being used in terms of data centre deployment (IIRC the consumer cards aren’t licensed for use in DCs or something), but they already support vGPU for the industrial cards, right?

I’m not sure what they could do to break it, I admit, without also breaking NVIDIA and Valve’s own remoting, but I haven’t thought about it very much.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

BlankSystemDaemon posted:

WSL has done wonders for Microsoft retaining people on Windows, rather than having them switch to an alternative.

That’s interesting! How much of a difference has it made?

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Pile Of Garbage posted:

Relevant to the Broadcom buyout of VMware, they're already moving to transition everything to a subscription-based model and as part of that they're killing the perpetual license model for vSphere: https://www.thestack.technology/broadcom-is-killing-off-vmware-perpetual-licences-sns/.

Yeah, that’s what started this conversation I think.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Wibla posted:

For us, it would be cheaper to hire people to maintain Proxmox / XCP-NG rather than keep paying for VMware.

I’ll start the wikifoundation!

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Maybe just put Tailscale on the various computers that travel and get that Threadripper going?

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Watermelon Daiquiri posted:

unfortunately there's that deadly capital allergy :(

I’m sure Dell will lease you something quite happily!

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I’m glad we’re not still running our own DCs for everything, because we would have to have a lot of hardware idle during most of the year in order to handle Black Friday/Cyber Monday.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Nitrousoxide posted:

Proxmox is great, if only because it makes backing up your VMs and transitioning to new hardware (add node to cluster, stop VMs on old hardware, transfer, start VMs up on new hardware) trivially easy compared to bare-metal servers.

I’m starting to gently caress around with proxmox (good enough to run my kid’s Palworld server and a Unifi controller, at least) and I had a problem where I hosed up the config of a VM and it just sat looping in the BIOS looking for a bootable disk, which I had rudely not provided. Neither “shutdown” nor “stop” worked to kill the VM, so I had to go kill the qemu process myself in the shell. Otherwise it’s been pretty good.
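
(For posterity, in case someone hits the same wall, roughly the escalation path I ended up on; the VM ID here is made up:)

```bash
qm stop 105                                    # the polite version; hung for me
qm unlock 105 && qm stop 105                   # clear any stale lock and retry
# last resort: proxmox writes the qemu PID to disk, so kill the process directly
kill -9 "$(cat /var/run/qemu-server/105.pid)"
```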

Trying to figure out how best to do cloud-init in a way that keeps the image updated, so I don’t have to apt-get upgrade every time I clone a new VM.
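
(Best idea I’ve found so far, if anyone wants to poke holes in it: freshen the cloud image with virt-customize from libguestfs-tools before rebuilding the template. A sketch; the image name, VMID, and storage are made up:)

```bash
# update packages inside the cloud image without booting it
apt-get install -y libguestfs-tools
virt-customize -a jammy-server-cloudimg-amd64.img --update

# rebuild the cloud-init template from the freshened image
qm create 9000 --name ubuntu-tmpl --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit --boot c --bootdisk scsi0
qm template 9000
```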

It’s also a little weird to me that it doesn’t provide a way to just slam a docker/podman application container into it. I can create a VM for all my docker stuff, but that gives me less-flexible allocation of CPUs and memory, and it means that I have to use another storage abstraction layer rather than managing the docker container volumes alongside the VM disks and LXC resources. I’m sure there’s a good reason for it, of course.
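
(The usual dodge seems to be one nested LXC per docker app rather than a single stub VM; roughly this shape, with the ID, template name, and storage all made up:)

```bash
# unprivileged LXC with nesting enabled so docker can run inside it
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname docker-box --unprivileged 1 \
  --features nesting=1,keyctl=1 \
  --cores 2 --memory 4096 --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200
```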

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

RVWinkle posted:

I think you're looking at it backwards. You have more flexibility if you run docker in a VM, because you can allocate resources and manage scaling. If you install docker on the base hypervisor then it will just use as many resources as it wants.

No, I have less flexibility because I need to assign a resource pool to “all things that are managed by docker”, which is not a meaningful thing, and then subdivide it between containers, rather than just giving each container the restrictions I want via cgroups and then letting proxmox do its job of mediating those things.

If I want to expand a container from 16GB to 24GB of RAM, or from 2 cores to 8, it might be easy to do within the docker container, but whoops, I’ve allocated all the RAM/cores associated with the stub VM, and now I need to take a maintenance window on all the containers in my docker VM to reboot it with more resources, when it should only affect that one container.

(Does dynamically increasing the balloon stuff work? Last time I tried that I was running Xen in userspace under gdb, so maybe I should re-evaluate it.)
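
(Concretely, the knobs I want proxmox to be arbitrating are the per-container ones docker already exposes; the container and image names are made up:)

```bash
# give one container its own cgroup limits at launch
docker run -d --name hypothetical-app \
  --cpus=2 --memory=16g --memory-swap=16g some-image:latest

# bumping it later touches only that container, no stub-VM reboot required
docker update --cpus=8 --memory=24g --memory-swap=24g hypothetical-app
```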

Nitrousoxide posted:

I have kept my proxmox install totally stock with absolutely no additional packages installed.

Not even tailscale? You goddamned animal.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Nitrousoxide posted:

Why would I have tailscale on the proxmox host? I'm not trying to build a cluster with an offsite server. All my hardware is in the same local subnet.

So that you can hit the proxmox UI and API as proxmox.thingwithtail-thingwithscale.ts.net from any device you own, wherever you are, as directly as is possible in your then-current network configuration.

All my containers have tailscale, all my VMs have tailscale, all my PCs have tailscale, my Steam Deck has tailscale, my 3D printer RPi has tailscale, my phone has tailscale, etc. All my devices are directly meshed, punch through even the most horrifying hotel firewalls, and it just frickin’ works. If I could install tailscale on my wife’s vibrator I would do it just in case.

Brad needs to finish with the WOL stuff so that I can stream games from my desktop to a hotel room more simply, though.

E: I don’t have to copy authorized_keys around, or my private keys, because tailscale ssh takes care of that on the basis of my authenticated tailscale connection! I can let my kid’s friend connect to our LAN Minecraft server without opening it up to the entire world. I have zero ports forwarded from my router, so I can’t gently caress that up and end up with someone blowing open an RCE in pihole or octoprint or whatever.

E2: I use the same DNS name/IP address to access everything, without concern for whether I’m on my home network or not, and it transfers at wire speed even once I make stupid upgrades to my home network.
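
(For the curious, the per-machine setup is basically just this; the hostname is made up:)

```bash
# join the tailnet, advertise Tailscale SSH, pick a MagicDNS name
sudo tailscale up --ssh --hostname=proxmox

tailscale status      # see the whole mesh
ssh root@proxmox      # no authorized_keys shuffling; auth rides the tailnet
# the web UI is then https://proxmox.<your-tailnet>.ts.net:8006 from anywhere
```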

Subjunctive fucked around with this message at 04:26 on Feb 1, 2024

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

yeah I’ve thought about running my own control plane with headscale, but it doesn’t seem worth it while they’re doing it so well

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

in a well actually posted:

ESX came out in 2001, though.

Do you remember using it? Oof.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I remember trying to use the first release of VMware’s stuff to let me build and debug Mozilla on Windows from a Linux machine and it was just miserable.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

You folks make this sound like a lot more work than the cloud.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

CommieGIR posted:

LMAO - If you think the cloud isn't work and isn't built of these bastardized technologies.

Also - the joy of doing Incident Response on someone's lovely cloud environment that got popped because it was 'easy to setup' and they did absolutely zero hardening and everything uses the same super-admin service account.

gently caress the cloud.

Well, you can set up cloud permissions correctly or not, and you can set up on-prem permissions correctly or not. Like, do the job well and things are easier. Try to pay bottom-quintile salaries, decouple security completely from infrastructure and from application development, and keep people from being able to try different things, and you’re going to have a hard time.

I’ve worked with very large on-prem multi-DC stuff and while I didn’t run the base resource allocation layer I worked with the people who did, and it never sounded half as terrible as you are all describing. We didn’t use VMware or OpenStack or whatever, though, just custom virtualization stuff (or bare-metal systems) because there wasn’t really anything that could handle our scale. In 2012 I could go click around and allocate a few hundred machines in various DCs for a test deployment or whatever, I didn’t have to file tickets and wait around. Tell them what base image to use and what deployment namespace to pull apps and config from, boom.

We use cloud stuff exclusively where I am now and I work closely with the people who manage that on deployment models and system monitoring, and it is a ton easier than you are all describing here. We have tight control over what can access what, great audit trails, the dynamic scaling we need (our workloads vary by more than 50x over the course of the year). If someone wants to try something new we can trivially set them up in an isolated thing with some propagated safe test data and tooling, and all they can do is hit their budget limit. We were on-prem entirely until 2018, and the people who have been in both worlds for us much prefer this one. (We’ll probably do some more on-prem stuff in the future, for selected predictable loads because of better economics, but it’s going to take a lot of work to get to the point that teams can self-serve or scale database/compute/cache or whatever as well as they can even with GCP, which is not the #1 cloud provider in terms of tooling. That Oxide stuff looks really nice, though…)

Mostly, though, I was joking about how so much of this thread is complaining about horrors instead of being excited by new stuff that is making things easier.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

CommieGIR posted:

Two groups of people who have no clue how networks and infrastructure work are being allowed to handle it all themselves.

Yeah, don’t do that. Give the devs and SREs the tooling (including integration into the development stack) and education such that the thing you want them to do is the easy path and exceptional needs get thoughtful, collaborative support instead of “square peg, please choose from our selection of round holes”.

The whole reason any of this poo poo exists is to run the applications for the business.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

(But no matter what you do, some motherfucker is going to make you rack a few dozen Mac Minis on IKEA shelves so they can do CI.)

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Harry_Potato posted:

hardcore folks running out of date crap that can't move

you’re right, probably more than 10%

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I'm a babe in the proxmox woods, but I haven't found anything about this by searching or reading config files, so I'll supplicate here to the virtualization gods:

I want to set up login to the proxmox web interface such that it searches multiple auth realms (“Linux PAM standard authentication” and “Proxmox VE authentication server”) and the user doesn’t have to select the one they’re in. Is that viable, or should I find a way to unify the users somehow?

(I don't care about how conflicting usernames are handled, because I won't have any that aren't the same user.)


Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Wibla posted:

Can you tell us a bit more about what you are really trying to do here? That sounds cumbersome...

I have some users who use the web interface to log in and do things like restart a specific VM or access one VM’s console, and I’m trying to simplify the login process so they don’t have to pick a realm. These users are defined with the proxmox UI and live in that realm, but root and I live in the PAM realm. I just want to hide that complexity because it’s not relevant to them.

Thanks Ants posted:

Skimming the documentation suggests that multiple auth sources means configuring them as different realms, and those are presented in a dropdown on the login page.

Yeah, that’s the situation I’m in right now that I want to simplify.
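
(The closest thing I’ve turned up so far, untested: newer PVE can mark one realm as the login default, so the dropdown at least pre-selects the right one for my users, and root and I override manually. A sketch:)

```bash
# pre-select the PVE authentication server on the login page
pveum realm modify pve --default 1
# PAM users can still log in explicitly, e.g. as root@pam
```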
