BlankSystemDaemon
Mar 13, 2009



If you're doing virtualization, it may be worth working with what's called a series of golden master snapshots.
You make a VM with your favorite OS, and then snapshot it once it's in its basic unregistered state. Then you add your favorite software to it, and set it up how you want it, and snapshot it.
From that point, if you ever want to go back or start with something fresh, you can always pick one of those golden masters.
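With most desktop hypervisors this workflow is just a couple of commands. Here's a sketch using VirtualBox's CLI - the VM name devbox and the snapshot names are placeholders:

```shell
# Take a "golden master" right after the OS install, before any configuration.
VBoxManage snapshot "devbox" take "golden-base" --description "clean OS install"

# ...install your software, set things up how you want, then snapshot again.
VBoxManage snapshot "devbox" take "golden-configured" --description "base + tools"

# Any time you want a fresh start, roll back to whichever master you need
# (power the VM off first).
VBoxManage snapshot "devbox" restore "golden-base"
```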

For cold storage on spinning rust, you ideally want something that does in-line checksumming, and if you're really insistent on it still working N years from now, something that writes multiple copies of each block to the disk (sometimes called ditto blocks).
Hot storage with checksumming and mirroring/distributed parity is much better for long-term storage, even if you have to build a low-power system to store things on, because at least if a bitflip happens, the disk firmware and/or filesystem+LVM can fix what went wrong.
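On ZFS, for instance, both properties are one-liners; checksumming is on by default, and the pool/dataset names and device paths below are placeholders:

```shell
# Mirrored pool: every block is checksummed and stored on both disks.
zpool create coldtank mirror /dev/ada1 /dev/ada2

# Ditto blocks on top of the mirror: keep two copies of each block.
zfs create -o copies=2 coldtank/archive

# Periodically walk the pool, verify every checksum, repair from a good copy.
zpool scrub coldtank
```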

Non-volatile flash is assumed by almost everyone to be much worse for cold storage, since it's much easier to flip the electrical charge in a flash cell than it is to flip a magnetic domain on a platter, and modern TLC or QLC flash SSDs simply don't have much write endurance.


BlankSystemDaemon
Mar 13, 2009



Shumagorath posted:

I need this to work on the road :suicide:
UEFI HTTPS Boot is, in theory, a thing.

Whether vendors implement it is anyone's guess, but it's part of the spec.

BlankSystemDaemon
Mar 13, 2009



SlowBloke posted:

New vSphere is out, it's all tanzu and DPU with a handful of QoL improvements. Nothing to justify jumping on it.
There's also something to be said for not supporting Broadcom.

BlankSystemDaemon
Mar 13, 2009



Shaocaholica posted:

How does VM network speed work for VMs running on the same host? Are connections capped to virtual standards like 1G, 10G? Can you have as fast as possible between VMs running on the same host? How fast can that be?
Paravirtualization on top of a fast software switch (read: multiple millions of packets per second, using I/O batching and pre-allocated memory buffers) should let you do multi-gigabit traffic between guests on the same host.

On FreeBSD with netmap/vale and bhyve using ptnet(4), I've seen north of 40Gbps between two guests on fairly old server hardware.
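The arithmetic behind a number like that is straightforward: throughput is just packet rate times frame size. A minimal sketch - the rates below are illustrative, not measurements:

```python
def throughput_gbps(pps: float, frame_bytes: int) -> float:
    """Convert a packet rate into throughput in gigabits per second."""
    return pps * frame_bytes * 8 / 1e9

# A software switch moving ~3.5M packets/s of 1500-byte frames is already
# 42 Gbps; the same pps at 64-byte frames is under 2 Gbps, which is why
# I/O batching to push the packet rate up matters so much.
print(throughput_gbps(3.5e6, 1500))  # 42.0
print(throughput_gbps(3.5e6, 64))
```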

SamDabbers posted:

Semi-related: is there any benefit for east-west traffic between VMs on the same box to use SR-IOV and let the NIC switch the packets instead of the CPU? Seems like there'd be less CPU load at the expense of a round trip over PCIe, and would also be dependent on the internal switching capacity of the particular NIC.

Has anybody tried this?
There's a huge advantage to using SR-IOV and letting the NIC do the switching in hardware, because you save a substantial amount of CPU time on the server, which otherwise has to do it in software.

Getting SR-IOV to work is another story entirely, though - it's even managed to not work on Supermicro, which is one of the vendors I usually recommend as they're the least-poo poo.

If you can get it working, though, it's absolutely preferred over any other solution.

BlankSystemDaemon
Mar 13, 2009



I can't speak to any other thin-virtualization solutions, but with FreeBSD's per-jail fully isolated network stack, it's loving great that you can give each jail its own virtual function without having to do any kind of software switching, especially because it's easy to run more jails than there are NIC ports that can commonly fit in a server.
6x 4-port PCIe NICs only gives 24 ports, and the 6-7-port NICs don't tend to have very good chips.
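SR-IOV changes that math considerably, since each physical port exposes many virtual functions. A rough sketch - the 64-VFs-per-port figure is an assumption typical of 10GbE server NICs, and the real limit is device-specific, so check the datasheet:

```python
nics = 6
ports_per_nic = 4
physical_ports = nics * ports_per_nic  # one jail per physical port
print(physical_ports)  # 24

vfs_per_port = 64  # assumed; varies per NIC
virtual_functions = physical_ports * vfs_per_port  # one jail per VF
print(virtual_functions)  # 1536
```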

BlankSystemDaemon
Mar 13, 2009



WSL has done wonders for Microsoft retaining people on Windows rather than switching to an alternative.

BlankSystemDaemon
Mar 13, 2009



Subjunctive posted:

That’s interesting! How much of a difference has it made?
I've heard of a fair few people using it, and I have a sneaking suspicion that, as CommieGIR points out, the familiarity and gaming make for a hell of a convenience for certain folks.
I also have a sneaking suspicion that there are folks who don't want to admit to using it - which is silly, because I would hope we as a people would've gotten over tribalism like that.

I know of at least a couple of fairly sizable development teams who started doing stuff with it as of WSL1 (which was directly inspired by FreeBSD's Linuxulator, something Microsoft's developers came across while porting DTrace from FreeBSD) - and one of them has since switched to WSL2.

I doubt anyone has any numbers though, so it's hard to quantify.

Thanks Ants posted:

WSL does some very weird networking which means IPv6 doesn't work out of the box, it's annoying and it would be nice to just have the WSL instance grab another address when it's opened.
Yeah, IPv6 is only more than a quarter of a century old - practically brand new!

namlosh posted:

So I have an old laptop that’s a first gen i7 (haswell?) with 16gb of ram and an SSD.

Can I run proxmox on it? It has virtualization extensions and such in the bios and they work just fine. I’ve got fedora on there now with kvm.

Can I multihome the Ethernet port and run each vm with its own IP if I want using proxmox?
Also would I run containers directly on proxmox or create a vm and run them there?

This is all homelab crap so doesn’t have to be super stable or anything.

Sorry for the random questions. Just trying to sanity check what my plan is before moving forward
Hardware-assisted virtualization (and, more importantly, Second-Level Address Translation) was added in Nehalem; Unrestricted Guest mode (ie. being able to boot into 16-bit real mode, for DOS and retro-software compatibility) arrived with Westmere; and virtualization of interrupts came with Ivy Bridge (this also covers selecting the bootstrap processor during boot, with all other cores becoming application processors).
IOMMU virtualization (the ability to pass PCI devices through to the guest) was present in some Sandy Bridge CPUs, but was standard by Ivy Bridge.

First-gen i7 doesn't mean much, so you'll want to check Intel Ark for the specific model.
The Intel terms for the above generic names are VT-x (with EPT), APICv, and VT-d.
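If you're on Linux already (as with the Fedora+KVM setup), the quickest check besides Ark is the flags line in /proc/cpuinfo. A small sketch - it parses a hard-coded sample line so it's self-contained; point it at the real file on your own box:

```python
# Intel boxes advertise vmx (VT-x) and ept (SLAT); AMD shows svm and npt.
SAMPLE_FLAGS = "fpu vme msr sse sse2 ht vmx ept tpr_shadow flexpriority"

def has_features(flags_line: str, wanted: set[str]) -> dict[str, bool]:
    """Map each wanted CPU flag to whether it appears in the flags line."""
    present = set(flags_line.split())
    return {flag: flag in present for flag in wanted}

print(has_features(SAMPLE_FLAGS, {"vmx", "ept", "svm", "npt"}))
```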

BlankSystemDaemon fucked around with this message at 18:05 on May 17, 2023

BlankSystemDaemon
Mar 13, 2009



wolrah posted:

WSL2 is just a Linux VM running on Hyper-V with really tight host integration.

WSL1 was more like a reverse-WINE where the Linux apps are actually running on the Windows kernel pretending to be Linux, which it did shockingly well but IIRC there were severe performance issues for certain use cases due to the different ways Linux and Windows handle disk access that weren't considered solvable with that model. It also would have required continuous development to maintain parity with the real kernel, where the WSL2 model gets its kernel updates "for free" from the upstream distros as long as Hyper-V doesn't break anything.

I don't know how it's affected others' use, but I can say that once WSL gained X support it started to impact how often my dual boot machines were on Linux, and when it got GPU support I basically stopped dual booting. I've done it twice for WiFi injection shenanigans and twice just to run updates (one of those times the nVidia driver hosed up and I got a bonus hour of troubleshooting that with my updates...)

The comparison to Mac using devs I agree with. I've used Mac laptops on and off over the years and always enjoyed being able to have both commercial software and my favorite *nix tools side by side in a reasonably well integrated manner. WSL brought the same concept to Windows.
The funny thing is, Linuxulator on FreeBSD regularly beats Linux in synthetic (read: useless) benchmarks.
Not by much, but it's pretty consistent.

I have a hypothesis that this is down to FreeBSD building its binaries without SSE/MMX optimizations (it targets i686, aka Pentium Pro), since most of the data being handled by the kernel is lots of small pieces of data processed by a large number of instructions, where the individual instruction latencies start adding up.
Of course this changes once the newer architectures start reducing instruction latencies (which AMD have been pretty good about, and even Intel is catching up on) - but then you need to build for that specific micro-architecture, and then it won't work on anything else.

namlosh posted:

Thanks so much for the reply. I feel really dumb (Haswell was a Pentium 4 wasn't it, ffs).

So in conclusion, It's safe to say it won't be ideal and I may not be able to pass-through peripherals to underlying VMs,
but will ProxMox VE 7.4 even run on it?
Yeah, you might be able to do para-virtualization for your NICs - but I'm not an expert in Proxmox, so I don't know how it's accomplished there.

BlankSystemDaemon fucked around with this message at 19:03 on May 17, 2023

BlankSystemDaemon
Mar 13, 2009



:jail:: "replacing pre-installed BSD programs with their preferred GNU implementation"

BlankSystemDaemon
Mar 13, 2009



in a well actually posted:

Yeah they’re doing exactly what they said they’d do.
It's not just what they said they'd do, it's what everyone else said they'd do before they said they'd do it.

Internet Explorer posted:

I guess I don't mind VMware licensing turning into a subscription that includes support because I never had the desire to run old versions or without support. The real problem is that they are trying to squeeze out 3x the profit. Financialization is bad, folks. Who knew?
New speedrun category: Become Oracle at any% cost!

BlankSystemDaemon
Mar 13, 2009



It'd be nice if folks learned the lesson that having any one solution is the wrong way, and instead some people focused on kvm, while others improved bhyve (found in FreeBSD and Illumos distributions, among others), while still others worked on Xen.

With enough work, and someone working on interoperability, it'd be possible to have a fleet of three (or more?) hypervisor solutions, all being able to work together.

BlankSystemDaemon fucked around with this message at 14:48 on Dec 17, 2023

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

Cloud - "I don't wanna own, I wanna rent and I want a landlord who is looking to charge every cent every time I flush the toilet or turn on the lights"

Cloud has its uses, but realistically a colo with a rented VM would cost you less, as would running it yourself on surplus hardware.
The one advantage that the butt has is if you've got a very spiky workload or if you're at a very particular point in the start-up curve and you set up everything to take advantage of the elasticity.
Unfortunately, storage is the one thing you can't elastify easily, and usually taking full advantage of the elasticity also means you have to fully buy into the vendor lock-in, meaning you're gonna have a bad time when you try to move away.

So the end-result is that for the vast majority, the butt ends up being more expensive.

BlankSystemDaemon
Mar 13, 2009



Potato Salad posted:

well yeah

so use a modern hypervisor

there's a free gitlabs tier, put up some teraform and go learn proper cdci
Has the dust settled after Hashicorp changed the license of Terraform to a business source license?

If I was a new company looking at it, I'd be very careful, since they've not only shown that they can change the license, but that they will do so if it benefits them and nobody else.

Twerk from Home posted:

ESXi is clearly hosed under Broadcom, if IBM hadn't simultaneously been wrecking Red Hat I'd actually be betting on Red Hat Virtualization/ oVirt picking up some market share.

KVM itself is completely rock solid and has been for ages. There's not going to be any real replacement for how big ESXi was because operating an on-prem virtualization farm will become lostech and everyone will just pay AWS more money.

Edit: if anything in the longer term I'd bet that the groups that continue to run on-premises will move to a bare-metal kubernetes solution without any hypervisor. You can run a KVM guest as a kubernetes pod if you need actual VM isolation.
Unless a company can make actual use of the elasticity of the Amazon butt, there's little reason to pay for it as it isn't actually cheaper if you've got a steady-state production system (ie. one that just periodically needs to expand).

Selfishly, I hope bhyve will get the little bit of TLC it needs to get to the point that it can be used in production by everyone.
As it is, the main pain-point is being able to transfer guests between hosts - but that's being worked on, and since most of the things people want high availability for can be done further up the stack, it's production-ready for the vast majority of use-cases.

BlankSystemDaemon
Mar 13, 2009



Is there any virtualization on x86 or its derivatives, other than VMware's, that doesn't use hardware-accelerated virtualization (AMD-V/Intel VT-x) with SLAT (AMD RVI/Intel EPT)?
I know VMware can also use it, but their original product did it all in software via binary translation and somehow managed very low overhead.

EDIT: Oh, right - XenServer is a thing. I forgot
orz

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

Just make sure you go for XCP-NG and not the actual Citrix Xenserver. Save yourself the headache of 'why can't I use that feature?'
Oh, it was mostly theoretical.

I'm very happy with bhyve(8).

BlankSystemDaemon
Mar 13, 2009



SamDabbers posted:

VirtualBox runs 32 bit guests without hardware extensions
I think the newest versions of VirtualBox require Unrestricted Guest/Real Address Mode in order to do it, but I don't have a pre-Westmere CPU to test on.

Finding a Nehalem CPU and board combo probably isn't even that difficult, since it's not old enough to be retro, yet old enough to have been retired from almost every production deployment - but :effort:

ExcessBLarg! posted:

Qemu used to have a Linux kernel module, KQemu, that provided ring 3 (userspace) virtualization before KVM effectively replaced it.
Was it part of the kernel? Because the obsolete documentation I can find seems to indicate it was a loadable module people would compile on their own.

Must've been pretty interesting code to have a kernel module run in userspace.

DevNull posted:

VMware ripped out the binary translator and software MMU a few years ago. I think official support for it has dropped off, or will very soon.

That said, last Friday was my last day there. I noped my way out without another job even lined up.
I vaguely recall a friend talking about staying on ESXi 6.7 because 7.0 was missing it, so it's probably gone.

On the one hand, getting the gently caress out was definitely the right move - but it does feel a bit like walking a knife-edge without another job lined up.

BlankSystemDaemon
Mar 13, 2009



fresh_cheese posted:

Broadcom vs Oracle - Fight!
Well, given that Larry Ellison reportedly extracts blood from young people to give himself transfusions, he's probably still more evil.

BlankSystemDaemon
Mar 13, 2009



Thanks Ants posted:

Isn't that Thiel?
¿Por Qué No Los Dos?

BlankSystemDaemon
Mar 13, 2009



Hyper-V as a stand-alone product is going away.

BlankSystemDaemon
Mar 13, 2009



fresh_cheese posted:

https://www.ibm.com/products/zvm

The first enterprise hypervisor is doing fine, tyvm
52 years young!
To be fair, hardware-accelerated virtualization and SLAT weren't really available on x86 until Nehalem and Barcelona - and very few people had the talent to develop something without them, as it required intimate knowledge of the CPU.
It wasn't fun to use before virtualization of interrupts and IOMMU virtualization, which came half a decade later.


BlankSystemDaemon
Mar 13, 2009



ExcessBLarg! posted:

The 80386 had v8086 mode all the way back on 1985. Who needs 32-bit OSes anyways?
Bill Gates spotted.
