If you're doing virtualization, it may be worth working with a series of what are called golden master snapshots. You make a VM with your favorite OS, then snapshot it once it's in its basic unregistered state. Then you add your favorite software, set it up how you want it, and snapshot again. From that point, if you ever want to go back or start fresh, you can always pick one of those golden masters.

For cold storage on spinning rust, ideally you want something that does in-line checksumming, and if you're really insistent on it still working N years from now, something that writes multiple copies of each block to the disk (sometimes called ditto blocks). Hot storage with checksumming and mirroring/distributed parity is much better for long-term storage, even if you have to build a low-power system to run it, because if a bitflip happens, the disk firmware and/or filesystem+LVM can fix what went wrong. Non-volatile flash is assumed by almost everyone to be much worse for cold storage, since it's much easier to flip an electrical charge in flash memory than a magnetic domain on a platter, and modern TLC or QLC flash SSDs simply don't have much write endurance.
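As a concrete sketch with ZFS (which does both the in-line checksumming and the ditto blocks mentioned above) - pool and dataset names here are made up for illustration:

```
# Golden masters: snapshot each known-good state of the VM's dataset,
# then clone a snapshot whenever you want a fresh instance.
zfs snapshot tank/vm/base@unregistered
zfs snapshot tank/vm/base@configured
zfs clone tank/vm/base@configured tank/vm/new-guest

# Archive dataset: store two copies of every block (ditto blocks),
# so a bitflip in one copy can be healed from the other.
zfs create -o copies=2 -o checksum=sha256 tank/archive

# Scrub periodically so checksum errors get found and repaired
# while the redundancy is still intact.
zpool scrub tank
```

Note that copies=2 only protects against bitrot, not a whole-disk failure - that's what the mirroring/parity is for.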
|
|
# ¿ Jun 14, 2022 21:37 |
|
|
# ¿ May 9, 2024 05:26 |
Shumagorath posted:
> I need this to work on the road

Whether vendors implement it is anyone's guess, but it's part of the spec.
|
|
# ¿ Jul 30, 2022 22:40 |
SlowBloke posted:
> New vSphere is out, it's all tanzu and DPU with a handful of QoL improvements. Nothing to justify jumping on it.
|
|
# ¿ Oct 12, 2022 14:00 |
Shaocaholica posted:
> How does VM network speed work for VMs running on the same host? Are connections capped to virtual standards like 1G, 10G? Can you have as fast as possible between VMs running on the same host? How fast can that be?

On FreeBSD with netmap/vale and bhyve using ptnet(4), I've seen north of 40Gbps between two guests on fairly old server hardware.

SamDabbers posted:
> Semi-related: is there any benefit for east-west traffic between VMs on the same box to use SR-IOV and let the NIC switch the packets instead of the CPU? Seems like there'd be less CPU load at the expense of a round trip over PCIe, and would also be dependent on the internal switching capacity of the particular NIC.

Getting SR-IOV to work is another story entirely, though - it's even managed to not work on Supermicro, which is one of the vendors I usually recommend as they're the least-poo poo. If you can get it working, though, it's absolutely preferred over any other solution.
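For reference, on FreeBSD the VFs get created through iovctl(8); a minimal config might look something like this (the device name and VF count are assumptions - iovctl.conf(5) and the NIC driver's man page have the authoritative syntax and limits):

```
# /etc/iovctl.conf - hypothetical example for an Intel ix(4) port
PF {
	device : "ix0";
	num_vfs : 4;
}

DEFAULT {
	passthrough : true;
}
```

After which `iovctl -C -f /etc/iovctl.conf` creates the VFs.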
|
|
# ¿ Apr 17, 2023 22:17 |
I can't speak to any other thin-virtualization solutions, but with FreeBSD's per-jail fully isolated netstack, it's loving great that you can give each jail its own virtual function without having to do any kind of software switching - especially because it's easy to run more jails than a server can have physical ports: 6x 4-port PCIe NICs only give 24 ports, and the 6- and 7-port NICs don't tend to have very good chips.
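As a sketch of what the jail side looks like (the jail name, paths, and the ixv0 VF interface are made up for illustration; ixv(4) is the VF driver that pairs with ix(4)):

```
# /etc/jail.conf - hypothetical VNET jail owning an SR-IOV virtual function
web0 {
	vnet;
	vnet.interface = "ixv0";
	path = "/usr/jails/web0";
	host.hostname = "web0.example.org";
	exec.start = "/bin/sh /etc/rc";
	exec.stop = "/bin/sh /etc/rc.shutdown";
}
```

The VF gets moved into the jail's netstack at creation, so the guest configures it like any ordinary NIC and no bridge or software switch is involved.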
|
|
# ¿ Apr 19, 2023 23:06 |
WSL has done wonders for Microsoft retaining people on Windows rather than having them switch to an alternative.
|
|
# ¿ May 17, 2023 08:46 |
Subjunctive posted:
> That’s interesting! How much of a difference has it made?

I also have a sneaking suspicion that there's folks who don't want to admit to using it - which is silly, because I would hope we as a people would've gotten over tribalism like that. I know of at least a couple of fairly sizable development teams who started doing stuff using it as of WSL1 (which was directly inspired by the Linuxulator in FreeBSD, which Microsoft developers noticed when they were porting dtrace from FreeBSD) - and one of them has since switched to WSL2. I doubt anyone has any numbers, though, so it's hard to quantify.

Thanks Ants posted:
> WSL does some very weird networking which means IPv6 doesn't work out of the box, it's annoying and it would be nice to just have the WSL instance grab another address when it's opened.

namlosh posted:
> So I have an old laptop that’s a first gen i7 (haswell?) with 16gb of ram and an add.

IOMMU virtualization (the ability to pass PCI devices through to the guest) was present in some Sandy Bridge CPUs, but was standard by Ivy Bridge. "First-gen i7" doesn't mean much, so you'll want to check Intel Ark for the specific model. The Intel terms for the above generic names are VT-x, vAPIC, and VT-d.

BlankSystemDaemon fucked around with this message at 18:05 on May 17, 2023 |
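If you'd rather interrogate the machine than look it up on Ark, the CPU flags give most of it away. A rough sketch - this assumes Linux's /proc/cpuinfo; on FreeBSD the same information shows up in the boot dmesg feature lines instead:

```shell
#!/bin/sh
# Rough check for the hardware virtualization features discussed above.
# Flag names in /proc/cpuinfo: vmx = Intel VT-x, svm = AMD-V.
check_virt() {
    flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null)
    case "$flags" in
        *vmx*) echo "VT-x: present" ;;
        *svm*) echo "AMD-V: present" ;;
        *)     echo "no hardware virtualization flag found" ;;
    esac
}
check_virt
```

Note that VT-d is a platform/chipset feature rather than a CPU flag, so Ark (or the firmware's ACPI tables) is still the authority for the IOMMU side.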
|
# ¿ May 17, 2023 18:01 |
wolrah posted:
> WSL2 is just a Linux VM running on Hyper-V with really tight host integration.

Not by much, but it's pretty consistent. I have a hypothesis that this is down to FreeBSD building its binaries without SSE/MMX optimizations (it targets i686, aka Pentium Pro), since most of the data being handled by the kernel is lots of small pieces processed by a large number of instructions, where the individual instruction latencies start adding up. Of course, this changes once newer architectures start reducing instruction latencies (which AMD have been pretty good about, and even Intel is catching up on) - but then you need to build for that specific microarchitecture, and it won't work on anything else.

namlosh posted:
> Thanks so much for the reply. I feel really dumb (Haswell was a Pentium 4 wasn't it, ffs).

BlankSystemDaemon fucked around with this message at 19:03 on May 17, 2023 |
|
# ¿ May 17, 2023 18:54 |
: "replacing pre-installed BSD programs with their preferred GNU implementation"
|
|
# ¿ Jun 7, 2023 23:01 |
in a well actually posted:
> Yeah they’re doing exactly what they said they’d do.

Internet Explorer posted:
> I guess I don't mind VMware licensing turning into a subscription that includes support because I never had the desire to run old versions or without support.

The real problem is that they are trying to squeeze out 3x the profit. Financialization is bad, folks. Who knew?
|
|
# ¿ Dec 14, 2023 20:17 |
It'd be nice if folks learned the lesson that betting on any one solution is the wrong way to go, and instead some focused on kvm, others improved bhyve (found in FreeBSD and Illumos distributions, among others), while still others worked on Xen. With enough work, and someone working on interoperability, it'd be possible to have a fleet of three (or more?) hypervisor solutions, all able to work together.

BlankSystemDaemon fucked around with this message at 14:48 on Dec 17, 2023 |
|
# ¿ Dec 17, 2023 14:45 |
CommieGIR posted:
> Cloud - "I don't wanna own, I wanna rent and I want a landlord who is looking to charge every cent every time I flush the toilet or turn on the lights"

Unfortunately, storage is the one thing you can't easily elastify, and taking full advantage of the elasticity usually also means fully buying into the vendor lock-in, meaning you're gonna have a bad time when you try to move away. So the end result is that, for the vast majority, the butt ends up being more expensive.
|
|
# ¿ Dec 23, 2023 21:34 |
Potato Salad posted:
> well yeah

If I were a new company looking at it, I'd be very careful, since they've shown not only that they can change the license, but that they will do so if it benefits them and nobody else.

Twerk from Home posted:
> ESXi is clearly hosed under Broadcom, if IBM hadn't simultaneously been wrecking Red Hat I'd actually be betting on Red Hat Virtualization/ oVirt picking up some market share.

Selfishly, I hope bhyve will get the little bit of TLC it needs to get to the point where it can be used in production by everyone. As it is, the main pain point is being able to transfer guests between hosts - but that's being worked on, and since most of the things people want high availability for can be done further up the stack, it's production-ready for the vast majority of use cases.
|
|
# ¿ Jan 6, 2024 15:20 |
Is there any virtualization on x86 or its derivatives, other than VMware's, that doesn't use hardware-accelerated virtualization (AMD-V/Intel VT-x) with SLAT (AMD's RVI, Intel's EPT)? I know VMware can also use it, but their original product did it all in software via binary translation and somehow managed very low overhead. EDIT: Oh, right - XenServer is a thing. I forgot orz
|
|
# ¿ Jan 29, 2024 18:34 |
CommieGIR posted:
> Just make sure you go for XCP-NG and not the actual Citrix Xenserver. Save yourself the headache of 'why can't I use that feature?'

I'm very happy with bhyve(8).
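For anyone curious what that looks like without a management layer on top, a hypothetical invocation might be (the disk image, tap interface, and guest name are made up; the UEFI firmware path depends on which edk2-bhyve port/package version you have installed):

```
# Hypothetical minimal bhyve guest: 2 vCPUs, 2GB RAM, virtio disk + NIC,
# serial console on stdio, booting via the UEFI firmware.
bhyve -c 2 -m 2G -H \
  -s 0,hostbridge \
  -s 2,virtio-blk,/vm/guest.img \
  -s 3,virtio-net,tap0 \
  -s 31,lpc -l com1,stdio \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  guest0
```

In practice most people wrap this in vm-bhyve or similar rather than typing the slot assignments by hand.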
|
|
# ¿ Jan 29, 2024 18:55 |
SamDabbers posted:
> VirtualBox runs 32 bit guests without hardware extensions

Finding a Nehalem CPU and board combo probably isn't even that difficult, since it's not old enough to be retro, yet old enough to have been retired from almost every production deployment - but

ExcessBLarg! posted:
> Qemu used to have a Linux kernel module, KQemu, that provided ring 3 (userspace) virtualization before KVM effectively replaced it.

Must've been pretty interesting code to have a kernel module run in userspace.

DevNull posted:
> VMware ripped out the binary translator and software MMU a few years ago. I think official support for it has dropped off, or will very soon.

On the one hand, getting the gently caress out was definitely the right move - but it does feel a bit like walking a knife-edge without another job lined up.
|
|
# ¿ Jan 30, 2024 08:37 |
fresh_cheese posted:
> Broadcom vs Oracle - Fight!
|
|
# ¿ Feb 13, 2024 20:37 |
Thanks Ants posted:
> Isn't that Thiel?
|
|
# ¿ Feb 13, 2024 21:13 |
Hyper-V as a stand-alone product is going away.
|
|
# ¿ Feb 17, 2024 09:42 |
fresh_cheese posted:
> https://www.ibm.com/products/zvm

It wasn't fun to use before virtualization of interrupts and I/O MMU virtualization, which arrived half a decade later.
|
|
# ¿ Mar 8, 2024 19:41 |
ExcessBLarg! posted:
> The 80386 had v8086 mode all the way back in 1985.

Who needs 32-bit OSes anyways?
|
|
# ¿ Mar 9, 2024 01:36 |