Shaocaholica
Oct 29, 2002

Fig. 5E
10 sec of Google shows someone was able to get 20G speeds between VMs on the same host using VMXNET3 back in 2020.

wolrah
May 8, 2006
what?

Shaocaholica posted:

10 sec of Google shows someone was able to get 20G speeds between VMs on the same host using VMXNET3 back in 2020.
I just threw two minimal Debian instances on Hyper-V on my desktop, installed iperf3, and got ~33gbit/sec between the two of them with 10 streams and absolutely no tuning. Just whatever Windows 11's Hyper-V gives me in a Generation 2 VM attached to the default vSwitch. This is on a Ryzen 9 3900X with 64GB of RAM where I was doing other things the entire time.
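In case anyone wants to repeat it, the whole test is basically the below - the client address is just whatever the second VM got from the vSwitch, so this one's made up:
code:
# on VM 1 (server side)
iperf3 -s

# on VM 2: 10 parallel streams for 30 seconds
iperf3 -c 172.17.0.5 -P 10 -t 30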

Looks like it's using the netvsc driver and ethtool shows it claiming a 10gbit/sec link speed.

I'd imagine some of the ones that emulate actual network cards might be limited to the claimed speed, but if you're able to use a hypervisor's native drivers it should be able to go as fast as your system allows.

BlankSystemDaemon
Mar 13, 2009



Shaocaholica posted:

How does VM network speed work for VMs running on the same host? Are connections capped to virtual standards like 1G, 10G? Can you go as fast as possible between VMs running on the same host? How fast can that be?
Paravirtualization on top of a fast software switch (read: multi-million packets per second, using I/O batching and pre-allocated memory buffers) should let you do multi-gigabit traffic between guests on the host.

On FreeBSD with netmap/vale and bhyve using ptnet(4), I've seen north of 40Gbps between two guests on fairly old server hardware.
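For reference, plumbing the host side of a VALE switch looks roughly like this with netmap's vale-ctl tool (valectl on newer FreeBSD) - port names here are made up, and the guest side is the ptnet wiring described in the bhyve docs:
code:
# create two persistent VALE ports and attach them to switch vale0
vale-ctl -n vi0
vale-ctl -a vale0:vi0
vale-ctl -n vi1
vale-ctl -a vale0:vi1

# no arguments lists everything attached
vale-ctl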

SamDabbers posted:

Semi-related: is there any benefit for east-west traffic between VMs on the same box to use SR-IOV and let the NIC switch the packets instead of the CPU? Seems like there'd be less CPU load at the expense of a round trip over PCIe, and would also be dependent on the internal switching capacity of the particular NIC.

Has anybody tried this?
There's a huge advantage to using SR-IOV and letting the NIC do the work in hardware, because you're saving a substantial amount of cputime on the server which otherwise has to do it in software.

Getting SR-IOV to work is another story entirely, though - it's even managed to not work on Supermicro, which is one of the vendors I usually recommend as they're the least-poo poo.

If you can get it working though, it's absolutely preferred over any other solution.
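For context, on a Linux host the actual VF creation is the easy part - it's everything around it (BIOS, firmware, drivers) that fails. Interface name and VF count below are placeholders:
code:
# carve 4 virtual functions out of the physical function
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# confirm the VFs showed up
lspci | grep -i "virtual function"
ip link show eth0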

Pile Of Garbage
May 28, 2007



SR-IOV really feels redundant when you can get servers with multiple 40Gb/s DAC interconnects and switches that will just handle that with no issues. IMO getting it working well feels like diminishing returns, similar to switching storage adapters from LSI Logic SAS to Paravirtual SCSI.

BlankSystemDaemon
Mar 13, 2009



I can't speak to any other thin-virtualization solutions, but with FreeBSD's per-jail fully isolated netstack, it's loving great that you can give each jail its own virtual function without having to do any kind of software switching, especially because it's easy to have more jails than physical ports you can commonly fit in a server.
6x 4-port PCIe NICs only give 24 ports, and the 6-7-port NICs don't tend to have very good chips.
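Handing a VF to a vnet jail is then just a couple of lines in jail.conf - jail name, path, and interface below are placeholders:
code:
myjail {
    vnet;
    vnet.interface = "ixlv0";    # the VF's ifname, whatever your driver calls it
    path = "/usr/local/jails/myjail";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}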

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Those weirdos two floors up are introducing lots of Windows Servers into their infrastructure, for Hyper-V, to run Docker containers. On first glance, this seems stupid as gently caress. Why not just use Linux hosts? Am I wrong about this or not?

Wibla
Feb 16, 2011

Maybe if they were deploying Azure Stack HCI, it'd make a bit more sense... but wtf.

Mr. Crow
May 22, 2008

Snap City mayor for life

Combat Pretzel posted:

Those weirdos two floors up are introducing lots of Windows Servers into their infrastructure, for Hyper-V, to run Docker containers. On first glance, this seems stupid as gently caress. Why not just use Linux hosts? Am I wrong about this or not?

Are they doing Linux containers or Windows containers? Docker supports both, and you can't do Windows containers on Linux AFAIK.
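If you can get onto one of their hosts, the daemon will tell you which kind it's serving:
code:
docker info --format '{{.OSType}}'        # prints "linux" or "windows"
docker version --format '{{.Server.Os}}'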

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Yeah makes far more sense to use Linux for that

BlankSystemDaemon
Mar 13, 2009



WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative.

SlowBloke
Aug 14, 2017

Combat Pretzel posted:

Those weirdos two floors up are introducing lots of Windows Servers into their infrastructure, for Hyper-V, to run Docker containers. On first glance, this seems stupid as gently caress. Why not just use Linux hosts? Am I wrong about this or not?

Needing to run legacy .net payloads rather than core is the only useful scenario for a similar setup.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

BlankSystemDaemon posted:

WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative.

That and familiarity and gaming.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

BlankSystemDaemon posted:

WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative.

That’s interesting! How much of a difference has it made?

Pile Of Garbage
May 28, 2007



BlankSystemDaemon posted:

WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative.

lol no it didn't. The only thing that WSL did was make it so that I don't have to run a Linux VM somewhere/locally in order to do Linux development/other Linux things. It's a convenience at most but hardly a game changer. See also: all the devs running MacOS.

Thanks Ants
May 21, 2004

#essereFerrari


WSL does some very weird networking, which means IPv6 doesn't work out of the box. It's annoying, and it would be nice to just have the WSL instance grab another address when it's opened.
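Newer WSL builds (2.0+ on Windows 11 22H2, IIRC) added a mirrored networking mode that's supposed to fix exactly this, IPv6 included - it's a one-liner in %UserProfile%\.wslconfig:
code:
[wsl2]
networkingMode=mirrored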

namlosh
Feb 11, 2014

I name this haircut "The Sad Rhino".
So I have an old laptop that's a first-gen i7 (Haswell?) with 16GB of RAM and an SSD.

Can I run Proxmox on it? It has virtualization extensions and such in the BIOS and they work just fine. I've got Fedora on there now with KVM.

Can I multihome the Ethernet port and run each VM with its own IP if I want using Proxmox?
Also, would I run containers directly on Proxmox or create a VM and run them there?

This is all homelab crap, so it doesn't have to be super stable or anything.

Sorry for the random questions. Just trying to sanity-check my plan before moving forward

BlankSystemDaemon
Mar 13, 2009



Subjunctive posted:

That’s interesting! How much of a difference has it made?
I've heard of a fair few people using it, and I have a sneaking suspicion that, as CommieGIR points out, the familiarity and gaming make for a hell of a convenience for certain folks.
I also have a sneaking suspicion that there are folks who don't want to admit to using it - which is silly, because I would hope we as a people would've gotten over tribalism like that.

I know of at least a couple fairly sizable development teams who started doing stuff using it as of WSL1 (which was directly inspired by the Linuxulator in FreeBSD, which Microsoft developers noticed when they were porting dtrace from FreeBSD) - and one of them has switched to WSL2.

I doubt anyone has any numbers though, so it's hard to quantify.

Thanks Ants posted:

WSL does some very weird networking, which means IPv6 doesn't work out of the box. It's annoying, and it would be nice to just have the WSL instance grab another address when it's opened.
Yeah, IPv6 is only more than a quarter of a century old - practically brand new!

namlosh posted:

So I have an old laptop that's a first-gen i7 (Haswell?) with 16GB of RAM and an SSD.

Can I run Proxmox on it? It has virtualization extensions and such in the BIOS and they work just fine. I've got Fedora on there now with KVM.

Can I multihome the Ethernet port and run each VM with its own IP if I want using Proxmox?
Also, would I run containers directly on Proxmox or create a VM and run them there?

This is all homelab crap, so it doesn't have to be super stable or anything.

Sorry for the random questions. Just trying to sanity-check my plan before moving forward
Hardware-assisted virtualization (and, more importantly, Second-Level Address Translation) was added in Nehalem. Unrestricted Guest mode (ie. being able to boot in 16-bit real mode, for DOS and retro-software compatibility) was added in Ivy Bridge, along with virtualization of interrupts (also responsible for the mechanism that, during boot, selects a system processor based on the value of the EAX register of each core - all other cores become service processors).
IOMMU virtualization (the ability to pass PCI hardware through to the guest) was present in some Sandy Bridge CPUs, but was standard by Ivy Bridge.

First-gen i7 doesn't mean much, so you'll want to check Intel Ark for the specific model.
The Intel terms for the above generic names are VT-x, vAPIC, and VT-d.
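A quick way to check a given chip from a running Linux install, mapping the flags to those features (vmx = VT-x, ept = SLAT, flexpriority is part of the vAPIC stuff):
code:
grep -oE 'vmx|svm|ept|vpid|flexpriority' /proc/cpuinfo | sort -u

# or just
lscpu | grep -i virtualization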

BlankSystemDaemon fucked around with this message at 18:05 on May 17, 2023

wolrah
May 8, 2006
what?

Pile Of Garbage posted:

lol no it didn't. The only thing that WSL did was make it so that I don't have to run a Linux VM somewhere/locally in order to do Linux development/other Linux things. It's a convenience at most but hardly a game changer. See also: all the devs running MacOS.
WSL2 is just a Linux VM running on Hyper-V with really tight host integration.

WSL1 was more like a reverse-WINE where the Linux apps are actually running on the Windows kernel pretending to be Linux, which it did shockingly well but IIRC there were severe performance issues for certain use cases due to the different ways Linux and Windows handle disk access that weren't considered solvable with that model. It also would have required continuous development to maintain parity with the real kernel, where the WSL2 model gets its kernel updates "for free" from the upstream distros as long as Hyper-V doesn't break anything.

I don't know how it's affected others' use, but I can say that once WSL gained X support it started to impact how often my dual boot machines were on Linux, and when it got GPU support I basically stopped dual booting. I've done it twice for WiFi injection shenanigans and twice just to run updates (one of those times the nVidia driver hosed up and I got a bonus hour of troubleshooting that with my updates...)

The comparison to Mac using devs I agree with. I've used Mac laptops on and off over the years and always enjoyed being able to have both commercial software and my favorite *nix tools side by side in a reasonably well integrated manner. WSL brought the same concept to Windows.
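If anyone wants to poke at the two models themselves, the switching is all in wsl.exe:
code:
REM list distros and whether each is WSL1 or WSL2
wsl -l -v

REM convert a distro in place, and set the default for new installs
wsl --set-version Ubuntu 2
wsl --set-default-version 2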

namlosh
Feb 11, 2014

I name this haircut "The Sad Rhino".

BlankSystemDaemon posted:

Hardware-assisted virtualization (and, more importantly, Second-Level Address Translation) was added in Nehalem. Unrestricted Guest mode (ie. being able to boot in 16-bit real mode, for DOS and retro-software compatibility) was added in Ivy Bridge, along with virtualization of interrupts (also responsible for the mechanism that, during boot, selects a system processor based on the value of the EAX register of each core - all other cores become service processors).
IOMMU virtualization (the ability to pass PCI hardware through to the guest) was present in some Sandy Bridge CPUs, but was standard by Ivy Bridge.

First-gen i7 doesn't mean much, so you'll want to check Intel Ark for the specific model.
The Intel terms for the above generic names are VT-x, vAPIC, and VT-d.

Thanks so much for the reply. I feel really dumb (Haswell was a Pentium 4 wasn't it, ffs).

In order to not be dumb, I went ahead and confirmed.
code:
~]$ lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         36 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  8
  On-line CPU(s) list:   0-7
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz
    CPU family:          6
    Model:               42
    Thread(s) per core:  2
    Core(s) per socket:  4
    Socket(s):           1
    Stepping:            7
    CPU(s) scaling MHz:  32%
    CPU max MHz:         2900.0000
    CPU min MHz:         800.0000
    BogoMIPS:            3990.89
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx
                         fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
                         xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3
                         cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb
                         pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln
                         pts md_clear flush_l1d
Virtualization features:
  Virtualization:        VT-x
Caches (sum of all):
  L1d:                   128 KiB (4 instances)
  L1i:                   128 KiB (4 instances)
  L2:                    1 MiB (4 instances)
  L3:                    6 MiB (1 instance)
NUMA:
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-7
Vulnerabilities:
  Itlb multihit:         KVM: Mitigation: VMX disabled
  L1tf:                  Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
  Mds:                   Mitigation; Clear CPU buffers; SMT vulnerable
  Meltdown:              Mitigation; PTI
  Mmio stale data:       Unknown: No mitigations
  Retbleed:              Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS
                         Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected

so it's a Sandy Bridge processor... looking it up on ARK like you suggested yields:
Intel® Virtualization Technology (VT-x) ‡
Yes

Intel® Virtualization Technology for Directed I/O (VT-d) ‡
No

Intel® VT-x with Extended Page Tables (EPT) ‡
Yes
from here:
https://ark.intel.com/content/www/us/en/ark/products/52219/intel-core-i72630qm-processor-6m-cache-up-to-2-90-ghz.html

So in conclusion, it's safe to say it won't be ideal and I may not be able to pass peripherals through to the underlying VMs,
but will Proxmox VE 7.4 even run on it?

Pile Of Garbage
May 28, 2007



wolrah posted:

WSL2 is just a Linux VM running on Hyper-V with really tight host integration.

WSL1 was more like a reverse-WINE where the Linux apps are actually running on the Windows kernel pretending to be Linux, which it did shockingly well but IIRC there were severe performance issues for certain use cases due to the different ways Linux and Windows handle disk access that weren't considered solvable with that model. It also would have required continuous development to maintain parity with the real kernel, where the WSL2 model gets its kernel updates "for free" from the upstream distros as long as Hyper-V doesn't break anything.

I don't know how it's affected others' use, but I can say that once WSL gained X support it started to impact how often my dual boot machines were on Linux, and when it got GPU support I basically stopped dual booting. I've done it twice for WiFi injection shenanigans and twice just to run updates (one of those times the nVidia driver hosed up and I got a bonus hour of troubleshooting that with my updates...)

The comparison to Mac using devs I agree with. I've used Mac laptops on and off over the years and always enjoyed being able to have both commercial software and my favorite *nix tools side by side in a reasonably well integrated manner. WSL brought the same concept to Windows.

lmao I know what WSL is. You said "WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative.", which is silly because no one ever said "oh, I can't use Linux on Windows? Guess I better abandon Windows" - they just spun up a Linux VM somewhere and SSH'd into it from their Windows machine. That's why I mentioned macOS, because it's the same deal.

Perhaps you meant to say that WSL has made Windows more appealing/convenient; that would make sense.

BlankSystemDaemon
Mar 13, 2009



wolrah posted:

WSL2 is just a Linux VM running on Hyper-V with really tight host integration.

WSL1 was more like a reverse-WINE where the Linux apps are actually running on the Windows kernel pretending to be Linux, which it did shockingly well but IIRC there were severe performance issues for certain use cases due to the different ways Linux and Windows handle disk access that weren't considered solvable with that model. It also would have required continuous development to maintain parity with the real kernel, where the WSL2 model gets its kernel updates "for free" from the upstream distros as long as Hyper-V doesn't break anything.

I don't know how it's affected others' use, but I can say that once WSL gained X support it started to impact how often my dual boot machines were on Linux, and when it got GPU support I basically stopped dual booting. I've done it twice for WiFi injection shenanigans and twice just to run updates (one of those times the nVidia driver hosed up and I got a bonus hour of troubleshooting that with my updates...)

The comparison to Mac using devs I agree with. I've used Mac laptops on and off over the years and always enjoyed being able to have both commercial software and my favorite *nix tools side by side in a reasonably well integrated manner. WSL brought the same concept to Windows.
The funny thing is, Linuxulator on FreeBSD regularly beats Linux in synthetic (read: useless) benchmarks.
Not by much, but it's pretty consistent.

I have a hypothesis that this is down to FreeBSD building its binaries without SSE/MMX optimizations (it targets i686, aka Pentium Pro), since most of what the kernel handles is lots of small bits of data processed by a large number of instructions, where the individual instruction latencies start adding up.
Of course this changes once the newer architectures start reducing instruction latencies (which AMD have been pretty good about, and even Intel is catching up on) - but then you need to build for that specific micro-architecture, and then it won't work on anything else.
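(On FreeBSD, building for a specific micro-architecture is the CPUTYPE knob in /etc/make.conf when you build world and ports - e.g. for a Zen 3 box, with the caveat above that the result only runs there:)
code:
# /etc/make.conf
CPUTYPE?=znver3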

namlosh posted:

Thanks so much for the reply. I feel really dumb (Haswell was a Pentium 4 wasn't it, ffs).

So in conclusion, it's safe to say it won't be ideal and I may not be able to pass peripherals through to the underlying VMs,
but will Proxmox VE 7.4 even run on it?
Yeah, you might be able to do para-virtualization for your NICs - but I'm not an expert in Proxmox, so I don't know how it's accomplished there.
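From what I've seen, though, the stock Proxmox answer to the one-IP-per-VM question is a plain Linux bridge: every guest's virtual NIC joins vmbr0 and picks up its own address from the LAN. The default /etc/network/interfaces is roughly this (names and addresses are examples, and a Proxmox person should correct me):
code:
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0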

BlankSystemDaemon fucked around with this message at 19:03 on May 17, 2023

wolrah
May 8, 2006
what?

Pile Of Garbage posted:

lmao I know what WSL is.
The part I quoted where you said WSL made it so you didn't have to run a Linux VM locally when it's running a Linux VM locally was at best unclear.

quote:

You said "WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative."
I'm not BlankSystemDaemon.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
WSL is pretty good but it definitely has its quirks.

I've still used an actual Linux VM for DevOps-y work, because Docker is usually easier to use in it and Docker Desktop stinks. It's usually a 2 CPU/1GB RAM install, so it doesn't take a lot of resources. But it's all moot now that I use a Mac with the linuxify script.

Boner Wad
Nov 16, 2003
I have an ESXi 8.0 machine that has 4x drives on a RAID controller, with the RAID controller passed through to a VM. I installed ESXi onto a USB drive. I had to use an NFS mount for the VM guests. Can I put another USB drive in there and use that as my VM guest datastore? How do I do that? I think I tried to put a USB drive in before and I couldn't get it to show up as a datastore.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Boner Wad posted:

I have an ESXi 8.0 machine that has 4x drives on a RAID controller, with the RAID controller passed through to a VM. I installed ESXi onto a USB drive. I had to use an NFS mount for the VM guests. Can I put another USB drive in there and use that as my VM guest datastore? How do I do that? I think I tried to put a USB drive in before and I couldn't get it to show up as a datastore.

I mean, you can do USB passthrough - is that the goal? I wouldn't try to host anything data-intensive on a USB drive.
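If you do want to go down that road, the usual trick (from an SSH session on the host) is stopping the USB arbitrator so ESXi can claim the stick itself instead of reserving it for passthrough - actually making a VMFS datastore on it afterwards is a partedUtil/vmkfstools dance I won't reproduce from memory:
code:
/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off   # keep it off across reboots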

Pile Of Garbage
May 28, 2007



Hi all, it's the worst poster here. Quick question: is the general consensus for VM CPU topology configuration still "flat-and-wide", as in single socket with whatever number of cores you require? As I understand it, the original reasoning was that you ideally want to keep VMs on a single physical NUMA node, so as to avoid memory latency introduced from having to traverse QPI/HyperTransport and to ensure that the hypervisor CPU scheduler isn't waiting for cores to become available on more than one socket.

Is that still the general consensus or has VMware introduced some magic that makes it not matter any more? Btw this is for an environment with vSphere 7.0.3/8.0.0 hosts.

Zorak of Michigan
Jun 10, 2006

It's a trade-off. You can hot add sockets, but you can't hot change cores per socket, so if you single socket, you lose a lot of flexibility. Newer versions of vCenter also automatically construct a vNUMA topology behind the scenes that makes it a non-issue for most commonly encountered sizes. The last time I asked our TAM about it, the guidance we got was to feel free to use lots of sockets and one core per socket, up until you might be using more cores than a single physical CPU could accommodate. If your VM might not fit in a single socket, you want to specify a number of sockets that fits your hosts, and a cores-per-socket figure that also fits your hosts.

Pile Of Garbage
May 28, 2007



Zorak of Michigan posted:

It's a trade-off. You can hot add sockets, but you can't hot change cores per socket, so if you single socket, you lose a lot of flexibility. Newer versions of vCenter also automatically construct a vNUMA topology behind the scenes that makes it a non-issue for most commonly encountered sizes. The last time I asked our TAM about it, the guidance we got was to feel free to use lots of sockets and one core per socket, up until you might be using more cores than a single physical CPU could accommodate. If your VM might not fit in a single socket, you want to specify a number of sockets that fits your hosts, and a cores-per-socket figure that also fits your hosts.

Cheers, thanks for that. I did some reading and understand things better now. vNUMA seems to be the way to go; however, there is a caveat in that it's only enabled on VMs with eight or more vCPUs. Of course you can still enable vNUMA on smaller VMs with the numa.vcpu.min setting. Something to be aware of, I guess.
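For reference, those knobs land in the VM's .vmx as entries like these (sizes are just examples):
code:
numvcpus = "16"
cpuid.coresPerSocket = "8"
numa.vcpu.min = "4"    # lower the vCPU threshold for exposing vNUMA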

Pikehead
Dec 3, 2006

Looking for WMDs, PM if you have A+ grade stuff
Fun Shoe

Pile Of Garbage posted:

Hi all, it's the worst poster here. Quick question: is the general consensus for VM CPU topology configuration still "flat-and-wide", as in single socket with whatever number of cores you require? As I understand it, the original reasoning was that you ideally want to keep VMs on a single physical NUMA node, so as to avoid memory latency introduced from having to traverse QPI/HyperTransport and to ensure that the hypervisor CPU scheduler isn't waiting for cores to become available on more than one socket.

Is that still the general consensus or has VMware introduced some magic that makes it not matter any more? Btw this is for an environment with vSphere 7.0.3/8.0.0 hosts.

My understanding is that the ESXi scheduler really doesn't care whether it's one-socket-many-cores or many-sockets-few-cores until you get into the larger VMs, with NUMA rearing its head.

The downside as posted is that hot-add doesn't work at all/well once you do that.

What I have been told is that the operating system (and potentially applications like SQL Server) will schedule things differently depending on what it sees - so threads/work will be scheduled differently/less optimally if the OS sees a sixteen-socket-one-core-per-socket machine rather than a one-socket-sixteen-core machine.

I'm not sure it makes much difference because whenever I recommend it to our clients I get crickets back, so *shrug*.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
What is everyone's view on Veeam and Russia? We are testing image backup systems and we dismissed Veeam because of its Russian origins, but maybe it is separated enough nowadays? It is owned by the venture capital firm Insight Partners from New York, and in top management there seems to be only the R&D chief who might be from Russia, based on the name.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Saukkis posted:

What is everyone's view on Veeam and Russia? We are testing image backup systems and we dismissed Veeam because of its Russian origins, but maybe it is separated enough nowadays? It is owned by the venture capital firm Insight Partners from New York, and in top management there seems to be only the R&D chief who might be from Russia, based on the name.

Veeam is largely US-based now: their HQ is in the US and they are registered as a US business entity.

Potato Salad
Oct 23, 2014

nobody cares


Veeam's forums and QA staff largely consisted of men who are now AFK fighting to protect their families and towns, so it was never as bad as you may think.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Oops, I did an apt purge on my bare-metal server and didn't look really carefully at what it was gonna get rid of. Now it's for sure gonna stop working after the next boot, because it got rid of systemd and a bunch of other critical components. DNS isn't even working on it anymore, so I can't even try to reinstall things mid-flight. Luckily my Docker containers still have the DNS settings from when they were spun up, so my home lab services haven't died yet - though as soon as the system reboots it's donezo.

I do have an image backup of the server from a couple of months ago, and I have the daily backups of my containers therein, so nothing's gone regardless. I guess now's a good time to switch over to a virtualized install in Proxmox rather than a recreation of the bare-metal one. It'll make recovering from catastrophic fuckups like this easier, with automatic snapshot backups that I can cart off-site.
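Lesson learned for anyone else: apt will tell you the blast radius up front if you ask, and apt-mark can protect the stuff you never want removed:
code:
# dry run: show what would be removed, touch nothing
apt-get -s purge some-package

# mark critical packages so apt won't remove them automatically
apt-mark hold systemd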

Trollipop
Apr 10, 2007

hippin and hoppin

Matt Zerella posted:

WSL is pretty good but it definitely has its quirks.

Ive still used an actual linux VM for DevOPSy work because docker is usually easier to use in it and Docker Desktop stinks. It's usually a 2CPU/1GB of RAM install so it doesn't take a lot of resources. But its all moot now that I use a Mac with the linuxify script.

what's that uhhhh linuxify script do? what kinda mac you got that there VM on?

I'm really new to macOS, got the Air with the M2 last September because I'm down with ARM and was stoked on dat silicon, and I've never used ARM processing on anything outside of Android phones. Thought it would be cool to VM an ARM Linux server, but I have no idea what I'm doing really, or if it's even a VM server at this point, or just VM Ubuntu desktop installed over ARM server.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Trollipop posted:

what's that uhhhh linuxify script do? what kinda mac you got that there VM on?

I'm really new to macOS, got the Air with the M2 last September because I'm down with ARM and was stoked on dat silicon, and I've never used ARM processing on anything outside of Android phones. Thought it would be cool to VM an ARM Linux server, but I have no idea what I'm doing really, or if it's even a VM server at this point, or just VM Ubuntu desktop installed over ARM server.

I’m still on x86 for my work laptop, but this is the script:

https://github.com/darksonic37/linuxify

BlankSystemDaemon
Mar 13, 2009



:jail:: "replacing pre-installed BSD programs with their preferred GNU implementation"

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

BlankSystemDaemon posted:

:jail:: "replacing pre-installed BSD programs with their preferred GNU implementation"

This exactly.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

BlankSystemDaemon posted:

:jail:: "replacing pre-installed BSD programs with their preferred GNU implementation"

if they weren’t old as balls i wouldnt have to do this.

E: and nothing's getting replaced, it's just PATH manipulation.
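(i.e. the standard Homebrew arrangement - the GNU tools live in a gnubin directory you prepend to PATH, and the BSD ones stay where they were:)
code:
brew install coreutils findutils
export PATH="$(brew --prefix)/opt/coreutils/libexec/gnubin:$PATH"
export PATH="$(brew --prefix)/opt/findutils/libexec/gnubin:$PATH"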

Matt Zerella fucked around with this message at 17:17 on Jun 11, 2023

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

Matt Zerella posted:

if they weren’t old as balls i wouldnt have to do this.

E: and nothing's getting replaced, it's just PATH manipulation.

Oldness is only part of it though, at least for me.

There are differences in behavior and CLI switches for some of the GNU variants.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

rufius posted:

Oldness is only part of it though, at least for me.

There are differences in behavior and CLI switches for some of the GNU variants.

That too, “find” was a big problem for me.

  • Reply