|
10sec of google shows someone was able to get 20G speeds between vms on the same host using VMXNET3 back in 2020.
|
# ? Apr 17, 2023 21:20 |
|
|
Shaocaholica posted:10sec of google shows someone was able to get 20G speeds between vms on the same host using VMXNET3 back in 2020. Looks like it's using the netvsc driver and ethtool shows it claiming a 10gbit/sec link speed. I'd imagine maybe some of the ones that emulate actual network cards might be limited to the claimed speed but if you're able to use a hypervisor's native drivers it should be able to go as fast as your system allows.
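A quick way to see the gap between the advertised number and reality is to compare what ethtool reports against an actual iperf3 run between the two guests. A sketch, with placeholder interface name and peer address:

```shell
# What the paravirtual driver claims (often a fixed 10000Mb/s):
ethtool eth0 | grep Speed

# What the host can actually move between two co-resident guests
# (start `iperf3 -s` in the other VM first; 10.0.0.2 is a placeholder):
iperf3 -c 10.0.0.2 -t 10 -P 4
```

Between VMs on the same host the iperf3 figure usually lands well above the "link speed", since the packets never touch a physical wire.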
|
# ? Apr 17, 2023 22:02 |
Shaocaholica posted:How does VM network speed work for VMs running on the same host? Are connections capped to virtual standards like 1G, 10G? Can you have as fast as possible between VMs running on the same host? How fast can that be?

On FreeBSD with netmap/vale and bhyve using ptnet(4), I've seen north of 40Gbps between two guests on fairly old server hardware.

SamDabbers posted:Semi-related: is there any benefit for east-west traffic between VMs on the same box to use SR-IOV and let the NIC switch the packets instead of the CPU? Seems like there'd be less CPU load at the expense of a round trip over PCIe, and would also be dependent on the internal switching capacity of the particular NIC.

Getting SR-IOV to work is another story entirely, though - it's even managed to not work on Supermicro, which is one of the vendors I usually recommend as they're the least-poo poo. If you can get it working though, it's absolutely preferred over any other solution.
|
|
# ? Apr 17, 2023 22:17 |
|
SR-IOV really feels redundant when you can get servers with multiple 40Gb/s DAC interconnects and switches that will just handle that with no issues. IMO getting it working well feels like diminishing returns, similar to switching storage adapters from LSI Logic SAS to Paravirtual SCSI.
|
# ? Apr 19, 2023 21:21 |
I can't speak to any other thin-virtualization solutions, but with FreeBSD's per-jail fully isolated netstack, it's loving great that you can give each jail its own virtual function without having to do any kind of software switching, especially because it's easy to have more jails than there are physical ports you can commonly fit in a server. 6x 4-port PCIe NICs only give 24 ports, and the 6-to-7-port NICs don't tend to have very good chips.
|
|
# ? Apr 19, 2023 23:06 |
|
Those weirdos two floors up are introducing lots of Windows Servers into their infrastructure, for Hyper-V, to run Docker containers. On first glance, this seems stupid as gently caress. Why not just use Linux hosts? Am I wrong about this or not?
|
# ? May 15, 2023 22:02 |
|
Maybe if they were deploying Azure Stack HCI, it'd make a bit more sense... but wtf.
|
# ? May 15, 2023 22:06 |
|
Combat Pretzel posted:Those weirdos two floors up are introducing lots of Windows Servers into their infrastructure, for Hyper-V, to run Docker containers. On first glance, this seems stupid as gently caress. Why not just use Linux hosts? Am I wrong about this or not? Are they doing Linux containers or Windows containers? Cause Docker supports both, and you can't do Windows containers on Linux afaik
|
# ? May 15, 2023 23:01 |
|
Yeah makes far more sense to use Linux for that
|
# ? May 16, 2023 04:17 |
WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative.
|
|
# ? May 17, 2023 08:46 |
|
Combat Pretzel posted:Those weirdos two floors up are introducing lots of Windows Servers into their infrastructure, for Hyper-V, to run Docker containers. On first glance, this seems stupid as gently caress. Why not just use Linux hosts? Am I wrong about this or not? Needing to run legacy .NET Framework payloads rather than .NET Core is the only useful scenario for a setup like that.
|
# ? May 17, 2023 09:02 |
|
BlankSystemDaemon posted:WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative. That and familiarity and gaming.
|
# ? May 17, 2023 12:28 |
|
BlankSystemDaemon posted:WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative. That’s interesting! How much of a difference has it made?
|
# ? May 17, 2023 13:55 |
|
BlankSystemDaemon posted:WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative. lol no it didn't. The only thing that WSL did was make it so that I don't have to run a Linux VM somewhere/locally in order to do Linux development/other Linux things. It's a convenience at most but hardly a game changer. See also: all the devs running MacOS.
|
# ? May 17, 2023 14:47 |
|
WSL does some very weird networking, which means IPv6 doesn't work out of the box; it's annoying, and it would be nice to just have the WSL instance grab another address when it's opened.
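For what it's worth, newer WSL builds have a mirrored networking mode that gives the instance the host's addresses, including IPv6. A sketch of the setting (assumes Windows 11 22H2 or later and a recent WSL release; older setups are stuck with the NAT'd virtual switch):

```ini
; %UserProfile%\.wslconfig
[wsl2]
networkingMode=mirrored
```

After a `wsl --shutdown` and restart, the instance should share the host's interfaces rather than sitting behind the NAT network.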
|
# ? May 17, 2023 16:07 |
|
So I have an old laptop that’s a first gen i7 (haswell?) with 16gb of ram and an add. Can I run proxmox on it? It has virtualization extensions and such in the bios and they work just fine. I’ve got fedora on there now with kvm. Can I multihome the Ethernet port and run each VM with its own IP if I want, using proxmox? Also, would I run containers directly on proxmox or create a VM and run them there? This is all homelab crap so it doesn’t have to be super stable or anything. Sorry for the random questions. Just trying to sanity check my plan before moving forward
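On the multihoming part: Proxmox handles this with a Linux bridge, so each guest's virtual NIC attaches to the bridge and gets its own MAC and IP on your LAN. A sketch of the default /etc/network/interfaces it generates (interface name and addresses here are placeholders for whatever your box uses):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

Guests bridged to vmbr0 then look like separate machines to the rest of the network.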
|
# ? May 17, 2023 16:43 |
Subjunctive posted:That’s interesting! How much of a difference has it made?

I also have a sneaking suspicion that there's folks who don't want to admit to using it - which is silly, because I would hope we as a people would've gotten over tribalism like that.

I know of at least a couple fairly sizable development teams who started doing stuff using it as of WSL1 (which was directly inspired by the Linuxulator in FreeBSD, which Microsoft developers noticed when they were porting dtrace from FreeBSD) - and one of them has switched to WSL2. I doubt anyone has any numbers though, so it's hard to quantify.

Thanks Ants posted:WSL does some very weird networking which means IPv6 doesn't work out of the box, it's annoying and it would be nice to just have the WSL instance grab another address when it's opened.

namlosh posted:So I have an old laptop that’s a first gen i7 (haswell?) with 16gb of ram and an add.

IOMMU virtualization (the ability to pass PCI hardware devices through to the guest) was present in some Sandy Bridge CPUs, but was default by Ivy Bridge. First-gen i7 doesn't mean much, so you'll want to check Intel Ark for the specific model. The Intel terms for the above generic names are VT-x, vAPIC, and VT-d.

BlankSystemDaemon fucked around with this message at 18:05 on May 17, 2023 |
|
# ? May 17, 2023 18:01 |
|
Pile Of Garbage posted:lol no it didn't. The only thing that WSL did was make it so that I don't have to run a Linux VM somewhere/locally in order to do Linux development/other Linux things. It's a convenience at most but hardly a game changer. See also: all the devs running MacOS.

WSL1 was more like a reverse-WINE where the Linux apps are actually running on the Windows kernel pretending to be Linux, which it did shockingly well, but IIRC there were severe performance issues for certain use cases due to the different ways Linux and Windows handle disk access that weren't considered solvable with that model. It also would have required continuous development to maintain parity with the real kernel, where the WSL2 model gets its kernel updates "for free" from the upstream distros as long as Hyper-V doesn't break anything.

I don't know how it's affected others' use, but I can say that once WSL gained X support it started to impact how often my dual boot machines were on Linux, and when it got GPU support I basically stopped dual booting. I've done it twice for WiFi injection shenanigans and twice just to run updates (one of those times the nVidia driver hosed up and I got a bonus hour of troubleshooting that with my updates...)

The comparison to Mac-using devs I agree with. I've used Mac laptops on and off over the years and always enjoyed being able to have both commercial software and my favorite *nix tools side by side in a reasonably well integrated manner. WSL brought the same concept to Windows.
|
# ? May 17, 2023 18:23 |
|
BlankSystemDaemon posted:Hardware-assisted virtualization (and more importantly Second-Level Address Translation) was added in Nehalem and Unrestricted Guest mode (ie. being able to boot in 16-bit real-mode, for DOS and retro-software compatibility) was added in Ivy Bridge, along with virtualization of interrupts (also responsible for the thing that during boot selects a system processor based on the value of the EAX registers of each core - all other cores become service processors).

Thanks so much for the reply. I feel really dumb (Haswell was a Pentium 4 wasn't it, ffs). In order to not be dumb, I went ahead and confirmed.

code:
Intel® Virtualization Technology (VT-x) ‡                  Yes
Intel® Virtualization Technology for Directed I/O (VT-d) ‡ No
Intel® VT-x with Extended Page Tables (EPT) ‡              Yes
from here: https://ark.intel.com/content/www/us/en/ark/products/52219/intel-core-i72630qm-processor-6m-cache-up-to-2-90-ghz.html

So in conclusion, it's safe to say it won't be ideal and I may not be able to pass through peripherals to the underlying VMs, but will Proxmox VE 7.4 even run on it?
|
# ? May 17, 2023 18:41 |
|
wolrah posted:WSL2 is just a Linux VM running on Hyper-V with really tight host integration. lmao I know what WSL is. You said "WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative." which is silly because no one ever said "oh I can't use Linux on Windows? guess I better abandon Windows" they just spun up a Linux VM somewhere and SSH into it from their Windows machine. That's why I mentioned macOS because it's the same deal. Perhaps you meant to say that WSL has made Windows more appealing/convenient, that would make sense.
|
# ? May 17, 2023 18:48 |
wolrah posted:WSL2 is just a Linux VM running on Hyper-V with really tight host integration.

Not by much, but it's pretty consistent. I have a hypothesis that this is down to FreeBSD building without SSE/MMX optimizations for the binaries (it targets i686 aka Pentium Pro), since most of the data that's being handled by the kernel is lots of small bits of data consisting of a large amount of instructions where the individual instruction latencies start adding up. Of course this changes once the newer architectures start reducing instruction latencies (which AMD have been pretty good about, and even Intel is catching up on) - but then you need to build for that specific micro-architecture, and then it won't work on anything else.

namlosh posted:Thanks so much for the reply. I feel really dumb (Haswell was a Pentium 4 wasn't it, ffs).

BlankSystemDaemon fucked around with this message at 19:03 on May 17, 2023 |
|
# ? May 17, 2023 18:54 |
|
Pile Of Garbage posted:lmao I know what WSL is. quote:You said "WSL has done wonders for Microsoft retaining people on Windows, rather than switching to something alternative."
|
# ? May 17, 2023 22:17 |
|
WSL is pretty good but it definitely has its quirks. I've still used an actual Linux VM for DevOPSy work because Docker is usually easier to use in it and Docker Desktop stinks. It's usually a 2CPU/1GB of RAM install so it doesn't take a lot of resources. But it's all moot now that I use a Mac with the linuxify script.
|
# ? May 18, 2023 13:30 |
|
I have an ESX 8.0 machine that has 4x drives on a raid controller with the raid controller passthru'ed to a VM. I installed ESX onto a USB drive. I had to use an NFS mount for the VM guests. Can I put another USB drive in there and use that as my VM guest datastore? How do I do that? I think I tried to put a USB drive in before and I couldn't get it to show up as a datastore.
|
# ? May 23, 2023 05:58 |
|
Boner Wad posted:I have an ESX 8.0 machine that has 4x drives on a raid controller with the raid controller passthru'ed to a VM. I installed ESX onto a USB drive. I had to use an NFS mount for the VM guests. Can I put another USB drive in there and use that as my VM guest datastore? How do I do that? I think I tried to put a USB drive in before and I couldn't get it to show up as a datastore. I mean, you can do USB passthrough, if that's the goal, but I wouldn't try to host anything data-intensive on a USB drive.
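As for why the drive never shows up as a datastore candidate: ESXi's USB arbitrator claims USB devices for guest passthrough by default. The commonly cited (unsupported) workaround is to stop it, sketched here from memory, so verify on a lab box before trusting data to it:

```shell
# Unsupported: stop the USB arbitrator so ESXi can see the disk itself
/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off   # persist across reboots
# The USB disk should now be visible to the host client / partedUtil
# and can be formatted as VMFS. Note this disables USB passthrough
# to all guests on the host.
```

Even then, a USB-attached VMFS datastore is slow and fragile; NFS from something more robust is usually the better answer.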
|
# ? May 24, 2023 23:49 |
|
Hi all it's the worst poster here. Quick question, is the general consensus for VM CPU topology configuration still "flat-and-wide", as in single socket with whatever number of cores you require? As I understand the original reasoning was that you ideally want to keep VMs on a single physical NUMA node so as to avoid memory latency introduced from having to traverse QPI/HyperTransport and to ensure that the hypervisor CPU scheduler isn't waiting for cores to become available on more than one socket. Is that still the general consensus or has VMware introduced some magic that makes it not matter any more? Btw this is for an environment with vSphere 7.0.3/8.0.0 hosts.
|
# ? May 26, 2023 12:01 |
|
It's a trade-off. You can hot add sockets, but you can't hot change cores per socket, so if you single socket, you lose a lot of flexibility. Newer versions of vCenter also automatically construct a vNUMA topology behind the scenes that makes it a non-issue for most commonly encountered sizes. The last time I asked our TAM about it, the guidance we got was to feel free to use lots of sockets and one core per socket, up until you might be using more cores than a single physical CPU could accommodate. If your VM might not fit in a single socket, you want to specify a number of sockets that fits your hosts, and a cores-per-socket figure that also fits your hosts.
|
# ? May 26, 2023 19:37 |
|
Zorak of Michigan posted:It's a trade-off. You can hot add sockets, but you can't hot change cores per socket, so if you single socket, you lose a lot of flexibility. Newer versions of vCenter also automatically construct a vNUMA topology behind the scenes that makes it a non-issue for most commonly encountered sizes. The last time I asked our TAM about it, the guidance we got was to feel free to use lots of sockets and one core per socket, up until you might be using more cores than a single physical CPU could accommodate. If your VM might not fit in a single socket, you want to specify a number of sockets that fits your hosts, and a cores-per-socket figure that also fits your hosts. Cheers, thanks for that. I did some reading and understand things better now. vNUMA seems to be the way to go, however there is a caveat in that it's only enabled by default on VMs with more than eight vCPUs. Of course you can still enable vNUMA on smaller VMs with the numa.vcpu.min setting. Something to be aware of I guess.
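For reference, a sketch of how that looks as VM advanced settings in the .vmx (the option names are real vSphere settings; the values here are just illustrative):

```
numa.vcpu.min = "4"                 # expose vNUMA to VMs with 4+ vCPUs (default threshold is 9)
numa.vcpu.maxPerVirtualNode = "8"   # optional: cap vCPUs per virtual NUMA node
```

Worth noting that enabling CPU hot-add disables vNUMA for that VM entirely, which is another reason the hot-add flexibility mentioned above is a trade-off.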
|
# ? May 26, 2023 23:44 |
|
Pile Of Garbage posted:Hi all it's the worst poster here. Quick question, is the general consensus for VM CPU topology configuration still "flat-and-wide", as in single socket with whatever number of cores you require? As I understand the original reasoning was that you ideally want to keep VMs on a single physical NUMA node so as to avoid memory latency introduced from having to traverse QPI/HyperTransport and to ensure that the hypervisor CPU scheduler isn't waiting for cores to become available on more than one socket. My understanding is that the esxi scheduler really doesn't care whether it's one-socket-many-cores or many-sockets-few-cores until you get into the larger VMs with NUMA rearing its head. The downside as posted is that hot-add doesn't work at all/well once you do that. What I have been told is that the operating system (and potentially applications like SQL Server) will schedule things differently depending on what it sees - so threads/work will be scheduled differently/less optimally if the OS sees a sixteen-socket-one-core-per-socket machine rather than a one-socket-sixteen-core machine. I'm not sure it makes much difference because whenever I recommend it to our clients I get crickets back, so *shrug*.
|
# ? May 27, 2023 03:05 |
|
What is everyone's view on Veeam and Russia? We are testing image backup systems and we dismissed Veeam because of the Russian origins, but maybe it is separated enough nowadays? It is owned by the venture capital firm Insight Partners from New York, and in top management there seems to be only the R&D chief who might be from Russia, based on the name.
|
# ? May 31, 2023 12:46 |
|
Saukkis posted:What is everyone's view on Veeam and Russia? We are testing image backup systems and we dismissed Veeam because of the Russian origins, but maybe it is separated enough nowadays? It is owned by venture capitalist Insight Partners from New York and on the top management there seems to be only the R&D chief who might be from Russia based on the name. Veeam is largely US-based now; their HQ is in the US and they are primarily registered as a US business entity.
|
# ? May 31, 2023 15:56 |
|
Veeam's forums and QA staff largely consisted of men who are now AFK fighting to protect their families and towns, so it was never as bad as you may think.
|
# ? Jun 1, 2023 17:14 |
Oops, I did an apt purge command on my bare metal server and didn't look carefully at what it was gonna get rid of. Now it's for sure gonna stop working after the next boot because it got rid of systemd and a bunch of other critical components. DNS isn't even working on it anymore so I can't even try to reinstall it mid-flight. Luckily my docker containers still have the dns settings from when they were spun up, so my home lab services haven't died yet, though as soon as the system reboots it's donezo. I do have an image backup of the server from a couple of months ago, and I have the daily backups of my containers therein, so nothing's gone regardless. I guess now's a good time to switch over to a virtualized install in proxmox rather than a recreation of the bare metal one. It'll make recovering from catastrophic fuckups like that easier, with automatic snapshot backups that I can cart off site.
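The habit that prevents this particular fuckup is a simulated run first; apt's -s/--simulate flag prints what would happen without touching anything (package name below is just an example):

```shell
# Dry-run the purge; nothing is actually removed.
# Simulated removals show up as "Remv"/"Purg" lines in the output:
apt-get -s purge some-package

# Only run the real thing once the list looks sane:
# apt-get purge some-package
```

apt will also demand you type a confirmation phrase before removing essential packages, but simulating first means you never get that far by accident.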
|
|
# ? Jun 7, 2023 01:38 |
|
Matt Zerella posted:WSL is pretty good but it definitely has its quirks. what's that uhhhh linuxify script do? what kinda mac you got that there VM on? I'm really new to MacOS, got the air with the M2 last September because I'm down with ARM and was stoked on dat silicon and I've never used ARM processing on anything outside of android phones. Thought it would be cool to VM an ARM linux server, but I have no idea what I'm doing really, or if its even a VM server at this point, or just VM ubuntu desktop installed over arm server.
|
# ? Jun 7, 2023 10:50 |
|
Trollipop posted:what's that uhhhh linuxify script do? what kinda mac you got that there VM on? I’m still on x86 for my work laptop, but this is the script: https://github.com/darksonic37/linuxify
|
# ? Jun 7, 2023 22:49 |
: "replacing pre-installed BSD programs with their preferred GNU implementation"
|
|
# ? Jun 7, 2023 23:01 |
|
BlankSystemDaemon posted:: "replacing pre-installed BSD programs with their preferred GNU implementation" This exactly.
|
# ? Jun 11, 2023 16:37 |
|
BlankSystemDaemon posted:: "replacing pre-installed BSD programs with their preferred GNU implementation" if they weren't old as balls i wouldn't have to do this. E: and nothing's getting replaced, it's just PATH manipulation. Matt Zerella fucked around with this message at 17:17 on Jun 11, 2023 |
# ? Jun 11, 2023 17:13 |
|
Matt Zerella posted:if they weren’t old as balls i wouldnt have to do this. Oldness is only part of it though, at least for me. There are differences in behavior and cli switches for some of the GNU variants.
|
# ? Jun 11, 2023 18:03 |
|
|
# ? May 9, 2024 11:29 |
|
rufius posted:Oldness is only part of it though, at least for me. That too, “find” was a big problem for me.
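The in-place-edit flag on sed is probably the classic example of those divergences: GNU sed takes an optional suffix glued onto -i, while BSD/macOS sed requires a separate (possibly empty) suffix argument, so the same one-liner breaks on one platform or the other:

```shell
printf 'jail\n' > demo.txt

# GNU sed (Linux, or Homebrew's gsed): bare -i works
sed -i 's/jail/zone/' demo.txt

# The stock macOS/BSD sed would instead need an explicit empty suffix:
#   sed -i '' 's/jail/zone/' demo.txt

cat demo.txt   # -> zone
rm demo.txt
```

find has similar traps: GNU find tolerates omitting the path (`find -name foo`), while BSD find insists the path comes first.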
|
# ? Jun 11, 2023 18:04 |