some kinda jackal
Feb 25, 2003

 
 
I may or may not have already asked this, I don't recall, but is there a general containerization or kube megathread for questions or is this it?


some kinda jackal
Feb 25, 2003

 
 
For whatever reason I didn't think to check in the scary programming place. Thanks!

some kinda jackal
Feb 25, 2003

 
 
I can't tell if this is really basic and I'm missing something, or if there's something complicated I just don't understand. I'm trying to create a few VLANs for different lab networks in vSphere and use VyOS to route between them all. If I were doing this physically, I'd plug everything into one switch, tag the switchports with VLANs, and give VyOS a trunk uplink with sub-VIF interfaces for each VLAN. For the life of me I can't work out how to present that trunk to a single VyOS VM.

I have a vSwitch with a number of networks attached to it, each with a different VLAN, but when I set up the VyOS VM I have to choose an actual network for each NIC; I can't just say "form a trunk and I'll take care of the routing myself."

I feel like five years ago I would have known how to do this with my eyes closed, and now I'm not even sure what to google :|


e: Oh wait, do I just create a port group on that vSwitch with the VLAN ID set to 4095 (All), so it sees the tagged traffic from the other port groups/networks on that vSwitch, then uplink the VyOS guest to that port group?
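If the trunk approach pans out, the VyOS side is just VLAN sub-interfaces (vifs) on the trunked NIC. A minimal sketch, assuming eth1 is the vNIC attached to a trunking port group (VLAN ID 4095 in vSphere) and VLANs 10/20 with those subnets are made-up lab networks:

```
configure
set interfaces ethernet eth1 vif 10 address 10.0.10.1/24
set interfaces ethernet eth1 vif 20 address 10.0.20.1/24
commit
save
```

Each vif tags and untags its own VLAN, so VyOS routes between them exactly like sub-interfaces on a physical trunk port.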

some kinda jackal fucked around with this message at 23:38 on Jun 8, 2021

some kinda jackal
Feb 25, 2003

 
 
Thank YOU! That did the trick.

some kinda jackal fucked around with this message at 23:49 on Jun 8, 2021

some kinda jackal
Feb 25, 2003

 
 
Even older affordable enterprise equipment starting to fall off official support lists is going to be weird. I don't really plan to move up from 6.7u3 any time soon. Not that I'm worried there'll be any major incompatibility, but I guess the further along we go, the more likely that becomes. Admittedly, with this kind of hardware the biggest issue is lack of vendor support if anything goes wrong, which is a non-issue for all but a vanishingly small subset of homelabbers.

Proxmox seemed to do the job, but I just couldn't adapt to the "proxmox way" of doing things. Honestly, if I were coming in fresh it would probably be fine, but at this point I want to spend as little time as possible learning underlying infrastructure or getting "used" to a new way of doing things; I just want to do things. So muscle memory is my enemy here, I guess, until something forces me off VMware's platform.

some kinda jackal
Feb 25, 2003

 
 

CommieGIR posted:

The Ivy Bridge units actually don't consume much for a 1U/2U; I have one that idled at about 300 watts.

Agreed. My R620, with two 20-thread E5-2660 Xeons, 128 GB of RAM, six or eight spinning 10k 2.5" drives, and all VMs idle, is currently sitting at 168W. I'm pretty sure I have it in its most conservative power mode, but it's never felt slow or lacking. My workloads are all idle right now, so I'm sure it bounces up under load, but I'll take that 168W idle any day.

I’m wondering whether switching to SSDs would have a negligible effect on the idle wattage. Those motors have to account for a bit of that, right? :haw:

some kinda jackal fucked around with this message at 21:42 on Aug 19, 2021

some kinda jackal
Feb 25, 2003

 
 
Oh, you want VMware; I was going to offer a comedy Azure Stack Hub suggestion.

some kinda jackal
Feb 25, 2003

 
 
All joking aside, Azure Stack Hub kind of rules, but I also love outsourcing all the pain to vendors and I don't pay the bills out of my own pocket so gently caress it :cool:

some kinda jackal
Feb 25, 2003

 
 
I just shut down my VMware homelab server: I can't justify the energy expense, and it takes like 15 minutes to boot, so I can't exactly bring it up at a moment's notice to try something.

Switching to an old Dell compact desktop with an i7 and a bunch of RAM, hopefully enough to run a few KVM instances. I've been out of the KVM game, so two questions:

Is there a good minimal-footprint Linux recommended as a KVM host? I used to just throw CentOS minimal at low-footprint VMs; not sure if there's anything more suitable or KVM-specific these days. And,

What's the hotness for KVM web management? I can create and launch VMs from the CLI, that's fine, but this is a headless desktop, so ideally I need some way to interact with the VMs before they're accessible over the network. This is just for "oh, I need a Windows VM for 45 minutes to try something," so I'm not looking to build a whole automated provision-to-network-available deployment pipeline here.
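On the web-management question, one option that fits a headless libvirt host is Cockpit with its machines plugin, which gives you a browser UI including the guest's graphical console. A sketch, assuming a Fedora/CentOS-style host (package names vary by distro):

```
sudo dnf install cockpit cockpit-machines
sudo systemctl enable --now cockpit.socket libvirtd
# then browse to https://<host>:9090 and log in with a local account
```

The other low-effort route is virt-manager on your desktop pointed at the host over SSH (connection URI qemu+ssh://user@host/system), if you'd rather not run anything web-facing on the box itself.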

some kinda jackal
Feb 25, 2003

 
 
My prior experience with Proxmox was mixed. I think I didn't give it a chance because it "wasn't vSphere," but now that I'm looking for just a barebones thing it might be a better option. All the same, I think I'll try to roll my own per Pablo's advice; that's kind of what I'm envisioning, thanks. I also literally want to think about this as little as possible, and if I didn't have the hardware on hand I'd honestly probably be better off just getting a few micro EC2 instances.


some kinda jackal
Feb 25, 2003

 
 
Maybe naively, I've always considered the LXC vs Docker thing this way:

- Docker: single application, a unitasker. Contains (or SHOULD contain) the absolute bare minimum required to perform its job.
- LXC: a "containerized" Linux distribution. A whole working "VM" without the overhead of a hypervisor and everything it needs to emulate or virtualize. Not necessarily unitasking; you're bringing up a separate distro.

I'm still feeling my way around the container world, but that's how I see it. I stand ready to be corrected.

Re: KVM

IIRC KVM gives you hardware-assisted virtualization when the guest architecture matches the host and the CPU supports it (VT-x/AMD-V), so guest code runs directly on the processor. QEMU can do instruction translation (TCG) for non-native architectures, but x86_64-on-x86_64 should be direct hardware virtualization unless you have VT-x disabled in the firmware or something. The running process may show up as qemu-kvm or qemu-system-x86_64, but that shouldn't mislead you into thinking QEMU is doing any instruction translation there; it's mostly handling device emulation and I/O :)

Same caveat as above, standing ready to be corrected.
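A quick way to check whether the CPU actually exposes hardware virtualization (the VT-x/AMD-V part) is to look for the vmx (Intel) or svm (AMD) flags in /proc/cpuinfo. A small sketch; the helper name check_hw_virt is made up, and it takes a file argument so you can point it at /proc/cpuinfo on a real host:

```shell
#!/bin/sh
# Report whether a cpuinfo-style file advertises hardware virtualization
# by looking for the vmx (Intel VT-x) or svm (AMD-V) CPU flags.
check_hw_virt() {
    if grep -Eq '(^| )(vmx|svm)( |$)' "$1"; then
        echo supported
    else
        echo unsupported
    fi
}

# usage on a real host: check_hw_virt /proc/cpuinfo
```

If the flag is missing, the kvm kernel module won't be usable and QEMU would have to fall back to pure TCG emulation.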

some kinda jackal fucked around with this message at 11:43 on Apr 12, 2022
