Actuarial Fables
Jul 29, 2014

Taco Defender

movax posted:

So I've given up on ESXi and trying out Proxmox now.

One thing I can't figure out... in ESXi, you spawned vmkernel NICs for access to management interface. How does that work on Proxmox? I can't seem to find an obvious place to control where the management interface listens.

It is a DeskMini 310 w/ single NIC I'm finally going to colo, so in my ideal world, I'd like to:

Untagged <--> eno1 <--> vmbr0 <---> pfSense VM WAN Interface
VLAN 10 <--> eno1.10 <--> vmbr0.10 <--> Proxmox management interface

And then vmbr1 is Proxmox, pfSense LAN, TrueNAS and my Linux VM. This will stop me from getting locked out of Proxmox if pfSense is down / let me use it for initial setup at home.

Essentially, any IP address you configure under Networking is a management interface.

The documentation has an example of what you're trying to do with the NIC, under Host System Administration > Network Configuration > 802.1q. I don't think you can make this kind of configuration through the GUI, so you'll have to edit /etc/network/interfaces manually.

code:
Example: Use VLAN 10 for the Proxmox VE management IP with VLAN aware Linux bridge

auto lo
iface lo inet loopback

iface eno1 inet manual


auto vmbr0.10
iface vmbr0.10 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
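For the vmbr1 half of the plan (Proxmox, pfSense LAN, TrueNAS, and the Linux VM on an internal network), a bridge with no physical port attached would look something like this; the address is a placeholder, not something from your setup:

code:
```
auto vmbr1
iface vmbr1 inet static
        address  192.168.50.2
        netmask  255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```

With bridge-ports none the bridge never touches eno1, so VMs attached to vmbr1 (and the host itself, via that address) can still reach each other even when pfSense is down.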

Actuarial Fables fucked around with this message at 09:44 on Jan 12, 2022


BlankSystemDaemon
Mar 13, 2009



Mr Shiny Pants posted:

Clustering physical machines and running microservices on top of them is Erlang in a nutshell. Erlang is just one VM (BEAM) that can run actors (microservices that pass messages) with supervisors (if you use OTP). You need to program in Erlang but from a technical standpoint they are roughly comparable at what they accomplish. Erlang is elegant though (like really elegant), Kubernetes is not IMHO.

All that is old is new again.

It was more of a reply to: This Kubernetes thing, at least outside of the alluded "hyperscaling setting", feels like someone's pulling a huge practical joke on me/us.
I got that feeling as well. :)
This is just rumours, but with Facebook having moved WhatsApp from the FreeBSD+Erlang setup that Jan Koum originally set up to their own in-house PHP-on-Linux solution with Twine (their scale-out container-management solution), they've supposedly had to throw 100-200x the number of machines at it to handle the same level of traffic.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

BlankSystemDaemon posted:

This is just rumours, but with Facebook having moved WhatsApp from the FreeBSD+Erlang setup that Jan Koum originally set up to their own in-house PHP on Linux solution with Twine (their scale-out container-management solution), they've supposedly had to throw 100-200x the number of machines to handle the same level of traffic.

It's like Microsoft buying HoTMaiL all over again

movax
Aug 30, 2008

Actuarial Fables posted:

Essentially, any IP address you configure under Networking is a management interface.

The documentation has an example of what you're trying to do with the NIC, under Host System Administration > Network Configuration > 802.1q. I don't think you can make this kind of configuration through the GUI, so you'll have to edit /etc/network/interfaces manually.

code:
Example: Use VLAN 10 for the Proxmox VE management IP with VLAN aware Linux bridge

auto lo
iface lo inet loopback

iface eno1 inet manual


auto vmbr0.10
iface vmbr0.10 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes

Huhhh, ok. I saw that in the docs and was confused as to how it had limited the management IP listening but I guess the reason I was confused is that it... doesn't.

What do people running Proxmox in production do then? Actually have multiple NICs and split them / assign directly to VMs? This NIC will be staring down the barrel of WAN, so maybe I need to set up the Proxmox firewall to only listen to certain IPs / subnets?

I suppose at the colo center it won't be DHCP'd anyways (versus me setting the appropriate IP in pfSense), so it won't pick up an address there, but it will be listening (by default) on vmbr1.
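If you do go the host-firewall route, the datacenter-level config is a plain text file; a hypothetical sketch (subnet and ports here are examples for a management VLAN, not a tested ruleset):

code:
```
# /etc/pve/firewall/cluster.fw
[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -source 10.10.10.0/24 -p tcp -dport 8006  # web GUI
IN ACCEPT -source 10.10.10.0/24 -p tcp -dport 22    # SSH
```

Note that once the firewall is enabled the default input policy is DROP, so get the management-subnet rules in place before flipping enable: 1, or you can lock yourself out.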

Actuarial Fables
Jul 29, 2014

Ok so posting at 3am apparently means I don't read your full question.

https://pve.proxmox.com/pve-docs/pveproxy.8.html

You're able to configure which IP address the management service binds on (default=all) and which addresses it allows/denies in /etc/default/pveproxy
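A minimal /etc/default/pveproxy sketch along those lines (addresses are examples):

code:
```
# Only answer to the management subnet; refuse everything else.
ALLOW_FROM="10.10.10.0/24"
DENY_FROM="all"
POLICY="allow"

# Newer releases can also bind the proxy to a single address
# instead of listening on all of them:
# LISTEN_IP="10.10.10.2"
```

Restart the service (systemctl restart pveproxy) after editing.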

underlig
Sep 13, 2007
I didn't know 2012 R2 with Hyper-V was that common anymore, but apparently KB5009624 breaks Hyper-V:
"could not be started because the hypervisor is not running"

We're running ESXi instead, but that's also making the beginning of tyol 2022 hard.

BlankSystemDaemon
Mar 13, 2009



Bob Morales posted:

It's like Microsoft buying HoTMaiL all over again
You're not wrong.
But wait, does this mean Facebook is going to have an apparent face turn like Microsoft did, before doing a slow heel turn like Microsoft is doing?

Bob Morales
Aug 18, 2006



underlig posted:

I didn't know 2012 R2 with Hyper-V was that common anymore, but apparently KB5009624 breaks Hyper-V:
"could not be started because the hypervisor is not running"

We're running ESXi instead, but that's also making the beginning of tyol 2022 hard.

Hyper-V isn't already broken?

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Bob Morales posted:

Hyper-V isn't already broken?

It works well enough for what it is, but it's not a great replacement for ESXi or XenServer

Bob Morales
Aug 18, 2006



I have two Hyper-V VMs in the same cluster that both had drive errors (VHDs on the same physical drive array)

One Linux, one Windows (actually our main file server). Had to do file system checks and reboot them, and everything seems to be fine now. A couple of corrupt Excel files and such were reported, but I was able to pull them from backups.

Anything that could have caused this? They are running on a Dell VRTX that isn't reporting any issues with the storage systems. Weird coincidence?

Mr Shiny Pants
Nov 12, 2012

Bob Morales posted:

I have two Hyper-V VM's in the same cluster that both had drive errors (VHD's on the same physical drive array)

One Linux, one Windows (actually our main file server). Had to do file system checks and reboot them and everything seems to be fine now. Do have a couple corrupt Excel files and such reported but was able to pull them from backups.

Anything that could have caused this? They are running on a Dell VRTX that isn't reporting any issues with the storage systems. Weird coincidence?

Running ReFS by any chance? There are some problems with the latest round of updates it seems; these also include problems with Hyper-V.

Bob Morales
Aug 18, 2006



Mr Shiny Pants posted:

Running ReFS by any chance? There are some problems with the latest round of updates it seems; these also include problems with Hyper-V.

No, but thanks for the heads-up

turd in my singlet
Jul 5, 2008

DO ALL DA WORK

WIT YA NECK

*heavy metal music playing*
Nap Ghost
I have a very basic VM on my laptop for playing with Ubuntu. Recently I installed WSL which I think affected my VM somehow... when I tried to boot it back up from a suspended state I got this error message:

"The features supported by the processors in this machine are different from the features supported by the processors in the machine on which the virtual machine state was saved."

and I couldn't get back to the session I was in and had to reboot the VM.

Which isn't really a big deal, but now I can't connect to the internet from my VM anymore. Any ideas? VMware hasn't been blacklisted by the Windows firewall or anything as far as I can tell, but I'm an idiot so I can't actually tell.

Happiness Commando
Feb 1, 2002
$$ joy at gunpoint $$

WSL uses Hyper-V. Were you using Hyper-V to run your Ubuntu VM, and did the WSL installer change some Hyper-V setting or create some network bridge or something?

SlowBloke
Aug 14, 2017
If he used VirtualBox to run the VM, adding WSL2 will make it inoperable.

turd in my singlet
Jul 5, 2008


Happiness Commando posted:

WSL uses Hyper-V. Were you using Hyper-V to run your Ubuntu VM, and did the WSL installer change some Hyper-V setting or create some network bridge or something?

Hyper-V is involved according to the results that came up when I searched that error message, but if my VM uses it, it certainly wasn't on purpose, since I have no idea what Hyper-V is. The VM is just whatever happens when you use the free version of VMware and install Ubuntu from an ISO and click "Sure, whatever" on all the setup dialogs.

When I installed WSL it definitely said it was making some kind of important changes to the system and needed admin authorization and a reboot. Again, whatever the installer does if you click "Sure, whatever" on all the dialogs.

SlowBloke posted:

If he used virtualbox to run the vm, adding wsl2 will make it inoperable.

It's VMware and as far as I can tell it works fine, it's just the internet that's hosed.

Internet Explorer
Jun 1, 2005





Like was said, WSL uses Hyper-V and likely created a bridged network adapter. VMware does the same. I haven't run into what you're talking about, but my bet is that WSL changed network bridge settings in a way that messed up the VMware adapter's networking. If you do some Googling around that, I suspect you'll find something. The suggestion I have off the top of my head would be to uninstall and reinstall VMware. It might break WSL networking, it might not, but I imagine it would fix VMware networking.

wolrah
May 8, 2006
what?
Hyper-V runs at the system level and conflicts with any other virtualization engines. Because Hyper-V is required for WSL and other modern Windows technologies, this presented a problem until Microsoft introduced the Windows Hypervisor Platform APIs, which allow third-party virtualization tools like VMware or VirtualBox to work within the Hyper-V framework.

https://blogs.vmware.com/workstation/2020/05/vmware-workstation-now-supports-hyper-v-mode.html

That's why you got the alert about a different CPU type: the entire virtualization engine had been swapped out under you the moment you activated a component requiring Hyper-V.

I'd guess that the virtual network card changed as part of this and is now showing up as eth1 while all the configuration was for eth0.
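If that's what happened, pinning the guest's network config to the NIC's MAC address instead of the device name sidesteps the rename. A hypothetical netplan sketch for an Ubuntu guest (file name and MAC are placeholders — ip -br link shows the real ones):

code:
```
# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    lan0:
      match:
        macaddress: "00:0c:29:aa:bb:cc"
      set-name: lan0
      dhcp4: true
```

Then netplan apply, and the config follows the card no matter what the kernel decides to call it.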

SlowBloke
Aug 14, 2017
Unless it’s a very old version of Workstation/Player, VMware will use the Windows Hypervisor Platform (not Hyper-V itself), which is the same thing WSL2 uses. How old is it?

turd in my singlet
Jul 5, 2008

tried a few different times yesterday, restarting the machine, restarting VMware, poking around options, didn't work

opened it up today and it's fixed itself lol

must have needed to restart windows or something

turd in my singlet fucked around with this message at 18:41 on Feb 5, 2022

BlankSystemDaemon
Mar 13, 2009



The only legitimate reason to restart a computer nowadays is when the kernel binary executable has been updated.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Anyone notice when expanding a large disk (vSphere) that the VM will drop offline while vCenter/ESXi does its thing?

Expanded a disk on an older (Server 2012) file server from 6 TB to 7 TB; while it was working, it dropped off the network for almost a minute.

Internet Explorer
Jun 1, 2005





It shouldn't do that.

Moey
Oct 22, 2010

Neat, I have noticed it across all these 2012 file servers (different clusters, different storage).

They are all getting replaced, so I'll have to do some testing on the 2019 boxes.

Also, this specific VM is running on some old rear end storage (Nimble iSCSI), need to get it off of there.

lol internet.
Sep 4, 2007
the internet makes you stupid
Umm, not super familiar with VMware tools, but is there something simple that can scan ESXi/vCenter to inventory VMs and dump it to a CSV (CPU, name, IP, configuration, etc.)?

Preferably a non-installer application.

Internet Explorer
Jun 1, 2005





Yes, RVTools.
https://www.robware.net/rvtools/

Bob Morales
Aug 18, 2006



Tons of PowerShell scripts out there to do it as well

https://www.google.com/url?sa=t&rct...sIJwwD_dPCVd9Sm

bolind
Jun 19, 2005



Pillbug
As the OP is nearly a decade old, I'm taking the liberty to post dumb questions:

I work in a RHEL (OK, we're cheapskates, so Rocky Linux) environment, and I have, over the past year, introduced containers. I've based them on podman, which seems to work well enough for what we're doing: "microservices", little servers that run a simple network based service (SVN server, wiki, various web servers etc.).

First question: where does Linux Containers (LXC) fit into this, and are they worth investigating?

Secondly, we have some stuff that runs via RHEL's native virtualization tools, I believe they're based on KVM. Since we're never doing anything other than virtualizing x86_64 on x86_64, this seems like a dumb idea, as this is a proper emulator, and thus introduces quite a bit of performance penalty. Is this correct? Is there a smarter way to emulate a full machine under these circumstances?

What I'm looking for is full virtualization (own disks, own NIC(s)) but ideally no emulation.

some kinda jackal
Feb 25, 2003

 
 
Maybe naively, I've always considered the LXC vs Docker thing this way:

- Docker: Single application, unitasker. Contains (or SHOULD contain) the absolute bare minimum required to perform its job
- LXC: "Containerized" Linux distribution. A whole working "VM" without the overhead of a hypervisor and all the things it needs to emulate or virtualize. Not necessarily unitasking; you're bringing up a separate distro.

I'm still feeling my way out in the container world but that's how I see it. I stand ready to be corrected.

Re: KVM

IIRC KVM is a type 2 hypervisor if your guest architecture matches the host and the host makes provisions for virtualization. You have the option of running instruction translation for non-native architectures, but x86_64 on x86_64 should be direct passthrough virtualization unless you have VT-x disabled or something. The running process may be qemu-kvm (or I forget what it actually is) but that shouldn't mislead you into thinking qemu is doing any instruction translation :)

Same caveat as above, standing ready to be corrected.

some kinda jackal fucked around with this message at 11:43 on Apr 12, 2022

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

A container is two things:
1. A turbocharged chroot, but for all the resources (network, process tree) instead of just a filesystem, using namespaces and cgroups. Uses the same Linux kernel as the host (usually).
2. An image based software distribution standard.

LXC, Podman, Docker, etc. all do the same kind of things for #1. There’s a lot of (oss) drama around them, but it doesn’t really matter for the same basic processes (download image, start/stop process, manipulate the namespaces.) If you’re happy with podman there’s no need to look at lxc.

Because containers normally run isolated you need to provide all the executables and libraries to run within. This is done via binary images. This can be bone stock debian/rhel, or container optimized os spins that provide just the binary statically linked with the relevant libraries, or some point in between. Alpine Linux is frequently used as a lightweight option.

For standard stuff you get your container from dockerhub or other container repository. It’s (usually) a series of gzipped overlayfs images that docker pulls. It doesn’t really matter as your tool will manage it.

Images are built from Dockerfiles. Basically an rpm spec file.
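A minimal example of the spec-file analogy — the image tag and binary name here are made up:

code:
```
# Each instruction produces one image layer.
FROM alpine:3.19
COPY myservice /usr/local/bin/myservice
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/myservice"]
```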

Containers are usually* immutable.

As jackal mentioned, KVM is standard virt, not emulation. QEMU provides the fake hardware (PCI etc.) but doesn't do CPU emulation. The RHEL tools are fine.
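A quick way to check that the hardware-assisted path is actually available on a given host — just a shell sketch, nothing RHEL-specific:

code:
```shell
#!/bin/sh
# Count logical CPUs advertising hardware virtualization
# (vmx = Intel VT-x, svm = AMD-V). 0 means QEMU would fall back
# to software emulation (TCG) -- the slow path.
grep -cE '(vmx|svm)' /proc/cpuinfo || true

# /dev/kvm only appears once the kvm kernel module is loaded.
if [ -e /dev/kvm ]; then
    echo "KVM available"
else
    echo "no /dev/kvm (module not loaded, or virtualization disabled in firmware)"
fi
```

If the count is nonzero and /dev/kvm exists, an x86_64-on-x86_64 guest runs with hardware virtualization, not emulation.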

Pile Of Garbage
May 28, 2007



With VMware is there a way to exempt a VM from DRS but at the VM level instead of the cluster? I know you can do it at the cluster level but is there a tag or something I can set on a VM that will make DRS ignore it?

I'm building VM templates with Packer and as part of the build it mounts a floppy to the VM containing the autounattend.xml and some scripts. The issue is that if the VM is migrated via vMotion the floppy drive gets automatically disconnected from the VM. This either causes the build to stall or only complete partially due to missing scripts. This issue seems to arise fairly frequently because DRS appears to be hyper-aggressive on clusters I'm deploying to despite being configured with default settings.

Internet Explorer
Jun 1, 2005





Yes. Old UI, but should be fairly similar.
https://www.yellow-bricks.com/2018/03/28/disable-drs-for-a-vm/

Pikehead
Dec 3, 2006

Looking for WMDs, PM if you have A+ grade stuff
Fun Shoe

Pile Of Garbage posted:

With VMware is there a way to exempt a VM from DRS but at the VM level instead of the cluster? I know you can do it at the cluster level but is there a tag or something I can set on a VM that will make DRS ignore it?

I'm building VM templates with Packer and as part of the build it mounts a floppy to the VM containing the autounattend.xml and some scripts. The issue is that if the VM is migrated via vMotion the floppy drive gets automatically disconnected from the VM. This either causes the build to stall or only complete partially due to missing scripts. This issue seems to arise fairly frequently because DRS appears to be hyper-aggressive on clusters I'm deploying to despite being configured with default settings.

You could also put the floppy image on a shared datastore or library, so vMotioning isn't an issue.

(I believe this will work - I haven't had to mount a floppy image in... well, ever.)

SlowBloke
Aug 14, 2017

Pikehead posted:

You could also put the floppy image on a shared datastore or library, so vmotioning isn't an issue.

(I believe this will work - I haven't had to mount a floppy image in .. well, ever).

It will, I can confirm that (sigh)

Cyks
Mar 17, 2008

The trenches of IT can scar a muppet for life
In vCenter you set it up as an affinity rule.

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-FF28F29C-8B67-4EFF-A2EF-63B3537E6934.html

Specifically, a VM-to-Host group affinity rule.
Actually, just doing it manually for the individual VM as the other poster said is easier; I'm overthinking it too early in the morning.

Cyks fucked around with this message at 13:18 on Apr 15, 2022

Pile Of Garbage
May 28, 2007




That's still at the cluster level though and requires that the VM already exist. I'm deploying new VMs with Packer using the vsphere-iso builder which doesn't exactly give you options to mess with things in the environment other than the VM it is deploying.

Pikehead posted:

You could also put the floppy image on a shared datastore or library, so vmotioning isn't an issue.

(I believe this will work - I haven't had to mount a floppy image in .. well, ever).

Packer creates the .flp file on the fly and uploads it to the same datastore as the VM so it's already on shared storage. From what I observed the floppy image isn't being unmounted but rather the floppy drive is being disconnected from the VM when it vMotions. As part of the build Packer also mounts two ISOs to the VM on separate CD-ROM drives and those are unaffected by the vMotion.

Cyks posted:

In vcenter you set it up as an affinity rule.

https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-FF28F29C-8B67-4EFF-A2EF-63B3537E6934.html

Specifically VM to Host group affinity rule.
Actually just doing manual for the individual vm as the other poster said is easier I’m overthinking it too early in the morning.

Again that would require configuration at the cluster level and can't be done prior to the creation of the VM. I'd really like to avoid having to orchestrate something alongside Packer to do this.

Maybe I should just figure out what's going on with DRS. I've seen it vMotion a new VM three times in the space of two minutes basically immediately after the VM was created. Surely that shouldn't happen as Packer is just pointing at the cluster and relying on DRS initial placement to choose a host. Perhaps that initial placement recommendation from DRS is cooked?

Internet Explorer
Jun 1, 2005





Pile Of Garbage posted:

That's still at the cluster level though and requires that the VM already exist. I'm deploying new VMs with Packer using the vsphere-iso builder which doesn't exactly give you options to mess with things in the environment other than the VM it is deploying.

Packer creates the .flp file on the fly and uploads it to the same datastore as the VM so it's already on shared storage. From what I observed the floppy image isn't being unmounted but rather the floppy drive is being disconnected from the VM when it vMotions. As part of the build Packer also mounts two ISOs to the VM on separate CD-ROM drives and those are unaffected by the vMotion.

Again that would require configuration at the cluster level and can't be done prior to the creation of the VM. I'd really like to avoid having to orchestrate something alongside Packer to do this.

Maybe I should just figure out what's going on with DRS. I've seen it vMotion a new VM three times in the space of two minutes basically immediately after the VM was created. Surely that shouldn't happen as Packer is just pointing at the cluster and relying on DRS initial placement to choose a host. Perhaps that initial placement recommendation from DRS is cooked?

Sorry, I was just answering your question of can it be done and showing the setting. I assume there's a property for that that can be set during your deployment, but I don't have time to look into it now. I'm sure someone else has run into this problem before and figured out a solution.

Pikehead
Dec 3, 2006


Pile Of Garbage posted:

That's still at the cluster level though and requires that the VM already exist. I'm deploying new VMs with Packer using the vsphere-iso builder which doesn't exactly give you options to mess with things in the environment other than the VM it is deploying.

Packer creates the .flp file on the fly and uploads it to the same datastore as the VM so it's already on shared storage. From what I observed the floppy image isn't being unmounted but rather the floppy drive is being disconnected from the VM when it vMotions. As part of the build Packer also mounts two ISOs to the VM on separate CD-ROM drives and those are unaffected by the vMotion.

Again that would require configuration at the cluster level and can't be done prior to the creation of the VM. I'd really like to avoid having to orchestrate something alongside Packer to do this.

Maybe I should just figure out what's going on with DRS. I've seen it vMotion a new VM three times in the space of two minutes basically immediately after the VM was created. Surely that shouldn't happen as Packer is just pointing at the cluster and relying on DRS initial placement to choose a host. Perhaps that initial placement recommendation from DRS is cooked?

If the floppy is on the same datastore as the VM then I wouldn't expect issues, but obviously there are. I can't think why it would disconnect - would there be anything in the ESXi logs as to what's going on?

With regard to a very aggressive DRS - that's again a bit unexpected - I thought DRS by default runs every 5 or 15 minutes, not at the frequency you're seeing. Are the VMs being powered on each time? That would possibly make sense, as when DRS is fully automatic it's involved each time a VM is powered on.

Pile Of Garbage
May 28, 2007



Pikehead posted:

If the floppy is on the same datastore as the vm then I wouldn't expect issues, but obviously there are. I can't think why it would disconnect - would there be anything in the esxi logs as to what's going on?

I'll have a look at the logs, if there are any, when I'm back at work next week. It is quite strange, as normally I'd expect it to cause vMotion to fail outright.

Pikehead posted:

With regard to a very aggressive DRS - that's again a bit unexpected - I thought DRS by default runs every 5 or 15 minutes and not at the frequency you're seeing. Is the vms being powered on each time? - That would possibly make sense, as when DRS is fully automatic it's involved each time a vm is powered on.

For the Packer build the VM is only powered on the one time. In the guest OS it does an unattended install of Windows Server 2019, so there are a couple of soft reboots, and at the end it powers off so that Packer can remove the CD-ROM and floppy drives and convert it into a template.


Pikehead
Dec 3, 2006


Pile Of Garbage posted:

I'll have a look at the logs if any when I'm back at work next week. It is quite strange as normally I'd expect it to cause vMotion to fail outright.

I would think that vMotion would work - it's a shared datastore. There's something I'm not getting or something not right here though.

On leave at the moment so can't test.

Pile Of Garbage posted:

For the Packer build the VM is only powered-on the one time. In the guest OS it does an unattended install of Windows Server 2019 so there are a couple of soft reboots and then at the end it powers off so that Packer can emove the CD-ROM and floppy drives and convert it into a template.

What version of vcenter/esxi are you running? 7.x is more transparent on imbalance as per https://4sysops.com/archives/vmware-vsphere-7-drs-scoring-and-configuration/

I'd say to also look at the DRS logs, but I have no idea where they would be, and knowing vmware if you could find them they'd all be an impenetrable mess of GUIDs and spam.
