|
movax posted:So I've given up on ESXi and trying out Proxmox now. Essentially, any IP address you configure under Networking is a management interface. The documentation has an example of what you're trying to do with the NIC, under Host System Administration > Network Configuration > 802.1q. I don't think you can make this kind of configuration through the GUI, so you'll have to edit /etc/network/interfaces manually. code:
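# A sketch along the lines of the docs' 802.1q example -- the NIC name
# and addresses here are placeholders, not values from the post.
auto lo
iface lo inet loopback

iface eno1 inet manual

# Management IP on VLAN 5
auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

# VLAN-aware bridge that the guests attach to
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094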
Actuarial Fables fucked around with this message at 09:44 on Jan 12, 2022 |
# ? Jan 12, 2022 09:02 |
|
Mr Shiny Pants posted:Clustering physical machines and running microservices on top of them is Erlang in a nutshell. Erlang is just one VM (BEAM) that can run actors (microservices that pass messages) with supervisors (if you use OTP). You need to program in Erlang, but from a technical standpoint they are roughly comparable in what they accomplish. Erlang is elegant though (like really elegant); Kubernetes is not, IMHO.
|
|
# ? Jan 12, 2022 15:25 |
|
BlankSystemDaemon posted:This is just rumours, but with Facebook having moved WhatsApp from the FreeBSD+Erlang setup that Jan Koum originally set up to their own in-house PHP-on-Linux solution with Twine (their scale-out container-management solution), they've supposedly had to throw 100-200x the number of machines at it to handle the same level of traffic. It's like Microsoft buying HoTMaiL all over again
|
# ? Jan 12, 2022 15:53 |
|
Actuarial Fables posted:Essentially, any IP address you configure under Networking is a management interface. Huhhh, ok. I saw that in the docs and was confused as to how it had limited the management IP listening, but I guess the reason I was confused is that it... doesn't. What do people running Proxmox in production do then? Actually have multiple NICs and split it / assign directly to VMs? This NIC will be staring down the barrel of the WAN, so maybe I need to set up the Proxmox firewall to only listen to certain IPs / subnets? I suppose at the colo center it won't be DHCP'd anyway (versus me setting the appropriate IP in pfSense), so it won't get an address, but it will still be listening (by default) on vmbr1.
|
# ? Jan 12, 2022 17:18 |
|
Ok so posting at 3am apparently means I don't read your full question. https://pve.proxmox.com/pve-docs/pveproxy.8.html You're able to configure which IP address the management service binds to (default=all) and which addresses it allows/denies in /etc/default/pveproxy.
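For example, something like this in /etc/default/pveproxy (a sketch with placeholder addresses; check the man page above for the exact syntax): code:
# Bind the web UI/API to one address instead of all of them
LISTEN_IP="192.0.2.10"

# Allow the management networks, deny everything else
ALLOW_FROM="192.0.2.0/24,203.0.113.5"
DENY_FROM="all"
POLICY="allow"
Then a systemctl restart pveproxy applies it.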
|
# ? Jan 12, 2022 18:18 |
|
I didn't know 2012 R2 with Hyper-V was that common anymore, but apparently KB5009624 breaks Hyper-V: VMs "could not be started because the hypervisor is not running". We're running ESXi instead, but that's also making the beginning of tyol 2022 hard.
|
# ? Jan 12, 2022 21:30 |
Bob Morales posted:It's like Microsoft buying HoTMaiL all over again But wait, does this mean Facebook is going to have an apparent face turn like Microsoft did, before doing a slow heel turn like Microsoft is doing?
|
|
# ? Jan 12, 2022 22:17 |
|
underlig posted:I didn't know 2012 R2 with Hyper-V was that common anymore, but apparently KB5009624 breaks Hyper-V Hyper-V isn't already broken?
|
# ? Jan 13, 2022 00:36 |
|
Bob Morales posted:Hyper-V isn't already broken? It works well enough for what it is, but it's not a great replacement for ESXi or XenServer
|
# ? Jan 13, 2022 00:54 |
|
I have two Hyper-V VMs in the same cluster that both had drive errors (VHDs on the same physical drive array): one Linux, one Windows (actually our main file server). Had to do file-system checks and reboot them, and everything seems to be fine now. We do have a couple of corrupt Excel files and such reported, but I was able to pull them from backups. Anything that could have caused this? They are running on a Dell VRTX that isn't reporting any issues with the storage systems. Weird coincidence?
|
# ? Jan 14, 2022 17:10 |
|
Bob Morales posted:I have two Hyper-V VMs in the same cluster that both had drive errors (VHDs on the same physical drive array) Running ReFS by any chance? There are some problems with the latest round of updates, it seems; these also include problems with Hyper-V.
|
# ? Jan 14, 2022 18:00 |
|
Mr Shiny Pants posted:Running ReFS by any chance? There are some problems with the latest round of updates it seems; these also include problems with Hyper-V. No, but thanks for the heads-up
|
# ? Jan 14, 2022 18:12 |
|
I have a very basic VM on my laptop for playing with Ubuntu. Recently I installed WSL, which I think affected my VM somehow... when I tried to boot it back up from a suspended state I got this error message: "The features supported by the processors in this machine are different from the features supported by the processors in the machine on which the virtual machine state was saved." I couldn't get back to the session I was in and had to reboot the VM. Which isn't really a big deal, but now I can't connect to the internet from my VM anymore. Any ideas? VMware hasn't been blacklisted by the Windows firewall or anything as far as I can tell, but I'm an idiot so I can't actually tell.
|
# ? Feb 5, 2022 04:36 |
|
WSL uses Hyper-V. Were you using Hyper-V to run your Ubuntu VM, and did the WSL installer change some Hyper-V setting or create some network bridge or something?
|
# ? Feb 5, 2022 05:20 |
|
If he used VirtualBox to run the VM, adding WSL2 will make it inoperable.
|
# ? Feb 5, 2022 07:45 |
|
Happiness Commando posted:WSL uses Hyper-V. Were you using Hyper-V to run your Ubuntu VM, and did the WSL installer change some Hyper-V setting or create some network bridge or something? Hyper-V is involved according to the results that came up when I searched that error message, but if my VM uses it, it certainly wasn't on purpose, since I have no idea what Hyper-V is. The VM is just whatever happens when you use the free version of VMware and install Ubuntu from an ISO and just click "Sure, whatever" on all the setup dialogs. When I installed WSL it definitely said it was making some kind of important changes to the system and needed admin authorization and a reboot. Again, whatever the installer does if you click "Sure, whatever" on all the dialogs. SlowBloke posted:If he used VirtualBox to run the VM, adding WSL2 will make it inoperable. It's VMware and as far as I can tell it works fine; it's just the internet that's hosed.
|
# ? Feb 5, 2022 15:02 |
|
Like was said, WSL uses Hyper-V and likely created a bridged network adapter. VMware does the same. I haven't run into what you're talking about, but my bet is that WSL changed network-bridge settings in a way that messed up the VMware network adapter's networking. If you do some Googling around that, I suspect you'll find something. The suggestion I have off the top of my head would be to uninstall and reinstall VMware. Might break WSL networking, might not. But I imagine it would fix VMware networking.
|
# ? Feb 5, 2022 15:20 |
|
Hyper-V runs at the system level and conflicts with any other virtualization engines. Because Hyper-V is required for WSL and other modern Windows technologies, this presented a problem until Microsoft introduced the Windows Hypervisor Platform APIs, which allow third-party virtualization tools like VMware or VirtualBox to work within the Hyper-V framework. https://blogs.vmware.com/workstation/2020/05/vmware-workstation-now-supports-hyper-v-mode.html That's why you got the alert about a different CPU type: the entire virtualization engine had been swapped out under you the moment you activated a component requiring Hyper-V. I'd guess that the virtual network card changed as part of this and is now showing up as eth1 while all the configuration was for eth0.
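If you want to see which of these pieces are enabled on a given machine, something like this from an elevated PowerShell prompt should show the relevant optional features (a sketch; the name matching is mine, not from the post): code:
# List the Hyper-V-related optional features and whether they're enabled
Get-WindowsOptionalFeature -Online |
    Where-Object { $_.FeatureName -match 'Hyper-V|HypervisorPlatform|VirtualMachinePlatform' } |
    Select-Object FeatureName, State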
|
# ? Feb 5, 2022 17:11 |
|
Unless it's a very old version of Workstation/Player, VMware will use the Windows Hypervisor Platform (not Hyper-V), which is the same one used by WSL2. How old is it?
|
# ? Feb 5, 2022 17:19 |
|
Tried a few different times yesterday: restarting the machine, restarting VMware, poking around options. Didn't work. Opened it up today and it's fixed itself lol. Must have needed to restart Windows or something. turd in my singlet fucked around with this message at 18:41 on Feb 5, 2022 |
# ? Feb 5, 2022 18:39 |
The only legitimate reason to restart a computer nowadays is when the kernel binary executable has been updated.
|
|
# ? Feb 5, 2022 19:34 |
|
Anyone notice when expanding a large disk (vSphere) that the VM will drop offline while vCenter/ESXi does its thing? I expanded a disk on an older (Server 2012) file server from 6 to 7 TB; while it was working, it dropped off the network for almost a minute.
|
# ? Mar 2, 2022 01:38 |
|
It shouldn't do that.
|
# ? Mar 2, 2022 04:07 |
|
Neat. I have noticed it across all these 2012 file servers (different clusters, different storage). They are all getting replaced, so I'll have to do some testing on the 2019 boxes. Also, this specific VM is running on some old rear end storage (Nimble iSCSI); I need to get it off of there.
|
# ? Mar 2, 2022 04:37 |
|
Umm, not super familiar with VMware tools, but is there something simple that can scan ESXi/vCenter to inventory VMs and dump it to a CSV (CPU, name, IP, configuration, etc.)? Preferably a non-installer application.
|
# ? Mar 28, 2022 06:11 |
|
Yes, RVTools. https://www.robware.net/rvtools/
|
# ? Mar 28, 2022 06:24 |
|
Tons of PowerShell scripts out there to do it as well: https://www.google.com/url?sa=t&rct...sIJwwD_dPCVd9Sm
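For example, a minimal PowerCLI sketch along those lines (the vCenter name is a placeholder): code:
# Requires the VMware.PowerCLI module (Install-Module VMware.PowerCLI)
Connect-VIServer -Server vcenter.example.com

Get-VM |
    Select-Object Name, NumCpu, MemoryGB, PowerState,
        @{N = 'IPAddresses'; E = { $_.Guest.IPAddress -join ';' }},
        @{N = 'GuestOS';     E = { $_.Guest.OSFullName }} |
    Export-Csv -Path vm-inventory.csv -NoTypeInformation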
|
# ? Mar 29, 2022 17:32 |
|
As the OP is nearly a decade old, I'm taking the liberty to post dumb questions: I work in a RHEL (OK, we're cheapskates, so Rocky Linux) environment, and I have, over the past year, introduced containers. I've based them on podman, which seems to work well enough for what we're doing: "microservices", little servers that run a simple network-based service (SVN server, wiki, various web servers, etc.). First question: where do Linux Containers (LXC) fit into this, and are they worth investigating? Secondly, we have some stuff that runs via RHEL's native virtualization tools, which I believe are based on KVM. Since we're never doing anything other than virtualizing x86_64 on x86_64, this seems like a dumb idea, as this is a proper emulator and thus introduces quite a bit of performance penalty. Is this correct? Is there a smarter way to emulate a full machine under these circumstances? What I'm looking for is full virtualization (own disks, own NIC(s)) but ideally no emulation.
|
# ? Apr 12, 2022 09:43 |
|
Maybe naively, I've always considered the LXC vs Docker thing this way: - Docker: single application, unitasker. Contains (or SHOULD contain) the absolute bare minimum required to perform its job. - LXC: "containerized" Linux distribution. A whole working "VM" without the overhead of a hypervisor and all the things it needs to emulate or virtualize. Not necessarily unitasking; you're bringing up a separate distro. I'm still feeling my way out in the container world but that's how I see it. I stand ready to be corrected. Re: KVM IIRC KVM is a hypervisor, not an emulator, if your guest architecture matches the host and the host makes provisions for virtualization. You have the option of running instruction translation for non-native architectures, but x86_64 on x86_64 should be direct passthrough virtualization unless you have VT-x disabled or something. The running process may be qemu-kvm (or I forget what it actually is) but that shouldn't mislead you into thinking QEMU is doing any instruction translation. Same caveat as above, standing ready to be corrected. some kinda jackal fucked around with this message at 11:43 on Apr 12, 2022 |
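For the KVM half, you can sanity-check that a host is doing hardware-assisted virtualization rather than emulation with something like this (a sketch; virt-host-validate ships with the libvirt client tools): code:
# A non-zero count means the CPU advertises VT-x (vmx) or AMD-V (svm)
grep -cE '(vmx|svm)' /proc/cpuinfo

# The KVM kernel module (kvm_intel or kvm_amd) should be loaded
lsmod | grep kvm

# Full host check from the libvirt client tools
virt-host-validate qemu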
# ? Apr 12, 2022 11:38 |
|
A container is two things: 1. A turbocharged chroot, but for all the resources (network, process tree) instead of just a filesystem, using namespaces and cgroups. Uses the same Linux kernel as the host (usually). 2. An image-based software-distribution standard. LXC, Podman, Docker, etc. all do the same kind of things for #1. There's a lot of (OSS) drama around them, but it doesn't really matter for the same basic processes (download image, start/stop process, manipulate the namespaces). If you're happy with Podman there's no need to look at LXC. Because containers normally run isolated, you need to provide all the executables and libraries to run within. This is done via binary images. This can be bone-stock Debian/RHEL, or container-optimized OS spins that provide just the binary statically linked with the relevant libraries, or some point in between. Alpine Linux is frequently used as a lightweight option. For standard stuff you get your container from Docker Hub or another container repository. It's (usually) a series of gzipped overlayfs images that docker pulls. It doesn't really matter as your tool will manage it. Images are built from Dockerfiles. Basically an RPM spec file. Containers are usually immutable. As jackal mentioned, KVM is standard virt, not emulated. QEMU provides the fake hardware (PCI etc.) but doesn't do CPU emulation. The RHEL tools are fine.
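To make the image workflow concrete, a minimal sketch using podman (the service and names are made up for illustration): code:
# Containerfile - a tiny single-service image, Alpine as the lightweight base
FROM docker.io/library/alpine:3.15
RUN apk add --no-cache lighttpd
EXPOSE 80
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]

# Build the image and run it with the port mapped into the host
podman build -t mywiki .
podman run -d --name wiki -p 8080:80 mywiki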
|
# ? Apr 12, 2022 17:19 |
|
With VMware is there a way to exempt a VM from DRS but at the VM level instead of the cluster? I know you can do it at the cluster level but is there a tag or something I can set on a VM that will make DRS ignore it? I'm building VM templates with Packer and as part of the build it mounts a floppy to the VM containing the autounattend.xml and some scripts. The issue is that if the VM is migrated via vMotion the floppy drive gets automatically disconnected from the VM. This either causes the build to stall or only complete partially due to missing scripts. This issue seems to arise fairly frequently because DRS appears to be hyper-aggressive on clusters I'm deploying to despite being configured with default settings.
|
# ? Apr 15, 2022 03:46 |
|
Yes. Old UI, but should be fairly similar. https://www.yellow-bricks.com/2018/03/28/disable-drs-for-a-vm/
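If you'd rather script it, PowerCLI exposes the same per-VM override; something like this (the VM name is a placeholder): code:
# Requires VMware.PowerCLI; disables DRS migrations for just this VM
Get-VM -Name 'packer-build-01' |
    Set-VM -DrsAutomationLevel Disabled -Confirm:$false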
|
# ? Apr 15, 2022 04:09 |
|
Pile Of Garbage posted:With VMware is there a way to exempt a VM from DRS but at the VM level instead of the cluster? I know you can do it at the cluster level but is there a tag or something I can set on a VM that will make DRS ignore it? You could also put the floppy image on a shared datastore or library, so vMotioning isn't an issue. (I believe this will work - I haven't had to mount a floppy image in... well, ever).
|
# ? Apr 15, 2022 06:37 |
|
Pikehead posted:You could also put the floppy image on a shared datastore or library, so vMotioning isn't an issue. It will, I can confirm that (sigh).
|
# ? Apr 15, 2022 12:45 |
|
In vCenter you set it up as an affinity rule. https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-FF28F29C-8B67-4EFF-A2EF-63B3537E6934.html Specifically a VM-to-Host group affinity rule. Actually, just doing a manual override for the individual VM as the other poster said is easier. I'm overthinking it too early in the morning. Cyks fucked around with this message at 13:18 on Apr 15, 2022 |
# ? Apr 15, 2022 13:03 |
|
Internet Explorer posted:Yes. Old UI, but should be fairly similar. That's still at the cluster level though and requires that the VM already exist. I'm deploying new VMs with Packer using the vsphere-iso builder which doesn't exactly give you options to mess with things in the environment other than the VM it is deploying. Pikehead posted:You could also put the floppy image on a shared datastore or library, so vmotioning isn't an issue. Packer creates the .flp file on the fly and uploads it to the same datastore as the VM so it's already on shared storage. From what I observed the floppy image isn't being unmounted but rather the floppy drive is being disconnected from the VM when it vMotions. As part of the build Packer also mounts two ISOs to the VM on separate CD-ROM drives and those are unaffected by the vMotion. Cyks posted:In vcenter you set it up as an affinity rule. Again that would require configuration at the cluster level and can't be done prior to the creation of the VM. I'd really like to avoid having to orchestrate something alongside Packer to do this. Maybe I should just figure out what's going on with DRS. I've seen it vMotion a new VM three times in the space of two minutes basically immediately after the VM was created. Surely that shouldn't happen as Packer is just pointing at the cluster and relying on DRS initial placement to choose a host. Perhaps that initial placement recommendation from DRS is cooked?
|
# ? Apr 15, 2022 16:30 |
|
Pile Of Garbage posted:That's still at the cluster level though and requires that the VM already exist. I'm deploying new VMs with Packer using the vsphere-iso builder which doesn't exactly give you options to mess with things in the environment other than the VM it is deploying. Sorry, I was just answering your question of can it be done and showing the setting. I assume there's a property for that that can be set during your deployment, but I don't have time to look into it now. I'm sure someone else has run into this problem before and figured out a solution.
|
# ? Apr 15, 2022 17:16 |
|
Pile Of Garbage posted:That's still at the cluster level though and requires that the VM already exist. I'm deploying new VMs with Packer using the vsphere-iso builder which doesn't exactly give you options to mess with things in the environment other than the VM it is deploying. If the floppy is on the same datastore as the VM then I wouldn't expect issues, but obviously there are. I can't think why it would disconnect - would there be anything in the ESXi logs as to what's going on? With regard to a very aggressive DRS - that's again a bit unexpected. I thought DRS by default runs every 5 or 15 minutes and not at the frequency you're seeing. Are the VMs being powered on each time? That would possibly make sense, as when DRS is fully automatic it's involved each time a VM is powered on.
|
# ? Apr 15, 2022 17:45 |
|
Pikehead posted:If the floppy is on the same datastore as the VM then I wouldn't expect issues, but obviously there are. I can't think why it would disconnect - would there be anything in the ESXi logs as to what's going on? I'll have a look at the logs, if any, when I'm back at work next week. It is quite strange, as normally I'd expect it to cause vMotion to fail outright. Pikehead posted:With regard to a very aggressive DRS - that's again a bit unexpected. I thought DRS by default runs every 5 or 15 minutes and not at the frequency you're seeing. Are the VMs being powered on each time? That would possibly make sense, as when DRS is fully automatic it's involved each time a VM is powered on. For the Packer build the VM is only powered on the one time. In the guest OS it does an unattended install of Windows Server 2019, so there are a couple of soft reboots, and then at the end it powers off so that Packer can remove the CD-ROM and floppy drives and convert it into a template.
|
# ? Apr 16, 2022 00:12 |
|
Pile Of Garbage posted:I'll have a look at the logs, if any, when I'm back at work next week. It is quite strange, as normally I'd expect it to cause vMotion to fail outright. I would think that vMotion would work - it's a shared datastore. There's something I'm not getting, or something not right here, though. On leave at the moment so can't test. Pile Of Garbage posted:For the Packer build the VM is only powered on the one time. In the guest OS it does an unattended install of Windows Server 2019, so there are a couple of soft reboots, and then at the end it powers off so that Packer can remove the CD-ROM and floppy drives and convert it into a template. What version of vCenter/ESXi are you running? 7.x is more transparent on imbalance, as per https://4sysops.com/archives/vmware-vsphere-7-drs-scoring-and-configuration/ I'd say to also look at the DRS logs, but I have no idea where they would be, and knowing VMware, if you could find them they'd all be an impenetrable mess of GUIDs and spam.
|
# ? Apr 16, 2022 08:52 |