CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
I've never gotten around to posting in SH/SC until recently, but it's nice to see we have a VM thread. I'm an avid Xen and Hyper-V user, but professionally I use VMware ESXi.


CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

evol262 posted:

There's no good reason to get Xeons for a lab, IMO.

i5s were fine. The last time I built compute was Haswell, but 4 cores and 64GB of memory plus a cheap, tiny SSD would have run me about $600/node for everything.

Why build a "monster" instead of smaller servers on desktop hardware?

Shoving that much memory in a NUC gets very expensive very fast, too.

I do hit my lab very hard, and it would be overkill for a lot of people, but a Celeron (or i3) with 16GB or 32GB wouldn't be enough...

Craigslist is your friend. I run an R710 with 4 x 2.5 GHz quad-core Xeons and 288GB of DDR3 for my lab; it doubles as my NAS and iSCSI host with an MD1000. Altogether, it really only consumes a little more than a fully built desktop, while letting me spin up Boot2Docker or anything else I need without adding more machines to the power bill.

But I also use it extensively for client work within jails, so it largely pays for itself.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

evol262 posted:

I said there's no good reason for Xeons mostly because of noise, but I also don't think you've compared the power consumption on that to NUCs or something else small. "Fully built desktops" usually have excessively large PSUs and probably GPUs.

People don't dump hardware on eBay or Craigslist for next to nothing because it's efficient compared to new builds.

Unless you have an attached garage you can shove it in, rackmount servers are great for price and terrible for everything else lab-related (power, heat, noise, form factor).

Cost isn't the reason I got rid of all of my rackmount servers.

Mine is using about 215 watts + ~75 for the MD1000 and makes very little noise, at least compared to my old R905.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

evol262 posted:

It's all relative, really. And if you like it, that's important. i5 haswell NUCs are about 30w and silent at full load. Different strokes.

True. I have a thing for full size servers too.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Just take good nightly images of your VMs and back them up to a separate drive. That's what I do with my current lab, which has a lot of client-copied images on it for testing/diagnostics. So while it is a lab environment, I still need to maintain data integrity, and I know how you feel. A rough sketch of the nightly job is below.
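
This is roughly what that looks like with the xe CLI on XenServer/XCP-ng; the VM name and backup path here are hypothetical, so adjust to taste:
code:
# Snapshot the running VM, then export the snapshot to an .xva on the backup drive.
snap=$(xe vm-snapshot vm=lab-vm01 new-name-label=lab-vm01-nightly)
xe snapshot-export-to-template snapshot-uuid="$snap" \
    filename=/mnt/backup/lab-vm01-$(date +%F).xva
# Clean up the snapshot once it's exported.
xe snapshot-uninstall snapshot-uuid="$snap" force=true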

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Wibla posted:

Sigh, I need a compact ESXi host with room for 4-6 3.5" drives and 16-ish GB of RAM, but the MicroServer Gen8 is out of production and the Gen10 is apparently garbage?

Find a Supermicro system; you can get a 1U with dual Xeons or AMD Opterons fairly cheap.

I run a quad AMD Opteron system for my virtual lab, running XenServer with pfSense for virtual switch routing/VLANs.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Yeah, for on-desktop use, Hyper-V is hard to beat.

I'm still a XenServer evangelist for low-cost bare-metal personal labs.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

TheFace posted:

For ease of setup, sure; it's pretty painless to get working well... and it kinda just works. But if you're labbing to get experience with something you might use out in the world, I'd say you're better off with KVM, Hyper-V, or VMware (if you can get the licensing for it).

For just hosting VMs, XenServer does an awesome job and comes with some fairly advanced features out of the box, even without a license.

But yeah, those three are more common in the market.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Docjowles posted:

If you don’t care about commercial support there is also XCP-ng, which is basically CentOS for XenServer. It unlocks all the features that are normally gated behind a paid license from Citrix.

:stare: Hell yes, will be setting this up right away.

It even has a native XenServer migration platform. SCORE!

CommieGIR fucked around with this message at 17:22 on May 11, 2019

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Loving XCP. I mean, it's just Xen with all the features. I just wish USB 3.0 passthrough were finalized; I think it only does USB 2.0 passthrough right now.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Buy a cheap Dell PowerEdge R710. You can get one of those for half the cost of a NUC, and it'll do virtualization ten times better.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Moey posted:

There is also a huge difference in terms of power/cooling/noise/physical space between a modern NUC and a 9 year old rackmount server.

Small price to pay for low-cost virtualization hardware.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
You could probably build a decent ITX Xeon system for less than a NUC.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Potato Salad posted:

Xen is... really not where the future is at. Today is about a huge variety of container stacks atop KVM, ESXi, and (to a lesser degree) Hyper-V.

Eh, depends on what you mean. Xen is a far better open source lab solution than ESXi.

I'd rather use Xen XCP than ESXi, as it has more community support and more features you can use without a license.

Now, Enterprise? I agree there.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Bob Morales posted:

Do you mind expanding on why? I haven't used Xen in about 10 years, but I'm open to hearing what advantages it has on the low end.

Well, for one, not having to pay to get the management console. XCP is largely community-driven and free-as-in-beer, and you can use almost all of the enterprise-grade features you'd normally pay for in VMware (or even Citrix XenServer, for that matter), so I don't see VMware as a competitor for lab use. Not to mention a lot of advanced storage features and powerful utilities for managing your lab, like Xen Orchestra.

Sure, you could argue that using VMware strictly from the console gets you more used to managing an ESXi box, but honestly, most people who want virtualization at home want to use the hypervisor, not learn VMware.

Don't get me wrong: from an enterprise perspective, VMware is much more mature.

Caveat: I haven't played with KVM yet, which is why I have no opinion on it.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

TheFace posted:

This is exactly why I try to tell people to use KVM in home labs, or just the Hyper-V built into W10 (or, if you wanna become a Hyper-V PowerShell pro, Hyper-V Server). Use things you might actually see in a work situation.

I used Xen (and XenServer) for years at work, and now it's pretty dead everywhere I've worked or seen for the past 5+ years. The final nail in Xen's coffin was when AWS rolled their own flavor of KVM and stepped away from using Xen.

Maybe I'll move to KVM down the road, but Xen is still keeping me very happy.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
VMs within VMs.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
My Security Virtual Lab has the primary router as a VM inside the cluster, but the servers and controller workstation have static IPs.

It's built that way on purpose, to let us disable network access in an emergency.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Schadenboner posted:

So there's just no nested virtualization in W10 on AMDs, right?

From my :google:, VMware Workstation and Hyper-V both don't support it, but I'd be really happy to be wrong because, goddamn, those Ryzen 9s...

:(

No, it should support nested virtualization, but you may have to make a config change. I've had to do this before.

I was wrong, it's Intel-only :(

However, it's apparently a Microsoft limitation, not so much an AMD one.
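
For reference, the config change I meant is exposing the virtualization extensions to the VM from an elevated PowerShell prompt on the host; a sketch, with a hypothetical VM name (the VM has to be powered off first):
code:
# Expose the CPU's virtualization extensions to the guest so it can run its own hypervisor.
Set-VMProcessor -VMName "LabVM" -ExposeVirtualizationExtensions $true
# Verify the setting took.
Get-VMProcessor -VMName "LabVM" | Select-Object ExposeVirtualizationExtensions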

CommieGIR fucked around with this message at 02:59 on Aug 16, 2019

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
What's the best way to get started with KVM? Is there a distro like XCP with the OS and KVM already configured?

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Just FYI: if you notice sluggishness while running Hyper-V on your host, you may need to disable hyperthreading on your machine.
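
If disabling HT in the BIOS is too drastic, newer Windows builds can also switch the hypervisor to the core scheduler, which tackles the same SMT contention; a sketch (elevated prompt, reboot afterwards), though verify it applies to your build:
code:
bcdedit /set hypervisorschedulertype core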

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Pile Of Garbage posted:

Is that still an issue on the latest Win10/Server 2019 builds?

In Windows 10, at least.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
So I recently acquired a Dell M1000e blade enclosure and a bunch of blade servers, and I'm taking the opportunity to use all this spare hardware to do side-by-side comparisons of virtualization hypervisors for home lab use.

I normally use XCP-ng in my lab, but I wanted to try Proxmox, throw in ESXi, and maybe KVM. What other open source hypervisors should I try?

I set up Proxmox this evening. So far I like it; it's not quite as nice and intuitive as XenServer's management console, but it's still very clean for a web UI.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
So, I really want to post this solution in case someone using Xen also runs into it:

I had a CIFS SR for my VHDs, and I added a member to the pool. What I'd forgotten was that the SR was using a cached password for the share, and that password had changed. So when I tried to attach the SR to the new pool member, it kicked BOTH pool members off and threw a generic SMB error in the GUI.

So I checked dmesg:
code:
[63785.230730] CIFS VFS: cifs_mount failed w/return code = -13
[63897.747524] Status code returned 0xc000006d STATUS_LOGON_FAILURE
[63897.747546] CIFS VFS: Send error in SessSetup = -13
[63897.747573] CIFS VFS: cifs_mount failed w/return code = -13
[63898.774159] Status code returned 0xc000006d STATUS_LOGON_FAILURE
Oops.

However, Xen XCP and Citrix are not clear on whether you can update the secret used by the SR; their recommendation is just to destroy and recreate the SR. I didn't want to do that, so I dug in deeper.

Run xe pbd-list:
code:
uuid ( RO)                  : 36011f83-18b5-24fb-9646-9ccd006fe87f
             host-uuid ( RO): 097c27c4-ef10-427d-9440-5a20e76781ff
               sr-uuid ( RO): eba24fc3-572d-487c-5605-3d57ca259ffb
         device-config (MRO): server: \\192.168.1.248\VHD; username: xen; password_secret: 3fb287bc-4304-12f7-e8fe-0f134050f419
    currently-attached ( RO): false
Ah, there's a secret you can read with xe secret-list!... in plain text.

xe secret-list
code:

uuid ( RO)     : 3a4ff613-4ee2-7302-6757-56f5cc3d4d17
    value ( RW): *THISISAPASSWORD*
Huh, that value is RW, so in theory I can update it. But NOBODY at Citrix or XCP states clearly how.

Do a tab completion on xe secret- and you come up with:
code:
 xe secret-
secret-create        secret-destroy       secret-list          secret-param-clear   secret-param-get     secret-param-list    secret-param-set
secret-param-set should be the ticket!
code:
XE(1)
=======
:doctype: manpage
:man source:   xe secret-param-set
:man version:  {1}
:man manual:   xe secret-param-set manual

NAME
-----
xe-secret-param-set - Set parameters for a secret

SYNOPSIS
--------
*xe secret-param-set*  uuid=<SECRET UUID> [ <PARAMETER>=<VALUE> ] [ <MAP PARAMETER>:<MAP PARAMETER KEY>=<VALUE> ]

DESCRIPTION
-----------
*xe secret-param-set* sets writable parameters. Use *xe secret-list* and *xe secret-param-list to identify writable parameters (RW, MRW). To append a value to a writable set or map (SRW, MRW) parameter use *xe secret-param-add*
Got it. And the value is RW, so we can write to it!
code:
xe secret-param-set uuid=3fb287bc-4304-12f7-e8fe-0f134050f419  value='THISISANEWPASSWORD'
Great! Here comes the kicker: since this is a pool, there are two or more CIFS connections (each host's PBD carries its own secret), so you'll have to update the secret more than once, because otherwise a host will keep trying to use its cached one.
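
Once each secret is updated, re-plug the PBDs that got kicked off, using the uuid from the xe pbd-list output above:
code:
xe pbd-plug uuid=36011f83-18b5-24fb-9646-9ccd006fe87f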

Again, it's not that any of this was well hidden, but XCP and Citrix don't provide documentation that I could easily find for repairing an SR with a changed password. Their recommendation is generally to delete and recreate, which is a pain.

CommieGIR fucked around with this message at 17:43 on Jan 30, 2020

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
https://twitter.com/CommieGIR/status/1224807652519305219?s=20

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

BangersInMyKnickers posted:

Eh, I get their position on this. Core count increases were moving at a pretty even pace until Zen. This isn't nearly as bad as when they tried to base licensing on allocated vRAM, which would have completely killed the over-provisioning savings from virtualizing in the first place. Twice the cores over this threshold, pay twice the socket licensing. I'd be really pissed if I was running 48- and 56-core Xeons, though.

I mean... considering VMware is a Dell majority-owned company, and Dell is backing Xeon over Epyc (and let's be honest, Epyc is doing density better than Xeon), I get why VMware is doing it, but it seems like an attack on AMD rather than just an adjustment for shrinking socket counts due to rising core counts.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
So Dell has dual SD cards for hosting the hypervisor in setups where you're doing remote logging.

Except in my M915's case, the redundant SD card didn't work. Oops. Oh well, back to hosting the hypervisor on RAID1 SAS. I fully suspected it wouldn't work well, but since I had HA servers, I figured I'd give it a shot.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Moey posted:

Just change the syslog location to a datastore or a syslog server.

It's more that the redundant SD cards didn't fail over, or they both failed at the same time.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Moey posted:

Ha, writing logs to 'em may do that. I've never had both die at once, though.

ESXi will keep chugging along running in memory without its boot drive; it just won't reboot or mount the VMware Tools ISO on guests.

Not really sure, because they were logging to an ELK stack, so there shouldn't have been excess writes. Oh well, it's recovered onto the RAID1 and back in operation.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Yeah, ZFS normally isn't going to like that many layers, but people do it.

Personally, I like having my FreeNAS box separate, providing iSCSI or NFS mounts to the ESXi/Xen box for VMs.
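
For reference, pointing a XenServer/XCP-ng host at an NFS export like that is a one-liner; a sketch with a hypothetical server and path:
code:
xe sr-create type=nfs shared=true content-type=user name-label=freenas-nfs \
    device-config:server=192.168.1.50 device-config:serverpath=/mnt/tank/vms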

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Martytoof posted:

What would be the best troubleshooting method to go through to determine why a guest Linux OS deployed from a template isn't receiving guest customizations? Guest has open-vm-tools installed which, I understand, ought to be able to handle guest customizations.

CentOS 7, vSphere and ESXi 6.7. open-vm-tools are latest as of whatever is available in epel-release today.

vSphere or ESXi logs, probably.
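
The customization engine also logs inside the guest itself; going from memory (so verify the paths), I'd check these first:
code:
# On the guest, after the failed customization attempt:
less /var/log/vmware-imc/toolsDeployPkg.log
# open-vm-tools logs to the journal as well:
journalctl -u vmtoolsd | tail -n 50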

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Potato Salad posted:

how do you get an on-prem SAN with meaningful redundancy for less than a quarter of a million dollars though

Dude, storage is relatively cheap, and most SAN gear is 25Gb+ fiber minimum now.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

RVWinkle posted:

I spent some more time thinking about LXC and security, and I'm not sure it makes sense to run containers right on the hypervisor. Even if you run unprivileged, you end up with issues like the inability to snapshot when using NFS mounts. I ended up deploying a RancherOS VM instead. I used it a while back when they tried to make it the official FreeNAS Docker solution, and it's pretty cool how the whole OS is defined by compose files.

Sounds like a good idea; best to never expose the hypervisor itself when possible.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

TheFace posted:

I don't think Red Hat workstation versions are officially supported under Hyper-V, though I don't know what difference that would make.

I mean, RH has official guides on how to install on Hyper-V:

https://developers.redhat.com/rhel8/install-rhel8-hyperv

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
SecureCRT, maybe? PuTTY supports setting VT100 emulation for Telnet; otherwise, set term=xterm or term=vt100 on the remote side.
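
Setting it on the remote end is trivial; a sketch:
code:
export TERM=vt100    # bash/sh syntax; in csh it's: set term=vt100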

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

BlankSystemDaemon posted:

I noticed that the USB 3.0 PCI-ex daughterboard I've got plugged into my Windows 10 VM through VMDirectPath I/O wasn't picking up the USB 2.0 device I plugged in, and when I added a USB 2.0 controller through VMDirectPath I/O, something went absolutely apeshit and the audio started stuttering wildly after not playing back for the first minute whenever I'd start something with audio.
I'm thinking DPC latency issues caused by lack of MSI-X or something else along those lines, but haven't really got any solution other than using USB 3.0, so my question is:
Shouldn't the USB 3.0 controller be capable of picking up USB 2.0 devices, so I don't have to use a separate USB 2.0 controller?

Yes, USB 3.0 is fully backwards compatible. Sounds like there was an interrupt issue.
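
If you want to confirm the interrupt theory from inside the VM, you can check what the passed-through controller negotiated; a sketch with a hypothetical PCI address:
code:
# Find the controller's address first with: lspci | grep -i usb
lspci -vv -s 03:00.0 | grep -i msi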

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Systemd is... meh. I get it, change sucks, but at the same time a lot of the change systemd brings is... not really fixing much.

https://ihatesystemd.com/

The biggest thing is that it changes things that really didn't need to be changed, like networking, and then, instead of staying consistent, it's changing them AGAIN with the new systemd release. They really are trying to do good things with systemd, but they're doing it in seemingly the worst and most asinine ways possible.

CommieGIR fucked around with this message at 16:02 on Apr 9, 2021

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
The problem is, again, that systemd keeps replacing things that don't need replacement, and keeps changing how it replaces them. Like changing network management again, just as everyone was getting used to it as implemented.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
I continue to preach the good word of XCP-ng.


CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Martytoof posted:

I didn't even make it a week into Proxmox. For some reason I felt the performance of my VMs was really sluggish. Not sure why, but I bet it was some human error on my part.

Either way, I'm just resigned to continuing the VCSA life forever, I guess. I thought about going the Xen route, but man, that needs a management VM too if I want to do any fun template stuff, and if I'm going to do that I may as well just stick with the one I have licensed now.

RIP experimenting I guess.

You can talk directly to the hypervisor via the xe commands, and it's compatible with all the automation tools I know of. Plus, XCP-ng comes with the Xen Orchestra virtual appliance.

https://www.criticaldesign.net/post/automating-lab-builds-with-xenserver-powershell-part-3-unlimited-vm-creation
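
As a minimal sketch of what scripting straight against the hypervisor looks like (the template and VM names here are hypothetical):
code:
# Clone a VM from a template and boot it.
uuid=$(xe vm-install template="CentOS 7" new-name-label=lab-vm01)
xe vm-param-set uuid="$uuid" VCPUs-max=2
xe vm-param-set uuid="$uuid" VCPUs-at-startup=2
xe vm-start uuid="$uuid"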
