CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
You could probably build a decent ITX Xeon system for less than a NUC


Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

A Dell PowerEdge server would be nice, but anything with a Xeon and a good SSD or three would probably make a fine machine too. Throw a couple of NICs in there as well.

Potato Salad
Oct 23, 2014

nobody cares


I failed to include in my post that there are uATX Xeon boards/cases out there

You can get a C226 board for $60

Xen is... really not where the future is at. Today is about a huge variety of container stacks atop KVM, ESXi, and (to a lesser degree) Hyper-V

Potato Salad fucked around with this message at 01:57 on Jun 18, 2019

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Potato Salad posted:

Xen is... really not where the future is at. Today is about a huge variety of container stacks atop KVM, ESXi, and (to a lesser degree) Hyper-V

Eh. Depends on what you mean. Xen is far better as an open source lab solution than ESXi.

I'd rather use Xen XCP than ESXi, as it has more community support and more features that you can use without a license.

Now, Enterprise? I agree there.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

CommieGIR posted:

Eh. Depends on what you mean. Xen is far better as an open source lab solution than ESXi.

I'd rather use Xen XCP than ESXi, as it has more community support and more features that you can use without a license.

Now, Enterprise? I agree there.

Do you mind expanding on why? I haven't used Xen in about 10 years, but I'm open to hearing what advantages it has on the low end

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Bob Morales posted:

Do you mind expanding on why? I haven't used Xen in about 10 years, but I'm open to hearing what advantages it has on the low end

Well, for one, not having to pay to get the management console. XCP is largely community-driven and free-as-in-beer, and you can use almost all of the enterprise-grade features you'd normally pay for in VMware (or even Citrix XenServer, for that matter), so I don't see VMware as a competitor for lab use. Not to mention a lot of advanced storage features and powerful utilities for managing your lab, like Xen Orchestra.
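
And the same stack is scriptable: XCP speaks XenAPI, the same interface Xen Orchestra drives. A minimal sketch using the XenAPI Python bindings from the XCP/XenServer SDK (the host address and credentials here are placeholders, not anything real):

code:

import XenAPI  # Python bindings shipped with the XCP/XenServer SDK

# Placeholder host and credentials; point these at your own lab box.
session = XenAPI.Session("https://xcp-host.lab.local")
session.xenapi.login_with_password("root", "password")
try:
    # Walk every VM record, skipping templates and the control domain (dom0).
    for ref in session.xenapi.VM.get_all():
        rec = session.xenapi.VM.get_record(ref)
        if not rec["is_a_template"] and not rec["is_control_domain"]:
            print(rec["name_label"], rec["power_state"])
finally:
    session.xenapi.session.logout()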

Sure, you could argue that using VMware strictly by console gets you more used to managing an ESXi box, but honestly, most people who want virtualization at home want to use the hypervisor, not to learn VMware.

Don't get me wrong: From an Enterprise perspective, VMware is much more mature.

Caveat: I haven't played with KVM yet, which is why I have no opinion on it.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

CommieGIR posted:

Caveat: I haven't played with KVM yet, which is why I have no opinion on it.

This is exactly why I try to tell people to use KVM in home labs, or just the Hyper-V built into W10 (or, if you wanna become a Hyper-V PowerShell pro, Hyper-V Server). Use things you might actually see in a work situation.

I used Xen (and XenServer) for years at work, and now it's pretty dead everywhere I've worked or seen for the past 5+ years. The final nail in the Xen coffin was when AWS rolled their own flavor of KVM and stepped away from using Xen.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

TheFace posted:

This is exactly why I try to tell people to use KVM in home labs, or just the Hyper-V built into W10 (or, if you wanna become a Hyper-V PowerShell pro, Hyper-V Server). Use things you might actually see in a work situation.

I used Xen (and XenServer) for years at work, and now it's pretty dead everywhere I've worked or seen for the past 5+ years. The final nail in the Xen coffin was when AWS rolled their own flavor of KVM and stepped away from using Xen.

Maybe I'll move to KVM down the road, but Xen is still keeping me very happy.

Marinmo
Jan 23, 2005

Prisoner #95H522 Augustus Hill

Potato Salad posted:

I failed to include in my post that there are uATX Xeon boards/cases out there

You can get a C226 board for $60

Xen is... really not where the future is at. Today is about a huge variety of container stacks atop KVM, ESXi, and (to a lesser degree) Hyper-V

I'd like to thank you all for letting me know; however, I'm not in the US, so cheap hardware isn't really a thing unless I want to spend time finding a seller on eBay who will ship internationally outside of eBay's own shipping service... and for computer parts, that's (understandably) pretty rare. And it's not cheap anymore if you opt for eBay's own shipping, since it automatically adds customs charges. Really, I'm glad you're all trying to point me in a cheaper direction, but I'm having a hard time seeing how a system built from parts will end up cheaper once you include shipping on the individual parts. Perhaps it can be done, but as someone else said earlier in the thread, chances are high I'd end up with a hotter, bigger, and louder system, and not necessarily save that much (if any) money anyway.

Everyone posted:

Xen is kinda dead

Thanks for the discussion on this topic. Perhaps KVM is the way to go for me then, since I'm mostly looking to do home-computer stuff. Again, I'm really, really not trying to build a personal lab environment here, just to divide the services up a bit on the computer and learn some things on the way there. As far as KVM goes, I reckon QEMU is the preferred way of utilizing it? I guess I'll look into both Xen XCP and QEMU... Again, please note and keep in mind: I earn my money in a completely different profession; this is just for learning. I don't expect to use whatever skills I acquire from this professionally, or to put them on my CV expecting them to propel me into any new job. I just want a cheap, manageable, and hopefully performant setup, my list of priorities being in that order too.

Potato Salad
Oct 23, 2014

nobody cares


A good KVM management stack will have great free features and be applicable to real-world ops

It's going to be a little bit harder to pick up, and you'll have to do a lot more reading, but when you come out the other end you're going to have some pretty cool skills
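
For a sense of what that looks like: virt-manager, virsh, oVirt, and friends all sit on top of libvirt, and the libvirt Python bindings let you poke at the same stack directly. A rough sketch, assuming the libvirt-python package and a local qemu:///system daemon:

code:

import libvirt  # pip install libvirt-python

# Read-only connection to the local system KVM/QEMU daemon.
conn = libvirt.openReadOnly("qemu:///system")
try:
    # List every defined domain (VM) and whether it's currently running.
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"{dom.name()}: {status}")
finally:
    conn.close()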

evil_bunnY
Apr 2, 2003

CommieGIR posted:

Eh. Depends on what you mean. Xen is far better as an open source lab solution than ESXi.

The entire vSphere ecosystem is very much payware; Xen has more ungated features, but it's comparatively not very popular.

Marinmo posted:

I just want a cheap, manageable, and hopefully performant setup, my list of priorities being in that order too.

If that's really the case, then some form of desktop ("type 2") virtualization might be your best bet.

evil_bunnY fucked around with this message at 12:17 on Jun 19, 2019

Marinmo
Jan 23, 2005

Prisoner #95H522 Augustus Hill

evil_bunnY posted:

If that's really the case, then some form of desktop ("type 2") virtualization might be your best bet.

Yup, I looked into it and set up a CentOS 7 VM last night with KVM, just to get the hang of the basics. It seems very simple and efficient, even just running virt-manager; I read there are more advanced interfaces too, such as oVirt. I also read on Stack Exchange that sharing a resource that's not an LVM VG or a whole physical disk (say, a directory) is most easily done through NFS, which seems reasonable enough. However, it made me wonder about locks in such a situation: say I have two VMs wanting to access (read, not write) the same file, for example seeding a Linux ISO from one VM while writing it to a USB key from another computer. Would that be possible with an NFS solution, or would the VM lock the ISO and keep it from being used elsewhere? I seem to remember file locking can be a tricky bit with NFS shares. Any experience with this, or are there other, better ways to share a directory with many VMs at the same time?

evil_bunnY
Apr 2, 2003

Read locks usually aren’t exclusive

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

What's really cool about discussing NFS locking behavior is that locking isn't part of NFSv2/v3 at all, servers don't care whether or not you lock any files whatsoever, and it's implemented through a separate protocol (NLM) that is respected by some but not all NFS clients.
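
That's also why the two-VMs-reading-one-ISO case above is fine in practice: POSIX locks are advisory, so shared read locks can pile up freely, and a process that never asks for a lock is never blocked by one. A quick sketch with Python's fcntl module (the mount path is made up; on NFSv3, these fcntl-style locks are what NLM carries, when the client bothers):

code:

import fcntl

# Advisory POSIX lock: only processes that also call lockf() ever notice it.
# A second VM that just open()s and read()s the ISO is never blocked.
with open("/mnt/nfs/linux.iso", "rb") as iso:
    fcntl.lockf(iso, fcntl.LOCK_SH)   # shared read lock; many holders allowed
    chunk = iso.read(1 << 20)         # read the first MiB under the lock
    fcntl.lockf(iso, fcntl.LOCK_UN)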

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

I know VMware has a thing where it limits a file to 5 read locks max; everything after that gets rejected. It made for problems when a bunch of VMs were mounting the same ISO on a datastore. Maybe other hypervisors have similar restrictions.

evil_bunnY
Apr 2, 2003

Vulture Culture posted:

What's really cool about discussing NFS locking behavior is that locking isn't part of NFSv2/v3 at all, servers don't care whether or not you lock any files whatsoever, and it's implemented through a separate protocol (NLM) that is respected by some but not all NFS clients.

Lmao I’m so glad I never have to worry about this.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evil_bunnY posted:

Lmao I’m so glad I never have to worry about this.

Storage is the printers of the datacenter

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evil_bunnY posted:

Lmao I’m so glad I never have to worry about this.

This made for some real fun if you ever tried to do multiprotocol access on a NetApp volume, since CIFS has mandatory locking and NFS has advisory locking, and getting those to play nicely on the same file is basically impossible.

Mostly it works fine, though, since any application that cares about locking and supports NFS just handles it out of band, through lock files or some other mechanism, because even NLM support isn't always a given.
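
The lock-file trick is tiny, if you've never seen it. A rough sketch with a hypothetical sentinel path; O_CREAT|O_EXCL creation is atomic on NFSv3 and later, which is exactly why applications reach for it when NLM can't be trusted:

code:

import os

LOCKFILE = "/mnt/nfs/mydata.lock"  # hypothetical sentinel next to the data

def try_acquire() -> bool:
    """Take the out-of-band lock; returns False if someone else holds it."""
    try:
        # O_CREAT | O_EXCL fails atomically if the file already exists.
        fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.write(fd, str(os.getpid()).encode())  # record the owner for debugging
    os.close(fd)
    return True

def release() -> None:
    os.remove(LOCKFILE)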

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

YOLOsubmarine posted:

This made for some real fun if you ever tried to do multiprotocol access on a NetApp volume, since CIFS has mandatory locking and NFS has advisory locking, and getting those to play nicely on the same file is basically impossible.

Mostly it works fine, though, since any application that cares about locking and supports NFS just handles it out of band, through lock files or some other mechanism, because even NLM support isn't always a given.

Ask me about ingesting 1 million points of time series data per second to figure out that this dumb, unannounced fucking IBM change to SMB opportunistic locking on GPFS was to blame for all of my HPC NFS client performance taking a complete shit.

Thanks Ants
May 21, 2004

#essereFerrari


Vulture Culture posted:

Storage is the printers of the datacenter

evil_bunnY
Apr 2, 2003

YOLOsubmarine posted:

This made for some real fun if you ever tried to do multiprotocol access on a NetApp volume, since CIFS has mandatory locking and NFS has advisory locking, and getting those to play nicely on the same file is basically impossible.

When we got our first filer, I looked at the docs for this stuff, and it took about 30 minutes for me to just nope out of the whole thing.

Thanks Ants
May 21, 2004

#essereFerrari


So uh, NSX in your VMware-powered ':yaybutt:' platform would be able to prevent tenants from just assigning IP addresses that they haven't been allocated (in this case, the gateway) to their VMs and killing Internet access for an entire subnet, wouldn't it?

Count Thrashula
Jun 1, 2003

Death is nothing compared to vindication.
Buglord
Anyone have any issues installing SRM 8.2? It may have to do with my vCenter SSO setup here, but switching from external PSC to embedded has me questioning whether I'm following best practices here. These are all in different datacenters.

- vCenter1 (SSO/vsphere.local master), VR/SRM installed fine and runs fine, registered with vCenter1
- vCenter2 (connected to vsphere.local on vCenter1), VR/SRM installed, but SRM errors out trying to register with vCenter2
- vCenter3 (connected to vsphere.local on vCenter1), no SRM needed
- vCenter4 (connected to vsphere.local on vCenter1), no SRM needed

If I open vCenter1 and look at "site recovery", both SRM installations show up, but vCenter2's install has a red box below it saying that SRM isn't installed, and I get a 404 when trying to go to the SRM DR page (https://srm-vcenter2/dr)

Any ideas? I'm using the 8.2 OVF appliance for everything; they're not Windows installs. Are people using that sort of hub-and-spoke setup with PSCs? Or should I be making each datacenter's vCenter its own SSO master, and then joining it to the company AD?

some kinda jackal
Feb 25, 2003

Considering blowing away my seldom-used FreeNAS server to try the Azure Stack Development Kit. The server itself is an R710 with 96GB and plenty of disk. I also have an R620 with 128GB currently running a vSphere setup.

Just trying to decide what my options are. I gather the ASDK is intended for single-node setups only, though it would be pretty nice to run the Azure Stack support services on the 710 and leave the 620 for pure compute. Not sure if that's possible through a config change or if the ASDK is actually factually locked down to single-node operation.

I've done all of five seconds of research on this, spurred on only by the fact that I have one VM running on my beefy 620 and I'm fine blowing it away to try something new. If this is a terrible idea, I'm completely open to hearing that.

Potato Salad
Oct 23, 2014

nobody cares


Count Thrashula posted:

Anyone have any issues installing SRM 8.2? It may have to do with my vCenter SSO setup here, but switching from external PSC to embedded has me questioning whether I'm following best practices here. These are all in different datacenters.

- vCenter1 (SSO/vsphere.local master), VR/SRM installed fine and runs fine, registered with vCenter1
- vCenter2 (connected to vsphere.local on vCenter1), VR/SRM installed, but SRM errors out trying to register with vCenter2
- vCenter3 (connected to vsphere.local on vCenter1), no SRM needed
- vCenter4 (connected to vsphere.local on vCenter1), no SRM needed

If I open vCenter1 and look at "site recovery", both SRM installations show up, but vCenter2's install has a red box below it saying that SRM isn't installed, and I get a 404 when trying to go to the SRM DR page (https://srm-vcenter2/dr)

Any ideas? I'm using the 8.2 OVF appliance for everything; they're not Windows installs. Are people using that sort of hub-and-spoke setup with PSCs? Or should I be making each datacenter's vCenter its own SSO master, and then joining it to the company AD?

sweet heavens, using AD binds will be so much easier

more resilient, too

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Thanks Ants posted:

So uh, NSX in your VMware-powered ':yaybutt:' platform would be able to prevent tenants from just assigning IP addresses that they haven't been allocated (in this case, the gateway) to their VMs and killing Internet access for an entire subnet, wouldn't it?

Yes, you could use SpoofGuard to do this.

Thanks Ants
May 21, 2004

#essereFerrari


Thanks, I was sure there must be something, but I don't know the product, so I couldn't pinpoint the relevant feature.

Actuarial Fables
Jul 29, 2014

Taco Defender
I'm in the process of re-configuring my home lab and have a networking question. My setup (as planned so far) is:

VM Host 1: CentOS (maybe) KVM on an AMD 1700X / 64GB, one 1Gb Ethernet port
VM Host 2: Windows Server 2016 Hyper-V on an Intel 4790K / 16GB, one 1Gb Ethernet port
Storage/VHD store: FreeNAS on an Intel E3-1220 v3 / 32GB, six 4TB drives in RAID10, two 1Gb Ethernet ports

All connected to an 8-port managed switch.

Because the storage server is the only one with multiple network interfaces, I wouldn't be able to utilize iSCSI multipathing on the two hosts, correct? I have the ability to use link aggregation between the storage and the switch, so if I'm not able to use multipathing, I assume it would probably be best to enable LAG.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

Actuarial Fables posted:

I'm in the process of re-configuring my home lab and have a networking question. My setup (as planned so far) is:

VM Host 1: CentOS (maybe) KVM on an AMD 1700X / 64GB, one 1Gb Ethernet port
VM Host 2: Windows Server 2016 Hyper-V on an Intel 4790K / 16GB, one 1Gb Ethernet port
Storage/VHD store: FreeNAS on an Intel E3-1220 v3 / 32GB, six 4TB drives in RAID10, two 1Gb Ethernet ports

All connected to an 8-port managed switch.

Because the storage server is the only one with multiple network interfaces, I wouldn't be able to utilize iSCSI multipathing on the two hosts, correct? I have the ability to use link aggregation between the storage and the switch, so if I'm not able to use multipathing, I assume it would probably be best to enable LAG.

Why even attempt to use iSCSI? Just aggregate the links and use NFS and SMB.

You can technically use MPIO with a single NIC, because the storage end has two NICs (so two paths: server NIC to storage NIC 1, and server NIC to storage NIC 2). I just don't think the juice is worth the squeeze when you can just go file instead of block.

Actuarial Fables
Jul 29, 2014

Taco Defender

TheFace posted:

Why even attempt to use iSCSI?

I want to try it out! :)

quote:

You can technically use MPIO with a single NIC, because the storage end has two NICs (so two paths: server NIC to storage NIC 1, and server NIC to storage NIC 2). I just don't think the juice is worth the squeeze when you can just go file instead of block.

Makes sense. I was reading up on aggregating links in FreeNAS and noticed that they recommended leaving the links separate for iSCSI if multipathing was desired. I doubt I'd notice any performance difference either way, but it's a lab, so hey.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

Actuarial Fables posted:

I want to try it out! :)


Makes sense. I was reading up on aggregating links in FreeNAS and noticed that they recommended leaving the links separate for iSCSI if multipathing was desired. I doubt I'd notice any performance difference either way, but it's a lab, so hey.

If you're gonna do iSCSI for the experience of doing iSCSI, definitely figure out MPIO. You're right that you're not really going to see much if any performance benefit, since your hosts are single-NIC, but it's worth doing if you never have before. It's pretty easy in both hypervisors you have going (actually, it's fairly easy in pretty much anything I've ever done).

TheFace fucked around with this message at 14:15 on Jul 16, 2019

Moey
Oct 22, 2010

I LIKE TO MOVE IT

TheFace posted:

Why even attempt to use iSCSI? Just aggregate the links and use NFS and SMB.

You can technically use MPIO with a single NIC, because the storage end has two NICs (so two paths: server NIC to storage NIC 1, and server NIC to storage NIC 2). I just don't think the juice is worth the squeeze when you can just go file instead of block.

You can pry iSCSI out of my cold dead hands.

To be fair, we are running dual 10GbE iSCSI links with MPIO from our ESXi hosts. Slightly different use case than what you quoted.

TheFace
Oct 4, 2004

Fuck anyone that doesn't wanna be this beautiful

Moey posted:

You can pry iSCSI out of my cold dead hands.

To be fair, we are running dual 10GbE iSCSI links with MPIO from our ESXi hosts. Slightly different use case than what you quoted.

Yeah, anything in our datacenter that isn't on vSAN is on some form of iSCSI SAN. I used to do a ton of FC and just prefer iSCSI now.

Pile Of Garbage
May 28, 2007



I miss working with FC; zoning was neat and fun :(

SlowBloke
Aug 14, 2017
I never considered FC to be interesting until I started working with it. Unlike iSCSI, it either works perfectly or everything is hosed. The native multipathing is a nice extra.

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

SlowBloke posted:

I never considered FC to be interesting until I started working with it. Unlike iSCSI, it either works perfectly or everything is hosed. The native multipathing is a nice extra.

Perfect linear link aggregation is really nice too; the idea that to add bandwidth you just add a port is awesome.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

FC is coming back around thanks to NVMe. You can do NVMe over Ethernet, of course, but FC-NVMe is what's getting most of the early support out of the gate. And it's gonna be a lot easier to fuck up an Ethernet NVMe network than an iSCSI one.

bolind
Jun 19, 2005



Pillbug
Is this the place to ask about docker?

Docjowles
Apr 9, 2009

bolind posted:

Is this the place to ask about docker?

There are probably more Docker nerds in the CI/CD thread. This thread tends to be for more traditional virt stuff.


CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
VMs within VMs.
