evol262
Nov 30, 2010
#!/usr/bin/perl

MC Fruit Stripe posted:

This, for what it's worth, is where I'm leaning. I'm not ready to put a rack of Cisco equipment between the two boxes to simulate separate locations, but that's going to be the end goal and another NIC for each box would need to be part of that, so I think maybe this is simply going to be a step in that direction.

That plus static route might just do everything I ask of it, good show!

Install pfsense on the 2nd lab machine. Connect the two with ipsec or openvpn. Use .10 on both.
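For the OpenVPN route, a two-node lab doesn't even need a PKI; a static-key point-to-point tunnel is about as small as it gets. A sketch, where the addresses, hostnames, and filenames are all placeholders:

```shell
# Generate one shared key and copy it to the second box
openvpn --genkey --secret static.key

# box A config (boxa.conf): tunnel endpoints 10.10.0.1 <-> 10.10.0.2
#   dev tun
#   ifconfig 10.10.0.1 10.10.0.2
#   secret static.key

# box B config (boxb.conf): connect out to box A
#   remote boxa.example.lab
#   dev tun
#   ifconfig 10.10.0.2 10.10.0.1
#   secret static.key

# then on each box:
openvpn --config boxa.conf   # (boxb.conf on the other side)
```

Once the tunnel is up, static routes over the tun interfaces get traffic between the two ".10" networks.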

evol262
Nov 30, 2010
#!/usr/bin/perl

H.R. Paperstacks posted:

I ended up getting a 12U cabinet off Amazon (it was wallmount so I had to add casters).

http://amzn.com/B00GAPUMDE

I love these, but half depth is so limiting.

evol262
Nov 30, 2010
#!/usr/bin/perl

H.R. Paperstacks posted:

Yeah that is a downside since you aren't fitting a standard rackmount server in there. Home lab wise it's almost perfect since people are rolling their own server and the cases are usually shallow like that 2U I have in there.

I had a lot of problems finding half-depth cases with ears on Newegg 4 years ago (last time I had a half-depth rack). Is there a good option these days?

evol262
Nov 30, 2010
#!/usr/bin/perl

phosdex posted:

I'm building a little home esxi server and have everything ordered but one last item. USB flash drive for booting off of. Are there any particular ones that are more reliable for this use? I'll buy a couple for backup, but right now I'm thinking Crucial because they are more likely to have built it start to end?

They pretty much all use the same chips. Crucial is no more or less likely to survive, really. Buy one cheapish one for now. They'll only get cheaper and better in the future.

evol262
Nov 30, 2010
#!/usr/bin/perl

Martytoof posted:

edit: Oh ehh, they require ECC ram. Nevermind.

This is the problem, really: ECC SO-DIMMs are the sticking point.

evol262
Nov 30, 2010
#!/usr/bin/perl

Dilbert As gently caress posted:

I thought it did just *experimental*, I could be wrong I know it works in VM player which is also free.

VirtualBox does not do nested virt at all. Neither does Hyper-V. Free has nothing to do with it.

evol262
Nov 30, 2010
#!/usr/bin/perl

Dilbert As gently caress posted:

Oh okay I thought virtualbox did, I was just mentioning VMware player is free?

VMware player's a much better option if you want nested virt, agreed.

Dilbert As gently caress posted:

Either way he should be able to play with hyper-v, so he can get a feel for the configurations.
https://www.youtube.com/watch?v=YrJP5Xg9etY
I believe you can start the Hyper-V management utilities from inside vbox, but not actually run any guests. That could be completely wrong, though: Hyper-V uses the same basic approach as Xen, and the Hyper-V management stack itself runs inside a VM anyway.

evol262
Nov 30, 2010
#!/usr/bin/perl

Martytoof posted:

So I've been hemming and hawing about building this vmlab server for the past month and change. I've heard bad things about AMD FX chips (not in this thread, granted, especially since we talked about this FX8320 situation a few pages back) but honestly I can't tell how much of this is just people's preference for intel because on paper it seems like the AMD will run about the same as an i5 but has more cores and is cheaper. There's also the fact that AMD FX chips have a higher TDP than the rough equivalent i5, but I'm not sure how much a 130W vs 95W TDP CPU will reflect in electrical costs in reality.

I mean right now I'm running an i7 920 which is like a 130W TDP which is on 24/7 (though minimal workload most of the day) and I haven't really cried at my electric bill or anything.

There's an FX8300 which is 95W, but you can literally not buy it in North America, it seems. I'm a little wary of going to eBay for a CPU.

So I don't know. Everyone is saying stick to Intel but I'm on a budget and in the meantime I've spent a month not simming poo poo because I've been obsessively checking prices on intel vs AMD stuff, etc.

I think I'm just going to buy the 8320 with a decent mobo and 16 gigs of ram and just loving start using it rather than spend my time browsing forums reading about wattage and electric costs and AMD vs Intel slapfights.

Honestly, it depends on what you're doing. Is CPU performance important to you? More important than price and density? Buy Intel. But it's virt. And core for core, AMD is about 25% cheaper even taking in the cost of motherboards and such, plus they tend to have better support for nested virt, PCI passthrough, etc.

I recommend AMD for high density (blades, OpenStack deployments), VDI, and clusters on the cheap; Intel for mid-budget, virtualizing databases or compute, or buying vendor 1U-2U kit. But you can't lose either way.

evol262
Nov 30, 2010
#!/usr/bin/perl

Martytoof posted:

Can someone recommend a budget dual PCIe gig NIC that is either supported by ESXi 5.5+ or has a VIB that can be easily added in?

Technically PCI would be fine too, but that would mean I have to relocate my internal USB drive :(

Intel. $25 on ebay

evol262
Nov 30, 2010
#!/usr/bin/perl

Stealthgerbil posted:

I was running freeNAS on its own computer and it seems to be good so far. I was just wondering how building a basic PC would be versus buying a synology box. If I can build a PC for $200 + the raid card, that is pretty much the same as the synology box, it would be pretty awesome.

Also I saw these cheap 4gb fiber cards http://www.stikc.com/QLogic-QLE2460-HBA-Adapter-PF323

Would that be a cheap way of getting speeds of greater than 1gbit for my home lab setup? From what I read about doing any sort of high availability virtual machine setup, 1gbit ethernet is just not fast enough.

Get a managed switch and set up bonding.

Use multipath for iSCSI (don't bond it) on storage VLANs with jumbo frames (max MTU).
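On ESXi, the jumbo-frame and port-binding side looks roughly like this. vSwitch1, vmk1/vmk2, and vmhba33 are hypothetical names from an example setup; check yours with `esxcli iscsi adapter list`:

```shell
# Jumbo frames on the storage vSwitch and its vmkernel ports (MTU 9000
# has to match end to end: switch ports and storage target too)
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
esxcli network ip interface set -i vmk2 -m 9000

# Bind each storage vmkernel port to the software iSCSI adapter so the
# path selection policy can multipath across them (the "don't bond it" part)
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```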

evol262
Nov 30, 2010
#!/usr/bin/perl

smokmnky posted:

So basically "you can't"? I'm not sure what "force the flash/html" means.

It doesn't run well in WINE or Mono. Virtualize Windows, or use the web client. Those are basically your options in a professional environment anyway, unless the VIC has stopped constantly updating and requiring admin permissions you probably don't have.

evol262
Nov 30, 2010
#!/usr/bin/perl

Mr Shiny Pants posted:

So i just redid my NAS and i've got a TS440 running 2012R2. I want to export Zvols for use as VM disk. Using Hyper-V gen 2 VMs it is possible to pass-through a raw disk.

Anybody know what the preferred blocksize of a zvol and the corresponding NTFS blocksize is?

Use iSCSI LUNs instead. The ideal block size depends on workload, though. What are the VMs doing?
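If you do go zvol-backed LUNs, keep in mind volblocksize is fixed at creation time and can't be changed later. A sketch, where the pool/dataset names are made up:

```shell
# 16K is a reasonable middle ground for mixed VM workloads; go smaller
# (8K) for database-heavy guests, larger (64K+) for sequential/streaming.
# 'tank/vms/win2012' is a hypothetical dataset name.
zfs create -V 100G -o volblocksize=16K tank/vms/win2012
zfs get volblocksize tank/vms/win2012   # verify after creation
```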

evol262
Nov 30, 2010
#!/usr/bin/perl

Chickenwalker posted:

I'm going to set up a lab for the CCNA and MCSA at work and start coming in on the weekends to study. What sort of hardware is going to benefit me most, or is it possible to use something like an old ProCurve I have lying around and GNS3 to full effect?

GNS3, yes. The Procurve, maybe not for a CCNA (which, while it teaches things well conceptually, is also a bit Cisco-ish and pairs best with a Cisco CLI for learning)

evol262
Nov 30, 2010
#!/usr/bin/perl

Martytoof posted:

Let's go with the more hardware side of things:

Anyone know where I can get four 20U or less square hole rack supports? I'm building a nice wooden enclosed rack that I'm going to put in my den which will fit in with some furniture. I've got the wood and plans picked out, but I'm having trouble sourcing the actual rails. I don't really need it to be 20U but if I buy 20U i have room to grow/shrink my design.

Your local metal shop. Honestly.

evol262
Nov 30, 2010
#!/usr/bin/perl

kiwid posted:

I've been given $4000 (maybe $5000) towards hardware (no licensing costs in that) to build a home lab from the company I work at. I want to build a 3-2-1 vmware solution.

What should I be looking at? I figured a QNAP for the iscsi storage, a couple of cheap gig-e switches and 3 white box hosts. Any hardware recommendations? Also, must be hardware compatible for vmware 6.0.

Whiteboxes are fine. e3 Xeons if you want, but there are some reasonable (and cheap) workstation/"server" options from Dell and Lenovo. AMD's stuff is very reasonable for virt, especially on a budget, and for a lab. Get something with VT-d (intel) or IOMMU/AMD-Vi (AMD) support so you can pass through devices if you want to.

I'd use a MicroServer for the storage, personally. QNAP performance is going to suck at that budget.

Get fanless switches from your vendor of choice. They all have them, eBay is reasonable, and you should be able to get HP, Dell, or Cisco (Cisco's more, especially fanless, but you should still come in under $750 for 2 switches, and it's been a while since I looked). You may not want it right now, but get switches with LACP/802.3ad support. And a real (non-web) console.

evol262
Nov 30, 2010
#!/usr/bin/perl

kiwid posted:

Thanks.

As for the whitebox, I think I'm going to build this: http://www.ryanbirk.com/shuttle-sz87r6-vmware-esxi-5-5-home-lab/

The only thing I don't know is if it will work with VMware 6.0. Anyone running these?

Get a real case. For "serious" labbing you're going to want more than 4 NICs (6, probably), plus the ability to add more for passthrough if you ever get there. That means, probably, more than 2 PCIe slots if you're already burning one on a dual/quad-port NIC. 3+ would be good.

evol262
Nov 30, 2010
#!/usr/bin/perl

kiwid posted:

Is vSphere 6.0 compatible with the AHCI controller on the SuperMicro Mobos? If so, I could direct passthrough each device to a freenas VM, right?

It doesn't matter. If you're doing passthrough, all that matters is FreeBSD/FreeNAS having support for the device, and the CPU+motherboard supporting VT-d/IOMMU passthrough.

evol262
Nov 30, 2010
#!/usr/bin/perl

kiwid posted:

That Xeon has VT-d, so would I be able to direct pass the block SSD device to a freenas host then? The other guy is saying no?

You could pass the entire controller through, which means all of its disks go to the VM, not a specific disk; passthrough works on PCI IDs. You can pass through the block SSD with RDM (if that still works), but it's terrible practice and you shouldn't.

kiwid posted:

just so I know I can pass the devices through direct to the VM without worrying about the mobo compatibility

That's not how it works. Your motherboard (chipset) must support VT-d as well as the CPU for passthrough to work at all, and the vendor must have actually enabled this support in their firmware. In this case, all of these things are true. But they could just as easily not have been. In the future, you can't rely on VT-d working just because the Xeon can do it.
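A quick way to check all of this on a Linux box before you commit to a passthrough plan. This is a sketch assuming the standard procfs/sysfs layout; both checks can pass on the CPU side and still fail because the board or firmware didn't enable it:

```shell
# Does the CPU advertise virt extensions (VT-x/AMD-V)?
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  cpu_virt="yes"
else
  cpu_virt="no"
fi

# Did the firmware/chipset actually expose an IOMMU (VT-d/AMD-Vi)?
if [ -d /sys/class/iommu ] && [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
  iommu="yes"
else
  iommu="no"    # CPU may support VT-d, but the board/BIOS didn't enable it
fi

echo "CPU virt extensions: $cpu_virt"
echo "IOMMU exposed:       $iommu"
```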

evol262
Nov 30, 2010
#!/usr/bin/perl
Half/short-depth servers may be tight, and you're almost certainly going to give up niceties like "hot-swappable 3.5 inch bays" or "doing any cabling without pulling the whole rack out". You'll probably just want shelves, really. Cooling should be OK; worry about noise.

evol262
Nov 30, 2010
#!/usr/bin/perl
Pretty much anything will do that kind of basic-level virtualization these days. Unless you have stringent hardware requirements, just shove a bunch of memory into anything made in the last 3 years.

evol262
Nov 30, 2010
#!/usr/bin/perl

Ciaphas posted:

I was led to believe that allowing multiple vlans on a single port (in this case, vlan 4 tagged and vlan 5 untagged) was the definition of trunking. Was I wrong and that it means something else?

That's a port with a PVID set, basically.

Networking terminology is kind of brand-specific. What you want is for all frames passing that port to have a tag.

Connect your cable modem to some port. Set that port as untagged, with a PVID of some VLAN (2, whatever).

Port 3 (your NUC) should be configured to accept tagged frames with that VLAN. It should also be configured to allow another VLAN (3, for example). Configure a vSwitch which listens for VLAN 2 (pulling this out inside a vswitch will make your initial configuration easier -- feel free to go wild once it works), and pass that to your pfsense VM.

Create another vswitch for your other VLAN (3), and assign ports from that switch to your other VMs. Also assign a port from that to your pfsense VM, so you can access it.

Configure the management network on ESXi's management console (the yellow and gray/black screen) to use VLAN3.

Set ports 4-8 as untagged, with a PVID of 3.
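On the ESXi side, the steps above come out to roughly the following. Note this sketch uses a single vSwitch with two VLAN-tagged portgroups, which is equivalent to the two-vSwitch layout described above; the portgroup names are made up, and VLAN IDs 2/3 follow the example:

```shell
# vSwitch with a WAN portgroup carrying VLAN 2 to the pfSense VM
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard portgroup add -p WAN -v vSwitch1
esxcli network vswitch standard portgroup set -p WAN --vlan-id 2

# LAN portgroup on VLAN 3 for the other VMs (and pfSense's LAN side)
esxcli network vswitch standard portgroup add -p LAN -v vSwitch1
esxcli network vswitch standard portgroup set -p LAN --vlan-id 3

# Management network on VLAN 3 as well
esxcli network vswitch standard portgroup set -p "Management Network" --vlan-id 3
```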

Ciaphas posted:

As for PVID, I'm led to understand that if I don't set, say, port 4 (my main PC) to VLAN 5 any ethernet packets it generates don't get tagged VLAN 5 and so can't communicate with anything else on the network--NUC-with-router included. Am I wrong again?

That's correct.

evol262
Nov 30, 2010
#!/usr/bin/perl
Turtles all the way down. No reason to switch if it works. vSAN and Gluster will both choke on one disk without much memory. Ceph is a no-go for that use case.

evol262
Nov 30, 2010
#!/usr/bin/perl
Are you actually using VLANs?

evol262
Nov 30, 2010
#!/usr/bin/perl
Labs are for breaking/testing things you might want to learn without wrecking poo poo at work or where it matters. KVM has less hold in the workplace in general. But if you have questions, I run KVM everywhere, so feel free to ask.

evol262
Nov 30, 2010
#!/usr/bin/perl
VMware (in general) has a lot more resources on Google if you wanna ask a quick question, and the user interface is somewhat more friendly for doing odd stuff (which you'll need to do to virtualize OSX)

The hardware support can be finicky compared to KVM, though, which will pretty much run on any crapbox which has hardware virtualization support

evol262
Nov 30, 2010
#!/usr/bin/perl
KVM (through libvirt) can trivially create/destroy/clone and export/import configuration through virsh, virt-manager, kimchi, or whatever.

It does not do things like HA. At all. Because KVM is essentially a driver, and libvirt sits on top to say "here's how you access storage/etc". To make it do things like HA, you can either set up obnoxious resources in Pacemaker, or use an actual product backed by KVM (oVirt, Proxmox, etc.)

KVM is comparable to vmkernel, not vSphere.
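For the create/destroy/export part, the virsh workflow looks something like this; 'vm1' and 'vm2' are placeholder domain names:

```shell
virsh list --all                      # see all defined domains
virsh dumpxml vm1 > vm1.xml           # export the config as portable XML
virsh shutdown vm1                    # graceful stop (destroy = hard poweroff)
virsh undefine vm1                    # remove the definition, keep the disk
virsh define vm1.xml                  # re-import the config
virsh start vm1                       # and boot it again

# cloning is a companion tool, not virsh itself:
virt-clone --original vm1 --name vm2 --auto-clone
```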

evol262
Nov 30, 2010
#!/usr/bin/perl

Nativity In Black posted:

Contrary to the (now ancient) OPs you can buy used Dell servers stupid cheap on ebay these days. If I have a rack sitting in the back room, is there any reason I should spend money on a c6100 or r710 just to gently caress around with? Seems like you can get a decent amount of cores plus a sizable amount of memory for <$300

eBay servers were always cheap. They're also loud power hogs. Have you heard a C6100?

If you can live with that, great, get one.

evol262
Nov 30, 2010
#!/usr/bin/perl

Mr Shiny Pants posted:

The bigger servers are pretty nice, a DL380 Gen 8 is pretty quiet. 1U servers are usually loud, 2U is much better.

The Nexus switches we have sound like they are always on the verge of taking off.

2U is tolerable in a garage or basement. You'll probably hear 1U through the entire house. The "3 systems in 1 chassis" like the c6100 are some of the loudest I've ever heard.
