evol262
Nov 30, 2010
#!/usr/bin/perl

Sepist posted:

This is the thread for setting up home labs on a goon budget; where cheeto stained fingers meet enterprise networking
I really wish this weren't so Cisco-focused, honestly. Home labs for people studying for various server/VMware certifications are just as popular, but I'm also excited to see that GNS3 is so much friendlier than it used to be.

Sepist posted:

Systems: System can be emulated with VMware ESX or Workstation / Microsoft Hypervisor / Oracle VM VirtualBox. You would install one of those products and from there you can install virtual machines. You would need to :filez: an ISO of your target operating system and install it on a blank virtual machine. If you don't have a spare server/workstation to install ESX on, you can run ESX inside of a VM like you would any other virtualized OS, then build VM's into that, effectively making the inception version of server virtualization
Nitpicky, but ESX is a dead product. ESXi is free (forever) with registration. It's really easy to reinstall every 60 days to keep full functionality (including vSphere) if you're cheap. Secondly, you don't need to :filez: anything if you use Linux.

Sepist posted:

V. Server VM Gotchas

I don't know of any so this needs to be updated
Clock drift on RHEL5 and older versions of Windows. Not a problem if VMware Tools are installed (or with elevator=deadline on the kernel line in Linux), but it's potentially an issue otherwise.
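
For the lazy, a bare-bones sketch of adding the kernel parameters on a RHEL5 guest (the kernel version and root device are placeholders, and divider=10 is the clock-related tweak VMware used to recommend for RHEL5 guests, from memory, so verify against the KB):

code:

# /boot/grub/grub.conf -- append the options to the kernel line
kernel /vmlinuz-2.6.18-xxx.el5 ro root=/dev/VolGroup00/LogVol00 elevator=deadline divider=10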

Sepist posted:

For the non-networking people, I guess you could just set up a shitload of Windows 2008 R2 servers and make yourself a AD / DNS / CA authority, tie all your home computers to the domain and lock mom out of farmville when you get grounded.

Linux. You absolutely do not need Windows for DNS, DHCP, LDAP, and Kerberos (though you obviously do for AD), but hey.
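
The DNS/DHCP half of that, for example, is a few lines of dnsmasq (the domain and addresses below are made up; LDAP and Kerberos take more setup, but the point stands):

code:

# /etc/dnsmasq.conf -- lab DNS + DHCP in one daemon
domain=lab.example
local=/lab.example/
expand-hosts
dhcp-range=192.168.10.100,192.168.10.200,12h
dhcp-option=option:router,192.168.10.1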

Dilbert As gently caress posted:

virtual freenas ZFS server 4.5GB ram, 20GbZlog and 20GB L2ARC
1GB RAM per TB of ZFS storage, right? That's pretty much optimal without dedupe.

Dilbert As gently caress posted:

DC 2008R@
CA/DNS/DHCP/AD/FS/SQL
vCenter
Vmware services
It's bad practice to run SQL Server on the same box as an AD controller.

Why is one of your ESXi boxes addressed by IP when you have DNS?

evol262
Nov 30, 2010
#!/usr/bin/perl

SamDabbers posted:

Ahem, Samba4 :colbert:

I was under the impression that you still need a Windows environment to do anything useful with AD, unless the Samba guys have created a meaningful way to edit/apply GPOs and anything else which makes AD practical.

evol262
Nov 30, 2010
#!/usr/bin/perl

SamDabbers posted:

I suppose I should clarify that you don't need Windows Server in your environment to do AD if you use Samba4. You're correct; you still need a Windows client with the server management tools to administer the Samba4 Domain Controller. Then again, what's the point of using AD if you don't have a Windows client to manage in the first place?

For some people, it's easier to just use AD than to get DNS, DHCP, LDAP, and Kerberos all on the same page. I mean, I don't really see the point either, but it happens.

Powercrazy posted:

If you are planning on making a network lab specfically for networking, then creating an NMS server is a good first step, in the enterprise as well as the lab.

FTP/TFTP Server
Logging Server
NTP Server
DHCP/DNS/etc.
Management Portal Server/Console Server

All of these can and should be the same device and basically think of it as the entry point to your network. You will consolidate all logs, backup all images, and use it as your one stop shop for management and learning. Also just learning how to correctly setup and deploy all of those features/services is a great way to start to get into the more interesting parts of IT.

Ask yourself honestly: "does my network need a bastion host?" The answer is probably no. Even if it does, there's no reason for DHCP/DNS to be there. Syslog should be inside the network (not on a bastion). [T]FTP should be inside the network unless you're providing public FTP services (it's 2013, don't do this).

evol262
Nov 30, 2010
#!/usr/bin/perl

Powercrazy posted:

I'm talking specifically about a lab setup. If you are doing your CCNA and you are planning on going further, then learning what all those services are and how to deploy them is a good idea.

Obviously in the enterprise many of those services will be separate especially as the environment scales. For a home lab, all of that stuff can be deployed on a single router.

I don't see a reason why these services wouldn't be separate in an environment predicated on virtualization.

evol262
Nov 30, 2010
#!/usr/bin/perl

smokmnky posted:

for a testing and learning lab only I don't see any reason not to just work on everything. Telling people not to try out t/ftp or anything else just because "it's 2013..." is short sighted. The point is to learn not conform to your one specific thought process.

I'm not saying not to try out [T]FTP. I'm saying not to make it accessible on the same machine as the console server. Keep old, insecure protocols in their own DMZ in 2013. Set up a tftp helper for PXE (which is good practice for the CCNA anyway).
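
If you want the tftp helper without standing up a full DHCP server, a dnsmasq sketch (root path and subnet are assumptions, adjust to taste):

code:

# dnsmasq as a PXE/TFTP helper on the lab segment
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=pxelinux.0
# if something else already hands out leases, run proxy-DHCP instead:
# dhcp-range=192.168.10.0,proxy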

Powercrazy posted:

Totally a possibility. Though then you run into the issue of access to your lab being dependent upon the function of the lab. I'd prefer the ability to completely isolate the lab devices, but still being able to access each one. So for example even though I'm running vsphere I wouldn't want to manage vsphere through a VM hosted on vsphere.
A Raspberry Pi is $35 and will handily host these services, giving you that segregation for $35 and about 5W, as well as being solid-state and coming straight back up the moment power is restored after an outage.

evol262
Nov 30, 2010
#!/usr/bin/perl

Indecision1991 posted:

This is sorta what I want to run at home but with a lot less equipment. I would like to virtualize as much as possible since I am taking some vmware classes and want to keep up the momentum. I was thinking of a single server with beefy specs to use for ESXi, I know I can find 3+ year old equipment with decent specs for 400-500, I may even be getting a free server from an old boss. The networking is my weakest point but I do plan to buy some routers/switches within the next 6 months for practice. Do you think it would be a good idea for me to host a small single server with 2 routers and 3 switches?All for practice of course.

I always think two servers with lower specs (but lots of memory, which is cheap) are better than one, so you're not screwed with a hardware failure and you can play around with clustering without nesting ESXi instances.

But, yes, an environment like that (2 routers, 2 servers, 2 switches) is a good idea.

evol262
Nov 30, 2010
#!/usr/bin/perl
Unless you've done this before and know what you're doing, you almost certainly do not want a 1U with a 5 year old Xeon, no matter how enticing it looks on paper, especially if you want to do nested virt (which has improved dramatically from the hardware side in the last 5 years).

evol262
Nov 30, 2010
#!/usr/bin/perl

Indecision1991 posted:

Good point, I am simply looking at the specs and not thinking about the improvements that current hardware has over older stuff. I am just confused I guess, I would be fine using a white box, i have a ton of old drives i can use including an ssd. At the same time having some old refurbished systems to play around with still sounds, to me, like a decent idea.

Edit: should have added onto my last comment, I cant delete this one so I apologize for the double post.

It's loud, and it sucks power. It's loud. I cannot emphasize enough how piercing 40mm fans are in a home environment. While you're getting 8 cores and 72GB of memory for the cost of a Haswell i5 and 32GB of memory, really ask yourself whether the tradeoff in noise, heat, and power consumption is worth it.

evol262
Nov 30, 2010
#!/usr/bin/perl

Indecision1991 posted:

Budget is around 600-700 for a white box since I can buy components over time. If I were to buy a refurb server I would say a budget of 500 since its a single big purchase.

Or buy:

i5 Haswell - $190
32GB memory - $108 x2
Motherboard - $70

You have drives. You probably have a case/PSU as well. If not, it's ~$50 extra. So it's $500 for new, quiet hardware. And you probably don't need 72GB of memory anyway. Especially not in a 1U.

evol262
Nov 30, 2010
#!/usr/bin/perl

Dilbert As gently caress posted:

Pro's very similar to my setup that I visio's
Con's
Bit more expensive and a bit bulkier, also would need a video card to setup ESXi temporarily

More expensive because it includes a SSD and 2 disks. And it has two more cores. You could easily replace that with something FM2 which would draw less power and provide video for the same price.

evol262
Nov 30, 2010
#!/usr/bin/perl

Return Of JimmyJars posted:

I'm going to be the voice of dissonance and say that as long as you power down your lab when you're done getting that eBay Dell is fine. The idea behind the lab is to build experience and you're not going to get familiar with how a real bare metal server is setup by building a generic beige box. Server hardware and the consumer hardware people on here are recommending are radically different. You can also tuck your lab into the garage or basement if the noise is that obtrusive.

If he doesn't have a garage or basement, he's screwed.

A "generic beige box" is still a bare-metal setup. It's not ESXi on ESXi.

Server hardware and consumer hardware differ very, very little these days, unless you think "ECC instead of non-ECC; SAS instead of SATA" is "drastic". The big difference you'd see is using an OEM-customized ESXi image that has drivers built in. Big whoop. The VMware experience is just the same. A lot of whiteboxes will do hardware monitoring out of the box with ESXi.

This whole "just turn it off" thing is insane. Once you get reasonably used to having an AD environment, you're going to tie it into the rest of your network. Then what? Leave your 1U running all the time?

evol262
Nov 30, 2010
#!/usr/bin/perl

World z0r Z posted:

For what it's worth the G7 and up HP proliant 2U boxes like a DL380 are really really quiet for what they are. They might spin up for 5 seconds on boot but they are no noisier than a video card playing games.

Terrible-config DL380 G7s are still 3 times as expensive as a Haswell i5 build.

evol262
Nov 30, 2010
#!/usr/bin/perl

Ron Burgundy posted:

Still not possible to nest 64-bit guests under VMWare Player or Virtualbox with ESXi hey? Guess that rules out Server 12, seems to only come in 64 flavour.

What? Yes, it is. You can nest virtualization-capable guests in Player or KVM. I don't think you can in VirtualBox.
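
On the KVM side it's a module option (Intel shown; swap kvm_intel for kvm_amd on AMD):

code:

# is nested virt already on?
cat /sys/module/kvm_intel/parameters/nested
# turn it on persistently, then reload the module
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel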

evol262
Nov 30, 2010
#!/usr/bin/perl

klosterdev posted:

I've got a copy of Packet Tracer lying around due to some Cisco classes I took a while back. Out of curiosity, is it illegal to distribute the software? I'm asking because PT has no DRM to speak of.

Are you daft?

See here.

quote:

The Packet Tracer software is available free of charge ONLY to Networking Academy instructors, students, alumni, and administrators that are registered Academy Connection users.

"It has no DRM so it must be free for distribution" is an incredible argument.

evol262
Nov 30, 2010
#!/usr/bin/perl

Swink posted:

The dude reckons you need an i7 Nehalam core.
:hurr:
gently caress that guy. It's not even remotely true on KVM, and I don't see why it would be on VMware, either. It performs better with EPT (and the list of processors with EPT includes loving Celerons), but it's not a requirement. VMware has a checkbox now. You don't need to manually edit configs.
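
For the record, the old manual edit was a single line in the VM's .vmx (setting name is from memory, so treat it as an assumption; newer vSphere clients expose the same thing as the "expose hardware assisted virtualization to the guest" checkbox):

code:

vhv.enable = "TRUE"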

evol262
Nov 30, 2010
#!/usr/bin/perl

Tekhne posted:

^^^ Good post, but I don't understand the hate against refurb server hardware for a lab. I'm assuming your post is at least in partial reaction to my previous one since you mentioned a Dell without HDDs. While I did enjoy your sperg on installing the hypervisors, I too can do them in my sleep. That being said there isn't much reason to put someone down over something like that. I'm sure no one gives a poo poo that you can tie your own shoes (I'm assuming here) but I bet you were pretty proud the first time you did it. One thing I didn't mention is I already have a FreeNAS setup with bonded NICs. I also have two PCs with i7 3370's and 16GB of ram. These are my gaming machines and like you suggested in your thread I've been using Workstation to build up my lab on top of them for quite a while. Its been working fine, however an upgrade was in order as I am wanting to get into performance testing, DirectPath scenarios, automation, etc.

Due to these needs I wanted to move away from the inception build and have the hypervisor on the physical hardware. The obvious choice was build up two new whiteboxes and dedicate them for my lab. The cost would have been roughly $1100 or so. Once I found the C6100 for $770 and that it contains four independent servers within its chassis, I was sold. Sure the L5520 line was released in 2009, but its got plenty of power for what I am trying to do. The power consumption is low and the noise / heat won't be an issue as I have a dry basement that could use some heating in the winter.

I really don't even have the words. I work on RHEV/oVirt, from home. I have a lab. I have L5520s literally sitting on the floor because it's not worth the power bill and added runtime of the AC to have them on. I have a full-height rack in my office and it's not worth my time to have L5520s racked up because IPC is horrifyingly low, nested virtualization on them sucks, and performance is worse than my W530. To some point more cores buys you more vCPUs without hammering on interrupts, but I'm not sure why the next step "modern hardware" is automatically "5 year old decommissioned hardware". Hint: they're not using it any longer for a reason.

For the cost of your C6100, you could have 2 hex core Visheras with 32GB of memory each, which will support advancements in virtualization over the intervening 4 years (there are a lot), cost you 1/4 of the power bill, generate 1/4 of the heat, 10% of the noise, and generally run circles around those L5520s on anything other than distributed compiles and cluster databases (but realistically, you probably don't have the IOPS to make either of those relevant). How is it a "gem"?

evol262 fucked around with this message at 17:31 on Aug 21, 2013

evol262
Nov 30, 2010
#!/usr/bin/perl

smokmnky posted:

So I totally agree that being able to install ESXi on a box isn't that impressive just like being able to install Windows 7 isn't either but I would like to know once you have it installed and a few VMs running what would you consider an "accomplishment" in regards to actual VMWare work? Is it getting them networked and talking to each other? I've been "deploying" VMs for a little while now but I'd like to get some more knowledge and working into what makes a good VMWare admin

What makes a good VMware admin is subject-matter knowledge of:

SANs (FC and/or iSCSI), including best practices for multipathing, how to handle LUN masking and replication, etc
Scripting -- PowerCLI is the standard, but you can use anything you want (see the sketch after this list)
Systems Administration -- you're almost certainly going to end up hands-on with some of your VMs, and you should be comfortable in any OS running on your VMware environment, especially sysprep if you deal with Windows
Networking -- Know when to use link aggregation and when not to. Understand VLANs and how they work, as well as how to segment your network and troubleshoot problems.
Disaster recovery -- enough said; large VMware environments almost always have a DR site somewhere, and you should be familiar with scoping the required resources and setting up processes to ensure that a hot (or cold, depending on your environment) environment is ready
Performance tuning -- know how the VMware scheduler works, and when 2 vCPUs are actually better than one. Know how dense you can make your environment. Get a handle on how many IOPS you need.
Resiliency -- keeping critical services up through failures. Nobody wants your virtualized AD controllers to die.
VDI -- plays into performance tuning/density/systems admin
Imaging -- fading, but "golden images", templates, linked clones, and other ready-to-go images are still important.
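
A minimal PowerCLI sketch of the kind of thing I mean (the vCenter name is made up):

code:

Connect-VIServer -Server vcenter.lab.example
Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } |
    Sort-Object MemoryGB -Descending |
    Select-Object Name, NumCpu, MemoryGB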

Nobody is going to hand you a configured environment and say "plug in your servers, assign these addresses, and collect a paycheck". Realistically, you'll help design the environment and administer it on a day-to-day basis, probably including the guests. A good virtualization admin has (or has had in the past) a hand in every pot.

evol262
Nov 30, 2010
#!/usr/bin/perl

Agrikk posted:

It's my opinion that home labs, when done properly, are the most overcomplicated and overbuilt environments ever on a per user basis. :)

I was bitching in the daily poo poo thread about how complicated and cluttered my lab had become so I tore it all down, set it up in its current configuration and vowed not to touch it until my MRTG installation fills up the Yearly Graph (1 Day Average).

Except that yesterday I ordered three FC HBAs and am toying with the idea of building a new storage server around Windows Server 2012 R2 and converting everything from iSCSI to 4GB FC.

And maybe some Infiniband for giggles.

gently caress.

iSCSI over IPoIB. gently caress FC.

evol262
Nov 30, 2010
#!/usr/bin/perl

Agrikk posted:

IB is next.

I'm comfortable on FC so I want to get familiar with a new technology (tiered storage in Server 2012 R2) while refreshing on FC. Plus FC gear (HBAs and switches) is a lot less expensive than IB gear.

But why the FC hate? It's not that complicated to manage and has been bombproof in all of my past deployments. If I had any complaint about it would be the lack of insight into actual traffic utilization over your FC fabric that I'm not sure has been resolved.

I actually like FC. But FCoE is abortive, and ethernet just isn't going away. I like the idea of having a segmented storage network on a separate protocol layer, but the reality is that almost all the advantages of FC can be accomplished with iSCSI, MPIO, and VLANs. Not that FC is bad, just that it's dying. FC shops will stay FC, but new deployments will probably be iSCSI until it takes over the world. Maybe just my opinion.

evol262
Nov 30, 2010
#!/usr/bin/perl

Docjowles posted:

Basically, if your goal is running "VM's inside VM's", VMware Player owns. All it requires is Windows or Linux as your host OS (no Mac support), and a 64-bit CPU released within the last few years. It's very bare-bones feature wise but it does the job.

I should mention that KVM can also do this. And Parallels can, I think. So you have the ability to do it on every major OS. I'd recommend using KVM's nested virt over Player's on Linux (just because everything Workstation/Player related is a huge PITA compared to KVM), but eh.

evol262
Nov 30, 2010
#!/usr/bin/perl

Docjowles posted:

I'm sure there are easier and more power solutions. However, as far as I know, Player's the only product that satisfies my two very specific goals of being 1) free, and 2) running on Windows. Longer term I definitely want to build a dedicated lab box, but I can build all the "lab" I need right now in VM's for free, which is good enough for me.

No, I mean, Player's a great solution for Windows.

Nested VM support:

Windows
  • VMware Player
  • VMware Workstation
Linux
  • VMware Player
  • VMware Workstation
  • KVM
OSX
  • Parallels
  • VMware Fusion

evol262
Nov 30, 2010
#!/usr/bin/perl

kill your idols posted:

Thinking about this HP ProCurve 1810G-8 v2 for $135 shipped. Worth the upgrade from my NetGear GS108T?

Yes. But 1810-24Gs are only $50 more. A PowerConnect 5324 plus (replacement) quiet 40mm fans costs less and is more capable in every possible way.

evol262
Nov 30, 2010
#!/usr/bin/perl

Pretty much, yeah. And update the firmware first thing so it actually works with IE.

It's also really goddamn loud (like a lot of switches). Get 2 quiet 40mm fans and 4 solderless quick splices and fix that (it's the usual Dell problem where hot and neutral are actually reversed for no reason). The biggest win of the 1810-24G is that it's fanless, but it's honestly much less capable than a 5324.

evol262
Nov 30, 2010
#!/usr/bin/perl

alo posted:

So this isn't homelab material (I already have a Norco with an E3 Xeon loaded with m1015's and drives -- it's terrific and quiet)... but...

At work I need to set up a lab. Normally the answer to this is "just get some older decommissioned machines," but in this case, I really don't have any older machines*. So lets set a budget of 1500 dollars. I'm looking for some storage (probably going to be running some Solaris derivative for easy iSCSI/NFS), plus two ESXi hosts. This is a bit different than the home setup, since I don't pay the power bill or care about the noise, since it's going in a cute little half rack in the corner of our DC.

So lets start out with two C1100's at $430 apiece (72gb ram, with rails, no drives). There are 32gb models for a few bucks less, but meh. Is there anything that's going to beat that, in a rack form factor?

If I go with those two, I'll have 640 left over for some storage. I'd like to throw at least 1 SSD in there as well as a few 3.5 inch drives. What's a good choice here? As noted below, I do have some older machines, if there's a particularly sweet DAS or SAS expander setup.

Oh, and I have a pile of terrible Dell 5224 switches sitting around, is there anything better in the ~200 dollar range?

Tell me about your dream setups that your wife/mom won't let you have.

* I have a pile of old 2950's (original, not iii) sitting around with 4gb of ram in them... I'm not really looking to buy loose ram for some even crappier Xeons.

The question at this point is really "what are you going to do with your lab?"

Tekhne posted:

I purchased the C6100 for my home lab recently and am very happy with it. Mine has 4 nodes each with 24GB ram and dual Xeon L5520's. This particular model has 12 HDD slots - normally three go to each node, but with a little modding you can have all 12 go to one node. In my case I have all 12 going to a node running FreeNAS for now. There are some trays that you can buy for a few bucks to fit an SSD into it. This particular seller accepted my offer of $769.99 (with some haggling, so start low). Additional info on this model can be found here. As for power usage, all four nodes on and idle use 121.1 watts.

Noise at load: 77dba. That's a car driving 65mph passing you at 25 feet. Or a vacuum cleaner.
Noise at idle: 66dba. Standing next to a running dishwasher. Cash registers working.

Your offer of $769 would have purchased two current generation 8 core systems with 24GB of memory each. In some respects, it's "half" of the C6100. Except for the noise. And the IPC. And the bus speed. And the memory speed. And...

I'm glad you're happy. I just wish people would stop recommending recycled 5-year-old server kit for home labs.

evol262
Nov 30, 2010
#!/usr/bin/perl

Tekhne posted:

Not accurate in the least. I'm not sure why you feel the need to criticize my purchase every chance you get. Every post you make on this subject is so full of assumptions and inaccuracies its ridiculous. If your criticisms were actually based on fact, then they might be valid. For starters, he was asking for recommendations on a work lab, not a home lab. He specifically stated he didn't care about power or noise. He also stated he was looking at the C1100's and wanted opinions on if there are any better solutions out there for the price that fit into a rack. Additionally he mentions that he'll need to create a storage array. Did I not address all of those with my post? Sure there are plenty of other options, but you have yet to actually recommend one that fits his requirements.
The comments about noise weren't aimed at alo at all, which is why he wasn't quoted when I commented on the noise, and why I asked "what are you going to do with your lab?", because the recommendations vary after that. One SSD and a few 3.5" drives paired with two C1100s is going to leave you I/O starved. A full MD3000i with one PE2900 is going to leave you way overcommitted on CPU. It's a balancing act.

Tekhne posted:

Secondly the noise is very minimal, certainly not like standing next to a running dishwasher. In fact I just measured it with Noise Meter on my Android phone. Not the most accurate reading I'm sure, but from exactly two feet away from the back of the chassis, it measures 32.5dB. 5 feet away at my desk it is 28.7dB. My ultimate plan is to put it in my rack in the basement, in which case I wouldn't hear it at all.
The noise levels came from the link you gave about 'additional info on this model'. Dell's spec sheet agrees. 30 decibels is literally whisper-quiet.

I'm not invested in getting people to buy/not buy C1100s, C6100s, or whatever except that I've run 1U and 2U equipment at home, and it's not a pleasant experience. It looks really good on paper, because you can get 8 cores and 72GB of memory in 1U, but that's far more capacity than the vast majority of home users need (labs included), and all the warts of server kit are hard to get around unless you have a rack in the basement or the garage. 40mm fans are often audible through floors even if it's in the garage.

Tekhne posted:

Additionally this is not a five year old server kit. In fact this is a Gen 11 server that first came out in 2010. My particular server has a build date in 2011. Most enterprises don't replace their servers but every 4-5 years. Considering this is a 2-3 year old server, I think it will manage. Yes the Xeon E5520 was launched in 2009, but it is still supported by Intel and does the job just fine.
The L5520 was released in Q1 '09. Your server may have been built in 2011. Or 2013. The release date was around the end of March 2009. In 6 months, it'll be 5 year old server kit. I'm rounding up very marginally.

Tekhne posted:

Just for kicks, why don't you make a build list of the components you would purchase for your 8 core system so we can see how it compares dollar for dollar. Be sure to include cases, power supplies, cables, etc as not all of us have spare parts laying around. Once you make those two hosts, also add another for storage as I have mentioned twice now that I use FreeNAS (and note it is not a VM) so your recommendation also needs to have the ability to account for storage. Since I make no mention of the drives I use, you don't need to spec those out. Maybe your next post can contribute something useful.
I'm really not interested in dollar-for-dollar comparisons. Especially against someone who's going to defend his purchase to the death with such vigor that he wants me to spec out another system because you have a FreeNAS node (not virtualized, eating 8 cores and 24GB of memory on your chassis, which just makes your "5 year old kit" vs equivalent modern gear comparison look worse, since all cores and all memory are not equal). I also wouldn't bother speccing drives because your purchase didn't come with any. Gluster is fine. vSAN is fine. Local storage is also fine. It's a tradeoff between 60+ dba servers with components you can't replace without scouring eBay, potentially limited HD compatibility, and unknown usage patterns on used equipment vs. flat consumer gear.

Generic case+PSU - $45
AM3+ motherboard with integrated graphics and 4 DIMM slots - $45
8 Core Zambezi - $150
16GB DIMM (2x8GB) - $108

Two of those is $696, assuming you buy right now and don't wait for any deals on hardware. Plus two 8GB (2x4GB) kits for $50 each puts it at $796 (which is only marginally more expensive than your purchase) for two evenly-specced systems. If you were willing to suffer with 4 cores per node (which is still plenty, honestly), you could bump it from 24GB/node to 32GB/node.

You don't get RAID controllers, hot-swappable drives (or any hot-swappable equipment), DRAC/iLO, and whatever else you want to use to justify your purchase. You do get consumer equipment which you can get replacements for at any Fry's or Microcenter. You only get half the memory (albeit with better/newer memory controllers than Nehalem CPUs) and half the CPUs (albeit with much newer architectures, better virtualization instructions, and more IPC). You also don't have a 1400W PSU (maybe two!). You don't have a server that's minimum (per your link and Dell's datasheet) 65dba.

Again, I'm glad you're happy. It's just not a good purchase for most people. It's a fine purchase if you have a half-rack in the corner of a datacenter that you want to set up a lab to play with in. My house doesn't have a datacenter.

E:

Just to be clear, I'm not trying to rag on your purchase of a C6100 in particular. I didn't remember it was you who purchased one previously. I'm just reiterating that "buying used Dell kit" isn't always the best or most practical solution.

evol262 fucked around with this message at 18:20 on Sep 13, 2013

evol262
Nov 30, 2010
#!/usr/bin/perl

alo posted:

A whole bunch of things. We don't currently have any extra hardware to test large changes in our environment, so it would be nice. I've been using my home setup to make sure things work before I deploy them, but there are limits to what I can do at home. I have to maintain my impeccable "never fucks poo poo up" record.

I'm in a very mixed environment where I'm technically a Linux sysadmin, but I end up touching storage, VMware, Windows and Windows clients (thankfully only on the deployment side) -- so it's really valuable to be able to play with stuff before making changes that would keep me at work past 5pm.

As for storage...

I actually have an MD3000i sitting around, but I wouldn't use it... it's a terrible device. I see people recommending the newer versions of it and I hope they've improved ( http://rtumaykin-it.blogspot.com/2012/04/fixing-unresponsive-management-ports-on.html as an example ).

I'm probably going to go the route of SSD + a few 3.5" drives and buy better stuff later if I need it (I have a pile of 10k SAS drives sitting around too). The question is really about enclosures, since I want to be flexible in that regard.

Thanks for the suggestion, Tekhne. Can you detail what "a little modding" actually is? I'm still leaning toward the C1100's with their 72gb of ram and a separate box for storage.

Oh and please be friends.
The MD3000i actually has reasonably good iSCSI performance. If you provision a LUN and present it to a few ESXi hosts, you'll be hard-pressed to beat it for performance, even if you route all 12 bays on a C6100 to one node and present it via OpenFiler, Nexenta, or whatever. Obviously you could dump a 10GE NIC into one of those nodes, but I'm guessing you don't have a 10GE switch in your corner, either.

You'll have a hard time beating refurb C6100s or C1100s for a lab in a datacenter. Just make sure you get L5639s instead of L5520s. 4 C1100s (dump your 10k drives into the chassis) with one datastore on the MD3000i and one on vSAN spread across the drives is very likely the best you'll do for $1500.

evol262
Nov 30, 2010
#!/usr/bin/perl

Stealthgerbil posted:

I would love to build a home datacenter and get the fastest FIOS and xfinity business plans. Get solar panels and a battery system and I could have a totally solar powered micro datacenter.

I'm pretty sure that even in Phoenix, I couldn't power one rack with solar panels covering my entire property. Not to mention there's no availability of fiber when I could throw a rock and hit CenturyLink's regional HQ, but...

evol262
Nov 30, 2010
#!/usr/bin/perl

three posted:

You guys are so negative. :psyduck:

I was really looking for more builds. I think they're interesting.

There should be a battle for cheapest whitebox with 32GB of RAM.

Edit: I <3 you, Corvettefisher. You're not crazy like you used to be.

Take the build from half a page up:

Generic case+PSU - $45
AM3+ motherboard with integrated graphics and 4 DIMM slots - $45
8 Core Zambezi - $150
16GB DIMM (2x8GB) - $108

Cut down the CPU to a quad if you want to save $70. I don't personally think it's worth it. Double the memory. Boot from SAN. Or add a very cheap drive. That motherboard (which has gone up $20 in the last week, :psyduck:) is whitebox compatible.

evol262
Nov 30, 2010
#!/usr/bin/perl

The Third Man posted:

Is it possible to lab Nagios? I'd like to be able to claim some sort of monitoring experience when I try and get a new job.

Sure. Or Zabbix. Or whatever. But please don't put "I had 5 hosts monitored on my home network" as monitoring experience.
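
That said, labbing it costs you nothing. A minimal Nagios object sketch, assuming the stock templates that ship with it (host name and address are made up):

code:

define host {
    use        linux-server
    host_name  esxi01
    address    192.168.10.11
}

define service {
    use                 generic-service
    host_name           esxi01
    service_description PING
    check_command       check_ping!100.0,20%!500.0,60%
}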

E:

When I see experience on someone's resume, I assume that means "at scale, in a production environment". You're never going to watch a Nagios host choke because of database problems at home. Or see flapping alarms because there's packet loss on a trans-continental link. Or set up slave servers in different DCs to report to a master. Or... This is the same reason shade-tree "Linux experience" guys, who've installed Ubuntu at home and run it for two months before putting it on their resume, don't look good. By all means, tell interviewers you've touched Nagios. If they ask. Don't put it on your resume if you've only touched it in a home lab.

Agrikk posted:

Oh god what am I about to do?

I am building a new storage server to replace my current iSCSI target that is buried under the SQL server / Hyper-V / ESXi requests I throw at it. I'm putting together this box based on my familiarity with each of the hardware components and availability on eBay:


Supermicro H8SGL-F motherboard - $180
Opteron 6128 (8-core @ 2GHz) - $45
HP SmartArray P410 array controller with 512MB battery-backed cache - $150
2x Mini SFF-SATA fan cables - $15
16GB DDR3-1333 RAM - $80
500w gold power supply - $90
4x Samsung 840 Pro 512GB SSD - $1800
4x 1TB SATA HDs <exists> - $0
Mellanox ConnectX-2 HBA - $190

Total: $2550

I'll be using Server 2012 R2 for my iSCSI target so I can play with its tiered storage capability. 180,000 iops available and over 1 gigabyte of read/write speeds from the SSD array with a 2TB storage tier. and the Mellanox card will give me a theoretical limit of 20gbit throughput via RDMA (SMB Direct) making the storage throughput available to the network.
M1015 instead of P410.

$1800 of SSDs is complete overkill.

evol262 fucked around with this message at 20:53 on Sep 18, 2013

evol262
Nov 30, 2010
#!/usr/bin/perl

Agrikk posted:

M1015 doesn't have battery-backup or on-board cache or advanced RAID configs.
Doesn't Windows Storage Server have something that'll handle advanced configs and SSD cache layering?

Agrikk posted:

I suppose I could do 4 256GB drives in RAID-10 and add additional pairs to expand the array onto, though.

This is what I meant: while 1TB of SSD storage is juicy, it's probably overkill, and 512GB in RAID-10 is probably a third of the cost.

evol262
Nov 30, 2010
#!/usr/bin/perl

Agrikk posted:

Someone correct me, but I think a virtual network operates at 10gb speeds?

Last I checked, it was literally as fast as it could push data over the bus.
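
Easy enough to check for yourself, assuming iperf is installed in two guests on the same vSwitch (the address is a placeholder):

code:

# on VM 1
iperf -s
# on VM 2
iperf -c 192.168.10.21 -t 30 -P 4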

evol262
Nov 30, 2010
#!/usr/bin/perl

IT Guy posted:

So, I'm thinking about doing the following:

2 of the servers are for home lab, the third one will be where I consolidate my current poo poo to for home entertainment, etc.

The 5 DP NICs, 1 for each server, 1 for my NAS, 1 for my backup NAS.

One thing I really want to test is iSCSI MPIO over Gig-E since it's likely something we're going to implement at work in the near future.

Any problems with it?

edit: and the mic you can ignore.

You already have a managed switch, right?

evol262
Nov 30, 2010
#!/usr/bin/perl

IT Guy posted:

Yes. It's a lovely Dell 3348 but it'll do.

Not gigabit, is it?

evol262
Nov 30, 2010
#!/usr/bin/perl

IT Guy posted:

Ah right, it isn't. I have a netgear smart switch that is gig-e though. I could use that. I really only need vlan for MPIO, right?

You'll probably want to segment off a storage VLAN with a larger MTU as well.
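
On the ESXi side, jumbo frames are two commands (vSwitch and vmkernel names here are assumptions; the physical switch needs the matching MTU too):

code:

esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000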

evol262
Nov 30, 2010
#!/usr/bin/perl
I wasn't joking when I mentioned IPoIB. You can get a 24-port 10Gb InfiniBand switch, HBAs, and cables for less than an 8-port 10GbE switch.

evol262
Nov 30, 2010
#!/usr/bin/perl

Comradephate posted:

I'm not at all familiar with infiniband offerings. Mind linking example products that you'd suggest?

Go on eBay. Look for Mellanox HCAs (QLogic HCAs are also fine), CX4 cables, and an InfiniBand switch (QLogic, Mellanox, maybe Topspin or Cisco). Any PCIe HCA you see should be at least 10Gb. You can get 10Gb (8Gb effective, since anything short of FDR loses 20% to 8b/10b encoding) for ~$25, and dual-port HCAs for $40. 10Gb is old hat in the InfiniBand world. It's actually that simple.

evol262
Nov 30, 2010
#!/usr/bin/perl

Stealthgerbil posted:

Am I stupid for wanting a Dell C6100? I feel like out of all the cheap servers, for a home lab you really cant beat it when you can get one for as cheap as $650. It gives you the option of using 1-4 nodes which would be really helpful and they are decently powerful. The only downside for me is the power usage, I heard it uses 300-600 watts depending on load. I am almost tempted to buy one and colocate it with one of those bargain places and they said I could include a switch as long as it is under 4ul. However I am pretty sure that it would use more then 2amps on a 208v line.

It's a 65-80 dba server that draws an amp or more, and you can build two brand-new 8 core nodes with 16GB of memory each for the same price. If the caveats of the C6100 (noisy as all get out, draws power like mad, 5 year old CPUs) don't bother you, you'll have a hard time beating it for the price.

If you want something you can run at home, I'd advise building cheap Zambezi boxes. But this is probably the fifth time I've said this in this thread and not everyone agrees. It may make sense for you if you can live with the drawbacks.

evol262
Nov 30, 2010
#!/usr/bin/perl

MC Fruit Stripe posted:

Just a bit of a throwaway question, but can anyone tell me where Openfiler is placing data before it flushes to disk? I created volume and presented it to an ESXi host, copied over 60gb of software, but could not actually find that data on my hard drive. Memory on my local system, the ESXi host, Openfiler, hell even the VM admin box I use to run vSphere, none of them showed any memory pressure. No increased file sizes, no swapping, nothing that I could find, but obviously that information was somewhere - it finally showed up in earnest when I shut down both ESXi and Openfiler, but until then the files were there, accessible, everything, just not actually registering on my harddrive. Where were they, any guesses?

A thoroughly unimportant question.

I'm 99% sure OpenFiler creates LVM volumes with extX on top of those, but it's hard to say without knowing whether you're using iSCSI, NFS, or whatever. iSCSI allocation in OpenFiler is probably LVM volumes presented as bare, unformatted LUNs (that you allocate from wherever), and if it's thin-provisioned, that adds another layer where something could change here.
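
If you want to poke at it yourself, something like this on the OpenFiler box will at least show the LVM volumes backing the LUNs and how much dirty data is sitting in the page cache waiting to flush (assuming the LVM layout above):

code:

lvs                           # logical volumes backing the LUNs
grep -i dirty /proc/meminfo   # writes cached in memory, not yet on disk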

evol262
Nov 30, 2010
#!/usr/bin/perl

This is the home lab thread. Do the needful. You can't play with "real" SANs and UCSes at home, but it's easy to roll an environment large and complex enough to do nontrivial work with iSCSI MPIO, kickstarted installs, svMotion, etc.
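
Kickstart in particular costs you almost nothing to lab. A minimal ks.cfg sketch (mirror URL and password are placeholders), served over HTTP next to the PXE helper mentioned earlier:

code:

install
url --url=http://mirror.example/centos/6/os/x86_64/
lang en_US.UTF-8
keyboard us
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot
%packages
@core
%end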

If you want "large environment", VARs don't hold a candle to enterprises running rows full of hypervisors. If you only knew how large our OpenStack deployment is, or how large some of the VMware labs I've seen have been...

evol262
Nov 30, 2010
#!/usr/bin/perl

You can get VAAI without a "real" SAN? Tell me more. Who else does this?
