|
There's CCNA in the title, but hopefully this thread is for IT labs of all types, including infrastructure builds and environment simulation.

My lab was originally this: Then the power bill started killing me, so I moved it into the colo space that I get for free by providing ad-hoc IT support for a friend. He gives me unmetered bandwidth on a 10mbit pipe, half a cabinet and power. So I built this:

But it's all my own gear, so technically it's a private lab. I hope that's okay.

Cabling

The cabling might be hard to decipher, but basically I have VLANs for:
- Local networking
- Out-of-band management
- iSCSI
- Heartbeat

Gear list

My current lab consists of a shitton of gear that I scored when everyone in my office was fired and the office itself closed down during a buyout. I was the last employee to go, kept around to dismantle and "eWaste" the gear and turn off the lights, so I eWasted a bunch of it into the trunk of my car:

- Dell PowerConnect 6248P gigE switch
- SonicWall PRO 2040 firewall / VPN concentrator
- Five Supermicro AS-1012G-MTF 1U barebones servers, each with one 8-core Opteron and 16GB RAM:
  - 2 ESXi 5.1 hypervisors
  - 2 Windows Server 2008 R2 hosts
  - 1 Windows Server 2008 R2 iSCSI target
- Two HP ProLiant DL120 G6 servers with one quad-core Xeon and 16GB RAM each
- One HP ProLiant DL160 G6 server with two quad-core Xeons and 48GB RAM

I realize that this list is absolutely ridiculous for a "home lab," but I'm posting it to give another example of what's possible. This lab is basically unsustainable, since any component failure would require a $500-1000 purchase from eBay to replace and I'm not willing to lay out that kind of dough. Still, I have the gear now, so why not use it, eh?

Configuration

The two DL120 servers are configured with IIS/MySQL/PHP and run small web servers as well as what might be the longest continuously running Quake 2 Devastation server on the planet (mind the broken links - when I moved this part of the web site to a new host the links broke and I haven't bothered to update them yet). One of the DL120s also acts as my vSphere server for the 2-node iSCSI-based ESXi 5.1 HA cluster.

The DL160 server is an iSCSI target for the Hyper-V nodes and the ESXi nodes (below) as well as a Folding@Home rig. There is a dedicated LAN link, plus a pair of iSCSI links in round-robin load balancing for the target software. One of the Supermicro boxes is also configured as low-performance iSCSI storage, plus basic CIFS file sharing.

ESXi 5.1 cluster
- iSCSI based
- 2 Supermicro servers (16 cores, 32GB total)
- VMs:
  - Windows 2008 R2 domain controller
  - 2 SQL Server 2008 R2 Reporting Services servers on Server 2008 R2
  - 2 CentOS Folding@Home servers

Windows Server Failover Cluster (click here for a pic of the failover cluster manager)
- iSCSI based
- 2 Supermicro servers (16 cores, 32GB total)
- SQL Server 2008 R2 failover cluster
- SQL Server 2008 R2 Analysis Services cluster
- Hyper-V cluster
- VMs:
  - Exchange 2010 server
  - Windows 2008 R2 domain controller

This lab is pretty much my Swiss Army knife, allowing me to mess around with iSCSI, basic networking, VPN tunneling, ESX, Hyper-V and database stuff. I don't have the biggest virtual environment, but it's enough for me to stand up random virtual appliances, do high-availability stuff and/or tinker with different OSes as my interests dictate.
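If anyone wants to replicate the failover cluster piece, it's only a handful of PowerShell once the iSCSI LUNs are presented to both nodes. A minimal sketch, with made-up node names, cluster name and IP (the FailoverClusters module ships with Server 2008 R2 and later):

```
Import-Module FailoverClusters

# Validate both Hyper-V nodes and the shared iSCSI disks before building anything
Test-Cluster -Node "hv-node1", "hv-node2"

# Stand up the two-node cluster with a static address on the management VLAN
New-Cluster -Name "LABCLUSTER" -Node "hv-node1", "hv-node2" -StaticAddress 192.168.1.50

# Find the shared iSCSI disk resource, then promote it to a Cluster Shared Volume
Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" }
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```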
|
# ¿ Aug 6, 2013 21:44 |
|
Indecision1991 posted:This is sorta what I want to run at home but with a lot less equipment. I would like to virtualize as much as possible since I am taking some VMware classes and want to keep up the momentum. I was thinking of a single server with beefy specs to use for ESXi; I know I can find 3+ year old equipment with decent specs for 400-500, and I may even be getting a free server from an old boss. The networking is my weakest point but I do plan to buy some routers/switches within the next 6 months for practice. Do you think it would be a good idea for me to host a small single server with 2 routers and 3 switches? All for practice of course.

I think it is silly for IT folks to not have an IT lab in their home. It is very easy to get rusty on skills that you don't use every day, and having the right gear handy is a great way to brush up on stuff (especially before interviews, etc).

If I were to build a simple lab, I would build a white box ESXi host, stuff a bunch of NICs in it, and then plug those NICs into a router or two and a managed switch capable of multiple VLANs. Stay away from the temptation to buy old servers (but free can be good!): they tend to be noisier and more power hungry than a whitebox build, and you typically get more bang for your buck on a home build (plus you get more hardware experience).

Here's a decent home ESXi server for around $550:

- ASRock 970 Extreme3 R2.0 AM3+ AMD 970 - $85
- AMD FX-8120 Zambezi 3.1 GHz eight-core desktop processor - $150
- 2x8GB DDR3 SDRAM (16GB total) - $170
- Rosewill Capstone 550 80 PLUS Gold power supply - $70
- ATI Rage XL Pro 8MB PCI - $8 eBay
- 3 network cards: 2x PCI-e gigabit, 1x PCI gigabit - ~$24 eBay
- Computer case - $50

You surely have old SATA hard drives floating around, right? Buy a USB key ($8) to install ESXi on and use your SATA HDs for storage. Make sure to buy the RAM in two sticks of 8GB each, so you can upgrade to 32GB at a later date by purchasing two more 8GB sticks when you can afford it.

Then as your funding allows, buy a used Dell PowerConnect 2716 switch (~$80) and maybe a Cisco 2621 (dual-port Fast Ethernet) ($90) to allow you to route between VMs and VLANs using hardware. Until you can get networking hardware, I think there are open source router appliances available for VMware that will get you designing your very own overcomplicated home network in no time...
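Once ESXi is on that box, the VLAN piece is only a couple of lines of PowerCLI. A minimal sketch, assuming PowerCLI is installed, with placeholder host name, NIC and VLAN IDs (the physical switch port the NIC uplinks to would be trunked):

```
# Connect straight to the standalone ESXi host (placeholder name/credentials)
Connect-VIServer -Server esxi-lab.local -User root -Password "changeme"

$vmhost = Get-VMHost -Name esxi-lab.local
$pnic   = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"

# Dedicate one of the extra physical NICs to a second standard vSwitch
$vs = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" -Nic $pnic

# One tagged port group per VLAN; a router (or a router VM) does the inter-VLAN routing
New-VirtualPortGroup -VirtualSwitch $vs -Name "Lab-VLAN10" -VLanId 10
New-VirtualPortGroup -VirtualSwitch $vs -Name "Lab-VLAN20" -VLanId 20
```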
|
# ¿ Aug 7, 2013 17:49 |
|
Indecision1991 posted:Good point, I am simply looking at the specs and not thinking about the improvements that current hardware has over older stuff. I am just confused I guess; I would be fine using a white box, I have a ton of old drives I can use including an SSD. At the same time having some old refurbished systems to play around with still sounds, to me, like a decent idea.

So there are a bunch of people telling you why this is a bad idea, but if you want to do the goon-in-the-well thing, be my guest. It sounds like you are kinda stoked on the idea of having Enterprise Server Gear in your house, so go hog wild.

Trust us, though. It will arrive in the mail and you will put your drives in and fire it up and get all kinds of excited about it. For about two weeks. Then eventually the fan noise will get annoying and you will start thinking about where you could put it to dampen the sound. Then you will start powering it off to get relief from the noise and power bill. Then you will be slightly irritated every time you want to try something and you have to wait while it powers on. Then you'll start browsing eBay to find out how much you can sell it for while you part out your new white box. The one with the efficient power supply, new processor and large, silent fans.

edit: evol262 posted:It's loud, and it sucks power. It's loud. I cannot emphasize enough how piercing 40mm fans are in a home environment.

Nth-ing this. You think you know how loud 40mm fans are, because you sit next to some gear in an office and are used to the noise. But offices have a lot of background ambient noise that gives context to server noise. The AC is blowing, there's people rattling around in the break room, cubicle conversations, etc. You will get your new server home to your nice quiet house and no matter where you are or what you are doing you will always, always hear those 40mm fans.

Agrikk fucked around with this message at 18:50 on Aug 7, 2013 |
# ¿ Aug 7, 2013 18:46 |
|
Indecision1991 posted:You are 100% right, I did respond to Dilbert to see what he suggests on a whitebox. At the end of the day I will get tired of the increased bill and the loudness. I am by no means asking for advice then resisting it, so I apologize if it seems that way.

No harm, no foul. Get this one up and running and get your lab solid. If you do score the free server from work, that can be your second ESX host for extra capacity and goofing-off space.
|
# ¿ Aug 7, 2013 19:42 |
|
Sefal posted:
Ah hah hah hah holy poo poo! Look at the previous set of posts on this exact page:

Indecision1991 posted:I was originally looking at something like this:

Then read our responses.
|
# ¿ Aug 7, 2013 19:44 |
|
The Home Lab Thread: Please don't talk to us about http://www.ebay.com/itm/251269644666
|
# ¿ Aug 7, 2013 20:02 |
|
evol262 posted:This whole "just turn it off" thing is insane. Once you get reasonably used to having an AD environment, you're going to tie it into the rest of your network. Then what? Leave your 1U running all the time?

I hear Windows for Workgroups works nice.
|
# ¿ Aug 9, 2013 17:38 |
|
Swink posted:Can I use Openfiler or similar to learn about storage? I know virtually nothing and don't know where to start.

Openfiler, FreeNAS and Microsoft's iSCSI target are all solid ways to get into centralized storage using iSCSI.
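If you go the Microsoft route on Server 2012 or later, the target is a built-in role and the whole thing stands up in a few lines of PowerShell. A minimal sketch, with made-up paths, names and initiator IQN:

```
# Install the iSCSI Target Server role
Install-WindowsFeature -Name FS-iSCSITarget-Server

# Create a virtual disk to back the LUN (.vhd on 2012, .vhdx on 2012 R2)
New-IscsiVirtualDisk -Path "D:\iSCSI\lab-lun01.vhdx" -SizeBytes 100GB

# Create the target and restrict it to a specific initiator IQN
New-IscsiServerTarget -TargetName "lab-target01" `
    -InitiatorIds @("IQN:iqn.1998-01.com.vmware:esxi01-12345678")

# Map the LUN to the target; the initiator can then discover and connect
Add-IscsiVirtualDiskTargetMapping -TargetName "lab-target01" -Path "D:\iSCSI\lab-lun01.vhdx"
```

Openfiler and FreeNAS get you to the same place through a web GUI, plus NFS/CIFS if you want to play with those as well.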
|
# ¿ Aug 13, 2013 05:56 |
|
Has anyone built a homebrew Fibre Channel or InfiniBand SAN? I'm getting the itch to redo my current home/lab iSCSI SAN and experiment with Server 2012 and its storage capabilities, and I'm thinking about building a dedicated storage box and a switched FC/IB environment. eBay shows some aging Mellanox InfiniBand gear for a couple hundred bucks, but before I jump down the rabbit hole and start investigating component compatibility I want to see if someone else has a trip report. I want to play with some new gear, not reinvent the wheel here.
|
# ¿ Aug 19, 2013 18:38 |
|
Dilbert As gently caress posted:Boot From SAN/Embedded

Do people do this anymore? I never thought it made sense to use pricey SAN space to host a boot image if a USB key or a pair of tiny drives in RAID-1 will do the job for far cheaper.
|
# ¿ Aug 22, 2013 19:53 |
|
QPZIL posted:
It's my opinion that home labs, when done properly, are the most overcomplicated and overbuilt environments ever on a per-user basis. I was bitching in the daily poo poo thread about how complicated and cluttered my lab had become, so I tore it all down, set it up in its current configuration and vowed not to touch it until my MRTG installation fills up the Yearly Graph (1 Day Average).

Except that yesterday I ordered three FC HBAs and am toying with the idea of building a new storage server around Windows Server 2012 R2 and converting everything from iSCSI to 4Gb FC. And maybe some InfiniBand for giggles. gently caress.
|
# ¿ Aug 23, 2013 21:22 |
|
evol262 posted:iSCSI over IPoIB. gently caress FC.

IB is next. I'm comfortable on FC, so I want to get familiar with a new technology (tiered storage in Server 2012 R2) while refreshing on FC. Plus FC gear (HBAs and switches) is a lot less expensive than IB gear.

But why the FC hate? It's not that complicated to manage and has been bombproof in all of my past deployments. If I had any complaint, it would be the lack of insight into actual traffic utilization over the FC fabric, which I'm not sure has been resolved.
|
# ¿ Aug 23, 2013 22:52 |
|
Agrikk posted:IB is next. Well gently caress. Two of my FC bids on eBay for sets of FC cards got sniped in the closing seconds of the auction. It looks like destiny is telling me to jump on IB. edit: or to invest in an eBay sniper app.
|
# ¿ Aug 26, 2013 23:30 |
|
evol262 posted:Pretty much, yeah. And update the firmware first thing so it actually works with IE.

5324 owner seconding the awesomeness of the 5324 and how loud the fans are. I solved that problem like this: I took the 40mm fan leads and used them to power a pair of 80mm fans. The lower current slows the 80mm fans down a lot, but they run super quiet. Disadvantages are that the fan-alert red LED is on all of the time now and you lose the 1U of space above the switch due to the fans. Temps of my switch dropped significantly with this mod, though.

edit: I am assuming that this discussion is for a home lab. If this is in your office, just stick it in your wiring closet and close the door.

edit2: Another smaller switch to look into is the PowerConnect 2716. Dunno if it does LAGs or alternate MTU sizes, though. You might need to check on that.

Agrikk fucked around with this message at 20:39 on Aug 29, 2013 |
# ¿ Aug 29, 2013 20:29 |
|
Oh god, what am I about to do?

I am building a new storage server to replace my current iSCSI target, which is buried under the SQL Server / Hyper-V / ESXi requests I throw at it. I'm putting together this box based on my familiarity with each of the hardware components and availability on eBay:

- Supermicro H8SGL-F motherboard - $180
- Opteron 6128 (8-core @ 2GHz) - $45
- HP Smart Array P410 array controller with 512MB battery-backed cache - $150
- 2x mini-SAS to SATA fan-out cables - $15
- 16GB DDR3-1333 RAM - $80
- 500W 80 PLUS Gold power supply - $90
- 4x Samsung 840 Pro 512GB SSD - $1800
- 4x 1TB SATA HDs <exists> - $0
- Mellanox ConnectX-2 HBA - $190
- Total: $2550

I'll be using Server 2012 R2 for my iSCSI target so I can play with its tiered storage capability: 180,000 IOPS and over a gigabyte per second of read/write throughput from the SSD array with a 2TB storage tier, and the Mellanox card will give me a theoretical limit of 20Gbit of throughput via RDMA (SMB Direct), making that storage throughput available to the network.
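For anyone else headed down this road, the tiering itself is just Storage Spaces cmdlets in 2012 R2. A minimal sketch with made-up pool, tier and volume names, assuming the SSDs and spinners are presented to the OS as plain disks (JBOD/pass-through) rather than hidden behind the RAID controller:

```
# Pool every disk the OS sees as poolable
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "LabPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Define an SSD tier and an HDD tier inside the pool
$ssd = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "LabPool" -FriendlyName "HDDTier" -MediaType HDD

# Mirrored, tiered virtual disk: hot blocks get promoted to SSD, cold blocks sit on SATA
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "TieredVD01" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 800GB, 1TB -ResiliencySettingName Mirror
```

The iSCSI target VHDs would then get carved out of that tiered volume instead of a plain RAID set.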
|
# ¿ Sep 18, 2013 20:22 |
|
evol262 posted:
The M1015 doesn't have battery backup, on-board cache or advanced RAID configs.

Overkill? Not really. Expensive? Most definitely. If I want 1TB of SSD space and want to avoid parity calculations, then my choices are RAID-0 and RAID-10, and I'm not using RAID-0 to store data. So I'm stuck with 4 SSDs in RAID-10. I suppose I could do 4 256GB drives in RAID-10 and add additional pairs to expand the array onto, though.
|
# ¿ Sep 18, 2013 21:24 |
|
Moey posted:How big is your home lab where you need that much raw SSD space?

It's actually a personal lab that is hosted out of a datacenter (power, bandwidth and cabinet space for free. Weee!) running 2 ESXi hosts, two Hyper-V hosts and a SQL Server 2012 cluster. My problem is that I have databases totaling over 280GB in size. VMs I can put on my existing slower storage and live with it, but my databases are clobbering that storage, with disk queue lengths averaging over a thousand and access times measured in seconds during some processes.

L2ARC, though. Hrm... That bears some consideration, although I'd have to rip the storage guts out of my lab and redo it... A weekend of work away from the fam or dropping stacks on SSD. Meh.
|
# ¿ Sep 18, 2013 21:31 |
|
evol262 posted:Doesn't Windows Storage Server have something that'll handle advanced configs and SSD cache layering?

quote:This is what I meant. That while 1TB of SSD storage is juicy, it's probably overkill, and 512GB RAID10 is probably 1/3rd of the cost.

It's actually more like 2/3: the 256GB 840 Pro is $300 on Amazon and the 512GB is $440, so four of the smaller drives run $1,200 against $1,760 for the bigger ones. Maybe I buy two 512GB disks in RAID-1 and then expand it to RAID-10 if I need more. Problem is that I then lose half my IOPS.
|
# ¿ Sep 18, 2013 21:40 |
|
luminalflux posted:For the full Nagios experience, create a script that sends you 40 texts at 4 am.

This is my favorite.

Hey. Windows server DB1 is unavailable.
Hey. The database $database1 on DB1 is unavailable.
Hey. The database $database2 on DB1 is unavailable.
Hey. The database $database3 on DB1 is unavailable.
Hey. The database $database4 on DB1 is unavailable.
Hey. C: is unavailable.
Hey. D: is unavailable.
Hey. Q: is unavailable.
Hey. R: is unavailable.
Hey. S: is unavailable.
Hey. T: is unavailable.
Hey. U: is unavailable.
Hey. Memory counters for server DB1 are unavailable.
Hey. CPU counters for server DB1 are unavailable.
Hey. Disk space counters for server DB1 are unavailable.
Hey. The network interface E1000 on server DB1 is unavailable.
. . .
|
# ¿ Sep 19, 2013 18:06 |
|
IT Guy posted:So is it generally recommended to go for a two-host system rather than one big beefy host?

Two hosts allow you to do things like failover clustering and other nifty things. If you can afford the extra power, go with multiple hosts instead of a single beefy guy.
|
# ¿ Sep 23, 2013 23:59 |
|
three posted:You can do failover clustering with one host while virtualizing the hypervisors.

Yes, that is possible. IMO, though, I'd rather have the two boxes.
|
# ¿ Sep 24, 2013 00:07 |
|
For all of you C6100 havers and lovers out there, I found a thread quite by accident on Servethehome.com called Taming the C6100, where a brave soul attempts to reduce its noise. Skimming through it, the article goes deep into fan replacement, rewiring and modding to drop the noise by 20dB (from 70dB to 50dB). Apparently it's still loud enough to hear through the walls of your home, but it's not the shriek it normally is. YMMV.
|
# ¿ Sep 24, 2013 19:08 |
|
IT Guy posted:Can you elaborate on this?

With a SAN with any decent amount of IOPS, it becomes fairly easy to saturate a single gigE link with a dozen guests and several moderate-sized databases. MPIO alleviates that somewhat, but a virtual network that stays on the hypervisor will always outperform traffic that has to pass through a NIC of any kind, since the virtual network exists purely in RAM. Someone correct me, but I think a virtual network operates at 10Gb speeds?
|
# ¿ Sep 25, 2013 17:37 |
|
Dilbert As gently caress posted:Install it onto a USB drive, you can even install it back onto the USB you used to boot ESXi on if installing via USB

A lot of server motherboards have internal USB slots for just this purpose: buy yourself a low-profile USB key and install ESX to that and you have an instant host. The USB key is inside the enclosure, so you don't risk someone walking along and pulling it out.
|
# ¿ Nov 15, 2013 20:48 |
|
Does anyone know if Microsoft has a SCOM evaluation program I can get in on to run an evaluation in my home lab? It's time to get current on some Microsoft technologies and for the first time in a long while I don't have access to an MSDN subscription.

edit: Nevermind. Found the page here.

Agrikk fucked around with this message at 02:18 on Jan 21, 2014 |
# ¿ Jan 21, 2014 02:13 |
|
Sefal posted:I'm currently studying for my CCNA. I'm now at the part of subnetting; router protocols are right after this, and I understand how to do that. I understand the theory but I have no hands-on experience with it. I know which cables to use; I don't know how to make them, but I can identify them. The only hands-on experience I have is with Packet Tracer and that is virtualized. I want to know how to connect switches with a patch through the wall to a central switch. I want to know how to actually subnet routers. I want to know how to actually subnet and connect a real work environment.

I'd suggest picking up a used Dell switch and a few routers. With a trio of routers and the switch, you can break up the switch with VLANs into multiple network segments and then route between them with the routers (there's a quick config sketch below). Note that each of the network segments (VLAN1, VLAN2 and VLAN3) is actually the same physical switch, just carved up into VLANs.

You can also search eBay for "CCNA lab" and see what stuff appears. Often bundles appear that are pretty good value.

edit: Dell gear for a Cisco certification. Duh!

Agrikk fucked around with this message at 22:47 on Jan 31, 2014 |
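Since the inter-VLAN routing is the part that trips people up, here's a minimal router-on-a-stick sketch in IOS: one trunk from the switch to a router interface, one subinterface per VLAN, each acting as that subnet's default gateway. VLAN IDs and addressing are made up for illustration; adjust to match however you carve up the switch.

```
! Physical interface carries the 802.1Q trunk from the switch
interface FastEthernet0/0
 no shutdown
!
! One subinterface per VLAN, each with its own subnet
interface FastEthernet0/0.10
 encapsulation dot1Q 10
 ip address 192.168.10.1 255.255.255.0
!
interface FastEthernet0/0.20
 encapsulation dot1Q 20
 ip address 192.168.20.1 255.255.255.0
!
interface FastEthernet0/0.30
 encapsulation dot1Q 30
 ip address 192.168.30.1 255.255.255.0
```

With three routers you can also skip the subinterfaces, give each router access ports in two different VLANs, and practice static routes or a routing protocol between them instead.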
# ¿ Jan 31, 2014 21:05 |
|
QPZIL posted:Why not just buy a few Cisco 2950s instead of a Dell switch since he's studying for, you know, a Cisco exam? Hah hah. Didn't even think of that. It's become so ingrained in me to look to Dell for the cheaper version of Cisco that I wasn't thinking here.
|
# ¿ Jan 31, 2014 22:48 |
|
thebigcow posted:I'm looking for a smart switch on the cheap. I've had the Dell PowerConnect 5324 suggested in the past and I see a bunch on eBay in the 50-60 range.

Seconding whoever suggested the 5324. I bought one off of eBay a few years back and the thing is bombproof. Manageable, web interface, SNMP MIB support, fiber slots, etc. Unless you need a Cisco interface for studying for labs or whatnot, I can't see why you wouldn't buy one of these.
|
# ¿ Jan 9, 2015 02:45 |