my lab... pic sucks. From top down:

Juniper SA-2500 SSL VPN
Palo Alto PA-200 firewall
Juniper SRX210
Juniper SRX210
Avocent ACS6016 console server
Cisco 3750 switch
Cisco 3550 switch
Cisco 3550 switch
power strip
Cisco 3650 switch
Juniper J4300 router
Cisco 3845 ISR router
Juniper J6300 router
Cisco 2811 ISR router with NAM
Juniper M7i router
HP DL360 G7 with 8 1G NICs for vSphere

I'm an ex-WAN guy (for now), so my lab was mostly WAN-centric until I added SSL VPN, firewalls and vSphere this year after phasing out all my 2500 and 2600 devices. The M7i is logically split into 16 routers using "logical systems", which is similar to Nexus VDCs. So that is my "WAN" - everything else breaks out into remote sites off the WAN. CallManager Express on the 2811, and CallManager on a VM in vSphere for some 7960s I also have. I typically don't have this whole rack turned on at once; I only configure what I want to train on: WAN, LAN, VoIP, firewalls, ESX and server-related tech, etc. For quick IOS L3 stuff I fire up a Cisco IOU VM and build things virtually.
|
|
# ? Aug 9, 2013 01:37 |
|
|
|
Last time I tried to install vSphere 4 on a non-server I discovered the driver problem. What should we look for when selecting a consumer motherboard for a white box lab?
|
# ? Aug 9, 2013 12:51 |
|
Drighton posted:Last time I tried to install vSphere 4 on a non-server I discovered the driver problem. What should we look for when selecting a consumer motherboard for a white box lab? I use Supermicro, but really, Google is your friend.
|
# ? Aug 9, 2013 13:56 |
|
I'm going to be the voice of dissonance and say that as long as you power down your lab when you're done, that eBay Dell is fine. The idea behind the lab is to build experience, and you're not going to get familiar with how a real bare-metal server is set up by building a generic beige box. Server hardware and the consumer hardware people on here are recommending are radically different. You can also tuck your lab into the garage or basement if the noise is that obtrusive.
|
# ? Aug 9, 2013 14:57 |
|
Return Of JimmyJars posted:I'm going to be the voice of dissonance and say that as long as you power down your lab when you're done, that eBay Dell is fine. The idea behind the lab is to build experience, and you're not going to get familiar with how a real bare-metal server is set up by building a generic beige box. Server hardware and the consumer hardware people on here are recommending are radically different. You can also tuck your lab into the garage or basement if the noise is that obtrusive. If he doesn't have a garage or basement, he's screwed. A "generic beige box" is still a bare-metal setup; it's not ESXi on ESXi. Server hardware and consumer hardware differ very, very little these days, unless you think "ECC instead of non-ECC; SAS instead of SATA" is "drastic". The big difference you'd see is using an OEM-customized ESXi image that has drivers built in. Big whoop. The VMware experience is just the same. A lot of whiteboxes will do hardware monitoring out of the box with ESXi. This whole "just turn it off" thing is insane. Once you get reasonably used to having an AD environment, you're going to tie it into the rest of your network. Then what? Leave your 1U running all the time?
|
# ? Aug 9, 2013 15:19 |
|
I think the most important thing in a lab isn't installing it on a Dell/HP/etc, because at the end of the day installing ESXi/Citrix/Hyper-V on x86 is pretty much the same all around. The important thing to take away from it is knowledge of the workings of the systems themselves: ESXi, GNS3, Windows/Linux, etc. Getting some vendor hardware is great but not always the main goal. Yes, there are some things like Cisco UCS configuration that are sometimes daunting, but there are bunches of simlabs for that which cover it without spending a bunch on rebranded HP servers. Probably the only case where buying vendor HW for a lab is really useful is storage, but even then there are so many Java programs that simulate the environment, which vendors usually give out for little to nothing.
|
# ? Aug 9, 2013 15:25 |
|
I am starting small and keeping VM instances limited to VirtualBox on a MBA. Can I set up three VMs in VBox, build a domain controller on one, a web server on another, and a federation server on a third? Or will all my VMs be perfect little islands without any networking hardware?
|
# ? Aug 9, 2013 15:31 |
|
Turnquiet posted:I am starting small and keeping VM instances limited to VirtualBox on a MBA. Can I set up three VMs in VBox, build a domain controller on one, a web server on another, and a federation server on a third? Or will all my VMs be perfect little islands without any networking hardware? You can create a virtual network for your VMs, and you can bridge that to a physical NIC.
|
# ? Aug 9, 2013 15:56 |
|
Turnquiet posted:I am starting small and keeping VM instances limited to VirtualBox on a MBA. Can I set up three VMs in VBox, build a domain controller on one, a web server on another, and a federation server on a third? Or will all my VMs be perfect little islands without any networking hardware? They'll be networked virtually in RAM; VBox has a nice network manager, and as long as they are all on the same "VLAN" they will work. Just make sure you only give them the RAM/CPU they need.
|
# ? Aug 9, 2013 15:57 |
|
evol262 posted:This whole "just turn it off" thing is insane. Once you get reasonably used to having an AD environment, you're going to tie it into the rest of your network. Then what? Leave your 1U running all the time? I hear windows for workgroups works nice.
|
# ? Aug 9, 2013 17:38 |
For what it's worth, the G7-and-up HP ProLiant 2U boxes like a DL380 are really, really quiet for what they are. The fans might spin up for 5 seconds on boot, but they are no noisier than a video card playing games.
|
|
# ? Aug 10, 2013 00:13 |
|
World z0r Z posted:For what it's worth the G7 and up HP proliant 2U boxes like a DL380 are really really quiet for what they are. They might spin up for 5 seconds on boot but they are no noisier than a video card playing games. Terrible-config DL380 G7s are still 3 times as expensive as a Haswell i5 build.
|
# ? Aug 10, 2013 00:39 |
|
Still not possible to nest 64-bit guests under VMware Player or VirtualBox with ESXi, hey? Guess that rules out Server 2012; it seems to only come in 64-bit flavour.
|
# ? Aug 10, 2013 01:16 |
|
Ron Burgundy posted:Still not possible to nest 64-bit guests under VMware Player or VirtualBox with ESXi, hey? Guess that rules out Server 2012; it seems to only come in 64-bit flavour. What? Yes, it is. You can nest virtualization-capable guests in Player or KVM. I don't think you can in VirtualBox.
|
# ? Aug 10, 2013 22:15 |
|
Well gently caress. It never worked for me in VBox, so I Googled around a bit and saw some outdated threads about it being the same on everything except Workstation, which is like a million dollars AUD. Maybe I should have actually tried it...
|
# ? Aug 10, 2013 23:44 |
|
I've got a copy of Packet Tracer lying around due to some Cisco classes I took a while back. Out of curiosity, is it illegal to distribute the software? I'm asking because PT has no DRM to speak of.
|
# ? Aug 12, 2013 17:47 |
|
klosterdev posted:I've got a copy of Packet Tracer lying around due to some Cisco classes I took a while back. Out of curiosity, is it illegal to distribute the software? I'm asking because PT has no DRM to speak of. Are you daft? See here. quote:The Packet Tracer software is available free of charge ONLY to Networking Academy instructors, students, alumni, and administrators that are registered Academy Connection users. "It has no DRM so it must be free for distribution" is an incredible argument.
|
# ? Aug 12, 2013 20:00 |
|
Just it doesn't matter.
|
# ? Aug 12, 2013 20:15 |
|
Can I use Openfiler or similar to learn about storage? I know virtually nothing and don't know where to start.

Ron Burgundy posted:Still not possible to nest 64-bit guests under VMWare Player or Virtualbox with ESXi hey? Guess that rules out Server 12, seems to only come in 64 flavour. I was trying to find a way to nest Win7 > VMware Workstation > Hyper-V > Win8 VM. I found a blog where some guy did it, but only with a brand new CPU and some editing of config files. I can't find the blog right now but it might be possible. edit: Here's the blog - http://www.veeam.com/blog/nesting-hyper-v-with-vmware-workstation-8-and-esxi-5.html The dude reckons you need an i7 Nehalem core. Swink fucked around with this message at 01:22 on Aug 13, 2013 |
# ? Aug 13, 2013 01:14 |
|
I actually managed to do what I needed to do with VMWare Player 5.0.2 and ESXi 5.1.0. Server 2012 and Windows 8 64-bit work fine inside.
|
# ? Aug 13, 2013 01:23 |
|
|
# ? Aug 13, 2013 03:20 |
|
Swink posted:Can I use Openfiler or similar to learn about storage? I know virtually nothing and don't know where to start. Openfiler, FreeNAS, and Microsoft's iSCSI Target are all solid ways to get into centralized storage using iSCSI.
|
# ? Aug 13, 2013 05:56 |
|
Swink posted:The dude reckons you need an i7 Nehalem core. gently caress that guy. It's not even remotely true on KVM, and I don't see why it would be on VMware, either. It performs better with EPT (and the list of processors with EPT includes loving Celerons), but it's not a requirement. VMware has a checkbox now. You don't need to manually edit configs.
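If you want to check what your own CPU actually advertises before fighting with nested virtualization, the flags are right there in /proc/cpuinfo on Linux. A quick sketch (the flag names are the standard ones the Linux kernel exposes: "vmx"/"svm" for Intel VT-x/AMD-V, "ept"/"npt" for second-level address translation):

```python
# Scan /proc/cpuinfo (Linux) for hardware-virtualization CPU flags.
# "vmx" = Intel VT-x, "svm" = AMD-V; "ept"/"npt" = nested page tables.
def virt_flags(cpuinfo_text):
    """Return the set of virtualization-related flags present in the text."""
    interesting = {"vmx", "svm", "ept", "npt"}
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags & interesting

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            found = virt_flags(f.read())
        print("virt flags:", sorted(found) or "none visible")
    except FileNotFoundError:
        print("/proc/cpuinfo not available (non-Linux host)")
```

If the flags come back empty inside a VM, the hypervisor is probably just hiding them from the guest; that's what the "expose hardware-assisted virtualization" checkbox toggles.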
|
# ? Aug 13, 2013 16:42 |
|
My lab is a cluster of stuff I bribed the recycler out of over the last year. I'm sure I will get "That's overkill, you moron" or "That must kill your power bill." I host all of my VMs on just two boxes that I paid a whopping $200 for, so I can't complain: a Sun Fire X4600 w/ 64GB RAM and 8 AMD 8220 dual-cores. It's one of those machines that seems like overkill, but if you can wrangle the power management it's not terrible. Lots of old servers are still viable for certain applications if you wrestle with power-on scheduling and usage times. Also a QNAP 8-bay NAS w/ 6TB split between my archive and VMs. I've been trying to get my hands on some larger Cisco hardware, but for switching I use an old AT-8000S that has plenty of life left in it. Hook up with your local PC recycler. They are usually the repository for your city's servers, most of which are old crappy G3-series HP and Pentium III Dell servers. Sometimes there is something worth getting.
|
# ? Aug 14, 2013 03:31 |
|
|
# ? Aug 17, 2013 02:01 |
|
Packet Tracer talk: it's good for the CCENT/CCNA, terrible for labbing anything real. At least they now offer IOS 15 and 29xx-series routers. You definitely don't need to share it; plenty of others have. Stay safe.
|
# ? Aug 17, 2013 12:23 |
|
Has anyone built a homebrew Fibre Channel or InfiniBand SAN? I'm getting the itch to redo my current home/lab iSCSI SAN and experiment with Server 2012 and its storage capabilities, and am thinking about building a dedicated storage box and a switched FC/IB environment. eBay shows some aging Mellanox InfiniBand gear for a couple hundred bucks, but before I jump down the rabbit hole and start investigating component compatibility I want to see if someone else has a trip report. I want to play with some new gear, not reinvent the wheel here.
|
# ? Aug 19, 2013 18:38 |
|
Last night I purchased a Dell PowerEdge C6100 off eBay for my home lab. I was originally going to build two boxes with Core i5s and ~32GB of RAM each until I stumbled across this gem. For about $770 I got a chassis with four independent nodes, each with two quad-core Xeon L5520s and 24GB of RAM. In total that makes 8 physical CPUs (32 cores) and 96GB of RAM across all four server nodes. Each node can be powered up independently from the rest, and they all share the same power supply. According to ITPro's review of this model, all four nodes at idle will draw only 348W (going up to 964W at full utilization). What am I going to use this for? I'll pop a USB stick into each server and install VMware ESXi on them. Also I'll throw a spare hard drive in each server and install Hyper-V on two and XenServer on the other two. I'm planning on going through lots of different scenarios that I encounter in my job - SBS migrations, Exchange upgrades, Citrix XenApp deployments, VMware View, XenDesktop, etc. In case anyone is interested, this is the unit I purchased - http://www.ebay.com/itm/251283578250?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1439.l2649
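For anyone weighing a purchase like this, the power math is easy to sketch. Assuming a rate of $0.12/kWh (an illustrative number; your utility's rate will differ), the 348W idle figure works out like this:

```python
# Back-of-the-envelope electricity cost for running a lab box 24/7.
# The $0.12/kWh default is an assumption; plug in your own utility rate.
def annual_power_cost(watts, rate_per_kwh=0.12, hours_per_year=24 * 365):
    """Annual electricity cost in dollars for a constant draw in watts."""
    kwh = watts / 1000 * hours_per_year
    return kwh * rate_per_kwh

print(round(annual_power_cost(348), 2))       # all four C6100 nodes at idle, per year
print(round(annual_power_cost(348) / 12, 2))  # same, per month
```

At that assumed rate, idle alone is roughly $30/month; leaving only one node powered on cuts that proportionally, which is why the "just turn it off" argument keeps coming up.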
|
# ? Aug 21, 2013 01:29 |
|
Tekhne posted:Last night I purchased a Dell PowerEdge C6100 off eBay for my home lab. I was originally going to build two boxes with Core i5s and ~32GB of RAM each until I stumbled across this gem. For about $770 I got a chassis with four independent nodes, each with two quad-core Xeon L5520s and 24GB of RAM. In total that makes 8 physical CPUs (32 cores) and 96GB of RAM across all four server nodes. Each node can be powered up independently from the rest, and they all share the same power supply. According to ITPro's review of this model, all four nodes at idle will draw only 348W (going up to 964W at full utilization). Oh oh, people here don't like refurb servers, so don't expect praise or anything. I learned the hard way... either way, good find; let me know how the sound is.
|
# ? Aug 21, 2013 03:19 |
|
People bitch a lot but the hardware is less important than just spinning up the software and learning/testing. Whatever hardware you have, post about what you're learning.
|
# ? Aug 21, 2013 04:21 |
|
I was about to go to bed but my sperg kicked in. OP, here you go.

SO YOU WANT TO BUY A DELL POWEREDGE: EASY GUIDE TO SAVING A BUTT-TON OF MONEY

So you want to buy a PowerEdge for your lab? Cool, here are some points to think about prior to buying that waste of money. First off, let me say that when I was first getting into VMware and such, I thought getting a Dell PowerEdge/HP ProLiant/etc would be the poo poo and MUCH more valuable for learning than a whitebox. Then I ran the facts and figures.

PROTIP: No one gives a poo poo that you can install an OS/hypervisor onto a hardware platform

Seriously, installing ESXi is like: Enter, F11, Enter, F11, Enter, and Enter. Hyper-V 2012 is similar, with even fewer clicks. Citrix is similar to ESXi but feels a bit more linuxy, and is incredibly straightforward. Congratulations, you are now able to install ESXi/Hyper-V/Citrix on HP/Dell/IBM/UCS/other. The important part of a lab is not whether you can install an OS on a HW platform; unless you are shooting for your A+ and an A+-level job, that is probably the only time an employer will care. The important part of setting up a hypervisor/server OS is not "can you install it" but "can you make it usable and understand what you did". Hardware platform familiarity is becoming less and less of a requirement as we move more and more into the virtualization realm. Today most of my installs are scripted, to the point where I boot off USB and let the .ks/unattend.xml finish it, come back in 5 minutes and configure anything else. While you may need to understand the importance of auto-deployment of Windows/Linux/VMware, realize you can do this all in ESXi running on a cheap rear end 600 dollar build which will curb stomp that Dell server you're getting that shipped with no HDDs. Hope you have some good network storage!

Common misconceptions of LAB environments
Remember, your lab environment is there to teach you the concepts and to familiarize yourself with the software and services you are configuring. It does not have to be better than your production environment.

TIPS FOR A VIRTUAL ENVIRONMENT

Only assign what the VM needs; this is also true in a production environment. If it's only running AD/DNS/DHCP, it could probably run happy on 512MB and 1 vCPU. You'll probably run out of RAM/disk IOPS BEFORE you congest your CPU, unless you are doing some really crazy poo poo or have a 2-3 year old server/PC.

Invest in SSDs. SATA disks are SLOOOW for VMs that share resources.

Don't overbuy. This is a really common mistake: buy what you need for what you are doing and upgrade as needed. Look into things like VirtualBox or VMware Workstation, and updating your gaming rig, PRIOR to spending 800 on some Dell HW. I have built many PoC labs for my VCP/VCP-DT in Workstation; it's a bit slower than ESXi whiteboxing but 100% DOABLE.

ESXi can run ESXi on top of ESXi; it can also run Hyper-V and Citrix. Often building 1 beefy box can outweigh multiple lower-end boxes.

Dilbert As FUCK fucked around with this message at 04:46 on Aug 21, 2013 |
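The "only assign what the VM needs" advice turns into napkin math pretty quickly. Here's a rough sketch of how many small guests fit on one box; the 2GB hypervisor overhead and the overcommit ratio are illustrative assumptions, not vendor figures:

```python
# Rough VM-density estimate for a lab host, following the
# "assign only what it needs" advice. Overhead/overcommit values
# below are illustrative assumptions, not VMware-published numbers.
def max_vms(host_ram_gb, vm_ram_gb, hypervisor_overhead_gb=2.0,
            overcommit=1.0):
    """How many identical VMs fit in host RAM at a given overcommit ratio."""
    usable = (host_ram_gb - hypervisor_overhead_gb) * overcommit
    return int(usable // vm_ram_gb)

# A 32GB whitebox running 512MB AD/DNS/DHCP-style guests:
print(max_vms(32, 0.5))                   # no overcommit
print(max_vms(32, 0.5, overcommit=1.5))   # with modest memory overcommit
```

The point of the exercise: a single modest box holds dozens of right-sized lab VMs, which is exactly why RAM and disk IOPS, not CPU, are usually the first wall you hit.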
# ? Aug 21, 2013 04:32 |
|
^^^ Good post, but I don't understand the hate against refurb server hardware for a lab. I'm assuming your post is at least in partial reaction to my previous one, since you mentioned a Dell without HDDs. While I did enjoy your sperg on installing the hypervisors, I too can do them in my sleep. That being said, there isn't much reason to put someone down over something like that. I'm sure no one gives a poo poo that you can tie your own shoes (I'm assuming here), but I bet you were pretty proud the first time you did it. One thing I didn't mention is I already have a FreeNAS setup with bonded NICs. I also have two PCs with i7 3770s and 16GB of RAM. These are my gaming machines, and like you suggested in your thread, I've been using Workstation to build up my lab on top of them for quite a while. It's been working fine; however, an upgrade was in order, as I want to get into performance testing, DirectPath scenarios, automation, etc. Due to these needs I wanted to move away from the inception build and have the hypervisor on the physical hardware. The obvious choice was to build two new whiteboxes and dedicate them to my lab. The cost would have been roughly $1100 or so. Once I found the C6100 for $770 and saw that it contains four independent servers within its chassis, I was sold. Sure, the L5520 line was released in 2009, but it's got plenty of power for what I am trying to do. The power consumption is low, and the noise/heat won't be an issue as I have a dry basement that could use some heating in the winter.
|
# ? Aug 21, 2013 14:14 |
|
Different people have different needs and wants, but this is SH/SC and we have to turn everything into a rant. If you're an IT geek, the thought of your own home server rack probably sounds cool, and everyone gravitates toward that without always knowing the downsides. I think multiple posters in this thread have been burned by buying a sweet eBay server farm only to end up never using it because it's ear-piercingly loud and adds a small fortune to their power bill. So there's some backlash against that. Of course there are some people who know what they're getting into and don't care. Maybe you can rack everything up in a detached garage where you'll never hear it. Or you want to play with Fibre Channel or something, and you just can't do that easily with a couple of white boxes. Fine, great, go hog wild. You are the 1% that doesn't need to be saved from your inner sperg, because you actually have a reason to buy that poo poo.
|
# ? Aug 21, 2013 16:25 |
|
Tekhne posted:^^^ Good post, but I don't understand the hate against refurb server hardware for a lab. I'm assuming your post is at least in partial reaction to my previous one, since you mentioned a Dell without HDDs. While I did enjoy your sperg on installing the hypervisors, I too can do them in my sleep. That being said, there isn't much reason to put someone down over something like that. I'm sure no one gives a poo poo that you can tie your own shoes (I'm assuming here), but I bet you were pretty proud the first time you did it. One thing I didn't mention is I already have a FreeNAS setup with bonded NICs. I also have two PCs with i7 3770s and 16GB of RAM. These are my gaming machines, and like you suggested in your thread, I've been using Workstation to build up my lab on top of them for quite a while. It's been working fine, however an upgrade was in order as I am wanting to get into performance testing, DirectPath scenarios, automation, etc.

I really don't even have the words. I work on RHEV/oVirt, from home. I have a lab. I have L5520s literally sitting on the floor because it's not worth the power bill and the added runtime of the AC to have them on. I have a full-height rack in my office, and it's not worth my time to have L5520s racked up, because IPC is horrifyingly low, nested virtualization on them sucks, and performance is worse than my W530. To some point more cores buys you more vCPUs without hammering on interrupts, but I'm not sure why the alternative to modern hardware is automatically "5-year-old decommissioned hardware". Hint: they're not using it any longer for a reason.

For the cost of your C6100, you could have two hex-core Visheras with 32GB of memory each, which will support the advancements in virtualization from the intervening 4 years (there are a lot), cost you 1/4 of the power bill, generate 1/4 of the heat and 10% of the noise, and generally run circles around those L5520s on anything other than distributed compiles and cluster databases (and realistically, you probably don't have the IOPS to make either of those relevant). How is it a "gem"?

evol262 fucked around with this message at 17:31 on Aug 21, 2013 |
# ? Aug 21, 2013 17:28 |
|
I think it would be cool to get one of those C6100s with 8x six-core L5639s, but only because of all those cores... I bought a $300 C1100 with dual X5570s at 2.93GHz (a little faster than the L5520s) and it has been a great lab PC, but that's also $300 I could have put towards a Haswell-based server. Like, that's the cost of an E3-1270 v3. Don't get me wrong, I like my dual X5570 setup, but it definitely is old. I kind of want to colocate it at one of those $50-a-month cheap places and use it to host something, but meh. Also, good luck if you ever want to colocate a C1100: they use 1-2 amps of power alone, which makes it totally not worth it. I heard the Sandy Bridge chips use like half an amp for a single-processor setup in a typical 1U.
|
# ? Aug 21, 2013 22:05 |
|
Is there any Juniper router or switch that doesn't cost a million dollars*? *(a million figurative dollars) Now that I have Juniper routers set up in GNS3 and working, I might do a write-up. I'm pretty impressed with JunOS so far.
|
# ? Aug 22, 2013 05:34 |
|
|
# ? Aug 22, 2013 14:37 |
|
Dilbert As gently caress posted:I was about to go to bed but my sperg kicked in, OP here you go I'd say the first thing to do is scout around where you work. Due to virtualization consolidation, plenty of orgs have spare PE2950s full of 10k drives lying around doing nothing. Find one with enough RAM and you're good to go. I can't imagine many managers would have a problem with one of their guys rebuilding an old server and running it in their rack if it's for legit self-improvement and not a torrent box/IRC server.
|
# ? Aug 22, 2013 16:54 |
|
So I totally agree that being able to install ESXi on a box isn't that impressive, just like being able to install Windows 7 isn't either. But I would like to know: once you have it installed and a few VMs running, what would you consider an "accomplishment" in regards to actual VMware work? Is it getting them networked and talking to each other? I've been "deploying" VMs for a little while now, but I'd like to get more knowledge and work out what makes a good VMware admin.
|
# ? Aug 22, 2013 17:46 |
|
|
|
smokmnky posted:So I totally agree that being able to install ESXi on a box isn't that impressive just like being able to install Windows 7 isn't either but I would like to know once you have it installed and a few VMs running what would you consider an "accomplishment" in regards to actual VMWare work? Is it getting them networked and talking to each other? I've been "deploying" VMs for a little while now but I'd like to get some more knowledge and working into what makes a good VMWare admin

What makes a good VMware admin is subject-matter knowledge of:

SANs (FC and/or iSCSI), including best practices for multipathing, how to handle LUN masking and replication, etc.
Scripting - PowerCLI is the standard, but you can use anything you want.
Systems administration - you're almost certainly going to end up hands-on with some of your VMs, and you should be comfortable in any OS running on your VMware environment, especially sysprep if you deal with Windows.
Networking - know when to use link aggregation and when not to. Understand VLANs and how they work, as well as how to segment your network and troubleshoot problems.
Disaster recovery - enough said; large VMware environments almost always have a DR site somewhere, and you should be familiar with scoping the required resources and setting up processes to ensure that a hot (or cold, depending on your environment) environment is ready.
Performance tuning - know how the VMware scheduler works, and when 2 vCPUs are actually better than one. Know how dense you can make your environment. Get a handle on how many IOPS you need.
Resiliency - keeping critical services up through failures. Nobody wants your virtualized AD controllers to die.
VDI - plays into performance tuning/density/systems admin.
Imaging - fading, but "golden images", templates, linked clones, and other ready-to-go images are still important.
Nobody is going to hand you a configured environment and say "plug in your servers, assign these addresses, and collect a paycheck". Realistically, you'll help design the environment and administer it on a day-to-day basis, probably including the guests. A good virtualization admin has (or has had in the past) a hand in every pot.
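On the "get a handle on how many IOPS you need" point, the standard rule of thumb (general storage sizing, not anything VMware-specific) is that reads hit the disks once while writes get multiplied by the RAID write penalty. A quick sketch:

```python
# Classic backend-IOPS estimate: reads pass through unchanged, writes
# are multiplied by the RAID write penalty (RAID1/10 = 2, RAID5 = 4,
# RAID6 = 6). Workload numbers below are made-up examples.
RAID_WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2,
                      "raid5": 4, "raid6": 6}

def backend_iops(total_iops, write_fraction, raid_level):
    """Disk-facing IOPS needed to serve a frontend workload."""
    penalty = RAID_WRITE_PENALTY[raid_level]
    reads = total_iops * (1 - write_fraction)
    writes = total_iops * write_fraction
    return reads + writes * penalty

# 1000 frontend IOPS at 30% writes on RAID5:
print(backend_iops(1000, 0.3, "raid5"))
```

Divide the result by a per-disk figure (roughly 75-150 IOPS for a spinning SATA/SAS disk) to get a spindle count, and it becomes obvious why write-heavy VM workloads on RAID5 eat disks.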
|
# ? Aug 22, 2013 18:49 |