  • Locked thread
Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
^ Pretty much that. When I review candidates and such at my place I never ask "can you install ESXi?"; I may, however, ask about configuring Auto Deploy, or Boot From SAN/Embedded

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Dilbert As gently caress posted:

Boot From SAN/Embedded

Do people do this anymore?

I never thought it made sense to use pricey SAN space to host a boot image if a USB key or a pair of tiny drives in RAID-1 will do the job for far cheaper.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Agrikk posted:

Do people do this anymore?

I never thought it made sense to use pricey SAN space to host a boot image if a USB key or a pair of tiny drives in RAID-1 will do the job for far cheaper.

Mostly Cisco blade servers, as some don't have internal USB ports. Can't say I'm a fan of using any mechanical pieces in hosts.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

QPZIL posted:

Now that I have juniper routers set up in GNS3 and working, I might do a write up. I'm pretty impressed with JunOS so far.

I have only been working with Juniper stuff for about a month now, but absolutely love JunOS compared to IOS. Currently working with a 2xEX4550 and 2xEX4200 in a virtual chassis along with 2xSRX240H at each of our main sites.
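
The virtual chassis side is one of the nicer bits of JunOS, too. A rough sketch of a preprovisioned VC stanza (the serial numbers and roles here are invented placeholders, not from my actual config):

code:
virtual-chassis {
    preprovisioned;
    /* serials below are made up */
    member 0 {
        role routing-engine;
        serial-number XX0000000001;
    }
    member 1 {
        role routing-engine;
        serial-number XX0000000002;
    }
}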

Dilbert As gently caress posted:

Mostly Cisco blade servers, as some don't have internal USB ports. Can't say I'm a fan of using any mechanical pieces in hosts.

I don't ever get to deal with blades, but are internal USB drives slowly being replaced with internal SD cards?

Moey fucked around with this message at 20:14 on Aug 22, 2013

Count Thrashula
Jun 1, 2003

Death is nothing compared to vindication.
Buglord

Moey posted:

I have only been working with Juniper stuff for about a month now, but absolutely love JunOS compared to IOS. Currently working with a 2xEX4550 and 2xEX4200 in a virtual chassis along with 2xSRX240H at each of our main sites.

Yeah ditto. JunOS is just so sexy. Those nested configs, mmm...

code:
system {
    host-name Router;
    login {
        user QPZIL {
            uid 2000;
            class super-user;
            authentication {
                encrypted-password "!@%!^#^@#@#butts!@#!%@#^";
            }
        }
    }
}
interfaces {
    /* uplink to the matrix */
    fe-0/0/0 {
        unit 0 {
            family inet {
                address 10.69.69.69/24;
            }
        }
    }
    /* connection to VLAN 10 */
    fe-0/0/1 {
        vlan-tagging;
        unit 10 {
            vlan-id 10;
            family inet {
                address 10.10.10.1/24;
            }
        }
    }
}
I just pulled that out of my rear end, but come on... comments! Nesting! Prefix length instead of subnet mask!

Ah, it's wonderful.

ate shit on live tv
Feb 15, 2004

by Azathoth
IOS, JunOS, FoS, NX-OS, it doesn't really matter after about a week.

Also, for a fun JunOS exercise: assign 24 ports to a VLAN, then assign 12 of those ports to a different VLAN.

How many strokes did that take?
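
The first half is easy enough with an interface-range; it's peeling 12 members back out of the range that racks up the strokes. A rough sketch of the easy half (range and VLAN names invented):

code:
interfaces {
    interface-range lab-ports {
        member-range ge-0/0/0 to ge-0/0/23;
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan {
                    members VLAN100;
                }
            }
        }
    }
}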

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Moey posted:

I don't ever get to deal with blades, but are internal USB drives slowly being replaced with internal SD cards?

Internal USB is cheaper and easier to manage, I could rant about blades but I don't think anyone wants to hear it.

smokmnky
Jan 29, 2009

Dilbert As gently caress posted:

Internal USB is cheaper and easier to manage, I could rant about blades but I don't think anyone wants to hear it.

ABR man, Always, Be, Ranting

evol262 posted:

What makes a good VMware admin subject matter knowledge of:

SANs (FC and/or iSCSI), including best practices for multipathing, how to handle LUN masking and replication, etc
Scripting -- PowerCLI is the standard, but you can use anything you want
Systems Administration -- you're almost certainly going to end up hands-on with some of your VMs, and you should be comfortable in any OS running on your VMware environment, especially sysprep if you deal with Windows
Networking -- Know when to use link aggregation and when not to. Understand VLANs and how they work, as well as how to segment your network and troubleshoot problems.
Disaster recovery -- enough said; large VMware environments almost always have a DR site somewhere, and you should be familiar with scoping the required resources and setting up processes to ensure that a hot (or cold, depending on your environment) environment is ready
Performance tuning -- know how the VMware scheduler works, and when 2 vCPUs are actually better than one. Know how dense you can make your environment. Get a handle on how many IOPS you need.
Resiliency -- keeping critical services up through failures. Nobody wants your virtualized AD controllers to die.
VDI -- plays into performance tuning/density/systems admin
Imaging -- fading, but "golden images", templates, linked clones, and other ready-to-go images are still important.

Nobody is going to hand you a configured environment and say "plug in your servers, assign these addresses, and collect a paycheck". Realistically, you'll help design the environment and administer it on a day-to-day basis, probably including the guests. A good virtualization admin has (or has had in the past) a hand in every pot.

So where would you suggest starting? Any good beginner books to read up on? Like I said we do a very, very limited amount of virtualization right now but I find it fascinating and would love to dig into it more.

smokmnky fucked around with this message at 00:04 on Aug 23, 2013

H.R. Paperstacks
May 1, 2006

This is America
My president is black
and my Lambo is blue

Powercrazy posted:

Also, for a fun JunOS exercise: assign 24 ports to a VLAN, then assign 12 of those ports to a different VLAN.

How many strokes did that take?

It's a pain in the rear end, just like how port security is handled in a different stanza, but I'll take those extra strokes any day in order to have things like nested prefix-lists that can be used everywhere in the config, and apply-paths, to name a few.

The switching side of JunOS is still evolving and I've made numerous feature requests to our rep team related to it.
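
For anyone who hasn't seen apply-path: it builds a prefix-list out of addresses that already exist elsewhere in your config, so things like "every configured BGP neighbor" stay in sync automatically. A minimal sketch (the prefix-list name is invented):

code:
policy-options {
    prefix-list bgp-neighbors {
        /* expands to every neighbor under any BGP group */
        apply-path "protocols bgp group <*> neighbor <*>";
    }
}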

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

smokmnky posted:

So where would you suggest starting? Any good beginner books to read up on? Like I said we do a very, very limited amount of virtualization right now but I find it fascinating and would love to dig into it more.

I have some in the OP of the VMware mega thread; look at Mastering vSphere 5 by Scott Lowe, it will give you a solid foundation on virtualization

Docjowles
Apr 9, 2009

smokmnky posted:

ABR man, Always, Be, Ranting


So where would you suggest starting? Any good beginner books to read up on? Like I said we do a very, very limited amount of virtualization right now but I find it fascinating and would love to dig into it more.

How many physical servers do you manage right now, and how many of those do you hope to virtualize? Do you already have any sort of SAN/central storage in place? Answers to questions like that will help posters decide if you need "loving Guru" or "Intro to VMware" level advice :)

Pile Of Garbage
May 28, 2007



Dilbert As gently caress posted:

Internal USB is cheaper and easier to manage, I could rant about blades but I don't think anyone wants to hear it.

I'd like to hear a rant about blades. I loved working with IBM BladeCentre chassis and blade servers in my last job.

Here's two IBM HS22 blades I did last year, internal USB is highlighted:



Another benefit of booting ESXi from USB is that it's a hell of a lot cheaper than having HDDs (In the above picture you can see that both blades have empty HDD slots).

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

cheese-cube posted:

I'd like to hear a rant about blades. I loved working with IBM BladeCentre chassis and blade servers in my last job.


Should have specified Cisco blades in particular, but yeah, there are a few things I don't care for about blades that I may type up tomorrow at lunch

Dilbert As FUCK fucked around with this message at 03:58 on Aug 23, 2013

Docjowles
Apr 9, 2009

Dilbert As gently caress posted:

Should have specified Cisco blades

So far we really like our Cisco UCS blades, although the management software and paradigm is a little weird. Granted I'm not the primary guy that manages them, but the guy who is, likes 'em too.

The few C-series we bought as a trial (standard rack mount form factor), on the other hand, have been the worst shitshow known to man.

Wicaeed
Feb 8, 2005

Docjowles posted:

So far we really like our Cisco UCS blades, although the management software and paradigm is a little weird. Granted I'm not the primary guy that manages them, but the guy who is, likes 'em too.

The few C-series we bought as a trial (standard rack mount form factor), on the other hand, have been the worst shitshow known to man.

Ouch, good to know. We were thinking about getting some C-series Dell blades, but I think we're probably going to end up sticking with the standard M1000e chassis & M600 series blades.

smokmnky
Jan 29, 2009

Docjowles posted:

How many physical servers do you manage right now, and how many of those do you hope to virtualize? Do you already have any sort of SAN/central storage in place? Answers to questions like that will help posters decide if you need "loving Guru" or "Intro to VMware" level advice :)

So that's the thing, my specific job and department is a little weird. We have ~100 colos in 34 countries that run on what we would call cookie cutter boxes. They are the same everywhere and when one goes down we just put a new one in its place. Our expansion/capacity or redundancy is just adding more of that type of server into the rack. We do web monitoring and metrics based off that specific hardware, so turning it into a VM isn't possible because data consistency is what we sell and the whole point of the business. Basically if you get metrics from a location in Chicago and Beijing it's the same other than the actual network.

We run two ESXi servers that host our CentOS DNS servers and a couple other support-style machines that don't collect data like the main service machines.

/edit
That's two ESXi servers per location with 9 VMs loaded but only 4-5 running per server. They mirror each other in the VMs, but we run different VMs on the A and B servers

smokmnky fucked around with this message at 21:24 on Aug 23, 2013

Count Thrashula
Jun 1, 2003

Death is nothing compared to vindication.
Buglord
Finally decided to sketch out my home lab since I was bored enough today.



Everything's set up so I can do labbing without my WiFi interfering with anything.

So I've got one 1841 dedicated to NAT translation, as sort of a border between my ISP router (some $20 WalMart special) and my lab network.
Then the next 1841 (C_Router_1) is my router-on-a-stick and routes to NATRouter via OSPF.

The top switch (C_Switch_1) is the only one worth a drat, because the XL switches are so bad and are stuck on IOS 11.2. But VLAN 10 (ports 17-23) is the only one with access to get NAT'ed, so I can plug my laptop into one of those ports and have internet while I gently caress around and experiment with the rest of the setup.
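
The NATRouter side of that is nothing exotic, just NAT overload on the lab subnet. Roughly like this (interface names and addressing are invented for the sketch, not my actual config):

code:
interface FastEthernet0/0
 description uplink to ISP router
 ip address dhcp
 ip nat outside
!
interface FastEthernet0/1
 description toward C_Switch_1 VLAN 10
 ip address 10.10.10.1 255.255.255.0
 ip nat inside
!
access-list 1 permit 10.10.10.0 0.0.0.255
ip nat inside source list 1 interface FastEthernet0/0 overload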

It works for me, and segments things well enough that I don't feel like I'm messing up anything by connecting GNS3 networks and firewalls and IP phones and Juniper bullshit and whatnot.

Overkill? Maybe, but it's fun :)

Not pictured:
- 2610 router
- 1721 router
I haven't figured out what fun things to do with those yet.

And I still need to buy a couple 3550s if I'm going to be studying up for the CCNP exams.

Count Thrashula fucked around with this message at 20:51 on Aug 23, 2013

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

QPZIL posted:


Overkill? Maybe, but it's fun :)


It's my opinion that home labs, when done properly, are the most overcomplicated and overbuilt environments ever on a per user basis. :)

I was bitching in the daily poo poo thread about how complicated and cluttered my lab had become so I tore it all down, set it up in its current configuration and vowed not to touch it until my MRTG installation fills up the Yearly Graph (1 Day Average).

Except that yesterday I ordered three FC HBAs and am toying with the idea of building a new storage server around Windows Server 2012 R2 and converting everything from iSCSI to 4Gb FC.

And maybe some Infiniband for giggles.

gently caress.

evol262
Nov 30, 2010
#!/usr/bin/perl

Agrikk posted:

It's my opinion that home labs, when done properly, are the most overcomplicated and overbuilt environments ever on a per user basis. :)

I was bitching in the daily poo poo thread about how complicated and cluttered my lab had become so I tore it all down, set it up in its current configuration and vowed not to touch it until my MRTG installation fills up the Yearly Graph (1 Day Average).

Except that yesterday I ordered three FC HBAs and am toying with the idea of building a new storage server around Windows Server 2012 R2 and converting everything from iSCSI to 4Gb FC.

And maybe some Infiniband for giggles.

gently caress.

iSCSI over IPoIB. gently caress FC.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

evol262 posted:

iSCSI over IPoIB. gently caress FC.

IB is next.

I'm comfortable on FC so I want to get familiar with a new technology (tiered storage in Server 2012 R2) while refreshing on FC. Plus FC gear (HBAs and switches) is a lot less expensive than IB gear.

But why the FC hate? It's not that complicated to manage and has been bombproof in all of my past deployments. If I had any complaint, it would be the lack of insight into actual traffic utilization over your FC fabric, which I'm not sure has been resolved.

evol262
Nov 30, 2010
#!/usr/bin/perl

Agrikk posted:

IB is next.

I'm comfortable on FC so I want to get familiar with a new technology (tiered storage in Server 2012 R2) while refreshing on FC. Plus FC gear (HBAs and switches) is a lot less expensive than IB gear.

But why the FC hate? It's not that complicated to manage and has been bombproof in all of my past deployments. If I had any complaint, it would be the lack of insight into actual traffic utilization over your FC fabric, which I'm not sure has been resolved.

I actually like FC. But FCoE is abortive, and ethernet just isn't going away. I like the idea of having a segmented storage network on a separate protocol layer, but the reality is that almost all the advantages of FC can be accomplished with iSCSI, MPIO, and VLANs. Not that FC is bad, just that it's dying. FC shops will stay FC, but new deployments will probably be iSCSI until it takes over the world. Maybe just my opinion.

smokmnky
Jan 29, 2009

smokmnky posted:

So that's the thing, my specific job and department is a little weird. We have ~100 colos in 34 countries that run on what we would call cookie cutter boxes. They are the same everywhere and when one goes down we just put a new one in its place. Our expansion/capacity or redundancy is just adding more of that type of server into the rack. We do web monitoring and metrics based off that specific hardware, so turning it into a VM isn't possible because data consistency is what we sell and the whole point of the business. Basically if you get metrics from a location in Chicago and Beijing it's the same other than the actual network.

We run two ESXi servers that host our CentOS DNS servers and a couple other support-style machines that don't collect data like the main service machines.

/edit
That's two ESXi servers per location with 9 VMs loaded but only 4-5 running per server. They mirror each other in the VMs, but we run different VMs on the A and B servers

I think my post got swallowed up by QPZIL's awesome graph. I'd love some feedback from Docjowles and Dilbert(Corvette)

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Agrikk posted:

IB is next.

I'm comfortable on FC so I want to get familiar with a new technology (tiered storage in Server 2012 R2) while refreshing on FC. Plus FC gear (HBAs and switches) is a lot less expensive than IB gear.

But why the FC hate? It's not that complicated to manage and has been bombproof in all of my past deployments. If I had any complaint, it would be the lack of insight into actual traffic utilization over your FC fabric, which I'm not sure has been resolved.

Well gently caress. Two of my eBay bids on sets of FC cards got sniped in the closing seconds of the auction. It looks like destiny is telling me to jump on IB.

edit: or to invest in an eBay sniper app.

Sepist
Dec 26, 2005

FUCK BITCHES, ROUTE PACKETS

Gravy Boat 2k
I was all excited about a 3750 I got off ebay for my home lab and after they sent me the shipping info they sent me a followup email saying that they didn't actually have the switch and refunded my money :smith:

Count Thrashula
Jun 1, 2003

Death is nothing compared to vindication.
Buglord

Sepist posted:

I was all excited about a 3750 I got off ebay for my home lab and after they sent me the shipping info they sent me a followup email saying that they didn't actually have the switch and refunded my money :smith:

:smith:

Could be worse, I bought a 3550 (i.e. layer 3 switch) on eBay for about $50 after shipping was factored in, and what they sent me was a 3524XL (a layer 2 switch that doesn't even run IOS version 12).

The seller told me that it was basically the same thing and that I was complaining about nothing :downs: A week later, eBay reviewed the case and refunded my full payment amount. I still have the switch, but I don't do a drat thing with it.

kill your idols
Sep 11, 2003

by T. Finninho
Finally got everything installed and wrapped up. Another 16GB of ram came yesterday, so this setup is done.

Focus: :eng101: VMware.

Setup: Beefy AIO host, shared storage out by NFS/iSCSI, back to host, and x3 vESXi hosts for vGoodies.

Hardware:
  • Intel Xeon E3-1230 V2 Ivy Bridge 3.3GHz
  • SUPERMICRO MBD-X9SCL-F-O
  • 32 GB 1333MHz DDR3 PC3-10666 ECC
  • x5 2TB 7200rpm, for Datastore02/Datastore03 Raid10
  • x1 M4 128GB SSD, for Datastore01
  • x1 HBA, for VT-d
  • x2 Intel NICs



I had some parts, so total: $750ish

Gonna start installing some goodies tomorrow!

Docjowles
Apr 9, 2009

For the OP, I got some valuable info from evol262 in the "IT 2.0" thread. If you're studying for Red Hat's RHCSA/RHCE Linux certs, some of the published exam objectives include launching and managing VM's using the KVM hypervisor. If you want to practice this for free without buying new hardware or software, VMware Player is your best option. It supports running another VM host inside of itself (insert Yo Dawg image macro here), known as Nested Virtualization. So if you have enough horsepower, you can lab up everything you need for the exam right from your primary PC. The only special setup required is to edit the VM while it's powered down, browse to the CPU and make sure to check "Virtualize Intel VT-x/EPT or AMD-V/RVI". To test that it's working, from within your Linux VM, open a terminal and run "lsmod | grep kvm". You should see two modules, kvm and either kvm_intel or kvm_amd. If you see nothing, or just kvm, something's set up incorrectly.

Pretty much VMware's entire product lineup (based on Google, ESXi 5.1+ and Workstation 9+ for sure) supports this feature. The great thing about Player is that it's free, and doesn't require dedicating an entire physical host like ESXi would. Some other popular products, notably Oracle VirtualBox, do not support nested virtualization. Nor will they anytime soon, based on public statements. I learned this the hard way after wasting several hours trying to get it working.

Basically, if your goal is running "VM's inside VM's", VMware Player owns. All it requires is Windows or Linux as your host OS (no Mac support), and a 64-bit CPU released within the last few years. It's very bare-bones feature wise but it does the job.
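
If you want a quick sanity check that nested virt actually made it through to the guest, something like this from a terminal inside the Linux VM does it (the sysfs paths are the standard kvm module parameter locations; this is a sketch, adjust to taste):

```shell
#!/bin/sh
# Report whether the kvm modules are loaded and whether nested virt is enabled.
for mod in kvm_intel kvm_amd; do
    f="/sys/module/$mod/parameters/nested"
    if [ -r "$f" ]; then
        # "Y" or "1" means the module will expose virt extensions to its guests
        echo "$mod nested: $(cat "$f")"
    fi
done
grep -q '^kvm' /proc/modules && echo "kvm loaded" || echo "kvm not loaded"
```

If you see "kvm not loaded", you're back to double-checking the VT-x/EPT checkbox on the powered-down VM.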

kill your idols
Sep 11, 2003

by T. Finninho

Docjowles posted:

"VM's inside VM's", VMware Player owns. All it requires is Windows or Linux as your host OS (no Mac support), and a 64-bit CPU released within the last few years. It's very bare-bones feature wise but it does the job.

Workstation 9 does this as well.

The free 30-day trial / academic discounts give you a lot of cool stuff for the vNested installs. For home lab testing and breaking, or just some basic VM's for other non-host applications, I feel it is the best bet. It has come a long way since v6.0.

This product killed Windows Virtual PC for me years ago.

BurgerQuest
Mar 17, 2009

by Jeffrey of YOSPOS
If anyone wants GNS3 and virtual box advice feel free to PM me.

thegoat
Jan 26, 2004

I've just ordered basically the exact same thing. Just no drives. Hope it comes soon!

evol262
Nov 30, 2010
#!/usr/bin/perl

Docjowles posted:

Basically, if your goal is running "VM's inside VM's", VMware Player owns. All it requires is Windows or Linux as your host OS (no Mac support), and a 64-bit CPU released within the last few years. It's very bare-bones feature wise but it does the job.

I should mention that KVM can also do this. And Parallels can, I think. So you have the ability to do it on every major OS. I'd recommend using KVM's nested virt over Player's on Linux (just because everything Workstation/Player related is a huge PITA compared to KVM), but eh.

Docjowles
Apr 9, 2009

I'm sure there are easier and more powerful solutions. However, as far as I know, Player's the only product that satisfies my two very specific goals of being 1) free, and 2) running on Windows. Longer term I definitely want to build a dedicated lab box, but I can build all the "lab" I need right now in VM's for free, which is good enough for me.

evol262
Nov 30, 2010
#!/usr/bin/perl

Docjowles posted:

I'm sure there are easier and more power solutions. However, as far as I know, Player's the only product that satisfies my two very specific goals of being 1) free, and 2) running on Windows. Longer term I definitely want to build a dedicated lab box, but I can build all the "lab" I need right now in VM's for free, which is good enough for me.

No, I mean, Player's a great solution for Windows.

Nested VM support:

Windows
  • VMware Player
  • VMware Workstation
Linux
  • VMware Player
  • VMware Workstation
  • KVM
OSX
  • Parallels
  • VMware Fusion

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

kill your idols posted:

Workstation 9 does this as well.

The free 30-day trial / academic discounts give you a lot of cool stuff for the vNested installs. For home lab testing and breaking, or just some basic VM's for other non-host applications, I feel it is the best bet. It has come a long way since v6.0.

This product killed Windows Virtual PC for me years ago.

Liking their FB page also gives very modest discounts

kill your idols
Sep 11, 2003

by T. Finninho
Thinking about this HP ProCurve 1810G-8 v2 for $135 shipped. Worth the upgrade from my NetGear GS108T?

kill your idols fucked around with this message at 17:30 on Aug 29, 2013

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Mac Minis make awesome lab machines. Tiny, quiet, and cheap. Downside is they max out at 16GB of RAM.

evol262
Nov 30, 2010
#!/usr/bin/perl

kill your idols posted:

Thinking about this HP ProCurve 1810G-8 v2 for $135 shipped. Worth the upgrade from my NetGear GS108T?

Yes. But 1810-24Gs are $50 more. A PowerConnect 5324 plus (replacement) quiet 40mm fans costs less and is more capable in every possible way.

kill your idols
Sep 11, 2003

by T. Finninho

evol262 posted:

Yes. But 1810-24Gs are $50 more. A PowerConnect 5324 plus (replacement) quiet 40mm fans costs less and is more capable in every possible way.

So I wanna grab something like this guy:

http://www.ebay.com/itm/DELL-POWERCONNECT-5324-24-PORT-GIGABIT-NETWORK-SWITCH-/151101407666?pt=LH_DefaultDomain_0&hash=item232e5881b2

?
edit: https://www.andovercg.com/store/Del...itch-p6638.html

Better deal, it seems.

evol262
Nov 30, 2010
#!/usr/bin/perl

Pretty much, yeah. And update the firmware first thing so it actually works with IE.

It's also really goddamn loud (like a lot of switches). Get 2 quiet 40mm fans and 4 solderless quick splices and fix that (it's the usual Dell problem where hot and neutral are actually reversed for no reason). The biggest win of the 1810-24G is that it's fanless, but it's honestly much less capable than a 5324.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

evol262 posted:

Pretty much, yeah. And update the firmware first thing so it actually works with IE.

It's also really goddamn loud (like a lot of switches). Get 2 quiet 40mm fans and 4 solderless quick splices and fix that (it's the usual Dell problem where hot and neutral are actually reversed for no reason). The biggest win of the 1810-24G is that it's fanless, but it's honestly much less capable than a 5324.

5324 owner seconding the awesomeness of the 5324 and how loud the fans are.

I solved that problem like this:



where I took the 40mm leads and used them to power a pair of 80mm fans. The lower current slows the 80mm fans down a lot, but they run super quiet.

Disadvantages are that the fan-alert red LED is on all of the time now and you lose the 1U of space above the switch due to the fans.

Temps of my switch dropped significantly with this mod, though.


edit: I am assuming that this discussion is for a home lab. If this is in your office, just stick it in your wiring closet and close the door. :)


edit2: Another smaller switch to look into is the PowerConnect 2716. Dunno if it does LAGs or alternate MTU sizes, though. You might need to check on that.


Agrikk fucked around with this message at 20:39 on Aug 29, 2013
