skipdogg
Nov 29, 2004
Resident SRT-4 Expert

What is this 'overtime' you speak of?

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online
SPB on my VNXe3300 just took a poo poo. So far work hasn't called me so I think SPA took over but we'll see tomorrow :/

I'm really starting to hate these VNX SANs.

Nomex
Jul 17, 2002

Flame retarded.

skipdogg posted:

What is this 'overtime' you speak of?

This email that I just got right here is what it is:

"Thanks Nomex. At this time I am leaning towards servers losing connection/communication and not noticing. One of the major changes was in the teaming of the Nexus switches....."

evol262
Nov 30, 2010
#!/usr/bin/perl

Nomex posted:

This email that I just got right here is what it is:

"Thanks Nomex. At this time I am leaning towards servers losing connection/communication and not noticing. One of the major changes was in the teaming of the Nexus switches....."

I think what he meant is "salaried employees don't get overtime. Working more sucks, because all you get is 'comp time' that you can never comp because you're busy".

Nomex
Jul 17, 2002

Flame retarded.
I know what he meant, it was just good timing with that email.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Corvettefisher posted:

Might be more a networking question but,

Is anyone here actively using FC for anything other than high-end transaction servers? I've had a few customers with existing 4Gb/s FC networks for their SAN looking to upgrade, and I usually just go with 10Gb to run iSCSI or FCoE. But I get those "Is this guy really suggesting that?" moments from some people on iSCSI/FCoE. It might just be dinosaurs in the IT management sector who haven't looked at what is going on around them, and think FC @ 8Gb/s is the poo poo. Normally I'll sell the admins pretty fast when I show them the performance numbers, and sell the managers when I show them the cost, performance, and manageability of iSCSI/FCoE.

Just gets annoying having to repeat myself over and over; didn't know if anyone had some viewpoints to share.

If I'm using FCP today I'm not sure why I'd want to replace it with 10gbE and iSCSI though I might consider FCoE at the edge if I'm trying to cut down on cabling costs/simplify my network design. Realistically speaking most end hosts would be fine with just 4gbps of storage bandwidth and even then few come close to the upper ceiling of that.

Personally if I was doing block storage today I'd probably still prefer to use 8gb FCP in my core storage network because it's very reliable and will generally push more traffic than 10gbE when I start bundling links together. I'm probably also invested in some tools that integrate nicely with native FCP that don't have an iSCSI equivalent yet.

It also means I can keep using any storage I'm trying to replace for other less important things. I don't buy the performance or management argument since in either case (iSCSI or FCoE) I still have to learn a new technology and truth be told we're talking about pretty high levels of bandwidth for your typical small to mid-sized customer.

Unless you're selling brocade VCS ethernet fabric switches (or maybe Juniper QFabric) in which case you can build a pretty awesome ethernet storage network core.

Amandyke
Nov 27, 2004

A wha?

Goon Matchmaker posted:

SPB on my VNXe3300 just took a poo poo. So far work hasn't called me so I think SPA took over but we'll see tomorrow :/

I'm really starting to hate these VNX SANs.

Keep in mind the VNXe is the baby brother of the VNX; it's the replacement for the AX series SANs. Not to excuse it for having an issue, but hey, that's why you have redundant SPs.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Goon Matchmaker posted:

SPB on my VNXe3300 just took a poo poo. So far work hasn't called me so I think SPA took over but we'll see tomorrow :/

I'm really starting to hate these VNX SANs.

Those should be active/active, just not ALUA, so I doubt your systems knew much of what occurred. Personally I think they are nice for smaller customers.

1000101 posted:

Bunch of viewpoints

Before I dive into this, do you work in an internal department of a company, or for an IT firm servicing different customers up against bid deals and SLAs?

GrandMaster
Aug 15, 2004
laidback

1000101 posted:

Realistically speaking most end hosts would be fine with just 4gbps of storage bandwidth and even then few come close to the upper ceiling of that.

We still run a lot of 2G FC (not for much longer, thankfully) and we still don't come close to maxing it out. The biggest problem we have is the SFPs burning out because they are so old.

Then again, we are still on crappy old CX3 arrays which probably aren't even capable of pushing data out faster than that so YMMV.

Amandyke
Nov 27, 2004

A wha?

GrandMaster posted:

Then again, we are still on crappy old CX3 arrays which probably aren't even capable of pushing data out faster than that so YMMV.

Do you have 4Gbps to your DAEs? You'd be surprised how much data those can put out... that is, if you're running a CX3-80.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Nomex posted:

I know the server will take it, but will the software support it? Hey, any Netapp engineers in here?

There aren't any officially qualified designs yet, and won't be until closer to its actual release, so I can't say for sure whether there is a card limit. Thus far the only limit I've seen is 2TB per server. It will support Fusion-io hardware, since those are the cards that will be resold through NetApp for Flash Accel sales. It should also support most any other enterprise-quality PCIe flash or SSD.

It's also a free product, so no need to hit up your CTO for money, as long as you have NetApp filers under support you have access to it.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Nomex posted:

To the post above me, Nexus gear may be expensive, but so are fabric switches. I think our last cost for Brocade licensing was about $1300/port.

Our Nexus switches, layer 2 only, cost us somewhere around $30k for the pair, each with 32 ports. Why would you even consider dedicated fibre channel infrastructure when you would need to spend over $1k per port for FC?

GrandMaster posted:

We still run a lot of 2G FC (not for much longer, thankfully) and we still don't come close to maxing it out. The biggest problem we have is the SFPs burning out because they are so old.

Last time I checked, we were maxing our 10G ports out at around 1gbps each. During the backups we spiked a little higher, but not much. I somewhat regret spending the cash for 10G, until I look at the back of my racks and see half as many cables.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

Amandyke posted:

Keep in mind the VNXe is the baby brother of the VNX. It's the replacement for the AX series SAN's. Not to excuse it for having an issue, but hey, that's why you have redundant SP's.

Corvettefisher posted:

Those should be active/active, just not ALUA, so I doubt your systems knew much of what occurred. Personally I think they are nice for smaller customers.


They may be redundant, but the last time this happened SPA did not take over and anything talking to LUNs on SPB died, which took down half of our VMware environment.

We've got multiple issues with our storage environment we can't fix at this time, because it's a federal system and once it's in production you can't tinker with it without going through this annoying as gently caress process that takes months to get all the sign-offs and approvals required.

Edit: I'm looking at the health page this morning on the VNXe and according to it, nothing happened. Nada. If nothing happened why the hell did it send me an email saying SPB faulted? :argh:

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
What software version are you running?

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

Corvettefisher posted:

What software version are you running?

2.2.0.17384 (Thu Mar 22 02:37:03 GMT+0000 2012)

It's not the latest, and I can't upgrade due to the previously mentioned approval process.

evil_bunnY
Apr 2, 2003

Transferring some old crap to our 2240 from an old MD3000 is hilarious: reads from RAID 5 peg the MD's CPU, writes to RAID-DP (RAID 6, basically) barely register on the filer's.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evil_bunnY posted:

Transferring some old crap to our 2240 from an old MD3000 is hilarious: reads from RAID 5 peg the MD's CPU, writes to RAID-DP (RAID 6, basically) barely register on the filer's.

Not sure if you know it, but you can view per-processor stats on multi-core systems using the "sysstat -m" command. There are also some diag-level commands that give more insight into how much CPU time each processing domain is using, which can be more helpful when looking at processor utilization.

That said, CPU utilization is rarely a leading indicator of problems on a filer, but it's always nice to know that you've got headroom there. The 2240s are pretty nice boxes, definitely much better than the 2040s that they replace.
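If you've never poked at it, the console flow looks roughly like this (commands from memory, so double-check them against your ONTAP version; the filer> prompts and interval are just illustrative):

```
filer> sysstat -m 5        # per-processor CPU utilization, sampled every 5 seconds
filer> priv set diag       # the per-domain breakdown is gated behind diag mode
filer*> sysstat -M 5       # adds columns for each processing domain (kahuna, network, etc.)
filer*> priv set admin     # drop back to admin mode when you're done
```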

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Holy poo poo Avamar systems are expensive.

90k starting?
:drat:

Rhymenoserous
May 23, 2008

Corvettefisher posted:

Holy poo poo Avamar systems are expensive.

90k starting?
:drat:

It's EMC, dude. The VNX and VNXe series are actually the odd ones out due to being affordable. Everything else EMC does is expensive as hell. And that's before they charge you for the software that you thought came with it.

Amandyke
Nov 27, 2004

A wha?

Rhymenoserous posted:

It's EMC, dude. The VNX and VNXe series are actually the odd ones out due to being affordable. Everything else EMC does is expensive as hell. And that's before they charge you for the software that you thought came with it.

That said, you're a fool if you're paying list. 50% off is the de facto starting point for negotiations.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Ah I just saw they didn't add our partner discount on... That explains it.

E: Okay <90k for 2 with 7.8TB is fine

Dilbert As FUCK fucked around with this message at 21:05 on Oct 19, 2012

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Amandyke posted:

That said, you're a fool if you're paying list. 50% off is the de facto starting point for negotiations.

True. I can't discuss what we paid for our EMC systems, but there's lots of ways to make a deal before the end of a quarter, especially when you're unseating your primary competitor. We went from NetApp to EMC and the deal was insanely sweet. Marketing dollars, trade in credit, whatever. They'll get creative if you play hardball long enough.

Rhymenoserous
May 23, 2008

Amandyke posted:

That said, you're a fool if you're paying list. 50% off is the de facto starting point for negotiations.

True: With a big enough enterprise they'll practically hand you the hardware but you are going to eat it in support and software costs. I don't really miss dealing with EMC.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Corvettefisher posted:

Those should be active/active just not ALUA so I doubt your systems new much of what occured. Personally I think they are nice for smaller customers.

FYI, the only really "active/active" storage array that EMC sells is the VMAX/Symmetrix line. The VNX and VNXe are more or less active/passive|passive/active, and in fact do support ALUA.

quote:

Before I dive into this, do you work in an internal department of a company, or for an IT firm servicing different customers up against bid deals and SLAs?

I work for a professional services company with a large chunk of my client base in the fortune 500. I specifically handle architecture and design work and have done a lot of capacity planning over the years. I've worked with financial services, health care, commercial and retail customers.

Nomex
Jul 17, 2002

Flame retarded.

adorai posted:

Our Nexus switches, layer 2 only, cost us somewhere around $30k for the pair, each with 32 ports. Why would you even consider dedicated fibre channel infrastructure when you would need to spend over $1k per port for FC?

Last time I checked, we were maxing our 10G ports out at around 1gbps each. During the backups we spiked a little higher, but not much. I somewhat regret spending the cash for 10G, until I look at the back of my racks and see half as many cables.

If you have the end to end hardware to support FCoE, there really isn't much reason to have a dedicated FC network anymore IMO.

Nahrix
Mar 17, 2004

Can't afford to eat out
I'm looking for a smaller solution like the VNXe, but after reading a bit of this thread, I've seen a lot of complaints about it. Can anyone offer a solution that fits the following:

  • Reliable. Having this never go down is the single most important factor.
  • Performance to handle ~2TB backups nightly, file sharing for 200 users that save exclusively on network shares, and hosting the images of 4 virtual servers (2 domain controllers, 2 application servers)
  • In the sub-$10,000 range.

Is this feasible?

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

You know that old car analogy where people say you can have "cheap, fast, reliable" pick 2?

Yeah, same thing here. Your budget is way too low for any real big-name enterprise SAN provider to hit with any of their entry-level stuff. You might be able to whitebox something open source together under $10K, but I would never put my job on the line betting on it being reliable.

Honestly, with support and maintenance costs included for 3 years you're probably looking at $25K to start.

skipdogg fucked around with this message at 23:55 on Oct 23, 2012

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Nahrix posted:

I'm looking for a smaller solution like the VNXe, but after reading a bit over this thread, I've seen a lot of complaints about it. Can anyone offer a solution that fits the following:

  • Reliable. Having this never go down is the single most important factor.
  • Performance to handle ~2TB backups nightly, file sharing for 200 users that save exclusively on network shares, and hosting the images of 4 virtual servers (2 domain controllers, 2 application servers)
  • In the sub-$10,000 range.

Is this feasible?
You could probably build a sub-$10k HA NetApp 2050 with eBay parts.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Nahrix posted:

I'm looking for a smaller solution like the VNXe, but after reading a bit over this thread, I've seen a lot of complaints about it. Can anyone offer a solution that fits the following:

  • Reliable. Having this never go down is the single most important factor.
  • Performance to handle ~2TB backups nightly, file sharing for 200 users that save exclusively on network shares, and hosting the images of 4 virtual servers (2 domain controllers, 2 application servers)
  • In the sub-$10,000 range.

Is this feasible?

Yeah, the VNXe's aren't terrible; you can get one for about ~$16,000 loaded with 6x 600GB 15k drives and 6x 2TB drives, it should have 2 or 4 gig uplinks per SP, and 3yr 8x5 NBD support. I actually kinda like them, haven't had too much to complain about with them, so long as you keep them updated and not overtaxed. The 3150's are a nice revision to the series.

Might want to play "who can go lower" with NetApp and EMC: VNXe vs. FAS 2200. Call up some resellers and bargain with them.

If you mean $10K tops for a NAS, that might be dicey; Dell has some solutions around there, but I wouldn't cheap out on storage.


You might be able to do an MD1220, if your servers have eSAS connections. I popped one with 12x 300GB 10k SAS and 12x 1TB 7.2k NL-SAS drives + dual H800 RAID controllers for ~$15k list.

Dilbert As FUCK fucked around with this message at 00:07 on Oct 24, 2012

evil_bunnY
Apr 2, 2003

Metrics. Do you have them? Anyone recommending stuff right now is basically taking pot shots in the dark.

In any case, $10k isn't really SAN money.

The_Groove
Mar 15, 2003

Supersonic compressible convection in the sun
Every time I have DDN gear fail, it just makes me more impressed by it. I had BOTH raid controllers for a DDN9900 storage system fail after being powered on following building maintenance. A "disk chip" in each controller had failed. DDN shipped out 2 replacements, I swapped them in, re-cabled everything, and they booted fine, reading their config (zoning, network, syslog forwarding, etc.) off one of the disks. I didn't have to do anything other than turn them on!

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The_Groove posted:

I had BOTH raid controllers for a DDN9900 storage system fail after being powered on following building maintenance.
I'm curious why you consider this in any way acceptable for a storage vendor.

The_Groove posted:

DDN shipped out 2 replacements, I swapped them in, re-cabled everything, and they booted fine, reading their config (zoning, network, syslog forwarding, etc.) off one of the disks. I didn't have to do anything other than turn them on!
And swap in and recable controllers, something which any reputable storage vendor would have sent out their own field technicians to do.

Vulture Culture fucked around with this message at 20:23 on Oct 24, 2012

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Misogynist posted:

I'm curious why you consider this in any way acceptable for a storage vendor.

Well, you see, all 4 wheels fell off my car, but they put new wheels on and it runs fine, I LOVE THIS CAR!

M@
Jul 10, 2004

Nahrix posted:

I'm looking for a smaller solution like the VNXe, but after reading a bit over this thread, I've seen a lot of complaints about it. Can anyone offer a solution that fits the following:

  • Reliable. Having this never go down is the single most important factor.
  • Performance to handle ~2TB backups nightly, file sharing for 200 users that save exclusively on network shares, and hosting the images of 4 virtual servers (2 domain controllers, 2 application servers)
  • In the sub-$10,000 range.

Is this feasible?

Used FAS2050A with "eBay parts"
20x 300GB 15K $7950
20x 450GB 15K $10,950

But, yeah, this is just a pot shot. No idea if this will work for you. We also have the VNXe line, but the FAS20x0s are much more popular these days.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NippleFloss posted:

Well, you see, all 4 wheels fell off my car, but they put new wheels on and it runs fine, I LOVE THIS CAR!
Hey, what's half a day of unplanned downtime between friends?

Rhymenoserous
May 23, 2008

The_Groove posted:

Every time I have DDN gear fail, it just makes me more impressed by it. I had BOTH raid controllers for a DDN9900 storage system fail after being powered on following building maintenance. A "disk chip" in each controller had failed. DDN shipped out 2 replacements, I swapped them in, re-cabled everything, and they booted fine, reading their config (zoning, network, syslog forwarding, etc.) off one of the disks. I didn't have to do anything other than turn them on!

I have to say this post has left an impression on me too.

The_Groove
Mar 15, 2003

Supersonic compressible convection in the sun
Haha, well, poo poo happens. I wouldn't necessarily blame DDN for components failing in an unlikely combination, resulting in unscheduled downtime. They offered to send a tech, but we decided to do the replacement ourselves since their procedure was so simple. I was pretty impressed, but apparently this is nothing special!

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

The_Groove posted:

They offered to send a tech, but we decided to do the replacement ourselves since their procedure was so simple. I was pretty impressed but apparently this is nothing special!

What? While doing it yourself might be a morale/ego booster, I would much rather have one of their techs change it out (unless it would take a day or several hours for them to get to you); that way if anything isn't working or doesn't work right, it's not your rear end, it's someone else's.

Hell, when one of my customers' servers goes down I usually drive out and watch the tech change the part(s), restack, and verify availability.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

The_Groove posted:

Haha well poo poo happens, I wouldn't necessarily blame DDN for components failing in an unlikely combination resulting in unscheduled downtime. They offered to send a tech, but we decided to do the replacement ourselves since their procedure was so simple. I was pretty impressed but apparently this is nothing special!

I think that there are a lot of benefits to maintaining all configuration information on disk rather than in controller modules. It's nice to just swap in a new head and be up and running again.

But having BOTH controllers fail simultaneously is absolutely unacceptable, and there's just no way you blow that off as a "random happenstance." Either the failure rate on the controllers is way too high, or they aren't truly independent. Either is a big problem.

The_Groove
Mar 15, 2003

Supersonic compressible convection in the sun
Yeah, we did the work because it would be finished before a tech could get out here. It is annoying that while the two controllers had different "disk chips" fail, between them they could still have seen all the disks. But the failed disk chip on the A controller prevented access to the disk holding the configuration (it's not some internal disk in the controllers; it uses a disk in the arrays it couldn't talk to). It's too late now, but I wonder if swapping the A/B controllers would have worked temporarily, since the new A controller should have been able to read the config.

I have heard some stories about these 9900 controllers having serious issues during bootup, usually firmware related and not multiple failures though, but maybe that's changing as they age.
