|
What is this 'overtime' you speak of?
|
# ? Oct 18, 2012 23:05 |
|
SPB on my VNXe3300 just took a poo poo. So far work hasn't called me so I think SPA took over but we'll see tomorrow :/ I'm really starting to hate these VNX SANs.
|
# ? Oct 18, 2012 23:11 |
|
skipdogg posted:What is this 'overtime' you speak of? This email that I just got right here is what it is: "Thanks Nomex. At this time I am leaning towards servers losing connection/communication and not noticing. One of the major changes was in the teaming of the Nexus switches....."
|
# ? Oct 18, 2012 23:18 |
|
Nomex posted:This email that I just got right here is what it is: I think what he meant is "salaried employees don't get overtime. Working more sucks, because all you get is 'comp time' that you can never comp because you're busy".
|
# ? Oct 18, 2012 23:20 |
|
I know what he meant; it was just good timing with that email.
|
# ? Oct 18, 2012 23:22 |
|
Corvettefisher posted:Might be more a networking question but, If I'm using FCP today, I'm not sure why I'd want to replace it with 10GbE and iSCSI, though I might consider FCoE at the edge if I'm trying to cut down on cabling costs/simplify my network design. Realistically speaking, most end hosts would be fine with just 4Gbps of storage bandwidth, and even then few come close to the upper ceiling of that. Personally, if I was doing block storage today I'd probably still prefer to use 8Gb FCP in my core storage network, because it's very reliable and will generally push more traffic than 10GbE once I start bundling links together. I'm probably also invested in some tools that integrate nicely with native FCP that don't have an iSCSI equivalent yet. It also means I can keep using any storage I'm trying to replace for other, less important things. I don't buy the performance or management argument, since in either case (iSCSI or FCoE) I still have to learn a new technology, and truth be told we're talking about pretty high levels of bandwidth for your typical small to mid-sized customer. Unless you're selling Brocade VCS Ethernet fabric switches (or maybe Juniper QFabric), in which case you can build a pretty awesome Ethernet storage network core.
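The bandwidth comparison above can be sketched as a back-of-envelope calculation. The encoding efficiencies are the standard published figures; the ~10% iSCSI/TCP protocol overhead is an assumption for illustration, and real numbers vary with workload:

```python
# Back-of-envelope effective bandwidth per link.
# 8G FC signals at 8.5 Gbaud with 8b/10b encoding (80% efficient);
# 10GbE uses 64b/66b (~97% efficient). The ~10% iSCSI/TCP overhead
# below is an assumption for illustration, not a measured figure.

def effective_mb_per_s(line_rate_gbps, encoding_eff, protocol_eff=1.0):
    """Usable payload bandwidth of one link, in MB/s."""
    return line_rate_gbps * 1000 / 8 * encoding_eff * protocol_eff

fc_8g = effective_mb_per_s(8.5, 0.8)              # ~850 MB/s
iscsi_10g = effective_mb_per_s(10, 64 / 66, 0.9)  # ~1091 MB/s

print(f"8G FC:     ~{fc_8g:.0f} MB/s per link")
print(f"10G iSCSI: ~{iscsi_10g:.0f} MB/s per link")
```

A single 10GbE link edges out a single 8G FC link, but bundling two 8G FC ports via multipathing roughly doubles the FC figure, which is the "push more traffic than 10GbE" point being made.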
|
# ? Oct 18, 2012 23:56 |
|
Goon Matchmaker posted:SPB on my VNXe3300 just took a poo poo. So far work hasn't called me so I think SPA took over but we'll see tomorrow :/ Keep in mind the VNXe is the baby brother of the VNX. It's the replacement for the AX series SANs. Not to excuse it for having an issue, but hey, that's why you have redundant SPs.
|
# ? Oct 19, 2012 01:26 |
|
Goon Matchmaker posted:SPB on my VNXe3300 just took a poo poo. So far work hasn't called me so I think SPA took over but we'll see tomorrow :/ Those should be active/active, just not ALUA, so I doubt your systems knew much of what occurred. Personally I think they are nice for smaller customers. 1000101 posted:Bunch of view points Before I dive into this, do you work in an internal department of a company, or for an IT firm servicing different customers and going head to head on bid deals and SLAs?
|
# ? Oct 19, 2012 01:30 |
|
1000101 posted:Realistically speaking most end hosts would be fine with just 4gbps of storage bandwidth and even then few come close to the upper ceiling of that. We still run a lot of 2G FC (not for much longer, thankfully) and we still don't come close to maxing it out. The biggest problem we have is the SFPs burning out because they are so old. Then again, we are still on crappy old CX3 arrays which probably aren't even capable of pushing data out faster than that, so YMMV.
|
# ? Oct 19, 2012 02:27 |
|
GrandMaster posted:Then again, we are still on crappy old CX3 arrays which probably aren't even capable of pushing data out faster than that so YMMV. Do you have the 4Gbps links to your DAEs? You'd be surprised how much data those can put out... that is, if you're running a CX3-80.
|
# ? Oct 19, 2012 04:07 |
|
Nomex posted:I know the server will take it, but will the software support it? Hey, any Netapp engineers in here? There aren't any officially qualified designs yet, and won't be until closer to its actual release, so I can't say for sure whether there is a card limit. Thus far the only limit I've seen is 2TB per server. It will support Fusion-io hardware, since those are the cards that will be resold through NetApp for Flash Accel sales. It should also support most any other enterprise-quality PCI flash or SSD. It's also a free product, so no need to hit up your CTO for money; as long as you have NetApp filers under support, you have access to it.
|
# ? Oct 19, 2012 04:27 |
|
Nomex posted:To the post above me, nexus gear may be expensive, but so are fabric switches. I think our last cost for brocade licensing was about $1300/port. GrandMaster posted:We still run a lot of 2G FC (not for much longer thankfully) and we still don't come close to maxing out. The biggest problem we have are the SFPs burning out because they are so old.
|
# ? Oct 19, 2012 04:43 |
|
Amandyke posted:Keep in mind the VNXe is the baby brother of the VNX. It's the replacement for the AX series SAN's. Not to excuse it for having an issue, but hey, that's why you have redundant SP's. Corvettefisher posted:Those should be active/active just not ALUA so I doubt your systems knew much of what occurred. Personally I think they are nice for smaller customers. They may be redundant, but the last time this happened SPA did not take over, and anything talking to LUNs on SPB died. Which took down half of our VMware environment. We've got multiple issues with our storage environment we can't fix at this time, because it's a federal system, and once it's in production you can't tinker with it without going through this annoying as gently caress process that takes months to get all the sign-offs and approvals required. Edit: I'm looking at the health page this morning on the VNXe and according to it, nothing happened. Nada. If nothing happened, why the hell did it send me an email saying SPB faulted?
|
# ? Oct 19, 2012 14:22 |
|
What software version are you running?
|
# ? Oct 19, 2012 14:46 |
|
Corvettefisher posted:What software version are you running? 2.2.0.17384 (Thu Mar 22 02:37:03 GMT+0000 2012) It's not the latest, and I can't upgrade due to the previously mentioned approval process.
|
# ? Oct 19, 2012 15:42 |
|
Transferring some old crap to our 2240 from an old MD3000 is hilarious: reads from RAID 5 peg the MD's CPU, writes to RAID-DP (RAID 6, basically) barely register on the filer's.
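The asymmetry above lines up with the textbook small-random-write penalties for parity RAID. These are worst-case numbers; on the filer, WAFL coalesces random writes into full stripes, so RAID-DP rarely pays the naive RAID 6 cost, which is exactly the effect being described:

```python
# Back-end disk IOs generated per host write, textbook worst case.
# RAID 5: read data + read parity + write data + write parity = 4.
# RAID 6 (dual parity, done naively): 6. RAID 10: 2 (mirror).
# RAID-DP on a filer usually avoids this via full-stripe writes.

RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def backend_write_iops(host_write_iops, raid_level):
    """Disk-level IOPS generated by a given host write load."""
    return host_write_iops * RAID_WRITE_PENALTY[raid_level]

for level in ("raid10", "raid5", "raid6"):
    print(f"{level}: 1000 host writes -> "
          f"{backend_write_iops(1000, level)} disk IOs")
```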
|
# ? Oct 19, 2012 15:47 |
|
evil_bunnY posted:Transfering some old crap to our 2240 from an old MD3000 is hilarious: reads from Raid 5 peg the MD's cpu, writes to Raid DP (6, basically) barely register on the filer's. Not sure if you know it but you can view multiprocessor stats on multi-core systems using the "sysstat -m" command. There are also some diag level commands that give you more insight into how much CPU time each processing domain is using, which can be more helpful in looking at processor utilization. That said, CPU utilization is rarely a leading indicator of problems on a filer, but it's always nice to know that you've got headroom there. The 2240s are pretty nice boxes, definitely much better than the 2040s that they replace.
|
# ? Oct 19, 2012 18:40 |
|
Holy poo poo Avamar systems are expensive. 90k starting?
|
# ? Oct 19, 2012 19:35 |
|
Corvettefisher posted:Holy poo poo Avamar systems are expensive. It's EMC dude. The VNX and VNXe series are actually the odd ones out due to being affordable. Everything else EMC does is expensive as hell. And that's before they charge you for the software that you thought came with it.
|
# ? Oct 19, 2012 20:27 |
|
Rhymenoserous posted:It's EMC dude. The VNX and VNXe series are actually the odd ones out due to being affordable. Everything else EMC does is expensive as hell. And that's before they charge you for the software that you thought came with it. That said, you're a fool if you're paying list. 50% off is the de facto starting point for negotiations.
|
# ? Oct 19, 2012 20:36 |
|
Ah I just saw they didn't add our partner discount on... That explains it. E: Okay <90k for 2 with 7.8TB is fine Dilbert As FUCK fucked around with this message at 21:05 on Oct 19, 2012 |
# ? Oct 19, 2012 20:42 |
|
Amandyke posted:That said you're a fool if you're paying list. 50% off is the defacto starting point for negotiations. True. I can't discuss what we paid for our EMC systems, but there's lots of ways to make a deal before the end of a quarter, especially when you're unseating your primary competitor. We went from NetApp to EMC and the deal was insanely sweet. Marketing dollars, trade in credit, whatever. They'll get creative if you play hardball long enough.
|
# ? Oct 19, 2012 21:04 |
|
Amandyke posted:That said you're a fool if you're paying list. 50% off is the defacto starting point for negotiations. True: With a big enough enterprise they'll practically hand you the hardware but you are going to eat it in support and software costs. I don't really miss dealing with EMC.
|
# ? Oct 19, 2012 21:33 |
|
Corvettefisher posted:Those should be active/active just not ALUA so I doubt your systems knew much of what occurred. Personally I think they are nice for smaller customers. FYI, the only really "active/active" storage array that EMC sells is the VMAX/Symmetrix line. VNXs and VNXes are more or less active/passive | passive/active, and in fact do support ALUA. quote:Before I dive into this do you work in an internal department of a company or for an IT firm servicing different customers headed up against bid deals and SLA's? I work for a professional services company with a large chunk of my client base in the Fortune 500. I specifically handle architecture and design work and have done a lot of capacity planning over the years. I've worked with financial services, health care, commercial and retail customers.
|
# ? Oct 20, 2012 16:48 |
|
adorai posted:Our nexus switches, layer 2 only, cost us somewhere around $30k for the pair, each with 32 ports. Why would you even consider looking at dedicated fibre channel infrastructure when you would need to spend over $1k per port for FC? If you have the end-to-end hardware to support FCoE, there really isn't much reason to have a dedicated FC network anymore, IMO.
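The per-port math from the two figures quoted in the thread makes the point directly (the $30k pair price and the ~$1300/port Brocade licensing cost are the numbers posted above; switch hardware cost on the FC side would only widen the gap):

```python
# Per-port cost comparison using the figures quoted in the thread:
# ~$30k for a pair of 32-port Nexus switches, vs. ~$1300/port
# for Brocade FC licensing alone (before the switch itself).
nexus_pair_cost = 30_000
nexus_ports = 2 * 32
fc_license_per_port = 1_300

nexus_per_port = nexus_pair_cost / nexus_ports  # 468.75
print(f"Nexus: ~${nexus_per_port:.0f}/port "
      f"vs FC licensing alone: ${fc_license_per_port}/port")
```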
|
# ? Oct 20, 2012 17:39 |
|
I'm looking for a smaller solution like the VNXe, but after reading a bit over this thread, I've seen a lot of complaints about it. Can anyone offer a solution that fits the following:
- Reliable. Having this never go down is the single most important factor.
- Performance to handle ~2TB backups nightly, file sharing for 200 users that save exclusively on network shares, and hosting the images of 4 virtual servers (2 domain controllers, 2 application servers)
- In the sub-$10,000 range.
Is this feasible?
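For sizing purposes, the nightly backup requirement works out to fairly modest sustained throughput. The 8-hour window below is an assumed figure for illustration, not something stated in the requirements:

```python
# Sustained rate needed to land ~2 TB of backups in a nightly window.
# The 8-hour window is an assumed figure for illustration.

def required_mb_per_s(tb_per_night, window_hours):
    """MB/s of sustained throughput needed to finish in the window."""
    return tb_per_night * 1_000_000 / (window_hours * 3600)

rate = required_mb_per_s(2, 8)
print(f"~{rate:.0f} MB/s sustained")
```

~69 MB/s fits comfortably on a single gigabit link, so the hard part of this spec is the reliability and the concurrent file-sharing/VM load, not the raw backup bandwidth.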
|
# ? Oct 23, 2012 23:47 |
|
You know that old car analogy where people say you can have "cheap, fast, reliable," pick 2? Yeah, same thing here. Your budget is way too low for anything that meets those requirements. Honestly, with support and maintenance costs included for 3 years, you're probably looking at $25K to start. skipdogg fucked around with this message at 23:55 on Oct 23, 2012 |
# ? Oct 23, 2012 23:53 |
|
Nahrix posted:I'm looking for a smaller solution like the VNXe, but after reading a bit over this thread, I've seen a lot of complaints about it. Can anyone offer a solution that fits the following:
|
# ? Oct 23, 2012 23:56 |
|
Nahrix posted:I'm looking for a smaller solution like the VNXe, but after reading a bit over this thread, I've seen a lot of complaints about it. Can anyone offer a solution that fits the following: Yeah, the VNXe's aren't terrible; you can get one for about ~$16,000 loaded with 6x 600GB 15K drives, 6x 2TB drives, 2 or 4 gig uplinks per SP, and 3yr 8x5 NBD support. I actually kinda like them, haven't had too much to complain about with them, so long as you keep them updated and not overtaxed. The 3150's are a nice revision to the series. Might want to play who-can-go-lower with NetApp and EMC: VNXe vs. FAS 2200. Call up some resellers and bargain with them. If you mean 10K tops for a NAS, that might be dicey; Dell has some solutions around there, but I wouldn't cheap out on storage. You might be able to do an MD1220 if your servers have eSAS connections. I popped one with 12x 300GB 10K SAS and 12x 1TB 7.2K NL-SAS + dual H800 RAID controllers for ~15k list Dilbert As FUCK fucked around with this message at 00:07 on Oct 24, 2012 |
# ? Oct 23, 2012 23:59 |
|
Metrics. Do you have them? Anyone recommending stuff now is basically taking pot shots into the darkness. In any case, $10k isn't really SAN money.
|
# ? Oct 24, 2012 00:17 |
|
Every time I have DDN gear fail, it just makes me more impressed by it. I had BOTH raid controllers for a DDN9900 storage system fail after being powered on following building maintenance. A "disk chip" in each controller had failed. DDN shipped out 2 replacements, I swapped them in, re-cabled everything, and they booted fine, reading their config (zoning, network, syslog forwarding, etc.) off one of the disks. I didn't have to do anything other than turn them on!
|
# ? Oct 24, 2012 18:48 |
|
The_Groove posted:I had BOTH raid controllers for a DDN9900 storage system fail after being powered on following building maintenance. The_Groove posted:DDN shipped out 2 replacements, I swapped them in, re-cabled everything, and they booted fine, reading their config (zoning, network, syslog forwarding, etc.) off one of the disks. I didn't have to do anything other than turn them on! I'm curious why you consider this in any way acceptable for a storage vendor. Vulture Culture fucked around with this message at 20:23 on Oct 24, 2012 |
# ? Oct 24, 2012 19:46 |
|
Misogynist posted:I'm curious why you consider this in any way acceptable for a storage vendor. Well, you see, all 4 wheels fell off my car, but they put new wheels on and it runs fine, I LOVE THIS CAR!
|
# ? Oct 24, 2012 20:11 |
|
Nahrix posted:I'm looking for a smaller solution like the VNXe, but after reading a bit over this thread, I've seen a lot of complaints about it. Can anyone offer a solution that fits the following: Used FAS2050A with "eBay parts": 20x 300GB 15K for $7,950, or 20x 450GB 15K for $10,950. But, yeah, this is just a pot shot. No idea if this will work for you. We also have the VNXe line, but the FAS20x0s are much more popular these days.
|
# ? Oct 24, 2012 20:12 |
|
NippleFloss posted:Well, you see, all 4 wheels fell off my car, but they put new wheels on and it runs fine, I LOVE THIS CAR!
|
# ? Oct 24, 2012 20:24 |
|
The_Groove posted:Every time I have DDN gear fail, it just makes me more impressed by it. I had BOTH raid controllers for a DDN9900 storage system fail after being powered on following building maintenance. A "disk chip" in each controller had failed. DDN shipped out 2 replacements, I swapped them in, re-cabled everything, and they booted fine, reading their config (zoning, network, syslog forwarding, etc.) off one of the disks. I didn't have to do anything other than turn them on! I have to say this post has left an impression on me too.
|
# ? Oct 24, 2012 20:36 |
|
Haha, well, poo poo happens. I wouldn't necessarily blame DDN for components failing in an unlikely combination resulting in unscheduled downtime. They offered to send a tech, but we decided to do the replacement ourselves since their procedure was so simple. I was pretty impressed, but apparently this is nothing special!
|
# ? Oct 24, 2012 21:10 |
|
The_Groove posted:They offered to send a tech, but we decided to do the replacement ourselves since their procedure was so simple. I was pretty impressed but apparently this is nothing special! What? While doing it yourself might be a morale/ego booster, I would much rather have one of their techs (unless it would take a day or several hours for them to get to you) change it out; that way, if anything isn't working or doesn't work right, it's not your rear end on the line, it's someone else's. Hell, when one of my customers' servers goes down, I usually drive out and watch the tech change the part(s), restack, and verify availability.
|
# ? Oct 24, 2012 21:18 |
|
The_Groove posted:Haha well poo poo happens, I wouldn't necessarily blame DDN for components failing in an unlikely combination resulting in unscheduled downtime. They offered to send a tech, but we decided to do the replacement ourselves since their procedure was so simple. I was pretty impressed but apparently this is nothing special! I think that there are a lot of benefits to maintaining all configuration information on disk rather than in controller modules. It's nice to just swap in a new head and be up and running again. But having BOTH controllers fail simultaneously is absolutely unacceptable, and there's just no way you blow that off as a "random happenstance." Either the failure rate on the controllers is way too high, or they aren't truly independent. Either is a big problem.
|
# ? Oct 24, 2012 21:19 |
|
Yeah we did the work because it would be finished before a tech could get out here. It is annoying that while the 2 controllers had different "disk chips" fail, between the two they could have still seen all the disks. But, the failed disk chip on the A controller prevented access to the disk holding the configuration (it's not some internal disk in the controllers, it uses a disk in the arrays it couldn't talk to). It's too late now, but I wonder if swapping the A/B controllers would have worked temporarily, since the new A controller should have been able to read the config. I have heard some stories about these 9900 controllers having serious issues during bootup, usually firmware related and not multiple failures though, but maybe that's changing as they age.
|
# ? Oct 24, 2012 21:59 |