szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Agrikk posted:

My plan is to use Server 2012 R2 as my FC storage target, so it looks like these cards are supported on both sides of the link.

All my tests resulted in utter crap performance when Windows was acting as a storage target.

quote:

If I get that working, I'll add in a silkworm switch to experiment with cluster storage groups and failover clustering under 2012.

I run an active-active file sharing cluster on Storage Server 2012 Standard; feel free to ask if you want to know something. :)


Wicaeed
Feb 8, 2005

szlevi posted:

^^^THIS - end of the year is THE BEST and end of a quarter is second best time to buy anything and all things you've mentioned will make them even sweeter. I literally almost never buy anything in-between quarters.

Can pretty much confirm this. Our company was testing the waters with EMC to see what they could get us, and while we were asking for a VNX5200 (the new model), it seems EMC is desperate to make a deal: they bumped us up to a VNX5400 w/20TB usable & ~37k IOPS at around a $60,000 price point.

Can anyone confirm if that's as big of a deal as I think it is?

evil_bunnY
Apr 2, 2003

How many drives are behind that 37k figure?

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

szlevi posted:

All my tests resulted in utter crap performance when Windows was acting as a storage target.

Interesting. I posted the results of a direct comparison between FreeNAS, Openfiler and Microsoft's iSCSI target and surprisingly, MSiST performed well (similar to Openfiler) with FreeNAS a distant third.

(I thought I'd posted the same results in a thread here, but I can't find it.)

Agrikk fucked around with this message at 23:55 on Sep 18, 2013

Wicaeed
Feb 8, 2005

evil_bunnY posted:

How many drives are behind that 37k figure?

Disk Tab information on their quote is:

600 GB 10k Vault Pack (4 Drives)
11 x 600GB 10K SAS Disk Drives
13 x 2TB NL-SAS Disk Drives
3 x 100GB FAST CACHE EFD
6 x 400GB FAST VP EFD

+ 36 Months Enhanced HW\SW Support
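As a rough sanity check, the quoted ~37k IOPS can be approximated from that drive list using generic rule-of-thumb per-spindle figures. The per-drive numbers below are assumptions, not EMC specs, so treat this as a sketch:

```python
# Back-of-envelope check of a vendor IOPS claim from a drive mix.
# Per-drive IOPS figures are common rules of thumb, not EMC specs.
PER_DRIVE_IOPS = {
    "10k_sas": 140,  # assumed ~140 IOPS per 10K SAS spindle
    "nl_sas": 80,    # assumed ~80 IOPS per 7.2K NL-SAS spindle
    "efd": 3500,     # assumed per-EFD figure; real flash varies widely
}

def estimate_iops(drive_mix):
    """Sum rough per-drive IOPS over a {type: count} drive mix."""
    return sum(PER_DRIVE_IOPS[kind] * count for kind, count in drive_mix.items())

# The quote above: (4 + 11) x 10K SAS, 13 x NL-SAS, (3 + 6) x EFD
quote = {"10k_sas": 4 + 11, "nl_sas": 13, "efd": 3 + 6}
print(estimate_iops(quote))  # 34640 -- in the ballpark of the quoted ~37k
```

Under these assumptions the spinning disks contribute only a few thousand IOPS; nearly all of the quoted figure comes from the nine EFDs.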

KS
Jun 10, 2003
Outrageous Lumpwad

Agrikk posted:

Interesting. I posted the results of a direct comparison between FreeNAS, Openfiler and Microsoft's iSCSI target and surprisingly, MSiST performed well (similar to Openfiler) with FreeNAS a distant third.

(I thought I'd posted the same results in a thread here, but I can't find it.)

That squares with what I saw using some really high performance ZFS hardware: 32 drives, server hardware, and a ZeusRAM ZIL. FreeNAS and other FreeBSD 8 derivatives were stable but had lovely performance. NAS4Free and FreeBSD 9 were fast but crashed routinely under load, and I talked to someone else experiencing the same issues under Stable/9. We had to go to Illumos to get a reliable, well-performing ZFS system.

That said, Nas4free has been completely stable for me on my little home system.

KS fucked around with this message at 22:02 on Sep 19, 2013

Man Yam
Aug 31, 2004
Pickle. No! You pickle!
Too many choices in terms of SAN storage, and I am starting to drink the NetApp Kool-Aid.

We currently use HP LeftHand, which we are outgrowing, and finding nodes to match our current setup (2.4 TB nodes) is getting more difficult. We can find larger nodes, but then they have to be bought in a pair and set up as a separate management group to take advantage of all the space, or we add one node to our existing setup and lose x TB of space.

No set budget, but the company is willing to purchase almost anything if IT can justify it. The HP SAN has 5.6 TB usable in production with 600 GB free, and the DR site has 4.8 TB usable with 1 TB free. The SAN has been in place for over 3 years now, but we are moving to more scanned documents and added a Data Warehouse this past summer.

Looking at 2 of whatever: 1 for production (SAS drives) and 1 for our DR/small office site (SATA drives). The quotes are comparable in price and usable space (between $50-70k & 18 TB usable):

EMC VNXe3150
NetApp 2220
IBM V3700

iSCSI connections, since our current HP setup uses iSCSI on a 1 Gb backbone, with replication over a private fiber link to our DR center (50 Mb). Riverbeds on both sides for WAN optimization. We currently use snapshots as backup for data volumes, but I will be redoing the whole VMware setup to move the data volumes off of iSCSI LUNs to RDM (using .vmdk) for the VMs, so that I can take full advantage of our PHDVirtual backup solution, which backs up to really cheap 16 TB Buffalo TeraStations. 39 VM guests in production, mostly Windows 2003/2008R2 running Domino, SQL Express, MySQL, and SAP Business Objects.

I really like the NetApp integration with vCenter, and implementation engineers from the non-EMC vendors stated outright that while they would prefer we use their recommended solution, we should avoid EMC like the plague.

I read a lot of EMC love in this thread, but should I just choose whichever solution is least expensive from a "Will it work?" standpoint?

evil_bunnY
Apr 2, 2003

The EMC love is very, very recent.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
If I had my way, I would be using 100% NetApp all the time, purely from a management + capabilities standpoint. Of course, I have also used a lot of Nimble (which is a reasonably fast iSCSI box, but ONLY that) and my new company is buying an Isilon, so my horizons are expanding.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."


I work in Professional Services for NetApp, so if you have any questions about their stuff, feel free to ask me. As far as your choices go, they will probably all do what you want at a basic level, so you should really decide which "nice-to-have" features mean the most to you. NetApp snapshots, application integration, and replication are generally excellent. Deduplication savings can be substantial, but it's post-process and can be impactful, so YMMV. On the smaller boxes like the 2220, disks lost to parity and spares can be a problem due to the need to run in an active/active pair; you can end up losing a lot of your raw capacity and performance to overhead. How many disks and what type of disks are you being quoted?

As far as EMC goes, most of the recent love seems to be about the VNX2 boxes, which I don't believe have made it into the wild yet. I think Skipdogg and maybe someone else in here have some VNXe stuff and can probably comment on that.

Make sure you leverage the fact that you are talking to other vendors to get them to come down in price. You should never take the first quote that they provide, as you can routinely get well below that if you're willing to haggle some; let each vendor know that you're talking to the others and that you've had very competitive quotes from them.

Also, if you do go with NetApp you should definitely consider doing NFS for your datastores and backing them up through VSC. It's just so quick and efficient that it's a waste to buy NetApp storage and not use it that way.

Docjowles
Apr 9, 2009

NippleFloss posted:

Make sure you leverage the fact that you are talking to other vendors to get them to come down in price. You should never take the first quote that they provide, as you can routinely get well below that if you're willing to haggle some; let each vendor know that you're talking to the others and that you've had very competitive quotes from them.

Something like this should be in the OP of every IT thread. It's totally nuts how flexible pricing is on hardware. The power of the "... thanks, but we have a much better quote from <direct competitor>" card cannot be overstated. I know we all start out hating Call For Pricing, but it's a two way street and you can usually do far better than what the list price would be even if they did publish it.

Amandyke
Nov 27, 2004

A wha?
Not sure why you would avoid EMC storage when running VMware considering VMware is a subsidiary of EMC...

evil_bunnY
Apr 2, 2003

Amandyke posted:

Not sure why you would avoid EMC storage when running VMware considering VMware is a subsidiary of EMC...
Have you seen the support-related EMC posts in here? You must have, since you've replied to a good few of them.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Amandyke posted:

Not sure why you would avoid EMC storage when running VMware considering VMware is a subsidiary of EMC...
This is like asking why non-smokers would avoid Marlboros when they like Kraft Macaroni and Cheese

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

NippleFloss posted:

As far as EMC goes, most of the recent love seems to be about the VNX2 boxes, which I don't believe have made it into the wild yet. I think Skipdogg and maybe someone else in here have some VNXe stuff and can probably comment on that.

We're an EMC shop for the most part. I personally manage a VNXe 3300 in the small engineering office I work in, and we have 2 VNX5500's in production, one in each US data center. I know our UK site has an aging V7000 IBM system we'll probably be replacing next year with something EMC.

I wish to qualify my comments about the kit with the following statement: I am not a storage professional, I know the fundamentals and theories behind storage, but do not have the hands on years of experience other posters in this thread have.

My thoughts on the VNXe: I really like this little box. The software is super friendly to use, and it seems fast enough, though I'm not throwing anything intense at it: a few light VMs over iSCSI and some NFS and CIFS shares hosted directly from the filer. It seems really well built, and I really like how redundant it can be. If you fit the use case for a VNXe, I would highly recommend checking them out. It really is an SMB SAN for dummies, to be honest, and the pricing is not bad at all. I was able to snag a VNXe with 2 extra Ethernet I/O modules, 8x600GB 15K and 7x2TB 7.2K NL-SAS, with all software licensing and upgraded 3-year support, for a good bit under 30K (don't want to get too specific with pricing).

caveat: Obviously my environment is pretty light duty, I don't know any of those 'gotchas' that only get found out once you've been in production for a while. I'm also not doing anything other than basic stuff with it. No dedupe, snapshots, data protection, replication, nada.

The VNX5500's are in our larger sites. We replaced a 4-year-old NetApp 30xx series with the VNX5500. We were at the point where we couldn't add any more disk shelves to the NetApp, and the year 5 support was going to be astronomical.

I do not know why we didn't replace with another NetApp. I can postulate that some very aggressive courting from a Bay Area EMC VAR to some upper level IT folks took place (event tickets, golf, wine tastings have been rumored), as well as a lot of marketing dollars were spent and a generous trade in credit on the NetApp was given to make this deal happen. There may have also been some drama over the maintenance renewal costs and NetApp not budging on the price souring them on NetApp in general.

I don't have specifics on the VNX5500's, but I do know they have 2TB of flash drives, a couple shelves of 15K, and a couple shelves of 7.2K drives, and the OTD price was in the neighborhood of 250K or so with all the licensing. I think we have ballpark 40TB usable on each one; I could be wrong though.

We haven't had any major issues with them. There was some kind of firmware bug that kept reporting a bad fan module or something that gave us some grief for a while, but EMC Support is pretty good, as it should be considering the price they command.

Enterprise hardware sales is a hilarious game to play. We upgraded 300 desktop computers in one of our call centers, and I was throwing Acer and HP pricing against Dell and the price kept falling. I was going to buy the Dells anyway, but they didn't know that. Dell KACE found out about the opportunity and came up with 15K in marketing dollars to take off the purchase order if we also bought one of their KACE 1000 appliances for like 10K. We did it, and the thing is still in the sealed box almost 2 years later. They only gave us 100 licenses and figured we would love it and buy the other 350 licenses we needed. They guessed wrong.

We're an HP server shop, always have been, always will be. (ProLiant 4 life!) Doesn't stop us from getting stupid aggressive quotes from Dell on servers and then beating HP down on pricing until they match or beat Dell. We're Dell for workstations and laptops, and they want our server business so bad it's ridiculous. We just used that method to get 6 fully loaded DL380's for under 75K. Dual 8 core w/ 384GB RAM. Insane.

Man Yam
Aug 31, 2004

NippleFloss posted:

...How many disks and what type of disks are you being quoted?

Make sure you leverage the fact that you are talking to other vendors to get them to come down in price. You should never take the first quote that they provide, as you can routinely get well below that if you're willing to haggle some; let each vendor know that you're talking to the others and that you've had very competitive quotes from them...

Looking at the NetApp quote, it's a FAS2220, HA, 1x12x1TB in the main unit and a shelf of 24x600GB 10K, same for the 2nd unit. I really liked the tight integration with vCenter, but I have not seen how EMC and IBM handle their integration. All the vendors' engineers agreed, though, that NetApp has a very good interface through vCenter, and I like having the same software across all the models.

My boss and I definitely do play the vendor pricing game, along with quarter-end/year-end purchase time frames; we just have not done that yet while trying to get higher budgetary numbers for the CFO to sign off on. That way we can look like purchasing gurus when we come in 15-25% under budget.

At this point I like NetApp, and I am sure my boss likes NetApp. We just do not want to regret our purchase decision like we do with the Lefthand stuff.

Maneki Neko
Oct 27, 2000

Man Yam posted:

Looking at the NetApp quote, it's a FAS2220, HA, 1x12x1TB in the main unit and a shelf of 24x600GB 10K, same for the 2nd unit. I really liked the tight integration with vCenter, but I have not seen how EMC and IBM handle their integration. All the vendors' engineers agreed, though, that NetApp has a very good interface through vCenter, and I like having the same software across all the models.

My boss and I definitely do play the vendor pricing game, along with quarter-end/year-end purchase time frames; we just have not done that yet while trying to get higher budgetary numbers for the CFO to sign off on. That way we can look like purchasing gurus when we come in 15-25% under budget.

At this point I like NetApp, and I am sure my boss likes NetApp. We just do not want to regret our purchase decision like we do with the Lefthand stuff.

We've had a pretty rough time with cluster mode, but 7-mode is still pretty solid.

Man Yam
Aug 31, 2004

skipdogg posted:

We're an EMC shop for the most part. I personally manage a VNXe 3300 in the small engineering office I work in, and we have 2 VNX5500's in production, one in each US data center. I know our UK site has an aging V7000 IBM system we'll probably be replacing next year with something EMC.

I wish to qualify my comments about the kit with the following statement: I am not a storage professional, I know the fundamentals and theories behind storage, but do not have the hands on years of experience other posters in this thread have.

My thoughts on the VNXe: I really like this little box. The software is super friendly to use, and it seems fast enough, though I'm not throwing anything intense at it: a few light VMs over iSCSI and some NFS and CIFS shares hosted directly from the filer. It seems really well built, and I really like how redundant it can be. If you fit the use case for a VNXe, I would highly recommend checking them out. It really is an SMB SAN for dummies, to be honest, and the pricing is not bad at all. I was able to snag a VNXe with 2 extra Ethernet I/O modules, 8x600GB 15K and 7x2TB 7.2K NL-SAS, with all software licensing and upgraded 3-year support, for a good bit under 30K (don't want to get too specific with pricing).

caveat: Obviously my environment is pretty light duty, I don't know any of those 'gotchas' that only get found out once you've been in production for a while. I'm also not doing anything other than basic stuff with it. No dedupe, snapshots, data protection, replication, nada.

The VNX5500's are in our larger sites. We replaced a 4-year-old NetApp 30xx series with the VNX5500. We were at the point where we couldn't add any more disk shelves to the NetApp, and the year 5 support was going to be astronomical.

I do not know why we didn't replace with another NetApp. I can postulate that some very aggressive courting from a Bay Area EMC VAR to some upper level IT folks took place (event tickets, golf, wine tastings have been rumored), as well as a lot of marketing dollars were spent and a generous trade in credit on the NetApp was given to make this deal happen. There may have also been some drama over the maintenance renewal costs and NetApp not budging on the price souring them on NetApp in general.

I don't have specifics on the VNX5500's, but I do know they have 2TB of flash drives, a couple shelves of 15K, and a couple shelves of 7.2K drives, and the OTD price was in the neighborhood of 250K or so with all the licensing. I think we have ballpark 40TB usable on each one; I could be wrong though.

We haven't had any major issues with them. There was some kind of firmware bug that kept reporting a bad fan module or something that gave us some grief for a while, but EMC Support is pretty good, as it should be considering the price they command.

Enterprise hardware sales is a hilarious game to play. We upgraded 300 desktop computers in one of our call centers, and I was throwing Acer and HP pricing against Dell and the price kept falling. I was going to buy the Dells anyway, but they didn't know that. Dell KACE found out about the opportunity and came up with 15K in marketing dollars to take off the purchase order if we also bought one of their KACE 1000 appliances for like 10K. We did it, and the thing is still in the sealed box almost 2 years later. They only gave us 100 licenses and figured we would love it and buy the other 350 licenses we needed. They guessed wrong.

We're an HP server shop, always have been, always will be. (ProLiant 4 life!) Doesn't stop us from getting stupid aggressive quotes from Dell on servers and then beating HP down on pricing until they match or beat Dell. We're Dell for workstations and laptops, and they want our server business so bad it's ridiculous. We just used that method to get 6 fully loaded DL380's for under 75K. Dual 8 core w/ 384GB RAM. Insane.

Looking at our EMC quote, it looks like more raw storage than the NetApp for a lower price: a VNXe3150 with 7x2TB 7.2K and 10x900GB 10K. Also, our EMC quote included 3 years of support; the NetApp quote included 1 year and was about $20k higher. These are not the best prices we can get, but it's an interesting starting point.

I am by no means an expert in anything; I am more of a general-purpose, jack-of-all-trades type with strong Google-fu.

Thank you for your responses.

skipdogg
Nov 29, 2004

Worth mentioning is the config on the VNXe's. Those 7 2TB drives will become 6 drives in RAID 6 plus a hot spare, for a total of appx 7.157TB usable. My 8 x 600GB 15K drives became 7 drives in RAID 5 plus a hot spare, for 2.8TB usable in the 'performance pool'. So definitely make sure you're getting the USABLE space you need. I thought I would be getting closer to 9TB usable, but the VNXe won't let me use that 7th 2TB drive. It wants its RAID 6 groups to be 4 + 2.
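A minimal sketch of that usable-space math, assuming the only deductions are spares, parity drives, and the marketing-TB-to-TiB conversion. Real arrays also right-size drives and hold back reserve space, which is why the actual numbers quoted above come in lower than this estimate:

```python
# Why raw drive counts shrink to usable capacity: subtract hot spares
# and parity drives, then convert marketing TB (10^12 bytes) to TiB.
# Vendor "right-sizing" and filesystem overhead shave off more, so real
# usable space lands below this figure.

def usable_tib(total_drives, drive_tb, parity_drives, spares):
    data_drives = total_drives - parity_drives - spares
    raw_bytes = data_drives * drive_tb * 10**12
    return raw_bytes / 2**40  # TiB

# 7 x 2TB: one spare, RAID 6 over the remaining 6 (4 data + 2 parity)
print(round(usable_tib(7, 2, parity_drives=2, spares=1), 2))    # 7.28 TiB
# 8 x 600GB: one spare, RAID 5 over 7 (6 data + 1 parity)
print(round(usable_tib(8, 0.6, parity_drives=1, spares=1), 2))  # 3.27 TiB
```

The gap between these upper bounds and the reported 7.157TB and 2.8TB figures is the per-drive right-sizing and system reserve the array takes off the top.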

YOLOsubmarine
Oct 19, 2004


Maneki Neko posted:

We've had a pretty rough time with cluster mode, but 7-mode is still pretty solid.

Can I ask what issues you have had with Clustered ONTAP? Did you deploy when 8.1 was still relatively new, before there was feature parity for a lot of 7-mode features?


Man Yam posted:

Looking at our EMC quote it looks like more raw storage than the NetApp for a lower price, VNXe3150 with 7x2TB 7.2k, 10x900GB 10K. Also, our EMC quote included 3 years of support the NetApp included 1 year of support and was about $20k higher. These are not the best prices we can get, but it is an interesting starting point.

Raw storage numbers aren't terribly meaningful when compared between storage vendors, because different vendors will recommend different RAID levels depending on how their technology works and what your performance goals are. For the NetApp gear in question you will end up with about 8TB usable from the SATA disk, with double-parity protection and a single spare, and as much as 11.5TB from the SAS disk if you use one large raid group and hold back only one spare. You need to ask how that raw space will be turned into usable space. This is particularly important if they recommend RAID-1 for performance workloads.
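The same arithmetic for RAID-DP can be sketched as follows, assuming 2 parity disks per raid group and a flat ~10% haircut standing in for drive right-sizing and reserve. The 10% is an illustrative assumption for this sketch, not a NetApp figure:

```python
# Rough RAID-DP usable-capacity estimate: each raid group loses 2 disks
# to parity, spares come off the top, and an assumed ~10% haircut stands
# in for drive right-sizing and reserve (an approximation, not ONTAP's
# actual per-drive-type numbers).

OVERHEAD = 0.10  # assumed right-sizing + reserve factor

def raid_dp_usable_tb(drives, drive_tb, raid_group_size, spares):
    groups, remainder = divmod(drives - spares, raid_group_size)
    # 2 parity disks per raid group; a partial group still needs 2
    parity = 2 * (groups + (1 if remainder else 0))
    data = drives - spares - parity
    return data * drive_tb * (1 - OVERHEAD)

# 12 x 1TB SATA: 1 spare, one raid group of 11 (9 data + 2 parity)
print(round(raid_dp_usable_tb(12, 1.0, raid_group_size=11, spares=1), 1))  # ~8.1
# 24 x 600GB SAS: 1 spare, one large raid group of 23 (21 data + 2 parity)
print(round(raid_dp_usable_tb(24, 0.6, raid_group_size=23, spares=1), 1))  # ~11.3
```

Both results line up with the ~8TB and ~11.5TB figures above, which is the point: most of the raw-to-usable gap is spares and parity, with the rest being per-drive overhead.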

Given that you are getting a full shelf of SAS disk I think you would be pretty happy with the 2220. That's plenty of spindles for what you want to run.

skipdogg posted:

I do not know why we didn't replace with another NetApp. I can postulate that some very aggressive courting from a Bay Area EMC VAR to some upper level IT folks took place (event tickets, golf, wine tastings have been rumored), as well as a lot of marketing dollars were spent and a generous trade in credit on the NetApp was given to make this deal happen. There may have also been some drama over the maintenance renewal costs and NetApp not budging on the price souring them on NetApp in general.

EMC has war chest funds devoted specifically to taking NetApp out of accounts, so it wouldn't surprise me if you guys got the gear for near free or got a lot of perks on top of it. As far as the maintenance renewal, my customer is going through that right now and they similarly balked at the sticker shock. Our account team has been pretty good about trying to be flexible on price, but it sounds like yours wasn't. So much of the customer experience with storage hardware is tied to how well the team that sells it does their job: do they understand the customer's needs properly, is the solution they are presenting capable of meeting those technical needs, are they flexible enough to work around the customer's budget while still meeting those needs, will they be honest when they simply can't do what is being asked for the money available, and do they continue to support the customer once the sale is made? I think a lot of dissatisfaction with storage isn't due to problems with the hardware; it's due to being sold the wrong thing by someone who just wanted to make a sale and then bolted out the door with the money, leaving it for someone else to fix.

skipdogg
Nov 29, 2004

My friend and trusted VAR, though, has said EMC is even worse when it comes to year 4 and 5 support renewals on their gear. Everything is rosy the first 3 years, and, like you said, sticker shock happens.

I just looked at our 5500 and we've got about 100TB usable; they've expanded it since I last looked. 8 shelves total, with a mix of 3TB NL-SAS, 600/900GB SAS, and flash. I'm sure it cost a fortune.

madsushi
Apr 19, 2009


NippleFloss posted:

Our account team has been pretty good about trying to be flexible on price, but it sounds like yours wasn't. So much of the customer experience with storage hardware is tied to how well the team that sells it does their job: do they understand the customer's needs properly, is the solution they are presenting capable of meeting those technical needs, are they flexible enough to work around the customer's budget while still meeting those needs, will they be honest when they simply can't do what is being asked for the money available, and do they continue to support the customer once the sale is made? I think a lot of dissatisfaction with storage isn't due to problems with the hardware; it's due to being sold the wrong thing by someone who just wanted to make a sale and then bolted out the door with the money, leaving it for someone else to fix.

I can tell you that nearly everyone on the NetApp sales team (that I worked with as a partner/VAR) did nearly everything in their power to ruin deals. Always felt like we were fighting them instead of working together.

Examples:
-Going to a customer directly and telling them that they NEEDED to upgrade to cluster mode, and saying that we (the VAR) were holding out on our customer by not upgrading them. This was during the 8.1 days when SnapVault wasn't present in cluster mode, but was the cornerstone of the client's backup/recovery strategy.

-A customer called NetApp directly to get a quote for a new storage shelf to extend their retention at their DR site (to make sure we were price competitive). The NetApp sales rep (who we knew, and who knew us) told them they NEEDED to upgrade to cluster mode to expand, quoted them a brand new system for prod/DR, and said they were currently on "unsupported" tech. When the customer called us up frantic, we explained that none of the gear was anywhere near EOA/EOS/EOL, and we were able to order the shelf for them.

-A customer contacted us to get a quote for 12 disks (to fill a half-full shelf). We talked to NetApp, and the price of the 12 disks was more than the price of a brand new full shelf. We couldn't figure it out, so we quoted the customer a brand new shelf and explained the situation. The customer called NetApp for an explanation (since frankly it's pretty dumb that 12 disks > shelf + 24 disks), and NetApp sales said that wasn't the case and gave them a sweetheart quote for the 12 disks that made us look really bad, despite our sales reps not giving us the same deal when we asked on our client's behalf.

As a vendor/partner/VAR, I understood that there are a lot of pricing games at play, and that nobody wants to drop their pricing on the first quote. But at the same time, that environment is what kills trust and relationships. I can't tell you how dumb it is to present a quote to the client, tell them "this isn't the best price, but I can't get you the best price, you need to ask for it," then field a request for a new quote, and repeat until reasonable.

The problem is that every storage vendor does it, and the differences we're talking about (50-80%) are non-trivial and so you have to play the game. That's why it is easy for EMC to walk in and steal a deal, all they have to do is be reasonable for a couple of quotes (i.e. give the 'real' price the first time) and you make the competition look like chumps. But then in a year, NetApp or HP or Nimble comes in and does the same thing, etc.

Nimble, as a new company, has been super aggressive and was giving us very good pricing on the first quote, just to try to cause sticker shock from anyone that looked at NetApp or EMC's first quote.

Mykkel
Oct 8, 2012


we were somewhere around hesaim on the edge of the spinward marches when the drugs began to take hold.

skipdogg posted:

My friend and trusted VAR, though, has said EMC is even worse when it comes to year 4 and 5 support renewals on their gear. Everything is rosy the first 3 years, and, like you said, sticker shock happens.

I just looked at our 5500 and we've got about 100TB usable; they've expanded it since I last looked. 8 shelves total, with a mix of 3TB NL-SAS, 600/900GB SAS, and flash. I'm sure it cost a fortune.

When we last purchased EMC, we bundled support years 4 and 5 in the original quote. Not sure if EMC is still willing to do that, so YMMV.

Maneki Neko
Oct 27, 2000

NippleFloss posted:

Can I ask what issues you have had with Clustered ONTAP? Did you deploy when 8.1 was still relatively new, before there was feature parity for a lot of 7-mode features?

That may have been a chunk of it. Another part has been that NetApp support and our VAR just weren't familiar with how to do things in cluster mode, so it felt like we were paying to have them learn, and our deployment took a lot longer than we had planned.

Otherwise we've just been hitting bugs of various severity, including at least one that has caused us to stop serving data.

parid
Mar 18, 2004

Maneki Neko posted:

That may have been a chunk of it. Another part has been that NetApp support and our VAR just weren't familiar with how to do things in cluster mode, so it felt like we were paying to have them learn, and our deployment took a lot longer than we had planned.

Otherwise we've just been hitting bugs of various severity, including at least one that has caused us to stop serving data.

Clustered ONTAP support, for me, has been noticeably better in the last two months. I'm seeing signs that they are actually fixing that problem.

We have had similar issues with bugs. The latest, some dedupe issue, is blocking our 8.2 upgrade.

Maneki Neko
Oct 27, 2000

parid posted:

Clustered ONTAP support, for me, has been noticeably better in the last two months. I'm seeing signs that they are actually fixing that problem.

We have had similar issues with bugs. The latest, some dedupe issue, is blocking our 8.2 upgrade.

I like the idea of cluster mode, but honestly we've hit more bugs in the 6 months we've been running cluster mode than I have in the last 10 years of running NetApp gear.

evil_bunnY
Apr 2, 2003

skipdogg posted:

My friend and trusted VAR though has said EMC is even worse when it comes to year 4 and 5 support renewals on their gear. Everything is rosy the first 3 years, and like you said, sticker shock happens.
Everybody does that. We got 5 years to begin with and it cost us peanuts. I don't even want to think about trying to negotiate another 2 when they have you by the balls.

Mykkel posted:

When we last purchased EMC, we bundled support years 4 and 5 in the original quote. Not sure if EMC is still willing to do that, so YMMV.
If they're not and you need it (because there's no budget for a 3 year cycle), someone's going to wake up with a sore anus at some point.

On cluster mode: I wouldn't even get near it unless a specific feature just made the business case. 3.0 of anything is when it gets good.

On netapp sales: those guys are positively retarded.

evil_bunnY fucked around with this message at 22:29 on Sep 20, 2013

YOLOsubmarine
Oct 19, 2004


Honestly, many of the bugs in Clustered ONTAP aren't unique to Clustered ONTAP. The dedupe bug in 8.2 mentioned above (if it's the one I'm thinking of) affects both CDOT and 7-mode. The WAFL layer and pretty much everything below it is the same in 7-mode and CDOT, so when you hit bugs that deal with things like efficiency features, CP processing, readahead, and many of the others that have come up recently, you're going to be in the same boat regardless of which version you're using.

The main reason they seem more prevalent in CDOT is that most CDOT customers are on very new versions of ONTAP to take advantage of the rapid addition of features between releases. But the 7-mode releases of 8.1 and 8.2 have been just as buggy. There have been a lot of new features and scalability enhancements in 8.1 and 8.2 that introduced plenty of room for problems. I expect it to improve because it hasn't gone unnoticed by upper management that customers aren't happy with the quality of the releases over the past year or two.

theperminator
Sep 16, 2009

by Smythe
Fun Shoe
Does anyone know how difficult it is to replace the standby power supply (SPS) on an EMC AX4-5?

I've had an SPS failure and have to replace it, but I can't really find any documentation of the process other than a note that the SPS can be replaced without powering off the SAN.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Not exactly SAN related but I need some opinions. I'm currently in the early stages of an infrastructure refresh for our two offices. After my CFO balked at an initial ~130k cost to implement new backups, storage, servers and networking at once, I've broken the project down into stages. The first stage being backups. We're paying $2300/mo for lovely backups that aren't at the VM level and we need to get rid of that cost. I want to get our backups at the vmdk level, then replicate between our two offices.

After threatening to go with an EqualLogic/PowerVault SAN paired with Veeam, EMC lowered their price to $30k for a pair of 12TB DD620s. I was just about ready to pull the trigger when Dell made an offer worth considering: they're trying to sell me a couple of VRTX boxes with two M520 blades and 12TB (I think) of disk, paired with AppAssure, as a backup solution with the ability to do more than just backup.

The original idea was to replace our backups, then move on and replace our core switching and SAN (possibly implement one in Portland as well) and grab a couple of R720s or something similar for both offices. However, this whole VRTX "shared infrastructure" thing at least sounds nice, and sounds like a really simple way to tackle this huge project. The idea of having one hardware vendor is also enticing. Should I be running away as fast as I can, or is this a viable solution for a production environment? Is AppAssure a complete piece of poo poo? Should I be looking at two VRTX chassis in each office for a more redundant setup?

Here's an idea of our Windows environment:

The main office has:
3x IBM x-series servers running ESXi 5.0
IBM DS3300 w/ EXP3000 iSCSI SAN (~9tb raw)
14 VMs including Exchange 2010, several DCs, several SQL application servers, and various file servers.

Smaller 2nd office has:
1x IBM x-series server running ESXi 5.0
4 VMs which, including the DC, are pretty much all just file servers.

MPLS connection has a maximum throughput of 12 Mbps between the offices, but that bandwidth is shared with internet and other site-to-site traffic.
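To put that 12 Mbps pipe in perspective, here's a quick back-of-envelope sketch of how long nightly backup replication would take. The 50 GB nightly change rate and the 70% effective-throughput figure are assumptions for illustration, not numbers from this environment:

```python
# Rough replication-window estimate for a shared 12 Mbps MPLS link.

def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Hours to push data_gb over a link_mbps WAN, assuming the link
    only sustains `efficiency` of its rated throughput because of
    contention from internet and other site-to-site traffic."""
    data_megabits = data_gb * 8 * 1000  # GB -> megabits (decimal units)
    return data_megabits / (link_mbps * efficiency) / 3600

if __name__ == "__main__":
    # ~13 hours to replicate 50 GB of changed data over the 12 Mbps link
    print(f"{transfer_hours(50, 12):.1f} hours")
```

With numbers like these, dedupe/compression before replication (which both the DD620 and AppAssure pitch) matters more than which vendor wins the deal.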

Erwin
Feb 17, 2006

theperminator posted:

Does anyone know how difficult it is to replace the standby power supply (SPS) on an EMC AX4-5?

I've had an SPS failure and have to replace it, but I can't really find any documentation of the process other than a note that the SPS can be replaced without powering off the SAN.

Is it under support? They sent a dude out to do ours for us.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

The VRTX chassis are new and I doubt there's any real-world feedback about them yet. Good idea, but did Dell execute properly?

I may be in the minority these days: VM replication is great for DR purposes, but nothing lets me sleep better at night than good old-fashioned tapes snuggled away in an offsite location. Maybe I'm being old school when it comes to all the new fancy stuff. I just worry about the data that's important, not the entire VM.

Nukelear v.2
Jun 25, 2004
My optional title text

goobernoodles posted:

Should I be looking at two VRTX chassis in each offices for more of a redundant setup?

VRTX shares a single RAID controller. Besides drives, that's literally the only component that ever breaks in my Dell servers, so making it the single point of failure in a chassis is pretty meh for availability.

For production I'd still prefer 1U servers backed with EqualLogic/MD arrays. Fairly cheap, no major single points of failure.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


The presentation of VRTX I sat through said they were planning on coming out with a redundant controller version sometime in the future.

My take on it is a 4-blade configuration: 3 nodes running VMware Essentials Plus, and the 4th as a backup server node booting off internal drives and connected to cheap iSCSI or direct-attached storage for the backups. Run the rest off internal SD cards and make sure you have the 4-hour mission-critical support.

Worst case, you lose the entire storage of the cluster. You get Dell to fix the hardware problem within 4 hours and then blow the VMs back onto the rebuilt storage from the backup server. You won't even have to redo the VMware configuration.

Obviously that's not ideal if you can't have any downtime at all, but that's not the target market for these devices anyway.

Jadus
Sep 11, 2003

I'm curious what others do when their SAN comes up to end of warranty (specifically the controller/head)?

I don't have any experience with high end storage, but do have a Dell MD3220i with 3 MD1220 disk shelves.
The MD3220i warranty expires in December 2015, along with one disk shelf. Two disk shelves were bought in 2012, and we're considering a 4th shelf in 2014.
I don't want to waste money on a 4th shelf if it won't be usable 12 months later, but even if I can just replace the MD3220i, the model itself will be quite old and presumably nearing EOL.

Is it common to just drastically oversize capacity for a full warranty term, and then do a complete replacement every 5 years?

KS
Jun 10, 2003
Outrageous Lumpwad
It is definitely common to refresh every x years. Usually in a good organization that's between 3 and 5.

In my experience, support for expansion shelves usually gets co-termed with the array for the higher end stuff. On the lower end stuff, yeah, you end up with staggered warranties. Not much you can do about it. You won't have to replace the MD3220i, though -- Dell should happily quote and sell you a 4th year of warranty/support. Then you put a new system in the budget for next FY.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Dell doesn't tend to publish EOL/EOS dates, but 5 years is usually the longest warranty you can get on enterprise kit. I find 4 years to be the sweet spot, and I always size equipment based on projected needs going into year 3/month 36. So yeah, you overpay a little at the beginning, but it's better than replacing kit after only 18 months.

parid
Mar 18, 2004
Even if they did let you quote 6th-year support, you wouldn't want to pay for it. They normally structure the pricing to make it cheaper to upgrade. I recently did a head uplift on a FAS3070 NetApp; one additional year of support was 70% of the cost of a pair of new FAS3250s with PAM cards. It was a no-brainer.

Jadus
Sep 11, 2003

parid posted:

Even if they did let you quote 6th-year support, you wouldn't want to pay for it. They normally structure the pricing to make it cheaper to upgrade. I recently did a head uplift on a FAS3070 NetApp; one additional year of support was 70% of the cost of a pair of new FAS3250s with PAM cards. It was a no-brainer.

On a head upgrade like that, do you normally just keep the disks and shelves running regardless of warranty, since they're in a redundant state anyways?

parid
Mar 18, 2004

Jadus posted:

On a head upgrade like that, do you normally just keep the disks and shelves running regardless of warranty, since they're in a redundant state anyways?

The shelf support is tied to the controllers (on these systems at least). I'm sure something is done with the pricing on the backend, but I never get to see it; it hasn't been exorbitant. I run disks until they literally won't send me replacement drives anymore. A spindle is a spindle! Even after production use, they end up on a test or temporary system. That system had space, but no IOPS left. The new controllers had 4x more NVRAM already, and the PAM card was just gravy on top. Ironically, this system now has more capacity and lower latencies (thanks to the cache) than our fabric MetroCluster with all FC drives.

I have a bunch of old EOL'd 300a drives (300 GB ATA) in DS14 trays. They still work great even on modern NetApp releases. I wouldn't want to promise anyone service off of them, though.

parid fucked around with this message at 05:03 on Sep 26, 2013
