KS
Jun 10, 2003
Outrageous Lumpwad
I'm looking for alternatives to Compellent. With controller upgrades, capacity expansion, and the support renewal at the 3-year mark, we're going to send $150k to Dell this year. Lots of things have happened in the storage space over the last 3 years, and I'm not necessarily happy with sinking that kind of money into buying more 2TB/450GB hard drives for a 3 year old system.

For an entirely virtualized dataset on a 10GbE network, what should I be looking at that might be competitive price-wise? Nimble? Tintri?


KS
Jun 10, 2003
Outrageous Lumpwad
What I'm looking for would need to offload 20-30TB of datastores from the Compellent array for <125k, while supporting VAAI and all the other shiny VMware-related features. Definitely needs replication support, although I'll probably buy one up front. I don't care what protocol. We already support multiple. <10k IOPS total, so I don't think any of the SATA+cache arrays from the various vendors would have a problem with it. I want to throw a half dozen dev environments on this thing and not have to worry about it.

I'm looking at upgrading the Compellent controllers to SC8000s to support VAAI, plus adding 25TB more disk, for around $100k. IMO that's too much, so I'm shopping around for alternatives. I've certainly heard good things about Nimble, but I'm looking for others' experiences.

KS
Jun 10, 2003
Outrageous Lumpwad
Thanks, I'll definitely talk to Tintri as well.

Mierdaan posted:

You know you can get full VAAI support on series 40 controllers, right?

We're on 30s. We got them right before they EOLed. Whoops. Turns out our VAR was garbage, and switching isn't easy. That's part of my motivation.

adorai posted:

Pre or post deduplication? If you don't need dedupe, look at Oracle (I know, I keep pimping this).

~25 TB pre-dedupe, no compression. I imagine it'd dedupe pretty well as there are 100+ OS instances in there.
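For a rough sense of what that buys back, here's the arithmetic at a few ratios. The ratios below are purely illustrative assumptions -- 100+ similar OS instances should dedupe well, but the real number is unknowable until you run it:

```python
# Rough effect of dedupe on a ~25 TB pre-dedupe dataset. The ratios are
# illustrative assumptions, not vendor claims.
def physical_tb(logical_tb, dedupe_ratio):
    """Physical capacity consumed after dedupe at the given ratio."""
    return logical_tb / dedupe_ratio

for ratio in (1.5, 2.0, 3.0):
    print(f"{ratio}x dedupe: 25 TB logical -> {physical_tb(25, ratio):.1f} TB physical")
```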

KS
Jun 10, 2003
Outrageous Lumpwad

Mierdaan posted:

Wow, total garbage. We bought CML like 2 years ago and they were already hinting at the successor to the series 40. Have you priced out moving the 30s to replication targets where you don't care about VAAI, and buying the 8000s new?

Edit: not having been through it yet, what is making an upgrade from 30s to 8000s difficult?

We do replication between two arrays, both with series 30s. There's some reluctance to just upgrade the prod side, because testing firmware updates on DR first is a useful exercise.

Price for four new controllers is $48k. I suspect we're being gouged because we're asking to be released from our VAR (Cambridge Computer, stay the gently caress away) to go with another, and Dell is requiring us to do this one deal with them first since it originated with them, back when we first talked about upgrading to SC40s a year and a half ago.

IT purchasing is bullshit all the way down, but talking about real prices paid on forums like this one takes a lot of their power away. I'm looking forward to the negotiation now that we have several realistic alternatives.

Edit: I don't think the upgrade is that hard. It's a two-step firmware upgrade process and they budget a bunch of hours for it, but no downtime. Just a mandatory professional install at $3450 per array.


KS fucked around with this message at 03:56 on Jul 2, 2013

KS
Jun 10, 2003
Outrageous Lumpwad
So we use a Supermicro SC847 running the Solaris-derived OmniOS for D2D backup storage. It's 96 TB raw with 32 3TB drives, 72 TB usable, and it cost about $16k with some fancy caching devices. You could add a Nexenta license if you didn't want to deal with the OS and be up around $36k. Expansion beyond that single box sucks for sure, at least if you're talking about a shared namespace or something. Performance, however, kicks rear end for the price. There's a recently released successor to the SC847 with updated internals.
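For what it's worth, the raw/usable gap is consistent with a standard raidz2 layout. The 4 x 8-disk arrangement below is my assumption -- it just happens to reproduce the stated usable figure:

```python
# Sanity check on the 32 x 3 TB box: 96 TB raw, 72 TB usable.
# The 4 x 8-disk raidz2 layout is an assumption; it reproduces the
# stated usable figure (24 data disks x 3 TB), but other layouts differ.
drives, tb_per_drive = 32, 3
vdevs, disks_per_vdev, parity_per_vdev = 4, 8, 2  # raidz2

raw_tb = drives * tb_per_drive
usable_tb = vdevs * (disks_per_vdev - parity_per_vdev) * tb_per_drive
print(f"{raw_tb} TB raw, {usable_tb} TB usable")
```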

A Nimble CS460 with two expansion shelves would be in the $190k range for 126 TB with 3 years of support and would let you expand quite a bit beyond that.

In my experience, Compellent disks would be a fair bit more than that unless your system is already over 96 drives and into the enterprise license. You can expect to pay ~45k per shelf of disks up to 96 drives and ~30k beyond it.
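Back-of-envelope on cost per usable TB for the two options where both figures are given above (the Compellent shelf is left out since usable TB per shelf isn't stated). Support terms and expandability differ wildly, so treat this as rough:

```python
# Cost per usable TB for the two options where both figures appear in
# the post above. Not apples-to-apples: support, expansion, and feature
# sets differ; this is just the raw arithmetic.
options = {
    "SC847 + OmniOS": (16_000, 72),             # ($, usable TB)
    "Nimble CS460 + 2 shelves": (190_000, 126),
}
for name, (cost, usable_tb) in options.items():
    print(f"{name:>25}: ${cost / usable_tb:,.0f} per usable TB")
```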

KS
Jun 10, 2003
Outrageous Lumpwad
There are lots of video storage options that are specific to that industry as well, so network with counterparts at other stations and learn what they're using and how much they love/hate it.

efb

KS
Jun 10, 2003
Outrageous Lumpwad
The premier drive for ZIL is the STEC ZeusRAM. It's an 8 GB RAM disk that uses a supercap to back up to an integrated SSD in case of power failure. It's very, very fast and quite expensive -- $2k+.

edit: RAM is really cheap. Buy lots for ARC and skip the L2ARC.

KS fucked around with this message at 21:53 on Aug 30, 2013

KS
Jun 10, 2003
Outrageous Lumpwad

Agrikk posted:

Interesting. I posted the results of a direct comparison between FreeNAS, Openfiler and Microsoft's iSCSI target and surprisingly, MSiST performed well (similar to Openfiler) with FreeNAS a distant third.

(I thought I'd posted the same results in a thread here, but I can't find it.)

That squares with what I saw using some really high performance ZFS hardware: 32 drives, server hardware, and a ZeusRAM ZIL. FreeNAS and other FreeBSD 8 derivatives were stable but had lovely performance. NAS4Free and FreeBSD 9 were fast but crashed routinely under load, and I talked to someone else experiencing the same issues under Stable/9. We had to go to Illumos to get a reliable, well-performing ZFS system.

That said, NAS4Free has been completely stable for me on my little home system.

KS fucked around with this message at 22:02 on Sep 19, 2013

KS
Jun 10, 2003
Outrageous Lumpwad
It is definitely common to refresh every x years. Usually in a good organization that's between 3 and 5.

In my experience, support for expansion shelves usually gets co-termed with the array for the higher end stuff. On the lower end stuff, yeah, you end up with staggered warranties. Not much you can do about it. You won't have to replace the MD3220i, though -- Dell should happily quote and sell you a 4th year of warranty/support. Then you put a new system in the budget for next FY.

KS
Jun 10, 2003
Outrageous Lumpwad

Caged posted:

I was going to ask this in the virtualisation thread but I have 200 posts to catch up on in there and it's on-topic now. What's the best way to connect to iSCSI storage from within a guest OS? When I've set up VMware and iSCSI the storage has always been on its own network on NICs dedicated to that task. Do I need to link the storage network physically to the same network that the VMs use or what?

With software iSCSI, guests connect to the VM port groups and the host uses a VMkernel port in the same VLAN. Just give the guest a dedicated NIC (or two for redundancy) for iSCSI.

KS
Jun 10, 2003
Outrageous Lumpwad

FISHMANPET posted:

So in this case would you give your guest 2 iSCSI nics, one on iscsi_B and one on iscsi_A?

I would, yeah, and run appropriate MPIO on the guest. We use this a fair amount to present SAN snapshots to dev machines, etc.

Our actual config is somewhat more complex, as we run network and storage over the same 10GbE pipes, using a distributed switch and LACP teams to a pair of Nexus 5Ks. You can LACP the network/NFS VLANs and still use MPIO for the iSCSI stuff if you do your bindings correctly.

KS
Jun 10, 2003
Outrageous Lumpwad
Not really experienced with EqualLogic, but I seem to recall they're a special snowflake where a single network is the correct config. Maybe someone can confirm. That's unusual, though: most vendors, and I believe the MS software iSCSI initiator, want two networks for MPIO.

KS
Jun 10, 2003
Outrageous Lumpwad
Dell and Spectra Logic sell iSCSI tape libraries. I can't vouch for the quality of either of them, but I researched it last year before ultimately ditching tape entirely.

KS
Jun 10, 2003
Outrageous Lumpwad

Spudalicious posted:

What are the main options for storage arrays that allow use of any SATA 6GB/s drive?

Seconding that nobody does this because it's a stupid idea. Nexenta, a company that tried, now has an HCL that you have to adhere to if you want to buy support from them. They had too many problems with customers running consumer grade HDs.

You do not want to be stuck self-supporting a storage solution for a campus. You will realize your mistake when it breaks and you have nobody to turn to. There are few faster ways to get fired.

There's definitely a new breed of storage (Nimble, Tintri, etc) that is a bit cheaper than the price you'd pay for EMC/Netapp/Compellent/etc, but you don't want to go cheaper than that. If you're an enterprise, pay for enterprise storage.

e: wow, 3 years to run on a system that was bought with 2TB drives? How old is that? Did they buy 5-year support for it or something? That's nuts.

KS fucked around with this message at 19:52 on Feb 18, 2014

KS
Jun 10, 2003
Outrageous Lumpwad

Richard Noggin posted:

Just to give everyone an idea, here's our "standard" two host cluster:

I tend to agree that running multiple small clusters is non-optimal. You have to reserve one host's worth of capacity -- 1/n of the total, where n is the number of hosts -- in case of failure. The bigger your clusters get, the more efficient you are. A bunch of 2-node clusters sitting at 50% utilization is far less efficient than one big cluster that can safely run at (n-1)/n.
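The arithmetic, if anyone wants to play with it:

```python
# Usable fraction of an HA cluster that reserves one host's worth of
# capacity for failover: (n - 1) / n, where n is the number of hosts.
def usable_fraction(n_hosts):
    if n_hosts < 2:
        raise ValueError("need at least 2 hosts to tolerate a failure")
    return (n_hosts - 1) / n_hosts

for n in (2, 4, 8, 16):
    print(f"{n:2d} hosts: safe to run at {usable_fraction(n):.1%} utilization")
```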

You also lose efficiency from all those unnecessary array controllers and switches. This is not how anyone sane scales out.

KS
Jun 10, 2003
Outrageous Lumpwad

three posted:

This reads: "I don't want to learn new things."

It can also read "not many of these new trendy vendors support {SRM, SQL snapshotting, Exchange snapshotting, API or scriptability}" which is a pretty tough feature set to give up once you build around it. I'm replacing a Compellent and am limited because of it too. Netapp, 3PAR, and Compellent are waaaaay ahead of most here.

KS
Jun 10, 2003
Outrageous Lumpwad

Oh jesus christ. Now's the time to tell the story of how Unitrends decided an anime woman with fox ears was a good, professional corporate mascot. They pulled most of it, including a godawful youtube video, but some evidence remains.

So now that we got that out of the way, this is the most active this thread ever gets and it's kinda lame.

I read three's post and it was bagging on Netapp. None of the other mainstream vendors, just Netapp. And he's right: they're fantastically behind, and unless you're heavily invested in their products and especially their toolset already they're almost certainly the wrong choice for a new deployment. VNXs and Compellents just use hybrid flash better.

Which is too bad, because it leaves a serious lack of mature NFS storage out there, and NFS rocks for VMware. Tintri is still months away from being feature complete even if they hit all their deadlines.

KS
Jun 10, 2003
Outrageous Lumpwad

NippleFloss posted:

On another note, I do think it's sort of funny that some people here would NEVER recommend NetApp. Like...why? I work for NetApp and I can still see why EMC or Pure or Hitachi or whoever might have an appealing product, particularly for certain customers. I'm genuinely curious about what the perceived gaps are.

You just listed a bunch of customers with a bunch of money. Perhaps the disconnect is that while the systems are great in the petabyte range with professional services and large staffs, they're not cost effective in the 50-500 TB SME space where a bunch of us do business.

I've been through the Netapp sales process twice in two years and they were not price competitive in either case. They spec out a pure 10k system with flash cache. When I tell them they're high, they try to add SATA disks instead of going pure 10k. However unlike VNX or Compellent or 3Par, Netapp can't autotier. I don't have the time or the inclination to deal with multiple aggregates and moving VMs between them when everyone else does it for me. Compellent showed me a better way four years ago. Why would I go back?

CDOT is a bunch of features I don't need.

I'm STILL considering buying Netapp so I can put it on my resume and go work at some of those big players you mentioned (Amazon and Google are on there too!). But it would not be the best choice for my company.

KS fucked around with this message at 05:10 on Jul 31, 2014

KS
Jun 10, 2003
Outrageous Lumpwad

adorai posted:

I have to ask, are you factoring in the added value that NetApp gives you with the tools, its zero-cost snapshots, and its nifty cloning? NetApp is not the only vendor that has these things, but they add value to the package that not all storage vendors have. I have two NetApp SANs, and I don't need an extra backup product.
...
If someone contacts the helpdesk to restore an email, it takes someone with the proper rights about 5 minutes to restore it. No tape necessary, just run restore wizard and create a PST file.

I was going to break this down further but realized it doesn't matter. Doing all of the above with a 4-year-old Compellent array. I have powershell scripts that take SQL- and Exchange-consistent zero-cost snapshots and mount them to other servers for dev/test and backups. I do async replication with 15-minute consistent checkpoints, and I can spin up the DR site using SRM. Some of these are definitely features that Netapp had first, but a bunch of vendors have caught up and surpassed them, I think.

Compellent's not perfect: they're missing a fairly basic feature like compression just like Netapp is missing auto tiering. But they were 40% cheaper this time around, with a much bigger SSD tier.

This is definitely where the new players fall down, though. Tintri, for instance, is completely missing SRM support. Many don't have scriptable interfaces or the VSS integration with Exchange and SQL.

adorai posted:

I can live migrate my data to new hardware and retire the old without the clients noticing.

100% virtualized so I can do that with any vendor. And I don't have to buy or borrow extra 10gbit switches for the cluster interconnect.

KS fucked around with this message at 05:39 on Jul 31, 2014

KS
Jun 10, 2003
Outrageous Lumpwad

NippleFloss posted:

Those are generally problems with channel. How much you like NetApp is often extremely dependent on whether or not you end up with a good partner or a bad partner. You ended up with a bad one.

I actually went through the sizing/config process with a Netapp sales engineer and only involved a "VA"R for pricing. I trust my reseller pretty well, but granted maybe this is the time they jacked up the margins on the Netapp to encourage me towards something they want to sell.

madsushi posted:

See, I feel the exact opposite way. Compellent is a company that was hot poo poo for 12-24 months, then got bought by Dell and now they're dead. Everyone that will ever own a Compellent already bought it. 3PAR is in a similar place with HP.

e: for the record, I hate auto-tiering and I think it's a bad technology. You under-buy on fast disk in order to save money but get burned by any anomalous usage patterns (I guarantee you have some). It was all the rage, and now nobody even talks about it any more. It's all SSD/Flash-based caching now instead of actual tiering.

What Tintri does is still tiering in my mind. I mean, they write to flash, then move less-used data in chunks down to 7k. How's that not tiering? Caching, to me, means the data lives on disk but a copy is maintained in flash for read acceleration of repeatedly accessed data. By that measure, I think it's a pretty even split. Hell, even Netapp can do both caching (flash cache) and tiering (flash pools) with SSD -- just not with disk.

In terms of disk tiering, yes, you can get a lovely partner that undersizes to win a bid -- but you can also do it right, and when it's done right, there are no drawbacks.

But saying 3PAR and Compellent are dead since acquisition is laughable. Compellent sold 8500ish arrays prior to acquisition and another 20000+ since. 3PAR gained more market share than anyone last year.

KS
Jun 10, 2003
Outrageous Lumpwad

I'm surprised this is even article worthy, because I thought this was just a given. Compellent has done this for years. So has Netapp. So has 3Par. There's no special sauce required here.

KS
Jun 10, 2003
Outrageous Lumpwad
Good time to move to virtual ports mode on the Compellent if for some reason you haven't. Also remember to create two zones if you haven't: one containing just the physical front-end ports, and one containing just the virtual front-end ports.

KS
Jun 10, 2003
Outrageous Lumpwad
You do need to get another vendor into the picture to make the price magically fall further. I thought that was IT purchasing 101.

But I can't imagine supporting anything homegrown for primary storage without Amazon-levels of scale and talent. One guy isn't going to cut it, because he needs to sleep. You need to stock spares. You need a test system to roll updates to first.

I do homegrown for a few hundred TB of D2D backup and it's a huge pain already. I would never want to be on the hook for an outage to primary storage.

KS
Jun 10, 2003
Outrageous Lumpwad

KennyG posted:

No one likes to talk actual costs paid but I'm trying to figure out how far I can push our VAR or if it's even worth the effort. In the realm of production NFS/CIFS appliance on a 500+TB scale what is a reasonable cost per gig? 40-50 apps driving 100-200 iops each - call it 10k total tops. I have some wildly different quotes. Last year we did a deal at $1.15/gb at about 120tb. I have a quote from Dell for a new deployment that's $.25, yet EMCs is $2.02. I have talked to EMC about dells quote and they seem uninterested in changing the pricing to meet the market. I like EMC more than dell but not 8x.

Where I sit, looking at the market, disks in the capacities I'm looking at should be ~$0.50 for hdd and approaching $2.50 per effective flash gig. I know everyone has strengths and weaknesses but I can't help but think they are trying to exploit what they incorrectly feel is imperfect market information.

Anyone run Isilon or .75+pb Compellent arrays behind fluidfs?

I definitely like to talk actual costs paid, because it shifts the power away from the vendors and towards the consumers. We're not signing NDAs here.

Compellent is decent block storage, but layering a NAS head in front of it doesn't put it in the same class as Isilon or Netapp for scale out NAS. That's probably a big reason for the price gap. That said, it doesn't look like you have high IO requirements or the need to scale, and you might be fine with the cheaper solution.

I've never seen Dell's NAS head in the wild. I'd guess at 500+ TB you'd be one of their bigger customers -- might want to set up reference calls with other installations that size.
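In that spirit, here's what those quoted $/GB figures turn into at the 500 TB scale in question. Decimal units (1 TB = 1000 GB); the totals are just arithmetic on the numbers quoted above, not additional pricing data:

```python
# The quoted $/GB figures scaled to a 500 TB deployment. Totals are
# plain arithmetic on the quotes from the post above.
quotes_per_gb = {"Dell": 0.25, "last year's deal": 1.15, "EMC": 2.02}
capacity_gb = 500 * 1000  # decimal TB -> GB

for vendor, per_gb in quotes_per_gb.items():
    print(f"{vendor:>16}: ${per_gb * capacity_gb:>12,.0f} total")

# EMC vs Dell really is roughly the 8x mentioned above:
print(f"spread: {quotes_per_gb['EMC'] / quotes_per_gb['Dell']:.1f}x")
```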

KS
Jun 10, 2003
Outrageous Lumpwad

FISHMANPET posted:

Are they finally putting the controller heads into Dell Chassis?

The SC8000 controllers are in Dell chassis and have been out for over two years. This looks like the new-ish 4020 that integrates the controllers and a disk shelf into one 2u unit.

bigmandan posted:

I've asked this before, but has anyone had any experience with the Dell compellent synchronous live volumes? I'd like to hear some experiences with using it in a production environment.

Synchronous replication is a big-boy feature and you need to make sure your network is rock solid. Remember, the remote array has to acknowledge the write before it completes. Any kind of latency and you can kiss performance goodbye. A single storage switch plus a small SAN and talk of sync replication are usually not things that go together well.
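A quick illustration of why. The latency figures are made-up examples, but the structure is the point -- the inter-site round trip sits directly on the write path:

```python
# Synchronous replication puts the inter-site round trip on the write
# path: the write completes only after the remote array acks it.
# All latency figures here are illustrative assumptions.
def sync_write_ms(local_write_ms, rtt_ms):
    """Effective write latency with the remote ack on the critical path."""
    return local_write_ms + rtt_ms

local = 0.5  # assumed local array write latency, ms
for rtt in (0.1, 1.0, 5.0):
    eff = sync_write_ms(local, rtt)
    print(f"RTT {rtt:>4} ms -> {eff:>4} ms per write "
          f"(~{1000 / eff:.0f} writes/s per outstanding IO)")
```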

There are very specific use cases for it, like split metro clusters. Async replication is good enough for DR and backup. What's your use case?

KS
Jun 10, 2003
Outrageous Lumpwad
For async replication, a product like SRM breaks the replication relationship and re-signatures the datastores automatically. It also has far more robust DR handling than a stretched cluster.

Here's the VMware whitepaper with metro cluster requirements. Check out page 12 for the "When to Use/When not to use" discussion.

There is also an entry in the VMware Storage HCL for "iscsi metro cluster storage." It appears the Compellent is not on it.

KS
Jun 10, 2003
Outrageous Lumpwad

Rhymenoserous posted:

That's a Scott Alan Miller Storage Device you goony gently caress.

That guy singlehandedly makes Spiceworks a terrible place.

KS
Jun 10, 2003
Outrageous Lumpwad
Have you considered just buying Crashplan? Problems like this are already solved, the solutions are cheap, and reinventing the wheel is a bad idea. Crashplan (and its competitors) give you reports you can use to be sure everything's backed up and address problems before they become liabilities.

Also, yes, this:

Nystral posted:

Currently the drives are configured as RAID (assuming 6, but no confirmation currently), then that RAID is presented to the FreeNAS OS, which then formatted it as a ZFS pool with snapshots enabled. Is this as dumb as I think it is? AFAIK I should have the RAID card in JBOD / IT mode then recreate the RAIDZx (thinking RAIDZ3) within FreeNAS.

is a bad idea. ZFS is supposed to see raw devices.

KS fucked around with this message at 02:57 on Dec 28, 2014


KS
Jun 10, 2003
Outrageous Lumpwad
Or CNAME it.

Or drop a standalone DFS root in front of it.

Without knowing more, what you're describing is possibly overly complex.
