Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

adorai posted:

Quote doesn't specify rotational speed, I am going to assume they are 10k.

C
DSK SHLF,24x900GB,6G,0P,-C

I don't know the exact number of IOPS but it is a mixed workload including SQL, exchange, VMware, and CIFS. No VDI.

Well, assuming that, I'd probably go with 3x14 volumes.

Probably do

1xRAID 10 with 12 disks ~ 5TB per level + HS SQL and PROD data
1x RAID 5 with 8 disks ~ 6TB + HS Exchange
1x RAID 6 with 12 drives ~ 9TB + File server and exchange bleed over
1xRAID 10 with 4 drives+HS ~ 18TB OS drives

2 drives reserved for "oh poo poo my san is out of space!" and "oh poo poo that 4 hour support won't be here soon enough!"

Not counting any dedupe, compression, or caching in the environment.

Again this is a shot in the dark because I dunno the environment

adorai posted:

I don't know the exact number of IOPS but it is a mixed workload including SQL, exchange, VMware, and CIFS. No VDI.
I'm not bashing it, but having ONLY iSCSI available is a restriction. We use iSCSI, NFS, and CIFS, and not having all them is a strike against the unit.

Ah okay, I can get that. iSCSI is good, but doing CIFS and NFS straight off the box does have nice advantages. At the last place I worked, any time I brought up iSCSI it was shot down because "iSCSI is a broken protocol," even though to this day I don't understand how; maybe I am missing something.

Dilbert As FUCK fucked around with this message at 01:03 on Jul 31, 2014


adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Dilbert As gently caress posted:

1xRAID 10 with 12 disks ~ 5TB per level + HS SQL and PROD data
1x RAID 5 with 8 disks ~ 6TB + HS Exchange
1x RAID 6 with 12 drives ~ 9TB + File server and exchange bleed over
1xRAID 10 with 4 drives+HS ~ 18TB OS drives
It's all going into 1 big aggregate, it's just a matter of whether I should use 3 raidgroups or 2. 2 will be more space, 3 have a lower chance of data loss. I am sure there is a performance tradeoff as well but I am not sure which way that swings.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Dilbert As gently caress posted:

Again this is a shot in the dark because I dunno the environment

NetApps don't do RAID 10 or RAID 5 (or even RAID 6, really).

The 900GB SAS drives are indeed 10K.

I would probably err on the side of smaller raid groups, just because rebuild times on a bigger group can get Real Bad. It's not worth 1.2TB of space to have really, really slow performance all day when a disk fails.

One thing to remember is that your new shelf additions will almost always be in 24-disk increments. So there's also the option of trying to angle your raid group size to match.
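The shelf-increment angle above is easy to check by hand. A minimal sketch (function and parameter names are mine, not NetApp tooling), assuming two spares held back out of the shelf total:

```python
# Hypothetical helper: find raid group sizes that tile evenly into
# shelves of 24 disks after reserving spares. The 12-28 size range is an
# assumed practical window for SAS raid groups, per the thread.

def candidate_rg_sizes(shelves, spares=2, max_rg=28, min_rg=12):
    """Return RG sizes that divide the usable disk count with no remainder."""
    usable = shelves * 24 - spares
    return [size for size in range(min_rg, max_rg + 1) if usable % size == 0]

print(candidate_rg_sizes(2))  # 46 usable disks -> [23]
```

With two shelves and two spares, only a 23-disk group tiles perfectly, which is why nearly-even groups (rather than exactly even ones) are usually the practical answer.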

madsushi fucked around with this message at 01:19 on Jul 31, 2014

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

Important information I suppose. Clustered mode, 900GB SAS

On clustered ONTAP you are required to have a dedicated 3-disk aggregate for each node to hold its node root volume, which contains all of the cluster ring database information and runs the node SVM (formerly known as a Vserver); no user data can reside on it. So you need to take 6 disks off the top for the node roots in a two-node HA pair. Beyond that, you should hold back two spares and then build out the largest single aggregate you can, with the largest raid group size that divides evenly into what's left. It doesn't have to be perfectly even, but for performance reasons you don't want raid groups in a single aggregate to be off by more than a couple of disks from one another. Since this is SAS you can go up to 28 disks in an RG. For SATA disks there are some practical reasons why smaller RGs might be better, as the rebuild time goes up as RG size increases, but for 900GB SAS rebuild times shouldn't be a factor and failure rates will be somewhat lower anyway.
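The sizing rule above (6 disks off the top for node roots on a two-node HA pair, two spares, then the biggest raid group size up to 28 that divides the remainder close to evenly) can be sketched in a few lines. This is a rough illustration with made-up function names, not NetApp tooling:

```python
# Sketch of the clustered ONTAP sizing rule described above. Assumptions:
# two-node HA pair, 3-disk root aggregate per node, two spares held back,
# SAS raid group cap of 28 disks. All names here are illustrative only.

def data_disks(total, nodes=2, root_per_node=3, spares=2):
    """Disks left for the data aggregate after roots and spares."""
    return total - nodes * root_per_node - spares

def best_rg_split(disks, max_rg=28):
    """Fewest raid groups whose sizes fit under max_rg and differ by <= 1."""
    for groups in range(1, disks + 1):
        base, extra = divmod(disks, groups)
        if base + (1 if extra else 0) > max_rg:
            continue  # groups would be too large; try more groups
        return [base + 1] * extra + [base] * (groups - extra)
    return []

disks = data_disks(48)          # e.g. two 24-disk shelves
print(disks, best_rg_split(disks))  # 40 [20, 20]
```

With 47 or 49 disks the split comes out one disk uneven, which per the post above is fine as long as the groups stay within a couple of disks of each other.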

adorai posted:

Quote doesn't specify rotational speed, I am going to assume they are 10k.

C
DSK SHLF,24x900GB,6G,0P,-C

They are 10k. The only 15k drives we still sell are 600GB because those are the only ones we can still source from Seagate or Hitachi. They tell us to stop selling them so they can stop making them, but we're going to keep selling them until they stop making them so they may exist FOREVER. But generally speaking 15k SAS is being phased out as a drive technology and 10k SAS probably isn't far behind it as SSD is only a couple of years away from meeting it on the price/capacity curve.


Dilbert As gently caress posted:

10K or 15K

Also what kind of IOPS am I looking for and what level of redundancy? Prod DB's, VDI, etc

Really NetApp over EMC?

also why the bashing of post processed ISCSI? It owns.

It's NetApp, so his choices are Raid-DP, Raid-DP, or Raid-DP. He's going to dump it all into a big aggregate because that's what our best practice is 99% of the time.

And iSCSI is fine, but NFS (particularly on NetApp, where you can do instant, file-level, no-space cloning) is fantastic for VMware datastores; it's a better fit for Unix and Linux in most cases, and a lot of places still like FC for the rock-solid stability and fairly predictable storage network behavior.

Dilbert As FUCK
Sep 8, 2007


adorai posted:

It's all going into 1 big aggregate, it's just a matter of whether I should use 3 raidgroups or 2. 2 will be more space, 3 have a lower chance of data loss. I am sure there is a performance tradeoff as well but I am not sure which way that swings.

Oh welp, I'd personally then do 3 raid groups. While 3 volumes may not be ideal, you are limiting the catastrophic impact of a dual drive failure. You are focusing on uptime, redundancy, and availability, which may hold more benefit for you.

NippleFloss posted:

They are 10k. The only 15k drives we still sell are 600GB because those are the only ones we can still source from Seagate or Hitachi. They tell us to stop selling them so they can stop making them, but we're going to keep selling them until they stop making them so they may exist FOREVER. But generally speaking 15k SAS is being phased out as a drive technology and 10k SAS probably isn't far behind it as SSD is only a couple of years away from meeting it on the price/capacity curve.

I could be mistaken, but aren't 15k drives being phased out because of the caching that can be put in front of 10k, making 10k the best bang for the buck? I could be wrong, but whatever.



quote:

It's NetApp, so his choices are Raid-DP, Raid-DP, or Raid-DP. He's going to dump it all into a big aggregate because that's what our best practice is 99% of the time.

And ISCSI is fine, but NFS (particularly on NetApp where you can do instant file level no space cloning) is fantastic for VMWare datastores, it's a better fit for unix and linux in most cases, and a lot of places still like FC for the rock solid stability and fairly predictable storage network behavior.

I like NFS, I really do, but VMware doesn't support NFS v4.x yet, and who knows when it will. I like the post-processing of Nimble with iSCSI and the L2ARC. From a right-now standpoint on VMware 5.x, I can't see a difference between iSCSI and NFSv3 (aside from CHAP). In the future I might like NFS more, but until then iSCSI owns; the data calls are better (especially with post-processing), though NFSv4 has some nice features to look forward to.

Personally, Hersey and I put a NetApp FAS with 7.2k drives behind a Nexenta front end. poo poo owns, 20K+ OI. This semester will rock. But I am not a fan of NetApp currently. They have promising stuff, but I am always the last to hear about their poo poo, maybe because there is only one vendor in my region; not because it is small, but because they did some deal with NetApp.

Dilbert As FUCK fucked around with this message at 01:27 on Jul 31, 2014

Dilbert As FUCK
Sep 8, 2007




Well I screwed up that "edit post"

YOLOsubmarine
Oct 19, 2004


Dilbert As gently caress posted:

Oh welp, I'd personally then do 3 raid groups. While 3 volumes may not be ideal, you are limiting the catastrophic of a dual drive failure. You are focusing on up time, redundancy, and availability; which may hold more benefit for you.

You need to lose three drives in a single raid group to lose data. The chances of data loss at that level don't get meaningful until you start getting to 6TB drives that are taking weeks to rebuild. Big raid groups are fine in Raid-DP.
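To see why triple failure inside one raid group is the loss case that matters for RAID-DP, here's a back-of-envelope model. The annual failure rate and rebuild window below are made-up round numbers purely to illustrate the order of magnitude, not NetApp figures:

```python
# Crude independence model: after one disk fails, data loss requires two
# MORE disks in the same raid group to die before the rebuild finishes.
# AFR of 2% and an 8-hour rebuild window are illustrative assumptions.

def p_fail_within(hours, annual_failure_rate=0.02):
    """Probability a single disk fails within the given window."""
    return 1 - (1 - annual_failure_rate) ** (hours / (24 * 365))

def p_triple_loss(rg_size, rebuild_hours=8):
    """P(at least two of the remaining disks fail during the rebuild)."""
    p = p_fail_within(rebuild_hours)
    survivors = rg_size - 1
    p_none = (1 - p) ** survivors
    p_one = survivors * p * (1 - p) ** (survivors - 1)
    return 1 - p_none - p_one

print(f"{p_triple_loss(28):.2e}")  # tiny even for a 28-disk SAS group
```

The number only starts to matter when rebuild windows stretch from hours to days or weeks, which is exactly the big-slow-SATA case the post describes.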

quote:

I could be mistaken, but aren't 15k being phased out because of the caching that can be implemented on 10K, making it the best bang for buck? I could be wrong but whatever.

I like NFS, I really do but until Vmware supports NFS v4.x, which may be well you know. I like the post processing of Nimble and iSCSI and the L2ARC, I can't see a difference between iSCSI and NFSv3(aside from chap), as a right now standpoint of VMware 5.x. In the future I might like NFS more, but until then iSCSI owns, the data calls are better(especially with post processing), but NFSv4 has some nice features to look forward to.

Personally Hersey and I put a Netapp fas with 7.2k drives with a Nexenta front end. poo poo owns, 20K+ OI. This semester will rock. But I am not a fan of NetApp, currently. They have promising stuff but I am always the last to hear about there poo poo, maybe because there is only one vendor in my region, not because it is small but because they did some deal with netapp.

The performance difference between 10k and 15k SAS is there, but it isn't huge, at least from the internal testing I've seen. The additional density you can achieve from the smaller form factor makes it a wash.

As far as NFS and VMware: the benefits aren't performance, where all of the protocols perform within a few percentage points of one another when properly configured. It's simply that on NetApp specifically, having VMDKs in the native WAFL format, instead of buried in a LUN with VMFS on top of it, allows us to do some neat stuff with our cloning engine and makes data management as a whole easier. Space savings from deduplication are transparent to vSphere, snapshot space utilization is visible and snapshots can be browsed from within vSphere, growing a datastore simply requires growing the underlying volume with no dealing with extending VMFS, datastores can be as large as the underlying hardware supports instead of being limited by VMFS, there's no need to run space reclamation tools to get back blocks on thin-provisioned LUNs that VMFS has released, and it is EXTREMELY tolerant of network inconsistency. Not to mention that with NFS you don't have to worry about any of the scalability problems around locking that ATS is meant to address on VMFS.

I'm not really sure what you mean about the data calls being better, or what you mean by post-processing. Perhaps you can elaborate.

NetApp (along with most everyone else) supports flash as read cache now. That's not really unique to Nimble. We've done it for at least five years in the form of PCI flash, and for two or three years now with SSD as well. Nimble has some pretty good algorithms from what I hear, but the idea itself isn't novel.

regarding NFS 4.1, obv

YOLOsubmarine fucked around with this message at 01:53 on Jul 31, 2014

Dilbert As FUCK
Sep 8, 2007


NippleFloss posted:

You need to lose three drives in a single raid group to lose data. The chances of data loss at that level don't get meaningful until you start getting to 6TB drives that are taking weeks to rebuild. Big raid groups are fine in Raid-DP.

Wow, I am dumb, sorry about that; I was overdoing poo poo.

Yeah, that is true, 2 raid groups would be easier. 3 would only be better if you weren't sure who was in control of the arrays and monitoring them; 3 might buy you some failover time when someone does something dumb, but I am sure adorai is more than smart enough. So 2 raid groups would probably be best, laid out as in option 1.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
I don't know about you guys, but if I was speccing out entirely new storage for my company, my first bet would be the company that has the weakest flash and next generation story in the entire industry while also being one of the most expensive. That flash stuff won't catch on.

Dilbert As FUCK
Sep 8, 2007


three posted:

I don't know about you guys, but if I was speccing out entirely new storage for my company, my first bet would be the company that has the weakest flash and next generation story in the entire industry while also being one of the most expensive. That flash stuff won't catch on.

Wow I had a hard laugh at this, thanks three.

YOLOsubmarine
Oct 19, 2004


three posted:

I don't know about you guys, but if I was speccing out entirely new storage for my company, my first bet would be the company that has the weakest flash and next generation story in the entire industry while also being one of the most expensive. That flash stuff won't catch on.

I disagree, I think you should plan your next five years around a bunch of half baked products, 90 percent of which won't exist in recognizable form by the time your service agreement ends. I don't even know how you can run an application anymore without a next generation story behind it.

adorai
Nov 2, 2002


NippleFloss posted:

On Clustered OnTAP you are required to have a dedicated 3 disk aggregate for each node to hold it's node root volume, which contains all of the cluster ring database information and runs the node SVM (formerly known as a Vserver), and no user data can reside on that. So you need to take 6 disks off the top for the node root in a two node HA pair. Beyond that you should hold back two spares and then build out the largest single aggregate you can with the largest raid group size that divides evenly into what's left. It doesn't have to be perfectly even but you don't want raid groups in a single aggregate to be off by more than a couple of disks from one another for performance reasons. Since this is SAS you can go up to 28 disks in an RG. For SATA disks there are some practical reasons why smaller RGs might be better, as the rebuild time goes up as RG size increases, but for 900GB SAS rebuild times shouldn't be a factor and failure rates will be somewhat lower anyway.
So since I was only referencing half my storage, I am going to end up with 4 SSDs for read/write cache, 2 spare 900GB SAS, aggr0 with 3x 900GB SAS and a big aggregate with two raid groups, one with 20 disks and one with 19. On each controller. Seem like the way to go?


three posted:

I don't know about you guys, but if I was speccing out entirely new storage for my company, my first bet would be the company that has the weakest flash and next generation story in the entire industry while also being one of the most expensive. That flash stuff won't catch on.
I'm not worried about the next generation, I am worried about the generation I am buying today. I already have a netapp infrastructure that is built around the tools that netapp provides. If you have never used them, I don't blame you for dismissing them, but after using them moving to something that does not have them is just such a pain in the rear end that it is a consideration.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


Another thing to remember about NetApp (and hopefully any other array worth a poo poo) is that it will predictively fail a drive if it starts misbehaving. I had this happen recently (my first NetApp drive failure). The controller sent me a support message saying my drive count had changed. It then copied the data off the drive to a spare (I'm assuming; it took a couple of hours for the drive status to change from present to failed), failed the drive for real, and ordered me a replacement.

Of course a drive can just outright die too but the diagnostics on these things are pretty good and a lot of drive failures can be predicted these days.

Number19
May 14, 2003



adorai posted:

So since I was only referencing half my storage, I am going to end up with 4 SSDs for read/write cache, 2 spare 900GB SAS, aggr0 with 3x 900GB SAS and a big aggregate with two raid groups, one with 20 disks and one with 19. On each controller. Seem like the way to go?
I'm not worried about the next generation, I am worried about the generation I am buying today. I already have a netapp infrastructure that is built around the tools that netapp provides. If you have never used them, I don't blame you for dismissing them, but after using them moving to something that does not have them is just such a pain in the rear end that it is a consideration.

I'd even out the raid groups and have a third hot spare.

three
Aug 9, 2007


adorai posted:

So since I was only referencing half my storage, I am going to end up with 4 SSDs for read/write cache, 2 spare 900GB SAS, aggr0 with 3x 900GB SAS and a big aggregate with two raid groups, one with 20 disks and one with 19. On each controller. Seem like the way to go?
I'm not worried about the next generation, I am worried about the generation I am buying today. I already have a netapp infrastructure that is built around the tools that netapp provides. If you have never used them, I don't blame you for dismissing them, but after using them moving to something that does not have them is just such a pain in the rear end that it is a consideration.

This reads: "I don't want to learn new things."

KS
Jun 10, 2003
Outrageous Lumpwad

three posted:

This reads: "I don't want to learn new things."

It can also read "not many of these new trendy vendors support SRM, SQL snapshotting, Exchange snapshotting, or APIs/scriptability," which is a pretty tough feature set to give up once you build around it. I'm replacing a Compellent and am limited because of it too. NetApp, 3PAR, and Compellent are waaaaay ahead of most here.

YOLOsubmarine
Oct 19, 2004


three posted:

This reads: "I don't want to learn new things."

"Why keep doing what works for us when we can shake things up for no reason!?"

-A horrible organization

YOLOsubmarine
Oct 19, 2004


adorai posted:

So since I was only referencing half my storage, I am going to end up with 4 SSDs for read/write cache, 2 spare 900GB SAS, aggr0 with 3x 900GB SAS and a big aggregate with two raid groups, one with 20 disks and one with 19. On each controller. Seem like the way to go?

Yup, that's what you'll end up with. You could even the raid groups out by keeping an extra spare as was mentioned by another poster, but it's not really necessary. The slight imbalance won't create any performance issues and a little extra space is always nice.
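For reference, the per-controller arithmetic in the layout being confirmed here checks out as two fully populated 24-disk shelves (numbers taken from the thread; RAID-DP gives up 2 parity disks per group):

```python
# Sanity-checking the per-controller layout described upthread:
# 4 SSD cache + 2 spares + 3-disk root aggr0 + data raid groups of 20 and 19.

ssd_cache, spares, root_aggr = 4, 2, 3
raid_groups = [20, 19]

total = ssd_cache + spares + root_aggr + sum(raid_groups)
print(total)  # 48 -> two fully populated 24-disk shelves per controller

# Each RAID-DP group spends 2 disks on parity:
data_disks = sum(rg - 2 for rg in raid_groups)
print(data_disks)  # 35 data spindles per controller
```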

three
Aug 9, 2007


NippleFloss posted:

"Why keep doing what works for us when we can shake things up for no reason!?"

-A horrible organization

The person in question is already replacing their storage. He recommended going with a legacy vendor that is falling behind.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Declustered RAID is so cool.

Number19 posted:

I'd even out the raid groups and have a third hot spare.

3 is way too many hot spares for ~45 disks. 1 per 30 to 60 drives is normal.

Dilbert As FUCK
Sep 8, 2007


NippleFloss posted:

I disagree, I think you should plan your next five years around a bunch of half baked products, 90 percent of which won't exist in recognizable form by the time your service agreement ends. I don't even know how you can run an application anymore without a next generation story behind it.

I understand where three is coming from and will respect his opinion about things. He's a good guy and I won't disrespect him.


I have a few things I'd like to state but won't. I'd like to debate it personally with him.

adorai posted:

So since I was only referencing half my storage, I am going to end up with 4 SSDs for read/write cache, 2 spare 900GB SAS, aggr0 with 3x 900GB SAS and a big aggregate with two raid groups, one with 20 disks and one with 19. On each controller. Seem like the way to go?

The way to go comes down to the engineer determining whether larger or smaller raid groups are better for business continuity. I'd like to think my suggestions had some merit, but I wasn't aware of the caching or the fault handling in place.

quote:

I'm not worried about the next generation, I am worried about the generation I am buying today. I already have a netapp infrastructure that is built around the tools that netapp provides. If you have never used them, I don't blame you for dismissing them, but after using them moving to something that does not have them is just such a pain in the rear end that it is a consideration.

This is the problem VARs fail to see. I'd love to work for one in my next job, but I am more concerned with the end goal of the customer than with "hell, this is easy work, we should push it because $reasons$." You need to look at the customer's goals, their limitations, and their budget, and separate realistic from optimistic goals. Without that, any solution will do, regardless of whether they signed on the dotted line; you should always strive to give the customer the best for all their needs.

Dilbert As FUCK fucked around with this message at 03:42 on Jul 31, 2014

three
Aug 9, 2007

I never said EMC was the answer. :colbert:

Also, "best for the customer's needs" is not always what the young sysadmin striving to prove how clever he is wants.

three fucked around with this message at 03:44 on Jul 31, 2014

YOLOsubmarine
Oct 19, 2004


three posted:

The person in question is already replacing their storage. He recommended going with a legacy vendor that is falling behind.

They're adding storage to an environment where they already have an incumbent whose specific tools they leverage heavily. When a lot of your processes are built around a particular set of tools, and you're happy with the way they work, vague platitudes about strategic vision aren't a compelling argument for change. There is a reason that incumbency is so powerful.

If you're doing green field then sure, look around, but if you have an incumbent and they work well in your environment, then you should probably just stick with them. It's easy to get caught up talking about technology, but the ultimate goal of IT is to improve business process, not to have cool poo poo. If the cool poo poo doesn't help you do that any better, then why waste the resources migrating data, retraining everyone, rewriting scripts, updating your monitoring tools, and so on...

three
Aug 9, 2007


NippleFloss posted:

They're adding storage to an environment where they already have an incumbent whose specific tools they leverage heavily. When a lot of your processes are built around a particular set of tools, and you're happy with the way they work, vague platitudes about strategic vision aren't a compelling argument for change. There is a reason that incumbency is so powerful.

If you're doing green field then sure, look around, but if you have an incumbent and they work well in your environment then you should probably just stick with them. It's easy to get caught up talking about technology, but the ultimate goal of IT is to improve business process, not to have have cool poo poo. If the cool poo poo doesn't help you do it any better then why waste the resources migrating data and retraining everyone and rewriting scripts and updating your monitoring tools and so on...

pixaal posted:

I've been recently tasked with getting new storage entirely.

Dilbert As FUCK
Sep 8, 2007


three posted:

This reads: "I don't want to learn new things."

It isn't so much that as "how can I integrate this into my environment without major disruptions?" The goal should be no downtime, and you should look at all aspects to minimize it.


I fail to see the point; they want an easy upgrade with low admin overhead. Honestly, what do you suggest?

Dilbert As FUCK fucked around with this message at 03:48 on Jul 31, 2014

three
Aug 9, 2007

Are we even talking about the same person? I'm talking about the guy that is replacing his entire storage. How are NetApp's existing capabilities relevant? He uses Equallogic now.

adorai
Nov 2, 2002

I suggested three possibilities -- one with great supporting tools, one that is the cool new kid on the block with "a great story" and one that is fairly inexpensive with a nice web interface. Since I don't know anything about his environment, I gave only a brief blurb about each, allowing him to take the advice of those three in particular to look at while making his decision. Frankly, if you are doing a rip and replace I am not sure why you wouldn't look at a wide variety of options to get an idea of what exactly is out there. It seems however that suggesting a proven solution instead of just sticking to the new hipster storage arrays was somehow being lazy or something along those lines.

Dilbert As FUCK
Sep 8, 2007


three posted:

The person in question is already replacing their storage. He recommended going with a legacy vendor that is falling behind.

Three, seriously: if they fit the customer requirements, price, and 3+ year goals, how is that a wrong suggestion?


Dude, I think the best of you, but wow, come on! Fit the customer's needs, not what your ego thinks storage should be. Remember: it's about the customer.

three
Aug 9, 2007

If you are going to replace your storage and pick a vendor to hop on board with, pick one with a bright future. I don't particularly care if you stick with a vendor like NetApp vs a Nimble/Tintri/Whatever; I just would never recommend NetApp.

As mentioned, "how can I integrate this into my environment without major disruptions" is going to come up during those 3+ years, and they don't want to be stuck with a vendor that is trailing the market.

YOLOsubmarine
Oct 19, 2004


three posted:

Are we even talking about the same person? I'm talking about the guy that is replacing his entire storage. How are NetApp's existing capabilities relevant? He uses Equallogic now.

Adorai posted about purchasing NetApp shortly before you made that post, and then responded to your post with his reasons for buying NetApp. It wasn't clear from the context (to me, at least) who you were talking about. So, no, we aren't.

Dilbert As FUCK
Sep 8, 2007


three posted:

If you are going to replace your storage and pick a vendor to hop on board with, pick one with a bright future. I don't particularly care if you stick with a vendor like NetApp vs a Nimble/Tintri/Whatever; I just would never recommend NetApp.

Yeah, but the market is booming right now, and unless you follow the faults and capabilities of each vendor, it's kinda hard. I agree with never recommending NetApp, but forgetting things like Pure, EMC, Compellent, EQL, and 3Par will only hurt the customer. It's not about favorites; it's about choosing what best fits the customer's needs. That's why we get paid the mega bucks.

quote:

As mentioned "how can I integrate this into my environment, without major disruptions" is going to occur in that 3+ years, and they don't want to be stuck with a vendor that is trailing the market.

How do you operate then?

I respect the whole "you are too out of date to support" line, but come on. Is it really that good, or is that just you saying it?

NippleFloss posted:

Adorai posted about purchasing NetApp shortly before you made that post, and then responded to your post with his reasons for buying NetApp. It wasn't clear from the context (to me, at least) who you were talking about. So, no, we aren't.

Wasn't very clear for me either...

three
Aug 9, 2007


Dilbert As gently caress posted:

Yeah but the market is booming right now and unless you follow the faults and capabilities of each, it's kinda hard. I agree with never recommending NetApp but forgetting things like Pure, EMC, Compellent, EQL, and 3Par will only hurt the customer. It's not about favorites it's about choosing what is best fit for the customers' needs, that's why we get paid the mega bucks.

I never said to forget Pure, EMC, Compellent, EQL, 3Par, or anything other than NetApp. Where did I pick a favorite? I don't even have a favorite storage vendor.

Dilbert As FUCK
Sep 8, 2007


three posted:

I never said to forget Pure, EMC, Compellent, EQL, 3Par, or anything other than NetApp. Where did I pick a favorite? I don't even have a favorite storage vendor.

Honestly, your posts did not come off like that. Just saying, dude.

three
Aug 9, 2007


Dilbert As gently caress posted:

Honestly, your posts did not come off like that just saying, dude.

Please post where I picked a vendor and said to forget Pure, EMC, Compellent, EQL, and 3Par.

in a well actually
Jan 26, 2011


I know we've talked about this before, but what does an entry-level Nimble CS210 or CS220G go for?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

three posted:

Please post where I picked a vendor and said to forget Pure, EMC, Compellent, EQL, and 3Par.

I said they came off a certain way, not that you said it directly, but please realize this is not the first time your tone has come off wrong.

PCjr sidecar posted:

I know we've talked about this before, but what does an entry-level Nimble CS210 or CS220G go for?

From what I am currently quoting, under $40k.

PM me if you want more info

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

three posted:

If you are going to replace your storage and pick a vendor to hop on board with, pick one with a bright future. I don't particularly care if you stick with a vendor like NetApp vs a Nimble/Tintri/Whatever; I just would never recommend NetApp.

As mentioned, the question of "how can I integrate this into my environment without major disruptions?" is going to come up within those 3+ years, and they don't want to be stuck with a vendor that is trailing the market.

If you recommended the market leader in Flash last year you would have recommended Violin Memory, a company that can't give away stock at this point and will be lucky to last another 3 years. There is no hierarchy in the flash or even hybrid market, there is just a ton of churn and a lot of buzz for a market that is a rounding error on the larger storage market. The only thing you can say with any certainty is that EMC, NetApp, IBM, and HP will be around long enough to support your warranty and sell you new hardware in 3 years. That's not to say you should avoid the other vendors, but acting like going with Nimble or Tintri or whatever is the safe choice for future-proofing is completely backwards.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

NippleFloss posted:

If you recommended the market leader in Flash last year you would have recommended Violin Memory, a company that can't give away stock at this point and will be lucky to last another 3 years. There is no hierarchy in the flash or even hybrid market, there is just a ton of churn and a lot of buzz for a market that is a rounding error on the larger storage market. The only thing you can say with any certainty is that EMC, NetApp, IBM, and HP will be around long enough to support your warranty and sell you new hardware in 3 years. That's not to say you should avoid the other vendors, but acting like going with Nimble or Tintri or whatever is the safe choice for future-proofing is completely backwards.

For the big vendors, I think EMC is best positioned for the future. I also like Dell's solutions for the more cost-conscious.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

If you recommended the market leader in Flash last year you would have recommended Violin Memory, a company that can't give away stock at this point and will be lucky to last another 3 years. There is no hierarchy in the flash or even hybrid market, there is just a ton of churn and a lot of buzz for a market that is a rounding error on the larger storage market. The only thing you can say with any certainty is that EMC, NetApp, IBM, and HP will be around long enough to support your warranty and sell you new hardware in 3 years. That's not to say you should avoid the other vendors, but acting like going with Nimble or Tintri or whatever is the safe choice for future-proofing is completely backwards.


I'd really like you to look at the new storage thread OP I wrote. 10011101 said you should be included, and I trust him a lot (mostly because he didn't make fun of me and wanted to talk the tech poo poo).

I just hope I can live up to his inspiration and be a VCDX fairly soon.

Maybe when I get back from bronycon I'll just loving post it instead of trying to research every sentence to the 9th degree.

three posted:

For the big vendors, I think EMC is best positioned for the future. I also like Dell's solutions for the more cost-conscious.

EMC just has more money to throw at poo poo; that doesn't mean they are the best at it. Hybrid arrays are the best bang for the buck right now.

Dilbert As FUCK fucked around with this message at 04:18 on Jul 31, 2014


KS
Jun 10, 2003
Outrageous Lumpwad

Oh jesus christ. Now's the time to tell the story of how Unitrends decided an anime woman with fox ears was a good, professional corporate mascot. They pulled most of it, including a godawful youtube video, but some evidence remains.

So now that we got that out of the way, this is the most active this thread ever gets and it's kinda lame.

I read three's post and it was bagging on NetApp. None of the other mainstream vendors, just NetApp. And he's right: they're fantastically behind, and unless you're already heavily invested in their products, and especially their toolset, they're almost certainly the wrong choice for a new deployment. VNXs and Compellents just use hybrid flash better.

Which is too bad, because it leaves a serious lack of mature NFS storage out there, and NFS rocks for VMware. Tintri is still months away from being feature-complete even if they hit all their deadlines.
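On the NFS point, for anyone who hasn't tried it: presenting an NFS export to ESXi is basically a one-liner per host. Rough sketch below; the NAS hostname, export path, and datastore label are all placeholders I made up, and this is the classic NFSv3 `esxcli` workflow, not anything vendor-specific:

```shell
# Mount an NFSv3 export as a VMware datastore (run on each ESXi host).
# "nas01", "/vol/vmware_ds1", and "nfs-ds1" are placeholder values.
esxcli storage nfs add --host=nas01 --share=/vol/vmware_ds1 --volume-name=nfs-ds1

# Confirm the datastore shows up as mounted/accessible.
esxcli storage nfs list
```

Part of why NFS is so nice for VMware is that there's no VMFS layer in the way: the array sees individual VMDK files instead of one opaque LUN, which is exactly what makes per-VM snapshots and stats (the Tintri pitch) possible.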
