Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

NetApp has an all flash version of both FAS and E-Series that are based on existing platforms and codelines and don't have the same first adopter worries. HP has something similar with 3PAR. There are options to get an AFA without being a guinea pig for an entirely new platform. Internal testing on the all flash FAS has actually shown some incredibly good performance despite not being a platform developed for flash from the ground up. It's definitely competitive with Pure, for instance.

The problem I have with all-flash arrays is that to use them effectively you need features like inline compression or inline deduplication, otherwise you're spending a pile of money just to get the required TB/PB. While plenty of vendors offer these features, some of the large ones (and even the small ones) still feel like rushed-to-market products where full support and features are "coming soon"**** but still far away. XtremIO is honestly a very awesome product; it has a lot going for it and a high level of redundancy. However, lots of the features aren't fully implemented yet, and some arrays have reportedly had full meltdowns; that is concerning.

NetApp: I like it, but the vendor with a monopoly on NetApp in my area is ehhhhh. 3PAR: decent products, but if I have to call support I'd rather get a root canal without painkillers.

Personally I like the idea of all-flash arrays, but I don't think they've quite matured yet, and vendors are throwing too much new tech at them in order to compete in the exploding storage-vendor fiasco that's happening. That's why I'm more or less looking into a hybrid array setup; we won't get the ultra performance of an all-flash SAN, but we maintain a viable cost-per-TB ratio.


Personally I'd love to throw four Nimble CS460 boxes in our DC and call it a day, but the problem is the question of local vendor, onsite tech, and phone support; not to mention documentation, to some extent, even though Nimble doesn't need much.

adorai posted:

I think it would probably perform quite well in RAID 10 for these workloads, but I don't have empirical evidence for it. The thing that most people do not realize about ZFS is that it was not designed with speed as the primary or even secondary feature. It was designed for data integrity and scalability. Speed was bolted on later. It does a very good job of using immense amounts of relatively inexpensive cache to hide this fact.

I think the same thing; we are looking for a vendor on the east coast. I'm trying to track down a reseller, and I'm not sure whether Oracle is one you buy direct from. Might ask my sister if she can locate something for me. Thanks for your help.


adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Dilbert As gently caress posted:

I think the same thing; we are looking for a vendor on the east coast. I'm trying to track down a reseller, and I'm not sure whether Oracle is one you buy direct from. Might ask my sister if she can locate something for me. Thanks for your help.
You can buy direct, and if you know what you want it is a breeze to do so. My VAR couldn't beat the deal we got buying direct from Oracle on anything except shipping.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

adorai posted:

You can buy direct, and if you know what you want it is a breeze to do so. My VAR couldn't beat the deal we got buying direct from Oracle on anything except shipping.

I'd like to buy direct, but my company wants a "pick up the phone and talk to the engineer who installed it" or "the engineer who does the installs and maintains these things" arrangement. That's one of the reasons we haven't gone with Nimble.

I also think it has something to do with us being in this particular market.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Dilbert As gently caress posted:

XtremIO is honestly a very awesome product; it has a lot going for it and a high level of redundancy

Why do you think this? I'm not being glib, I actually want to know what the appeal of XtremIO is, because it seems really half-baked even within the context of the AFA market, where most things aren't fully cooked yet. No non-disruptive scale-out is a huge thing to be missing, and I'm not terribly impressed with engineering that requires a battery backup to be sold with the array just to guarantee data integrity if it loses power. What am I missing? I'd take Pure over XtremIO.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
The "battery backup" thing is silly. They don't trust the average lovely IT department's terrible, flaky UPS. Big whoop.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

Why do you think this? I'm not being glib, I actually want to know what the appeal of XtremIO is, because it seems really half-baked even within the context of the AFA market, where most things aren't fully cooked yet. No non-disruptive scale-out is a huge thing to be missing, and I'm not terribly impressed with engineering that requires a battery backup to be sold with the array just to guarantee data integrity if it loses power. What am I missing? I'd take Pure over XtremIO.

To put this in context, I am not sold on XtremIO in the slightest. I think the idea and the physical setup of each 'brick' is good. The problem with XtremIO, and where it breaks down, is the reliance on software and how little EMC and EMC support are able and willing to support it.

The pro I see, and why I'd consider XtremIO, is this: if you're a 90% Windows environment running VDI (and that covers most medium enterprises), a single brick will do everything you want, both on IOPS and on space. The "not real clones" of VMs are a cool feature, so long as you actually have a competent storage engineer managing it. The 20TB brick plus a rated 3-10x inline dedupe is nice, but sizing your storage assuming more than 30% dedupe/compression savings is dumb.
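(A quick, purely illustrative sizing sketch of that point: the purchase price below is a placeholder, not a vendor figure. It just shows how much a rated 3-10x data-reduction range swings effective capacity and cost per effective TB, and why banking on the optimistic end is risky.)

code:

# Hypothetical sizing sketch (placeholder price, not a vendor number):
# how the assumed data-reduction ratio changes effective capacity and $/TB.
raw_tb = 20.0          # one 20 TB brick, per the post above
price = 300_000.0      # placeholder purchase price, purely illustrative

for ratio in (1.0, 3.0, 5.0, 10.0):
    effective = raw_tb * ratio
    print(f"{ratio:>4.1f}:1 reduction -> {effective:6.1f} TB effective, "
          f"${price / effective:,.0f} per effective TB")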

The battery backups are there because Pure and Nimble do the same thing, except they put the batteries on the DRAM/cache; EMC just said gently caress it and went with dedicated battery units. IIRC Pure and Nimble both put batteries on the DIMMs or in the chassis to preserve RAM state, while EMC just went overkill, I guess because their engineers wanted it on the market faster or something.

Pure is nice, and I actually know an engineer at Pure; they're a great company, but I don't know any sales reps in my area other than through a friend of a friend. Still, when you hear about Pure meltdowns it's unnerving. I do like the 420s: good boxes, and they seem to have more care put into them than some larger vendors rushing poo poo out.

XtremIO has some amazing features promised, but they're sadly still in the works, which is a downer for me. EMC is quickly taking the same route Cisco did with ONE and Nexus: good features, good products, but not everything fully workable or supported at launch.

Personally I agree, Pure over XtremIO, but for the price we're getting XtremIO at, it's ehhhhh. The added EMC support is obviously a plus over Pure's market presence.

three posted:

The "battery backup" thing is silly. They don't trust the average lovely IT department's terrible, flaky UPS. Big whoop.

I don't think XtremIO is really targeting that market, although they do make XtremIO's battery system a selling point. Battery backup versus in-chassis or on-DIMM placement isn't really too concerning for me. Dual UPSes own, even if they take up 4U; gently caress it.

Dilbert As FUCK fucked around with this message at 02:15 on Jul 23, 2014

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

three posted:

The "battery backup" thing is silly. They don't trust the average lovely IT department's terrible, flaky UPS. Big whoop.

Requiring two RU worth of UPS just to gracefully destage so you don't corrupt data during a power outage is overkill and seems like last-minute, ad hoc engineering. It works, but it's a pretty inelegant solution to the problem and does make me wonder about how robust the filesystem underneath it is.

XtremIO has a very rushed-to-market feel to it.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

Requiring two RU worth of UPS just to gracefully destage so you don't corrupt data during a power outage is overkill and seems like last-minute, ad hoc engineering. It works, but it's a pretty inelegant solution to the problem and does make me wonder about how robust the filesystem underneath it is.

XtremIO has a very rushed-to-market feel to it.

Completely agree with you; it was a quick, simple, and cheap-ish solution to get it out there first.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

NippleFloss posted:

Requiring two RU worth of UPS just to gracefully destage so you don't corrupt data during a power outage is overkill and seems like last-minute, ad hoc engineering. It works, but it's a pretty inelegant solution to the problem and does make me wonder about how robust the filesystem underneath it is.

XtremIO has a very rushed-to-market feel to it.

It's not like they're charging you more for their battery brick style as opposed to other styles, so meh.

If you really want to complain about XtremIO, wait until news around what you have to do to update to the next two firmware releases makes its rounds.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

three posted:

poo poo

Do you really want to bring that up? Also, isn't that under NDA?

EMC dropped the ball on the software side but managed to do a decentish job on the HW.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

three posted:

It's not like they're charging you more for their battery brick style as opposed to other styles, so meh.

If you really want to complain about XtremIO, wait until news around what you have to do to update to the next two firmware releases makes its rounds.

I'm guessing it's about the 4K-to-8K block size switch that requires a full wipe of all data?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

I'm guessing it's about the 4K-to-8K block size switch that requires a full wipe of all data?

Isn't the 'rumor' that the whole array has to be offline for this, meaning that on top of the appliance wipe, the storage is down?

That obliterates the whole online-upgrade story? Guess it doesn't matter, but still... them SLAs...

Dilbert As FUCK fucked around with this message at 04:58 on Jul 23, 2014

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Dilbert As gently caress posted:

Do you really want to bring up that? Also isn't that NDA?

EMC dropped the ball on the software side but managed to do a decentish job on the HW.

I didn't mention anything, and you apparently don't know anyway, so either way it looks like I'm good!

three fucked around with this message at 14:19 on Jul 23, 2014

Internet Explorer
Jun 1, 2005





Our first generation VNX5300 also required a 1u battery shipped with the unit. Ask me what happened after I left and they shut it down by unplugging the unit from the SPSes. :allears:

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Internet Explorer posted:

Our first generation VNX5300 also required a 1u battery shipped with the unit. Ask me what happened after I left and they shut it down by unplugging the unit from the SPSes. :allears:

Didn't they also have that 92-day bug where both SPs would shut down?

Internet Explorer
Jun 1, 2005





Dilbert As gently caress posted:

Didn't they also have that 92-day bug where both SPs would shut down?

That was on the newer generation, but yeah I have run into that too. The suggestion was to reboot one SP so that they had different uptimes.

I did also run into another really odd thing with the new generation. Customer had a UPS and PDU, all hooked up properly. Had a flickering power outage and the array took like 8 drives offline. Array uptime was solid, never went down. The fix was to just force those drives back online, array didn't even have to do a consistency check. Took about 6 hours with EMC support to get them to force online the drives and we never got an explanation for the issue.

Not a huge EMC fan. The headaches that I went through early on with our VNX were absolutely insane. And the support was amazingly awful.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Dilbert As gently caress posted:


Pure is nice and I actually know an engineer at pure, they are a great company but I don't know any sales reps in my area other than being the friend of a friend. But when you hear about pure melt downs it's unnerving. I do like the 420's, good boxes and seem to have more care in them than some larger vendors rushing poo poo.

What kind of Pure meltdowns have you heard about?

We're actually looking at a 420 right now.

Wicaeed
Feb 8, 2005
Has anyone heard of/used any Skyera products? The Sr. Engineer at our parent company suggested we buy two Skyera skyEagles instead of going with our current plan to purchase two Nimble CS220Gs.

:catstare:

Yes I seriously just typed that.

Yes, their product would seem to cost over 10x our budget.

orange sky
May 7, 2007

Wicaeed posted:

Has anyone heard of/used any Skyera products? The Sr. Engineer at our parent company suggested we buy two Skyera skyEagles instead of going with our current plan to purchase two Nimble CS220Gs.

:catstare:

Yes I seriously just typed that.

Yes, their product would seem to cost over 10x our budget.

quote:

In English, one of his products, skyEagle, boasts 500 TB in a 1U system, for $1.99 per formatted GB. That is close to one million dollars and may seem expensive, but most competitors are 72u or so, use much more power, and are much more expensive. Furthering the disruptive nature of this with proprietary and advanced compression and deduplication, Skyera achieves 2.5 PB (petabytes) effective storage, with a 5:1 ratio, at a cost of .49 per GB.

Ohhh, up to 2 million bucks for 1PB. That's a generous Sr. Engineer.
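(A quick sanity check of the arithmetic, using only the numbers in the quoted blurb. The straight 5:1 division comes out a bit under the quoted $0.49 per GB, so that figure presumably reflects slightly different assumptions.)

code:

# All figures come from the quoted article; this just re-runs the arithmetic.
raw_gb = 500_000        # 500 TB formatted in a 1U skyEagle
price_per_gb = 1.99     # quoted $ per formatted GB
reduction = 5.0         # quoted 5:1 dedupe/compression ratio

unit_price = raw_gb * price_per_gb
effective_gb = raw_gb * reduction

print(f"one skyEagle: ${unit_price:,.0f} for 0.5 PB raw, "
      f"{effective_gb / 1_000_000:.1f} PB effective")
print(f"straight {reduction:.0f}:1 math: ${unit_price / effective_gb:.2f} per effective GB")
print(f"two units (1 PB raw): ${2 * unit_price:,.0f}")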

Docjowles
Apr 9, 2009

Not that the price isn't insane, but 1 PB of flash storage in 2U? :catstare: That is some density.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Pretty interesting, but I am curious how their software behind everything is.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

bull3964 posted:

What kind of Pure meltdowns have you heard about?

We're actually looking at a 420 right now.

Basically, a large school system in my area got some Pure storage in on a try-and-buy. After putting it in and revving it up, all the Pure storage shat itself, and fun times were not had by any.

Wicaeed
Feb 8, 2005

Moey posted:

Pretty interesting, but I am curious how their software behind everything is.

Honestly their website and lack of real world reviews makes me kind of suspicious that the entire thing is snake oil.

Pile Of Garbage
May 28, 2007



Wicaeed posted:

Honestly their website and lack of real world reviews makes me kind of suspicious that the entire thing is snake oil.

Yeah from the looks of it they've been a start-up since 2012 and only last month managed to secure a partnership with Seagate to combine Xyratex ClusterStor into their products. They've only got one case-study on their website which is for an incredibly vaguely named company that really isn't identified. IMO it just looks like another outfit that wines-and-dines CTOs.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Dilbert As gently caress posted:

Basically, a large school system in my area got some Pure storage in on a try-and-buy. After putting it in and revving it up, all the Pure storage shat itself, and fun times were not had by any.

Do you have any more details? How long ago, did they identify the bug, total data loss, models, scale of implementation?

One of the things attracting us to them right now is that one of our partners (which happens to be driving our data needs right now) currently uses them.

I mean, I'm sure someone has a bad story about every product that's out there (I remember DSLReports badmouthing EqualLogic when an array crashed). This is just a pretty sizable purchase for us, and if this implementation goes badly, there will be blood.

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

bull3964 posted:

Do you have any more details? How long ago, did they identify the bug, total data loss, models, scale of implementation?

One of the things attracting us to them right now is that one of our partners (which happens to be driving our data needs right now) currently uses them.

I mean, I'm sure someone has a bad story about every product that's out there (I remember DSLReports badmouthing EqualLogic when an array crashed). This is just a pretty sizable purchase for us, and if this implementation goes badly, there will be blood.

We use Pure Storage right now and were an early beta site (I came on shortly after the product went GA). It's actually been quite stable for us; our main issue with Pure is that they want to be the Apple of enterprise SSD arrays and tend to treat their customers accordingly. "Oh, you have a problem? Just upgrade to Purity x.y.z!" is not really an acceptable answer in the enterprise storage world. Other than that, we've had a few hiccups: one of the controllers at our backup data center crashed, and we've had various issues with the way they calculate space utilization, which translated into Purity severely throttling the array while we couldn't figure out why, because the UI showed 59% used of 10 TB rather than the 102% of 8 TB we were actually at.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Thanks for the feedback.

Our primary use case for this, initially, is about 3 TB of MongoDB data. It's super compressible data, but there's no native solution for that right now, so inline compression on the storage seems like the way to go. It's a replica set of 3 nodes, so the dedupe is very attractive as well. At least on paper, it seems like we could not only greatly shrink the size of a single datastore but also get a big reduction in total data size from the nodes deduping against each other.
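(A rough, best-case sketch of that "on paper" math; the compression ratio below is an assumed, purely illustrative figure, and cross-replica dedupe rarely hits the ideal in practice.)

code:

# Hypothetical best-case sizing for the MongoDB replica set described above.
# The 6:1 compression ratio is an assumption for illustration, not a measurement.
logical_tb_per_node = 3.0   # ~3 TB of MongoDB data per member
members = 3                 # replica set of 3
compress_ratio = 6.0        # assumed compressibility of the data

host_visible = logical_tb_per_node * members       # what the hosts think they store
after_dedupe = logical_tb_per_node                 # ideal: replicas collapse to one copy
after_compress = after_dedupe / compress_ratio     # inline compression on top

print(f"logical across members: {host_visible:.1f} TB")
print(f"ideal post-dedupe:      {after_dedupe:.1f} TB")
print(f"post-compression:       {after_compress:.2f} TB")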

Encryption at rest is also a necessity at this point, and that's handled natively by the array.

Assuming everything works out well with the storage, the long term plans are to move our IO intensive virtual machines to the Pure storage and our main SQL 2012 R2 database to the Pure storage as well (likely after adding more shelves) since our MSA2000 G2 SAN is getting up there in years.

Adding to the cost for us right now is we have no 10G infrastructure. To this point, our switching has simply been a bunch of HP Procurve 2810 switches with our current Equallogic SANs on their own segregated network on two Dell PowerConnect switches. The goal is to unify storage and network switching for simplification and flexibility while adding 10G capability. So, that's just an added cost on top of everything else. It will be the single largest individual infrastructure purchase we have ever done, so I'm listening to as much as I can.

Docjowles
Apr 9, 2009

We evaluated Pure for use backing a number of MySQL DBs and it tested really, really well. I don't want to throw out specific numbers because I wasn't the one running the benchmarks, but we were seeing outstanding compression ratios. And being flash, write performance crushed spinning disk. We ended up getting quashed from above on making the purchase for $business_reasons, but on pure (heh) technical merits it was a baller device.

For what it's worth we were running it over fibre channel.

If you want I can try to ask for more details from the guys who actually put it through its paces.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

chutwig posted:

We use Pure Storage right now and were an early beta site (I came on shortly after the product went GA). It's actually been quite stable for us; our main issue with Pure is that they want to be the Apple of enterprise SSD arrays and tend to treat their customers accordingly. "Oh, you have a problem? Just upgrade to Purity x.y.z!" is not really an acceptable answer in the enterprise storage world. Other than that, we've had a few hiccups: one of the controllers at our backup data center crashed, and we've had various issues with the way they calculate space utilization, which translated into Purity severely throttling the array while we couldn't figure out why, because the UI showed 59% used of 10 TB rather than the 102% of 8 TB we were actually at.

I'd be interested to hear how your experiences with failover have been. Has the process been non-disruptive for applications?

TeMpLaR
Jan 13, 2001

"Not A Crook"
Project initially scoped for 11K IOPS and 200TB. Hardware bought and delivered. Haven't installed it yet.

Today they said oh wait, actually 30K IOPS and 800TB. Oh boy.

OldPueblo
May 2, 2007

Likes to argue. Wins arguments with ignorant people. Not usually against educated people, just ignorant posters. Bing it.

TeMpLaR posted:

Project initially scoped for 11K IOPS and 200TB. Hardware bought and delivered. Haven't installed it yet.

Today they said oh wait, actually 30K IOPS and 800TB. Oh boy.

Overclocking and winzip should take care of that.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

bull3964 posted:

Do you have any more details? How long ago, did they identify the bug, total data loss, models, scale of implementation?

One of the things attracting us to them right now is that one of our partners (which happens to be driving our data needs right now) currently uses them.

I mean, I'm sure someone has a bad story about every product that's out there (I remember DSLReports badmouthing EqualLogic when an array crashed). This is just a pretty sizable purchase for us, and if this implementation goes badly, there will be blood.


About 3 months ago.

No, the bug was never really ID'd beyond "ooops"; mainly, the company didn't care to ever talk to them again to find out.

Total data loss was substantial; I heard that because they were fully virtualized and had put real dedication into their backup group, they were saved. Scale: 40k+ end users.

I'm not saying stay away from Pure; I still think highly of them, compared to XtremIO. Good products and all, just make sure you test it first. IMO, unless you need 100K IOPS across 125+ TB, find a hybrid array.



Also, guess I should post the full OP for the storage thread Sunday. I'd do the VM thread, but as close as we are to 6.0... EHHHHHH

chutwig
May 28, 2001

BURLAP SATCHEL OF CRACKERJACKS

NippleFloss posted:

I'd be interested to hear how your experiences with failover have been. Has the process been non-disruptive for applications?

Controlled failover during Purity upgrades has always been unnoticeable from an application standpoint. Things got a little wacky when one of the controllers crashed once because of a bug in Purity that caused the target controller to refuse to take over the failed controller's disks immediately, but that was kind of an exceptional circumstance.

bull3964 posted:

Our primary use case for this, initially, is about 3 TB of MongoDB data. It's super compressible data, but there's no native solution for that right now, so inline compression on the storage seems like the way to go. It's a replica set of 3 nodes, so the dedupe is very attractive as well. At least on paper, it seems like we could not only greatly shrink the size of a single datastore but also get a big reduction in total data size from the nodes deduping against each other.

Our principal use case is it hosts a pre-compressed 4+TB PostgreSQL database. There are also two replicas of this database which compress/dedupe a lot less than I'd expect given that PostgreSQL replication is block-level rather than logical, to the point where the replica volumes sometimes exceed the size of the master volume. I don't know whether it's because the database actually is deduped completely and the UI is distributing the complete size of a single copy of the database among the three volumes, or whether it's just not actually that great at dedupe under certain circumstances. I do know that once a month I usually do housekeeping on the replicas by deleting their data volumes and re-cloning from the master, and doing so usually frees up several hundred gigabytes on the array that gradually gets used up again over the course of the month.

tl;dr: sometimes the dedupe doesn't act like you expect but we still have about 31TB provisioned that's deduped down to 5TB, so overall it works pretty well.

chutwig fucked around with this message at 20:04 on Jul 26, 2014

incin
Jul 12, 2012

Windows for Workgroups
Looking for some clarity/help on a LSI MegaRAID question.

I have 24 drives in each server and am trying to make a RAID 10. In the MegaRAID software I see them all as healthy. I go through the configuration wizard and set the following settings:

(Config #1)

1. Create 2 drive groups with 12 drives in each
2. Next screen add the two arrays (drive groups) to a span
3. Next I can set the RAID level I want. So in this case RAID10 and continue on
4. Next I accept the RAID 0 configuration screen
5. Accept the config and see that my RAID10 is healthy

However I think I just made a RAID100 using the two arrays in 1 span.

Or (Config #2)

1. Create 1 drive group with all 24 drives
2. Add the 1 array to a span
3. Select RAID1, continue to next screen
4. Accept the RAID0 configuration
5. See all the drives as healthy

This is my first time working with the LSI MegaRAID controller and I want to make sure I understand what's happening. From the research I've done, my option #2 is the real RAID 10.

Seems like a lot of people get confused by LSI's terms and meanings, or maybe it's just me. :3:

Pile Of Garbage
May 28, 2007



incin posted:

Looking for some clarity/help on a LSI MegaRAID question.

I have 24 drives in each server and am trying to make a RAID 10. In the MegaRAID software I see them all as healthy. I go through the configuration wizard and set the following settings:

(Config #1)

1. Create 2 drive groups with 12 drives in each
2. Next screen add the two arrays (drive groups) to a span
3. Next I can set the RAID level I want. So in this case RAID10 and continue on
4. Next I accept the RAID 0 configuration screen
5. Accept the config and see that my RAID10 is healthy

However I think I just made a RAID100 using the two arrays in 1 span.

Or (Config #2)

1. Create 1 drive group with all 24 drives
2. Add the 1 array to a span
3. Select RAID1, continue to next screen
4. Accept the RAID0 configuration
5. See all the drives as healthy

This is my first time working with the LSI MegaRAID controller and I want to make sure I understand what's happening. From the research I've done, my option #2 is the real RAID 10.

Seems like a lot of people get confused by LSI's terms and meanings, or maybe it's just me. :3:

This takes me back; it's been a while since I've had to mess with LSI. Check out this article, it's pretty straightforward: http://xorl.wordpress.com/2012/08/30/ibm-megaraid-bios-config-utility-raid-10-configuration/

1) Create two drive groups with 12 drives each.
2) Add both drive groups to the span.
3) Select RAID10 and finalise the config.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

incin posted:

Looking for some clarity/help on a LSI MegaRAID question.

I have 24 drives in each server and am trying to make a RAID 10. In the MegaRAID software I see them all as healthy. I go through the configuration wizard and set the following settings:

(Config #1)

1. Create 2 drive groups with 12 drives in each
2. Next screen add the two arrays (drive groups) to a span
3. Next I can set the RAID level I want. So in this case RAID10 and continue on
4. Next I accept the RAID 0 configuration screen
5. Accept the config and see that my RAID10 is healthy

However I think I just made a RAID100 using the two arrays in 1 span.

No, this is how you do it on most controllers: you create multiple RAID 1 sets and then stripe across them.


quote:

Or (Config #2)

1. Create 1 drive group with all 24 drives
2. Add the 1 array to a span
3. Select RAID1, continue to next screen
4. Accept the RAID0 configuration
5. See all the drives as healthy

This is my first time working with the LSI MegaRAID controller and I want to make sure I understand what's happening. From the research I've done, my option #2 is the real RAID 10.

Seems like a lot of people get confused by LSI's terms and meanings, or maybe it's just me. :3:

To me this sounds like you're doing RAID 0+1, where there are two RAID 0 sets mirrored against each other. That's a bit more likely to fail than RAID 10.
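(Setting aside what the LSI firmware actually builds here, the general point about RAID 0+1 versus RAID 10 failure odds can be shown with a quick count over a hypothetical 24-drive layout; this is illustrative only and not controller-specific.)

code:

# Count which two-drive failure combinations lose data for a 24-drive
# stripe-of-mirrors (RAID 10) versus a mirror-of-stripes (RAID 0+1).
from itertools import combinations

N = 24

def raid10_loses_data(a, b):
    # Drives are paired into 12 mirrors; data is lost only if both failures
    # land in the same mirrored pair.
    return a // 2 == b // 2

def raid01_loses_data(a, b):
    # Two 12-drive stripes mirrored against each other; one failure kills a
    # whole stripe, so a second failure in the *other* half loses data.
    return (a < N // 2) != (b < N // 2)

pairs = list(combinations(range(N), 2))
for name, loses in (("RAID 10 ", raid10_loses_data), ("RAID 0+1", raid01_loses_data)):
    bad = sum(loses(a, b) for a, b in pairs)
    print(f"{name}: {bad:3d} of {len(pairs)} two-drive failure combos lose data")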

Pile Of Garbage
May 28, 2007



Dilbert strikes again, confusing the hell out of a relatively simple question.

feld
Feb 11, 2008

Out of nowhere its.....

Feldman

cheese-cube posted:

This takes me back; it's been a while since I've had to mess with LSI. Check out this article, it's pretty straightforward: http://xorl.wordpress.com/2012/08/30/ibm-megaraid-bios-config-utility-raid-10-configuration/

1) Create two drive groups with 12 drives each.
2) Add both drive groups to the span.
3) Select RAID10 and finalise the config.


I'm working with incin and here's the fun part:


Select all 24 drives and choose RAID 1. You'd think it's just a giant RAID 1, right? With 500GB drives you'd end up with only 500GB of storage and a ton of redundancy. Turns out the volume is exactly the same size as if it were a RAID 10 over 12 mirrors (500*12)! All the drives blink when doing writes, so it's striped across all of them.

Two drive groups with 12 drives each? Can't do it -- max of 8 drives per group. We could do 3x 8 drives and RAID 10 over that, but it seems unnecessary.

Check out the bottom answer here: http://serverfault.com/questions/517729/configuring-raid-10-using-megaraid-storage-manager-on-an-lsi-9260-raid-card quoted below:

quote:

It seems that LSI decided to introduce their own crazy terminology: When you really want a RAID 10 (as defined since ages), you need to choose RAID 1 (!). I can confirm that it does mirroring and striping exactly the way I would expect it for a RAID 10, in my case for a 6 disk array and a 16 disk array.

Whatever you configure as RAID 10 in the LSI meaning of the term seems to be more something like a "RAID 100", i.e., every "span" is its own RAID 10, and these spans are put together as a RAID 0. (Btw, that's why it seems you can't define RAID 10 for numbers of disks other than multiples of 4, or multiples of 6 when using more than 3 disks.) Nobody seems to know what the advantage of such a "RAID 100" could be; the only thing that seems certain is that it has a significant negative impact on performance compared to a good old RAID 10 (which LSI, for whatever reason, calls RAID 1).

This is the essence of the following very long thread, and I was able to reproduce the findings I mentioned above: http://community.spiceworks.com/topic/261243-raid-10-how-many-spans-should-i-use


Was hoping someone else here ran across this and could provide another data point / confirmation about this ridiculous use of terminology.


(fyi, we're building servers with 24 500GB SSDs :c00lbert:)
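(A tiny capacity check of that observation, assuming LSI's "RAID 1" over many drives really is striped mirrors, as described above.)

code:

# With 24 x 500 GB drives, the controller's "RAID 1" reports the capacity of
# striped mirrors (RAID 10), not a single 24-way mirror.
drives, drive_gb = 24, 500

naive_mirror_gb = drive_gb                       # literal 24-way RAID 1
striped_mirrors_gb = (drives // 2) * drive_gb    # 12 mirrored pairs, striped

print(f"24-way mirror:           {naive_mirror_gb} GB usable")
print(f"RAID 10 over 12 mirrors: {striped_mirrors_gb} GB usable")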

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

feld posted:

(fyi, we're building servers with 24 500GB SSDs :c00lbert:)

You're doing this with an LSI RAID controller? You're going to hit its limit before you hit the capability of the SSDs.


Pile Of Garbage
May 28, 2007



Yeah, LSI MegaRAID does things in a weird way compared to other, more straightforward controllers (e.g. Adaptec).

What are you installing the drives in? Who sold/designed this build for you and what workloads will you be putting on it? Honestly putting 24 SSDs behind a MegaRAID controller seems really ludicrous.
