feld
Feb 11, 2008


Pillar is what you want if you run Oracle

1000101 posted:

High-End:
EMC Symmetrix
HDS AMS1000

Upper-mid:
NetApp 6080
3Par
Pillar Data

I challenge your hierarchy. Pillar had features nobody else offered when we bought our SAN last year: TWO controllers per brick, QoS on the data (now by APPLICATION!), and the ability to choose where on the platter your data lives (inner tracks = slow, middle = medium, outer tracks = fast). That technique was always known as "short stroking" a disk, and until Pillar nobody offered it, because once you short-stroked a drive you couldn't use the rest of it. Pillar still gives you access to the whole disk.
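
Back-of-envelope on why the outer zones are faster (Python sketch; the radii are illustrative numbers for a 3.5" platter, not Pillar specs):

code:
# At constant RPM with roughly constant areal density, sequential throughput
# scales with track circumference, i.e. with radius.
inner_radius_mm = 20.0   # assumed innermost usable radius
outer_radius_mm = 46.0   # assumed outermost radius

ratio = outer_radius_mm / inner_radius_mm
print(f"Outer zone ~{ratio:.1f}x the sequential throughput of the inner zone")
# => roughly 2x, which is why placement on the platter matters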

I'm throwing my gloves down and saying Pillar wholeheartedly deserves to be in the High-End.


FFS we paid $40,000 for 2TB. :cry:



feld
Feb 11, 2008


spoon daddy posted:

Truth that. We needed the oomph of top-tier NetApps (17 62xx clusters) and had the $$$.



How much did that cost?!

feld
Feb 11, 2008


Oh, has anyone mentioned lately that Pillar is overpriced poo poo? OK: Pillar is overpriced poo poo. Might as well put it in the title. I hated it three years ago, and our customers have pretty much all dumped them as well.

Hilarious. Nice try, Ellison.

feld
Feb 11, 2008


Goon Matchmaker posted:

Coincidentally, the tech was stabbed in the parking lot after he pulled an all-nighter trying to fix the SAN when it broke for the millionth time.

Am I a bad person for laughing at this? :ohdear:

feld
Feb 11, 2008


Someone earlier was talking about DRBD active/active with GFS and NFS, and it wasn't clear whether NFS was layered on top of that stack.

For the record, you can't do that: NFS file locking on top of DRBD active/active + GFS causes kernel panics, and eventually you'll find the Red Hat docs saying the combination isn't supported at all.

feld
Feb 11, 2008


adorai posted:

With an Oracle ZFSSA with two heads, two shelves can easily get you 30TB usable, have great performance, fit in 6U, and cost well under $100k. It supports CIFS, NFS, rsync, ZFS replication, FTP, SFTP, and pretty much anything else that is unix-y. It comes with enough CPU that you can do on-the-fly compression and actually improve performance as you reduce IO.

HA pair of controllers with 10GbE
48 disks:
- 2x SSD for ZIL
- 46x 2.5" 1TB 7200 RPM disks for storage:
  - 2x as hot spares
  - 44x in 4 vdevs of 9 data + 2 parity disks each, giving ~32TB usable

It will be flash-accelerated and lightning fast while remaining reasonably inexpensive ($30k for the controllers, ~$20k for each shelf with disks, $10k for the write cache). You can get an identical unit and replicate offsite with the built-in replication features. The biggest problem is that you will like it so much you'll start using it for more than you originally planned (this happened to us and we filled it up pretty quickly).
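
The capacity math on that layout roughly checks out (quick Python sketch; the ~10% ZFS/filesystem overhead figure is my assumption, not an Oracle number):

code:
disk_tb = 1.0
vdevs = 4
data_disks_per_vdev = 9            # each vdev: 9 data + 2 parity = 11 disks, 44 total
spares = 2                         # plus 2 hot spares = 46 spinners, plus 2 ZIL SSDs = 48 bays

raw_data_tb = vdevs * data_disks_per_vdev * disk_tb   # 36 TB before overhead
usable_tb = raw_data_tb * 0.9                         # ~10% overhead assumed
print(f"~{usable_tb:.0f} TB usable")                  # ~32 TB, matching the quote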

Why are there no L2ARC SSDs, only SSDs for the ZIL?

feld
Feb 11, 2008


MrMoo posted:

It'll be nice when BTRFS is finally in production, as we might start seeing sensible options like that. I guess the NAS vendors are too scared of FreeBSD to produce anything with ZFS. It's a shame TrueNAS devices are currently $10k+.

The $10k is pretty much just the cost of the hardware. And their support is phenomenal. Why would you try to hate on iXSystems?

feld
Feb 11, 2008


cheese-cube posted:

This takes me back; it's been a while since I've had to mess with LSI. Check out this article, it's pretty straightforward: http://xorl.wordpress.com/2012/08/30/ibm-megaraid-bios-config-utility-raid-10-configuration/

1) Create two drive groups with 12 drives each.
2) Add both drive groups to the span.
3) Select RAID10 and finalise the config.


I'm working with incin and here's the fun part:


Select all 24 drives and choose RAID 1. You'd think that gives you one giant mirror, right? With 500GB drives you'd expect only 500GB of usable space and a ton of redundancy. Turns out the resulting volume is exactly the size you'd get from a RAID 10 over 12 mirrors (500GB x 12 = 6TB)! All the drives blink during writes, so it's striping across all of them.
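
Quick sanity check on those numbers (Python sketch; the mirror-pairing behaviour is inferred from the drive-blinking observation above, not from LSI docs):

code:
drive_gb = 500
drives = 24

plain_raid1_gb = drive_gb                    # what textbook RAID 1 semantics would give: 500 GB
mirror_pairs = drives // 2                   # what the controller appears to build: 12 mirrored pairs
striped_raid10_gb = mirror_pairs * drive_gb  # striped across the pairs: 6000 GB

print(plain_raid1_gb, striped_raid10_gb)     # the volume the controller reports matches 6000 GB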

Two drive groups with 12 drives each? Can't do it -- the controller maxes out at 8 drives per group. We could do 3x 8 drives and RAID 10 over that, but it seems unnecessary.

Check out the bottom answer here (quoted below): http://serverfault.com/questions/517729/configuring-raid-10-using-megaraid-storage-manager-on-an-lsi-9260-raid-card

quote:

It seems that LSI decided to introduce their own crazy terminology: when you really want a RAID 10 (as defined for ages), you need to choose RAID 1 (!). I can confirm that it does mirroring and striping exactly the way I would expect for a RAID 10, in my case for a 6-disk array and a 16-disk array.

Whatever you configure as RAID 10 in the LSI meaning of the term seems to be more like a "RAID 100", i.e., every "span" is its own RAID 10, and these spans are put together as a RAID 0. (Btw, that's why it seems you can't define RAID 10 for numbers of disks other than multiples of 4, or multiples of 6 when using more than 3 disks.) Nobody seems to know what the advantage of such a "RAID 100" could be; the only thing that seems certain is that it has a significant negative impact on performance compared to a good old RAID 10 (which LSI for whatever reason calls RAID 1).

This is the essence of the following very long thread, and I was able to reproduce the findings I mentioned above: http://community.spiceworks.com/topic/261243-raid-10-how-many-spans-should-i-use


Was hoping someone else here ran across this and could provide another data point / confirmation about this ridiculous use of terminology.


(fyi, we're building servers with 24 500GB SSDs :c00lbert:)
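
To make the quoted serverfault answer concrete, here's a Python sketch of how LSI's span nesting works out for 24 drives (layout inferred from that quote, not from LSI documentation):

code:
# LSI "RAID 10" with spans = a stripe (RAID 0) over spans, where each span is
# itself mirrored + striped -- what the quote calls a "RAID 100".
def lsi_raid10_layout(drives, span_size):
    """Return spans, each a list of mirrored drive-ID pairs."""
    spans = []
    for start in range(0, drives, span_size):
        ids = list(range(start, start + span_size))
        spans.append([(ids[i], ids[i + 1]) for i in range(0, span_size, 2)])
    return spans

for span_size in (12, 8):                       # 2 spans of 12, or 3 spans of 8
    spans = lsi_raid10_layout(24, span_size)
    usable_gb = sum(len(s) for s in spans) * 500
    print(f"{len(spans)} spans of {span_size}: {usable_gb} GB usable")
# Capacity is the same either way (6000 GB); only the nesting differs.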

feld
Feb 11, 2008


PCjr sidecar posted:

You're doing this with an LSI RAID controller? You're going to hit its limit before you hit the capability of the SSDs.

400,000 IOPS is the limit of the RAID controller in question.
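
For context on why the controller, not the flash, is the ceiling (sketch; the per-SSD figure is an assumed ballpark for SATA SSDs of that era, not a spec for these drives):

code:
controller_iops_limit = 400_000   # stated limit of this RAID controller
ssds = 24
per_ssd_iops = 50_000             # assumed ballpark for a contemporary SATA SSD

aggregate_iops = ssds * per_ssd_iops
print(f"{aggregate_iops:,} IOPS of raw flash vs a {controller_iops_limit:,} IOPS controller")
# ~1.2M IOPS of flash behind a 400k IOPS controller: the controller saturates
# long before the SSDs do, which is PCjr sidecar's point.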

cheese-cube posted:

Yeah, LSI MegaRAID does things in a weird way compared to other, more straightforward controllers (e.g. Adaptec).

What are you installing the drives in? Who sold/designed this build for you, and what workloads will you be putting on it? Honestly, putting 24 SSDs behind a MegaRAID controller seems really ludicrous.

It's a server from iXSystems. We're building a new direct-attached-storage virtualization cluster that will blow your balls off.

edit: with redundant 10Gbit we can live migrate 100GB VMs between servers extremely quickly if necessary. I mean, at 1-2GB/s... you do the math.
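
Doing the math, for the record (quick Python sketch using the 1-2GB/s figure above):

code:
vm_size_gb = 100
for rate_gb_s in (1.0, 2.0):                  # the quoted throughput range
    print(f"{vm_size_gb / rate_gb_s:.0f} s at {rate_gb_s} GB/s")
# => 100 s at 1 GB/s, 50 s at 2 GB/s for the bulk copy alone; real live
# migrations add iteration and switchover overhead on top of that.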


feld
Feb 11, 2008


cheese-cube posted:

That's with a single controller and 8 SSDs in RAID0. I hope you bought more controllers.

We don't need to go beyond the 400,000 IOPS limitation, so that's not a concern.

Dilbert As gently caress posted:

I would try this.

Create 12 RAID 1 volumes with 2 drives each; after making the 12 RAID 1 volumes, go back and create a RAID 0 to stripe across them.


Wait, doesn't FreeNAS/iXsystems prefer that you pass the hard drives straight through to the OS so ZFS can have control over the drives and perform its own calculations and such?

Citrix XenServer. I wish it were FreeBSD-based, but it's not.

Again, we can't do 12 RAID1 volumes because the controller caps out at 8 drives in a single volume and 8 total volumes. :v:

feld
Feb 11, 2008


Dilbert As gently caress posted:

Oh ha, misread the 8-total-volumes part. Derp. Yeah, 3x 8-drive RAID 10s would not be ideal, but they'd give you the same total capacity as one big RAID 10, just not at the individual-datastore level. Are 2TB volumes too small for what you're trying to do?

This has been an internal debate of mine. I'd love not to have multiple local storage volumes, in case anyone begs for a large amount of disk space, but I don't think I can have my cake and eat it too.
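
For what it's worth, the capacity math on the 3x8 option (Python sketch using the 500GB drives in this build):

code:
drive_gb = 500
groups = 3
drives_per_group = 8                                  # controller limit: max 8 drives per group

per_volume_gb = (drives_per_group // 2) * drive_gb    # RAID 10: half the drives are mirrors -> 2000 GB
total_gb = groups * per_volume_gb                     # 6000 GB total, same as the big "RAID 1" volume
print(f"{per_volume_gb} GB per volume, {total_gb} GB total")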

feld
Feb 11, 2008


NippleFloss posted:

This seems wildly optimistic for a real-world performance estimate. Hope all of your data is small-block, read-only or write-only synthetic IO.

It doesn't matter what block sizes the VMs read or write, because the hypervisor only issues IO at a fixed block size. You tune for the hypervisor, not the VMs inside it.



feld
Feb 11, 2008


Dilbert As gently caress posted:

Ah okay, I can get that. iSCSI is good, but doing CIFS and NFS straight off the box does have nice advantages. At the last place I worked, any time I brought up iSCSI it was shot down because "iSCSI is a broken protocol," even though to this day I still don't understand how. Maybe I'm missing something.

iSCSI:

raw block storage over the network, accessed with raw SCSI commands, as fast as Fibre Channel, and it supports multipathing

vs

SAS:

raw block storage over a local SAS link (behind a PCIe HBA), accessed with raw SCSI commands, as fast as Fibre Channel, and it supports multipathing


I don't have any idea what your coworkers were smoking, but iSCSI is fine. iSCSI is less complicated and MUCH, MUCH easier to tune for high IOPS and high throughput. Also, VAAI exists, which probably covers their concerns.

Now AoE -- that's something I stare at (:stonk:) and wonder what people are thinking, because it is so hard to fit into most infrastructures.

edit: cleanup

