|
1000101 posted:High-End: Pillar is what you want if you run Oracle.

I challenge your hierarchy. Pillar had features nobody else had when we bought our SAN last year: TWO controllers per brick, QoS on the data (now by APPLICATION!), and the ability to choose where on the disk your data lives (inside = slow, middle = medium, outside = fast). That was always known as "short stroking" a disk, and until Pillar nobody offered it, because once you started short stroking you couldn't use the rest of the disk. Pillar still gives you access to the whole disk. I'm throwing my gloves down and saying Pillar wholeheartedly deserves to be in the High-End. FFS, we paid $40,000 for 2TB. feld fucked around with this message at 16:44 on Aug 29, 2008 |
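The inside/outside placement thing falls out of basic disk geometry: the platter spins at constant RPM, and zoned bit recording packs more sectors into the longer outer tracks, so sequential throughput scales roughly with radius. A back-of-envelope sketch (sector counts per zone are illustrative, not Pillar's actual numbers):

```python
# Why outer tracks are faster: same rotation rate, more sectors pass
# under the head per revolution in the outer zones.
RPM = 7200
SECTOR_BYTES = 512

def throughput_mb_s(sectors_per_track: int) -> float:
    """Sequential MB/s for a track read once per revolution."""
    revs_per_sec = RPM / 60
    return sectors_per_track * SECTOR_BYTES * revs_per_sec / 1e6

inner = throughput_mb_s(600)   # illustrative inner-zone sector count
outer = throughput_mb_s(1200)  # outer tracks hold roughly 2x the sectors

print(f"inner zone: {inner:.0f} MB/s, outer zone: {outer:.0f} MB/s")
```

Classic short stroking just confines a volume to the outer zone and wastes the rest; the Pillar pitch was that the slower inner zones stay usable for lower-tier data.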
# ¿ Aug 29, 2008 16:35 |
|
spoon daddy posted:truth that. We needed the oomph of top tier netapps (17 62xx clusters) and had the $$$. How much did that cost?!
|
# ¿ Feb 16, 2012 16:04 |
|
Oh, has anyone lately mentioned that Pillar is overpriced poo poo? OK, Pillar is overpriced poo poo. Might as well be in the title. I hated it 3 years ago and our customers have pretty much all dumped them as well. Hilarious. Nice try, Ellison.
|
# ¿ Mar 30, 2012 23:38 |
|
Goon Matchmaker posted:Coincidentally, the tech was stabbed in the parking lot after he pulled an all nighter trying to fix the SAN when it broke for the millionth time. Am I a bad person for laughing at this?
|
# ¿ May 29, 2012 01:36 |
|
Someone earlier was talking about DRBD active/active, GFS, and NFS, and it wasn't clear whether NFS was layered on top of that stack. For the record, you can't do that, because of file-locking issues with NFS on top of this stack. You'll get kernel panics, and eventually you'll find the Red Hat docs saying the configuration isn't supported at all.
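For reference, the dual-primary DRBD piece of that stack looks roughly like this (DRBD 8-era syntax; resource, host, and device names are made up). Getting this part right isn't the problem -- the unsupported part is exporting the GFS filesystem on top of it over NFS:

```
resource r0 {
    startup {
        become-primary-on both;            # active/active: both nodes primary
    }
    net {
        allow-two-primaries;               # required for a cluster FS like GFS
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
    }
    on node-a { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
    on node-b { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
}
```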
|
# ¿ Jun 2, 2012 17:30 |
|
adorai posted:With an oracle zfssa with two heads, two shelves can get you 30TB usable easily, have great performance, fit in 6u, and cost well under $100k. It will support CIFS, NFS, RSYNC, ZFS replication, FTP, SFTP, and pretty much anything else that is unix-y. It comes with enough CPU that you can do on-the-fly compression and actually improve your performance as you reduce IO.

Why are there no L2ARC SSDs, only SSDs for the ZIL?
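For anyone conflating the two SSD roles: on a plain ZFS box they are separate vdev types with different jobs, added separately (device names here are made up). The ZIL device only ever absorbs synchronous writes, so it can be small but should be mirrored; the L2ARC is a read cache that can be lost without data loss:

```
# ZIL (separate log device): absorbs synchronous writes, usually mirrored
zpool add tank log mirror ssd0 ssd1

# L2ARC (cache device): extends the read cache, no redundancy needed
zpool add tank cache ssd2
```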
|
# ¿ Dec 17, 2013 01:35 |
|
MrMoo posted:It'll be nice when BTRFS is in production, as we might start seeing such sensible options. I guess the NAS vendors are too scared of FreeBSD to produce anything with ZFS. It's a shame TrueNAS devices are currently $10k+.

The $10k is pretty much just the cost of the hardware. And their support is phenomenal. Why would you try to hate on iXSystems?
|
# ¿ Feb 26, 2014 02:17 |
|
cheese-cube posted:This takes me back, been a while since I've had to mess with LSI. Checkout this article, it's pretty straight-forward: http://xorl.wordpress.com/2012/08/30/ibm-megaraid-bios-config-utility-raid-10-configuration/

I'm working with incin and here's the fun part: select all 24 drives and choose RAID1. You'd think you'd get one giant RAID1, right? With 500GB drives, that would mean only 500GB of storage and a ton of redundancy. Turns out the volume is exactly the same size as a RAID 10 over 12 mirrors (500GB x 12)! All the drives blink when doing writes, so it's striping across all of them. Two drive groups with 12 drives each? Can't do it -- there's a max of 8 drives per group. We could do 3x 8 drives and RAID10 over that, but it seems unnecessary.

Check out the bottom answer here: http://serverfault.com/questions/517729/configuring-raid-10-using-megaraid-storage-manager-on-an-lsi-9260-raid-card quoted below:

quote:It seems that LSI decided to introduce their own crazy terminology: When you really want a RAID 10 (as defined since ages), you need to choose RAID 1 (!). I can confirm that it does mirroring and striping exactly the way I would expect it for a RAID 10, in my case for a 6 disk array and a 16 disk array.

Was hoping someone else here has run across this and could provide another data point / confirmation of this ridiculous use of terminology. (FYI, we're building servers with 24 500GB SSDs.)
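The capacity arithmetic that gives the game away, as a quick sketch (numbers from the build above):

```python
# On these MegaRAID controllers, "RAID 1" spanned over 2n drives is really
# n mirrored pairs with a stripe across them, i.e. a conventional RAID 10.
DRIVES = 24
DRIVE_GB = 500

pairs = DRIVES // 2           # 12 mirrored pairs
usable_gb = pairs * DRIVE_GB  # striped across the pairs, not 500 GB total

print(f"{pairs} mirrors -> {usable_gb} GB usable")  # 12 mirrors -> 6000 GB usable
```

A true 24-way RAID1 would leave 500 GB usable; seeing 6000 GB is exactly the RAID 10 result.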
|
# ¿ Jul 29, 2014 15:45 |
|
PCjr sidecar posted:You're doing this with a LSI RAID controller? You're going to hit its limit before you hit the capability of the SSDs.

400,000 IOPS is the limit of the RAID controller in question.

cheese-cube posted:Yeah LSI MegaRAID does things in a weird way compared to other more straight-forward controllers (i.e. Adaptec).

It's a server from iXSystems. We're building a new direct-attached-storage virtualization cluster that will blow your balls off.

edit: with redundant 10gbit we can live migrate 100GB VMs between servers extremely quickly if necessary. I mean, at 1-2GB/s... you do the math. feld fucked around with this message at 16:06 on Jul 29, 2014 |
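Doing the math on that live-migration claim, assuming the copy is purely bandwidth-bound at the quoted 1-2 GB/s (real migrations also re-copy dirtied pages, so treat this as a floor):

```python
# Time to move a 100 GB VM image at the quoted effective throughput.
VM_GB = 100

for gb_per_s in (1.0, 2.0):
    seconds = VM_GB / gb_per_s
    print(f"at {gb_per_s:.0f} GB/s: ~{seconds:.0f} s to move {VM_GB} GB")
```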
# ¿ Jul 29, 2014 16:02 |
|
cheese-cube posted:That's with a single controller and 8 SSDs in RAID0. I hope you bought more controllers. We don't need to go beyond the 400,000 IOPS limitation, so that's not a concern. Dilbert As gently caress posted:I would try this. Citrix Xenserver. I wish it was FreeBSD based, but it's not. Again, we can't do 12 RAID1 volumes because the controller caps out at 8 drives in a single volume and 8 total volumes.
|
# ¿ Jul 29, 2014 16:12 |
|
Dilbert As gently caress posted:Oh ha, misread the 8 total volumes. Derp, yeah 3x8 RAID 10 drives would not be ideal, but it would give you the same total capacity as RAID 10, just not at the individual-datastore level. Are 2TB volumes too small for what you are trying to do?

This has been an internal debate of mine. I'd love to not have multiple local storage volumes in case anyone begs for a large amount of disk space, but I don't think I can have my cake and eat it too.
|
# ¿ Jul 29, 2014 16:21 |
|
NippleFloss posted:This seems wildly optimistic for a real world performance estimate. Hope all of your data is small block read or write only synthetic IO.

It doesn't matter what block sizes the VMs read or write, because the hypervisor only issues IO at its own fixed block size. You tune for the hypervisor, not the VMs inside it. feld fucked around with this message at 17:17 on Jul 30, 2014 |
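The relationship both sides are arguing about is just IOPS times I/O size. A sketch of what the same 400k IOPS controller ceiling means at different I/O sizes (block sizes illustrative):

```python
def mb_per_s(iops: int, block_kb: float) -> float:
    """Bandwidth implied by an IOPS figure at a given I/O size."""
    return iops * block_kb / 1024

# The same IOPS limit means very different bandwidth depending on
# the I/O size actually hitting the controller.
for block_kb in (4, 64):
    print(f"{block_kb} KB I/O at 400k IOPS: {mb_per_s(400_000, block_kb):.0f} MB/s")
```

Which is why a "400k IOPS" figure measured with small synthetic I/O doesn't translate directly to real mixed workloads.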
# ¿ Jul 30, 2014 17:14 |
|
Dilbert As gently caress posted:Ah okay, I can get that. iSCSI is good, but doing CIFS and NFS straight off the box does have nice advantages. The last place I worked, any time I brought up iSCSI it was shot down because "iSCSI is a broken protocol," even though I still to this day don't understand how; maybe I am missing something.

iSCSI: raw block storage over the network, accessed with raw SCSI commands, as fast as Fibre Channel, and supports multipath.

SAS: raw block storage over the PCIe bus, accessed with raw SCSI commands, as fast as Fibre Channel, and supports multipath.

I don't have any idea what your coworkers were smoking, but iSCSI is fine. iSCSI is less complicated and MUCH MUCH easier to tune for high IOPS and high throughput. Also, VAAI exists, which probably covers their concerns. Now AoE -- that's something I stare at () and wonder what people are thinking, because it is so hard to fit into most infrastructures.

edit: cleanup feld fucked around with this message at 15:42 on Jul 31, 2014 |
# ¿ Jul 31, 2014 15:39 |