in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

NippleFloss posted:

StorageTek arrays use Sun Common Array Manager (CAM), which is also pretty bad. CAM also includes a CLI that's pretty goofy, but it's still better than the GUI.

That covers all of the equipment I've worked with except for an old IBM DS4000 series FAStT, years ago, which used the generically named Storage Manager and which I remember very little about except that it was easy to understand and pretty unremarkable.

God, CAM is liquid dog poo poo; thankfully, our last Sun array is rolling out the door in a few weeks. If the DS4000 used the same rebadged LSI/Engenio software as some of the other LSI resellers, it was pretty inoffensive.



in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

KS posted:

I'm looking to add some secondary storage for D2D backups -- about 25TB, and I don't want to take more than 2-3U and ~$30k. Not opposed to roll-your-own for this, but only if it saves a bunch of money. Performance needs to not suck. It needs to export as either NFS or iSCSI.

I know I can get a 380 G8 with 12x 3TB drives in it, but what would I run on it? Nexenta adds $10k to the bill, and that's a hard pill to swallow. I don't know enough about the collection of OpenSolaris forks to know if they're at a point where they're usable for something like this with ZFS, or if I should just go with something I know better.

Also looking at the Nexsan E18, and if anyone has other suggestions I'd love to hear them.

We've been pretty happy with OmniOS, and the support is much cheaper than Nexenta. Just make sure the hardware is on the Illumos (née OpenSolaris) HCL; the Dell R720XD's H310 is not as of this writing.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Jadus posted:

I'm looking at doing this same thing, and am leaning towards something like this guy did using a SuperMicro SC847 - 36 drive chassis and FreeNAS.

Configured half full with 18 x 3TB drives is about $8,000 from CDW, and would give over 40TB of usable space.

It's definitely a 'roll your own' solution, and I'm not sure how fast it would be, but for that price the capacity can't be beat.

We've been happy with that chassis w/ OpenSolaris for a few years for light/medium workloads.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

cheese-cube posted:

I've worked with V7000's of varying configurations for a while now and I agree they are brilliant devices (If you can afford them).

On the subject of IBM midrange storage systems, has anyone had a look at their new DCS3700 and DCS9900 high-density systems yet? 60 HDDs in 4U is pretty drat crazy. Also you can get the DCS9900 with either 8 x FC 8Gb or 4 x InfiniBand DDR host interfaces, which is insane.

The 9900 is a rebadged DataDirect Networks 9900, which is two generations behind DDN's current offering. You see them in HPC and broadcast; they do some interesting things to keep those pipes full that make them less well suited for general SAN workloads.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Rusty Kettle posted:

I am trying to rebuild a RAID array that is managed by Linux using 'fdisk' and 'mdadm' and I am having problems. I am a grad student who I guess is now the IT guy for our research group, so I have little to no experience with this kind of thing. I am very nervous and google isn't helping much.

Is this a good place to ask questions? If not, where should I go?

Probably the Linux Questions thread would be a better choice; is it degraded but accessible or completely offline?
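
If you want to gather some info before posting there, these are all read-only and safe to run (the device names below are just examples; substitute whatever /proc/mdstat shows for your array):

code:

# Overall software RAID status
cat /proc/mdstat

# Detailed state of one array (md0 is an example name)
mdadm --detail /dev/md0

# Superblock info from a member disk (again, an example device)
mdadm --examine /dev/sdb1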

Also, do whatever you can to not be your research group's IT guy.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

ghostinmyshell posted:

I don't know where my boss keeps finding these Supermicro/Nexenta vendors, but it's like playing Bingo asking them about their shortcomings and they're like, "How do you know all of this???"

I'd be interested in hearing about some of these, if you wouldn't mind.

Unrelated tip of the day: always have well-defined required performance numbers in your RFP and make sure they pass before you pay the vendor.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Misogynist posted:

OpenIndiana and FreeBSD are similar enough to administer that it's really not worth the hassle. If you want a Linux userland with a kernel that better supports ZFS, look into Illumian (formerly Nexenta Core).

FWIW, there's been some OpenIndiana drama recently. The project lead quit:

http://article.gmane.org/gmane.os.openindiana.devel/1578

There are other server-focused Illumos derivatives that are more actively developed.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

the spyder posted:

I racked one of my new internal 336TB (raw) ZFS SAN's this week and realized that my field engineers are not configuring hot spares.

My question is what drive groupings would you use for such a large storage pool?
We currently use Raidz2 with 7 disk sets (16). I configured this one with Raidz2, 6 disk sets (18) and 4 hot spares (one per JBOD.)

How many controllers? How many CPUs? How many heads? What do the paths from controller to enclosure to enclosure look like? Do you have a SSD ZIL? What does your workload look like? What are your availability requirements? SAS or SATA?

We've run as few as 6 disks and as many as 45 disks in vdevs.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

the spyder posted:

One controller, dual X5650s, 192GB RAM, Mellanox dual-port 40Gb IB. Second cold spare racked, identical config. Two LSI HBAs (2 external ports each), four cables run to four JBODs (despite having dual expander backplanes, we don't use them), 28 disks each. No SSD ZIL. (I know, I know. Not my design; changing it on the next revision to include a Fusion-io write/ZIL SSD.)

Workload is: giant NAS. Used for storage of large (1TB) datasets consisting of thousands of ~30MB files until moved to processing. Storing processed info for ~ XX days and then rinse/repeat. Currently have ~20 of these deployed.

Uptime is not written into any of our contracts, but it going down is not good for our field guys.
SATA due to price.

Thinking of switching for now to a RAIDZ2 8+2 config with 2 hot spares. I have not had these deployed long enough to get lifespan figures, but our stuff gets beat up quickly due to the environment it is in.

8+2 w/ 2 hot spares seems reasonable; you may want to keep a few cold spares on hand too, and consider striping each vdev across the controllers. For the ZIL, we're switching to ZeusRAM on our new servers.
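
Roughly what that layout looks like at pool-creation time, as a sketch only (disk names are placeholders, and the log devices stand in for whatever ZIL SSD you end up with):

code:

zpool create tank \
    raidz2 d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 \
    raidz2 d10 d11 d12 d13 d14 d15 d16 d17 d18 d19 \
    spare s0 s1 \
    log mirror slog0 slog1

Repeat the raidz2 lines for however many 10-disk vdevs you have, and spread each vdev's members across the controllers/JBODs so losing one path doesn't take out a whole vdev.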

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Confused_Donkey posted:

Reposting this from the HP support forum as I'm running out of ideas here.

We recently reactivated one of our older EVA SAN's (5000) and am running into issues with cache batteries.

The EVA sat for awhile, so of course the batteries died over the years. All 4 are marked as bad currently on both controllers.

I put a call into HP to 2 day me some new batteries, however after 2 weeks of (we don't know where they are) or (I'm sorry I cannot update you on your order status) I said to hell with it and cracked them open.

Thankfully I noticed that the Hawker Energy 2V 5Ah batteries are still made (albeit under the EnerSys brand now).

So we rebuilt two of the packs, batteries are fully charged, nice and happy.

Slap one in and it's instantly marked as FAILED (the voltage reads perfectly, however); the second one came up as working, but the charger never kicked in and let it die overnight.

I just rebuilt another cell, and once again marked as FAILED.

Looking closely at the PCB I noticed a freaking EEPROM which I'm guessing tracks battery charging history (Thanks!)

Does anyone have any ideas? This system is out of support, we are not buying another one, HP won't sell me the batteries without some long and drawn out excuse as to why they cannot find them at the warehouse, and now it seems I cannot even replace the cells myself because of the way the boards are designed (Assuming the EEPROM onboard is the issue)

I don't suppose HP hid an option somewhere on the controller to reset battery status and let me use my EVA that I paid for?

http://www.ebay.com/sch/i.html?_odkw=eva+5000&_osacat=0&_from=R40&_trksid=p2045573.m570.l1313&_nkw=eva+5000+battery&_sacat=0


Or one of the secondhand enterprise vendors like:
http://www.xsnet.com/

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Novo posted:

I could use some information from any ZFS experts in this thread.

I'm trying to put together a pair of ZFS boxes to export iSCSI volumes for Debian virtualization servers. Rather than spend 2-3x as much on some kind of ZFS appliance or other exotic HA solution, I figured I'd build out a pair of 12-bay chassis we already have lying around. My plan is to use zrep (http://www.bolthole.com/solaris/zrep/) to send snapshots to the secondary server every minute or so (I work at a private college that can handle small amounts of data loss, but we need to be back up and running quickly). My question is -- what free OS has the best ZFS implementation these days?

I started with SmartOS and found a lot to like, except that it doesn't support certain features that zrep wants to use (recv doesn't support -o). Right now I'm waiting for Solaris 11 to finish installing so I can see how that feels, but some of the reading I've been doing suggests that the ZFS in Solaris is lagging behind Illumos, which is confusing to me because Wikipedia shows Solaris 11 as version 34 with SmartOS as version 28. I'm not even looking at Linux implementations of ZFS because it seems way too risky.

If you have a ZFS setup that you're proud of I'd love to hear about it. Or explain to me why I will regret this and should ask my boss for twice as much money for an appliance.

OmniOS (http://omnios.omniti.com/); it's OSS, and the support contracts are very reasonable.

ZFS versions are meaningless since Oracle and Illumos maintain two different forks. Stay away from Solaris 11; there's a hilarious data loss bug that still isn't fixed in their free release.

I think FreeBSD's ZFS implementation is pretty stable, and the Linux ports are making surprisingly decent progress.
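
Whichever platform you pick, the guts of what zrep does is just periodic incremental send/receive, something like this (pool, dataset, and host names here are made up; zrep layers locking, failover properties, and snapshot cleanup on top of it):

code:

# On the primary, assuming a previous snapshot @rep-001 already exists on both sides
zfs snapshot tank/vols@rep-002
zfs send -i tank/vols@rep-001 tank/vols@rep-002 | \
    ssh backup-host zfs recv -F tank/vols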

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

the spyder posted:

I support 1.4PB of ZFS-based storage across our two sites. In the field, we have over 4PB of ZFS storage. Our older systems are all based on NexentaStor and everything in the last year on OpenIndiana, both running napp-it 0.8h. Half is linked via InfiniBand to our cluster; the other half is archival/local storage. I just ordered another 100TB of disks/chassis: one will be for consolidating our VMs' local storage and is completely SSD-based, the other two are for local backup storage. I am very eager to see what 22 x 240GB Intel 520 series will do performance-wise. The other two are 48 3TB 7200rpm spindles. We have suffered some pretty bad disk failures thanks to the dance club downstairs (24 failed drives in 12 months), but not once lost data. If you have any specifics, let me know. Everything is based on Supermicro dual quad-core Xeons, 96GB+ of RAM, LSI HBAs, and Seagate/Intel drives.

How's the IB support working for you? NFS over IPoIB, I assume?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

hackedaccount posted:

You should really drop the OpenIndiana guys a line about this. I'm sure they would enjoy writing a whitepaper about how their ZFS can survive a dance club.

There really aren't any OpenIndiana guys anymore; the OpenSolaris community's moved to Illumos-based distributions. You can see tumbleweeds drifting through OI's mailing lists and hg repos.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Misogynist posted:

Technically, the most popular Illumos distributions are fairly single-purpose (Nexenta, SmartOS). Illumian's as much of a ghost town as OpenIndiana. The only active general-purpose platform development seems to be centered around OmniOS, which is starting to look like abandonware as well.

FWIW, OmniOS is being actively developed by OmniTI, and it's cash-flow positive for them.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Misogynist posted:

Man, I basically lifted the "starting to look like abandonware" part from a different part of the post without noticing, sorry. What I meant is that the platform is new and the userland is pretty neglected, at least compared to what people might expect from a modern-day Linux/BSD. I do hope that some more momentum springs up behind it later; OmniTI being the Red Hat of Illumos would be a pretty cool thing for Oracle to have to fight with.

AIX is being actively developed by IBM and it's cash-flow positive for them, but it doesn't mean it has good long-term prospects as a general-purpose OS in most people's datacenters. Not intended as a slam -- Theo Schlossnagle is a really loving smart guy and OmniOS has some really great stuff going on -- but it's a product with very specific use cases. If you want something to make you more productive as an admin out of the box, I'm not at a point where I could comfortably recommend it to people who aren't looking to engineer their whole app stack from the ground up. It's basically the Slackware of Illumos distributions, and people are using it because nobody's released a Debian yet with enough momentum to keep the core bits of the platform from bitrotting.

I give him a lot of credit for the shitload of Illumos expertise they have behind what they do, though, and I hope they're successful at keeping it running (not least of all because of what the other Illumos distro commit histories look like).

NP; I think that's all pretty reasonable, and thanks for clarifying re: abandonware. I do think that the minimalism isn't as big of a deal in a NAS role if you're reasonably comfortable with ZFS and Unix.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

FISHMANPET posted:

We've got a faculty member looking to buy 100-200TB, which to us is "big data." They're looking at some of those ridiculous SuperMicro servers with drives on both sides of the chassis, sold by a SuperMicro reseller we've worked with in the past (aka they would sell us a complete warrantied system). We also have a Compellent SAN, but we don't think getting a pile of trays is going to be cost effective (though it might be, we're still getting quotes).

Those SuperMicro chassis are fine for that use case. What OS/platform is the reseller offering?

quote:

Are there any good inexpensive SANs for big data? We don't need high performance or a lot of features because this will mostly be static data, we just want the system to be manageable and expandable.

Not in the SuperMicro price range.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Nukelear v.2 posted:

What is everyone using for PCIe SSD storage? We're going to be rolling out an ElasticSearch cluster and want to run it on SSD. Initially I was planning to just plug some 2.5" drives into hotswap bays, but SSD cards seem to have gotten much more reasonable, and then I won't have to try to use aftermarket/unsupported drives in a Dell.

The two choices I've kind of narrowed down to at 800GB are the Intel 910 for $4k or the Micron P320h for $6500. Dell sells ioDrive2 directly, but it's $8k and performance is mediocre.

We're running a handful of STEC s1120s and they seem OK, but with the HGST acquisition I'm not sure where they fit into HGST's product line.

Comedy answer: The 400GB flash DIMMs IBM is selling in the new X3850s.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

KillHour posted:

This thread is probably my best bet because it's storage heavy, even if we aren't using SANs.

The company I work for is looking into OEMing servers to rebrand as storage appliances for video security (NVRs). I've been put in charge of doing research on how they should be built out, and who we should go through. Right now, I'm leaning towards HP, due to their warranty (and because we've been using them for a while).

I've also been looking at Supermicro, due to their lower cost and ease of rebranding. Also, the largest OEM in the industry (BCD Video) uses HP, so another vendor would help differentiate ourselves.

Anyways, I had a few questions.

For people who have worked with Supermicro, how is their support/reliability? We're a 10-man shop, so we really don't want to have to spend a lot of time on support calls, and since this is for security, these things have to be rock-solid.

Supermicro's support process is time-intensive. Good results for end users require an engaged, active, and knowledgeable reseller.

quote:

For people that have done OEM work in the past, who is easiest to work with? I've done some work with Dell in the past when I worked at Ingram, and it didn't go very well.

Secondly, while most of the systems will be 20TB or less (which I could shove in a DL380 12-bay, no problem), we will probably need to accommodate systems as large as 200TB or more. I could either go with external DAS units or use something like the Proliant 1xSL4540 to get the job done. Is there a good reason to go with one over the other other than cost and rack density? What is the densest system out there outside of SANs? I know Supermicro has a 72 LFF disk system, and I've seen them advertise a 90 LFF disk system (but I can't find it on their website, is it new?).

JBODs plus external servers are going to be a bit less dense than a single server with a ton of drives. How many are you planning on buying? There are other vendors that sell those form factors, but they generally don't deliver an HP/Dell/IBM support experience.

quote:

Also, one of the biggest issues with large camera systems is disk throughput. I see systems all the time that use 6 or 8 15k SAS drives in a RAID 10 just for 24 hours of storage so that they can offload the video to the slower 7200 RPM drives at night when there's less recording happening. Milestone (the VMS we're using) actually requires a "live" array for this reason. Is there a reason not to use SSDs instead of 15k SAS for something like this? It seems less expensive, and if I use a PCIe card like this, I can even save some drive bays.

Be mindful of SSD write lifecycles.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

CrazyLittle posted:

Are SAS ssd's significantly better in that regard? How are the big vendors doing it?

Somewhat better, not immune. Better manufacturing/testing tolerances, better controllers, more spare cells at listed capacity, overprovisioning.

Dilbert As gently caress posted:

To be fair they have gotten much better in the past few years, and generally a mechanical drive will fail before an SSD reaches its write limit.

IIRC, when SSDs hit that write limit you can still do reads, so it's a matter of copying data off.

A rule of thumb is that the better eMLC SSDs can handle 10 full writes per day over 3 years. I'd be concerned if KH's live tier is sized close to his daily write size, especially if using cheaper SSDs.

Failure mode depends on the drive and manufacturer. We've seen failing writes, failing reads, the drive disappearing entirely, or taking the entire SAS expander or controller offline (that was a fun one.)
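
To put that 10-writes-a-day rule of thumb in numbers, here's the back-of-envelope math for a hypothetical 800GB eMLC drive (the capacity is just an example):

code:

# 800GB drive, 10 full drive writes/day, over the 3-year window
echo $((800 * 10 * 365 * 3))   # 8760000 GB written, i.e. roughly 8.8PB of total endurance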

KillHour posted:

We're very knowledgeable and more than capable of building and supporting systems. That being said, we have about 10,000 cameras out there, so we need to have reliable systems that we won't have to touch often and can be fixed quickly if they do break, or we'll drown. Also, we don't have the luxury of billing by the hour for support. Most of our customers pay for a yearly support contract, and we need to handle those quickly or we can lose a lot of money.

If it takes a week to process an RMA, that won't work for us.

Unless you maintain an inventory of cold spare parts or work with a third party fulfillment service you probably don't want to go with SuperMicro.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

KillHour posted:

Wouldn't that mean that if I fill the drive once per day (which is pretty much what would happen), then I should expect about 30 years? I... don't see that being a problem. Also, I'd obviously use RAID 1. I just don't want to be going out there every year to swap out a drive.

It's not necessarily a linear relationship and it's not a guarantee, but an average. IIRC, that's with the enterprise Intel drives. Generally speaking, an expected workload within an order of magnitude of predicted lifetime would make me nervous. Check out the SSD megathread OP for more info on lifespan, write averaging, etc.

I've heard some anecdotal evidence that some consumer hardware RAID cards don't play well with SSDs, but I don't know much about that.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Modern Pragmatist posted:

Right now we're looking at 128 GB of RAM but that's purely what the hardware guys recommended. We have no problem spending more if necessary.

The ARC works well; give it as much RAM as you can cram in.

quote:

The setup doesn't necessarily require an SSD for L2ARC or SLOG; we were just thinking this would give us a pretty big performance boost. Would it be better to use mirrored 15k spinning disks for these? Is there a more widely used option that we haven't considered?

Heh, no; don't bother with 15K drives in any case. I'm going to disagree with thebigcow slightly: if you're really concerned about degraded performance in case of an SSD failure, it's better to add them as two separate devices and let ZFS balance the load across the two. Better peak performance, and in case of failure your performance is only as bad as it would be when mirrored. Frankly, the SLOG and L2ARC failure modes we've seen are clean enough (not in Oracle Solaris (lol), but Illumos is OK) that I wouldn't be too worried, especially for a departmental server. Sync writes just land on the main pool (a big deal for NFS clients, mostly), and the read cache falls back to ARC only.

L2ARCs mostly make sense when your average working set is larger than RAM but small and nonrandom enough that the L2ARC can effectively cache warm data.

My favorite SLOG device: http://www.hgst.com/solid-state-storage/enterprise-ssd/sas-ssd/zeusram-sas-ssd
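
In zpool terms, "two separate devices" just means adding both SSDs as independent cache vdevs rather than a mirror; a rough sketch with placeholder device names:

code:

# L2ARC: two independent cache devices; reads get spread across both, and a
# failed cache device just means those reads come from the pool/ARC instead
zpool add tank cache ssd0 ssd1

# SLOG, if you do add one; mirror it if you're paranoid about sync writes
zpool add tank log mirror slog0 slog1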

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

parid posted:

I hear a lot about paralyzed file systems.

Heh.

Lustre is pretty popular. GPFS if you do business with IBM. Ceph if you're interested in emerging technologies.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

parid posted:

I don't have a lot of details. I wouldn't take responsibility for a whole cluster, but I might be asked to help them with storage and I'd like to be useful if they do. Existing systems are getting long in the tooth and haven't been meeting their needs well. It's all NFS right now. I wouldn't be surprised if it was time to step up into something more specific to their use.

I have gotten the E-series pitches before, so I'm familiar with their architecture. NetApp is a strong partner for our traditional IT needs, so I'm sure we will be talking to them at some point. I don't want to just assume they are going to be the best fit due to our success with a different need.

What kind of interconnects do you see? Between processing nodes and storage? Sounds like a lot of block-level stuff. Is that just due to performance drivers?

No, block-level isn't really appropriate for HPC systems; multi-node access at the block level falls over badly for more than a few nodes. Lustre and GPFS expose a POSIX-like file system to all clients. For the end user, a GPFS or Lustre installation will look like an NFS or local file system.

A standard Lustre* installation would look like E-series** JBODs, connected via SAS to dedicated Lustre servers, which are exposed via the HPC interconnect*** to the main cluster processing nodes.**** If you're using Lustre, you'll also need a metadata server with some high-IOPS disks connected to the HPC interconnect.

* (or GPFS native raid)
** (48-72 3.5" 7200 RPM SATA drives and enclosure with a passive backplane and SAS expander)
*** (Infiniband or 10GbE if you're lucky, GbE if you're not)
**** which will either use a native client (kernel modules or FUSE) or NFS exported from a server that has the native client mounted.

I would strongly suggest looking at preconfigured Lustre or GPFS appliances; conveniently, NetApp sells one:

http://www.netapp.com/us/media/ds-3243.pdf

but you can get them elsewhere.
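
For a sense of what "looks like a local file system" means in practice, a Lustre client mount is a one-liner (the MGS address, LNET name, filesystem name, and mount point here are all made up for illustration):

code:

mount -t lustre 10.0.0.1@o2ib:/scratch /mnt/scratch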

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Dilbert As gently caress posted:

Isn't SmartOS just a limited VAAI and Vm-open-tools ready appliance?

no

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Dilbert As gently caress posted:

Yeah, SmartOS is based on OpenSolaris; how the gently caress do you not know if the VAAI plugins or tools load in?

Sorry I am pissed off, because apparently internal IT doesn't push your limits.

I'd love to hear about the SmartOS VAAI plugins :allears:, and what you mean by 'just an appliance.'

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Dilbert As gently caress posted:

Okay want to live demo it?

On SmartOS, I'm not going to be able to do much more than what you'll find on YouTube; it's not the Illumos distro I primarily use (or that I'd recommend) for storage. I can talk more about OmniOS; my employer has a significant amount of storage using ZFS on OmniOS that we export via NFS and Samba.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

pixaal posted:

Are these actual keys?

Shameful. Forums user FCKGW would like a word with you.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

feld posted:

(fyi, we're building servers with 24 500GB SSDs :c00lbert:)

You're doing this with an LSI RAID controller? You're going to hit its limit before you hit the capability of the SSDs.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

feld posted:

400,000 IOPS is the limit of the raid controller in question.

And you've got ~1-2M IOPS worth of SSD behind it. Which is fine if:

feld posted:

We don't need to go beyond the 400,000 IOPS limitation, so that's not a concern.

but my original statement still stands.
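
That ~1-2M figure is just back-of-the-envelope math, assuming somewhere around 50-90k random IOPS per consumer SATA SSD (an assumption; it varies a lot by model and workload):

code:

echo $((24 * 50000))   # 1200000
echo $((24 * 90000))   # 2160000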


goobernoodles posted:

Anyone know how to get IBM on the phone without waiting for a callback for a SAN issue? loving waiting for a callback.

Call your sales rep/account manager. (This only works if you spend enough with IBM.)

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Declustered RAID is so cool.

Number19 posted:

I'd even out the raid groups and have a third hot spare.

3 is way too many hot spares for ~45 disks. 1 per 30 to 60 drives is normal.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

I know we've talked about this before, but what does an entry-level Nimble CS210 or CS220G go for?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

NippleFloss posted:

So this is what a storage hipster looks like?

spinning rust is warmer

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

the spyder posted:

I figured I would toss this here: I know it's not EMC, NetApp, or Nimble, but I'm prepping 1.8PB of whitebox storage this week. Any testing you guys would like to see before it ships next week? They are very simple and designed to be nothing more than a rather large NAS. The only consumer parts are the SSDs. This goes alongside a whitebox processing array containing 240 cores, 2.5TB RAM, and 20 Xeon Phis. I'm going to be building another system in the near future, with E5 v3/SAS3/Intel SSDs, so I'm curious how they will compare.

Processing specs (dedicated system for running a custom application in):
2U Supermicro 24 x 2.5" chassis
Dual E5 v2 Xeons
512GB RAM
LSI 9300-8e HBAs
Supermicro HBAs for the 24 2.5" bays
Mellanox ConnectX-3 40Gb InfiniBand
OS/ZIL/L2ARC SSDs
(20) 1TB Samsung 850 Pro SSDs
(90) 4TB WD RE4s
(2) 847-series 45-bay JBODs

Archival specs (handles raw data/finished product) x2:
Same as above minus the (24) 1TB SSDs and with double the 9300-8e/JBODs + 180 4TB RE4s each.

(Initially I did not want to use the RE4s, but you try buying 480 4TB HDDs with two weeks' notice. Initial choice was a 4TB SAS drive. Same for the SSDs; Intel S3700s were the first choice, but unavailable.)

Linux, FreeBSD, or Illumos/Solaris?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Misogynist posted:

Not sure whether you're gay-shaming or slut-shaming your SAN or just implying that your SAN is spending company time on activities it ought not to

nah, just that operating near capacity brings most SANs to their knees

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

adorai posted:

If you are going to roll your own, I'd look into openindiana.

Stay the gently caress away from OpenIndiana; it is dead and it sucks.

If you want an OpenSolaris derivative, use one of the Illumos distributions. Alternatively, the FreeBSD ZFS port is pretty mature, and the ZFS on Linux port isn't bad for experimental/development work, if you're comfortable with either one of those.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

thebigcow posted:

OpenIndiana is an Illumos distribution, are you thinking of something else?

When was the last OpenIndiana release? Almost two years ago. Tumbleweeds on the dev-list, etc.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

adorai posted:

There is development on the hipster branch.

https://github.com/OpenIndiana/oi-userland/graphs/contributors

Unless I'm misreading this, almost all of the updates in the last 6 months are from one sysadmin in Russia.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

cheese-cube posted:

Just want to quote NippleFloss because holy poo poo so many people gloss-over FC based on throughput numbers yet they don't actually understand how resilient FC is as a protocol. You could have an FC fabric with >200 unique domain IDs and still have sub-millisecond RSCN propagation times.

FC as a protocol is really amazing and I wish more people had the chance to work with it (Disclaimer: I'm a massive IBM/Brocade shill but lol if you have to do zoning with QLogic product or whatevs).

FC is great; if you've got the budget for diamonds, why go with a polished turd?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Mr Shiny Pants posted:

If you have diamonds why not go whole hog and get Infiniband?

It is probably the best interconnect, too bad it gets glossed over.

I like InfiniBand; it's so weird that the per-port cost is lower than 40GbE, but the enterprise fabric management is nowhere near what you get with FC.

adorai posted:

sometimes children get their hands chopped off because of the diamond trade. :colbert:

A small price to pay to escape Bob Metcalfe's abomination.


in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

If you have Ceph experience or are interested in Ceph and have a resume, send me a PM.
