Zaepho
Oct 31, 2013

Nukelear v.2 posted:

What is everyone using for PCIe SSD storage? We're going to be rolling out an ElasticSearch cluster and want to run it on SSD. Initially I was planning to just plug some 2.5" drives into hotswap bays, but SSD cards seem to have gotten much more reasonable, and then I won't have to try to use aftermarket/unsupported drives in a Dell.

The two choices I've kind of narrowed down to at 800GB are the Intel 910 for $4k or the Micron P320h for $6500. Dell sells the ioDrive2 directly, but it's $8k and performance is mediocre.

At the office we have a pair of FusionIO cards that absolutely scream. They're very pricey though (we got them as part of a project we did for FusionIO). Currently, to my dismay, we have a bunch of packaging VMs running on them. I'd much rather present them as storage for some of our SQL Servers.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

The_Groove posted:

I don't know anything about SONAS, but GPFS is still an official IBM product so I want to say yes. Their current HPC "storage appliance" offerings are called GSS (GPFS Storage Server?) and use the declustered RAID stuff I mentioned earlier on (now Lenovo's) x3650 servers. We've sent IBM a list of questions about how the Lenovo deal will affect our support and future products, but no response yet.

From what I know, GPFS and SONAS will still be IBM.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Nukelear v.2 posted:

What is everyone using for PCIe SSD storage? We're going to be rolling out an ElasticSearch cluster and want to run it on SSD. Initially I was planning to just plug some 2.5" drives into hotswap bays, but SSD cards seem to have gotten much more reasonable, and then I won't have to try to use aftermarket/unsupported drives in a Dell.

The two choices I've kind of narrowed down to at 800GB are the Intel 910 for $4k or the Micron P320h for $6500. Dell sells the ioDrive2 directly, but it's $8k and performance is mediocre.

We're running a handful of STEC s1120s and they seem OK, but with the HGST acquisition I'm not sure where they fit into HGST's product line.

Comedy answer: The 400GB flash DIMMs IBM is selling in the new X3850s.

Nukelear v.2
Jun 25, 2004
My optional title text

Misogynist posted:

How big a cluster? With the replication and robust failover baked into Elasticsearch, I'd buy the cheapest SSDs I could get my hands on with decent performance if I was going to run more than two nodes. The 910 is great, but way pricy if you don't actually need the best warranty and most reliable hardware you can get.

Smallish, 3 or 4 nodes per datacenter cluster, but I'm likely only going to run 2 shard replicas. In terms of PCIe, the Intel is the cheapest non-consumer unit I've seen; is there anything even cheaper?

The 910 is my 'value' option; the Micron crushes it in performance, but at a 50% price premium I don't know if it's really worth it. Edit: Above that you get the more exotic FusionIO cards etc. that are another 50-200% more, but I doubt they would beat the Micron enough to make it worth the cash.

Nukelear v.2 fucked around with this message at 20:28 on Feb 20, 2014

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Nukelear v.2 posted:

Smallish, 3 or 4 nodes per datacenter cluster, but I'm likely only going to run 2 shard replicas. In terms of PCIe, the Intel is the cheapest non-consumer unit I've seen; is there anything even cheaper?

The 910 is my 'value' option; the Micron crushes it in performance, but at a 50% price premium I don't know if it's really worth it. Edit: Above that you get the more exotic FusionIO cards etc. that are another 50-200% more, but I doubt they would beat the Micron enough to make it worth the cash.

If the cluster was going to be larger with more replicas, I'd actually go with the cheap poo poo consumer cards and treat them as consumables instead of hardware investments.

Are you maxing out memory on those boxes? Elasticsearch loves huge heaps, and that will probably make more difference than the SSDs if your queries tend to follow hot paths. If you do a lot of long-tail queries, ignore me.

Nukelear v.2
Jun 25, 2004
My optional title text

Misogynist posted:

If the cluster was going to be larger with more replicas, I'd actually go with the cheap poo poo consumer cards and treat them as consumables instead of hardware investments.

Are you maxing out memory on those boxes? Elasticsearch loves huge heaps, and that will probably make more difference than the SSDs if your queries tend to follow hot paths. If you do a lot of long-tail queries, ignore me.

That was actually my starting idea but:
a) I don't want to deal with shoving consumer drives into enterprise hardware and the support issues that come with it
b) I'm not going to have enough of these to treat them like cattle, so I can't tolerate a lot of failures
c) from what I've read, due to the nature of Lucene indices all the IOPS end up being random; it loves more IOPS, and PCIe seems cheap-ish now

RAM-wise, we probably won't max out; 128G seems likely. Unless you're going to throw 128G+ of heap at it, losing heap pointer compression above ~32G makes 30G the most popular heap size. Indices are mmapped, so the balance of the RAM will still help performance through the page cache. I have some slight concerns that most people don't run high-grade 'pet' ES nodes, and obscenely large heaps can cause weird GC behavior. I haven't seen anyone make the case for slower storage with tons of RAM; every serious build I've seen uses high-end SSD.
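
For sizing, the rule of thumb I'm working from looks roughly like this (the halve-the-RAM rule and the ~31G compressed-oops cutoff are folklore numbers, not anything official):

code:
# Rough heap sizing sketch: give the JVM about half the box's RAM,
# capped just under the point where compressed oops (4-byte heap
# pointers) stop working. Numbers are rule-of-thumb assumptions.
def es_heap_gb(ram_gb, oops_cutoff_gb=31):
    return min(ram_gb // 2, oops_cutoff_gb)

for ram_gb in (64, 128, 256):
    heap = es_heap_gb(ram_gb)
    print(f"{ram_gb}G box -> {heap}G heap, {ram_gb - heap}G left for the page cache")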

Going to one of their training sessions next month, so I may come back with a whole other plan before we start spending.

Nukelear v.2 fucked around with this message at 15:36 on Feb 21, 2014

Syano
Jul 13, 2005
What can everyone tell me about EMC Vmax? I had a recruiter call me this morning asking about a :yotj: I am working on. They asked me if I had any experience with EMC Vmax. I figure I better bone up on it real fast in case an interview ever happens. I am absorbing all I can online at the moment but figured I'd ask here as well.

luminalflux
May 27, 2005



For some reason my LeftHand won't upgrade. I've got 2 old P4300G2 nodes stuck at 10.5, and they don't realize there's a newer version, even though I can see that 11.0 is available under "View notifications". I even had to manually download and upgrade the CMC, but that won't help.

Here's what it looks like in CMC (screenshot not captured here):

Any ideas on how to get 11.0 installed on these?

Bitch Stewie
Dec 17, 2011
HP haven't officially released the 11.0 upgrade for the G2 systems yet as there was/is an issue with upgrading. Apparently it's a couple of months away.

Apparently there's no issue if you reimage a node from the 11.0 ISO but the online upgrade is a no-go.

(this was told to me today by a StoreVirtual engineer as part of a call we had open about another issue on our SAN).

luminalflux
May 27, 2005



Wow, I thought it would be supported given that P4300G2 is in the release notes.

What's better - admitting the 11.0 nodes to the cluster, or manually upgrading from a DVD?

Bitch Stewie
Dec 17, 2011

luminalflux posted:

Wow, I thought it would be supported given that P4300G2 is in the release notes.

What's better - admitting the 11.0 nodes to the cluster, or manually upgrading from a DVD?

The way I had it explained to me is that there's basically zero benefit in upgrading the G2 to 11.0 right now, since the main thing is adaptive optimisation and the G2 won't do that anyway.

I'd open a ticket with support and go through your scenario with them tbh.

luminalflux
May 27, 2005



Yeah I'm talking to my VAR tomorrow about this definitely.

luminalflux
May 27, 2005



VAR and HP Support say they should have 11.0 for G2s in 2-3 weeks. In the meantime I've added the new 11.0 4330s to the cluster and it's restriping :toot:

MC Cakes
Dec 30, 2008
I'm being asked to put together a backup solution for my lab; unfortunately, I have next to zero experience with storage administration. I inherited these problems from my would-be mentor, who quit partly because he was frustrated with my boss. The IT director also does not get along with my boss.

Question 1:
We currently have about 40TB worth of data spread across 3 servers, and no off-site backup. We're located in sunny California, near a major fault line. Ideally we'd have off-site backups in the event our building falls down, or our server rack falls over, or the vibration shakes a lot of fast-spinning drives. Any great remote storage recommendations for linux? Cold storage?

Question 2:
Assuming an off-site backup isn't approved, would something like a 12-bay Drobo populated with 4TB SATA drives and RAID-6/dual-disk redundancy serve as a decent on-site backup? Would it be relatively safe if it was powered down when it wasn't being written to?

Question 3:
How important is SAS vs SATA? And consumer vs enterprise? I don't think my boss will pay for 40TB of SAS. My boss is also heavily pushing towards using consumer instead of enterprise 'because he spoke to some people at google', but from what I've read about datacenters like backblaze, they don't care about array failure because their network layer handles data migration of failing arrays for them, so there's very little reliance on single arrays or hands on maintenance, which we can't rely on.

Question 4:
How hosed am I?

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

How important is this data?

How long is an acceptable window to restore the data in the event of a data loss?

How much money can you spend on this solution?

How much does the data change? ie. you have 40TB of data, is it static or do you create 1TB of new data a week?

Do you understand you are not google and what works for them doesn't work for other folks?


A: You are very hosed, as I feel the answers coming are "this is mission-critical, life-saving data, we can't afford to be without it, and I have $5,000 to spend".

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

MC Cakes posted:

I'm being asked to put together a backup solution for my lab; unfortunately, I have next to zero experience with storage administration. I inherited these problems from my would-be mentor, who quit partly because he was frustrated with my boss. The IT director also does not get along with my boss.

quote:

Question 1:
We currently have about 40TB worth of data spread across 3 servers, and no off-site backup. We're located in sunny California, near a major fault line. Ideally we'd have off-site backups in the event our building falls down, or our server rack falls over, or the vibration shakes a lot of fast-spinning drives. Any great remote storage recommendations for linux? Cold storage?

If you have VMware you can use VDP-Advanced, which is actually really loving nice; plain VDP is also good, but VDP-A supports replication.

Some questions I have are:
What would be your ideal Recovery Time Objective? If and when a failure happens, how long can the company be down for?
During a failure, how much data can be lost? 15 minutes? 30 minutes? An hour?
What are critical services or servers that need to run in the event of a disaster? Customer portals, accounting info, email?
Can you use a virtual backup appliance, or does your company need a dedicated physical appliance hosting the data?
During the downtime of a primary site, what are the performance requirements and SLAs?
What is acceptable latency and throughput to a DR site? Remember that the farther away it is, the more latency and throughput suffer.
How many of your servers are alike (e.g. Windows/*nix/other)? Many backup products use deduplication or high-level compression, which helps make that 40TB much smaller.
(Maybe don't word it this way, but) what is the value of the data you are backing up if the earth opened up and swallowed your datacenter?
What would be your growth rate so you can size properly?

This will determine a lot. Going with cold storage such as tape is great for long-term retention but can be slow for recovery and return to operations. Depending on what your company needs and the budget, you may find that a beefy 720xd loaded with 2 or 3TB drives, SSDs on the RAID controller for caching, and a bunch of CPU and memory does the job; or you could find you need to rent a rack in a datacenter in Colorado or the like, loaded with a few servers and a NAS hosting replicated storage or backups.

quote:

Question 2:
Assuming an off-site backup isn't approved, would something like a 12-bay Drobo populated with 4TB SATA drives and RAID-6/dual-disk redundancy serve as a decent on-site backup? Would it be relatively safe if it was powered down when it wasn't being written to?

I can't say I would recommend Drobo; also realize that 4TB drives are slow, and RAID 6 carries a larger IOP penalty, making it even slower during writes. Not a huge deal after the first backup if it leverages snapshots or CBT, but if and when you need to fire up that Drobo, realize it will most likely crawl. Chances are it would be okay to power down and run assuming no data is being written; however, I have yet to see someone able to predict disasters (except hurricanes, you know those are coming).


quote:

Question 3:
How important is SAS vs SATA? And consumer vs enterprise? I don't think my boss will pay for 40TB of SAS. My boss is also heavily pushing towards using consumer instead of enterprise 'because he spoke to some people at google', but from what I've read about datacenters like backblaze, they don't care about array failure because their network layer handles data migration of failing arrays for them, so there's very little reliance on single arrays or hands on maintenance, which we can't rely on.

SATA or NL-SAS is acceptable for backups, since you'll generally need the larger capacities; the answers to the questions above will determine exactly what you need. Remember to frame the cost of what the business loses during a disaster for your boss, so he can go to finance and say "when poo poo hits the fan and we are losing $1024 a minute, this backup solution saves us X as opposed to Y." Again, I need to know what your company would consider an acceptable RTO for a disaster.

quote:

Question 4:
How hosed am I?

Not as hosed as you think. Look at what the business needs instead of shooting for "lowest bidder wins," because when a failure happens, that's when you'll be hosed.

Dilbert As FUCK fucked around with this message at 22:28 on Feb 25, 2014

madsushi
Apr 19, 2009

Baller.
#essereFerrari
1) Offsite for that much data would probably be something in the range of thousands of dollars a month, unless you were having someone like Iron Mountain come and pick up an array and rotate the arrays regularly.

2) Stuff is usually safe when it's off. Your big question with a Drobo is throughput: it will take a long time to write that 40 TB, and maybe even longer to get it back.

3) SATA is fine for backups, nothing wrong with that, but you definitely don't want to go with a consumer solution if the backup data is important.

Skipdogg's questions were apt. Boils down to figuring out your RPO (how often do you back the data up, can you lose 24 hours of data?) and RTO (how fast do you need it to be back online) and your budget.

AlternateAccount
Apr 25, 2005
FYGM

MC Cakes posted:

'because he spoke to some people at google'

Can he give me that person's number so that I can call them with my data storage questions?

Docjowles
Apr 9, 2009

MC Cakes posted:

Question 3:
How important is SAS vs SATA? And consumer vs enterprise? I don't think my boss will pay for 40TB of SAS. My boss is also heavily pushing towards using consumer instead of enterprise 'because he spoke to some people at google', but from what I've read about datacenters like backblaze, they don't care about array failure because their network layer handles data migration of failing arrays for them, so there's very little reliance on single arrays or hands on maintenance, which we can't rely on.

This (his, not your) attitude makes me rage so much. "Hurr durr, well, GOOGLE runs just fine without spending a fortune on :airquote: enterprise gear. And if it's good enough for them, it's good enough for us!" Sure, if you have the size, manpower and architecture to take advantage of it. Having me run critical services off a RAID-5 array of 4 ancient SATA disks with no hot spare and an EOL Adaptec RAID card isn't quite the same thing. It "works" but only because I come in one weekend a month to replace the latest dead disk and babysit the rebuild because it fails half the time.

Unsurprisingly I no longer work for that guy!

If the data is important but they can't shell out the capital cost for decent gear, see if they'd go for something like Backblaze or even Amazon S3/Glacier. All those backup services let you mail them drives to seed the initial backup so you don't have to wait a year for 40TB to upload over the office T1.

MC Cakes
Dec 30, 2008

skipdogg posted:

How important is this data?
How long is an acceptable window to restore the data in the event of a data loss?
How much money can you spend on this solution?
How much does the data change? ie. you have 40TB of data, is it static or do you create 1TB of new data a week?
A: You are very hosed as I feel the answers coming are "this is mission critical life saving data and we can't afford to be without it and I have 5,000 dollars to spend".

It's hard to quantify downtime or data value, because we're a small lab at a relatively small research institution. There's no opportunity cost to downtime like there is for a retailer like Amazon, except for our few salaried employees. However, our lab does receive grants for making certain web services available, but I don't think there are downtime stipulations for those grants.

Hopefully the critical sites wouldn't see more than 24 hours of downtime, but I think all the critical stuff might be less than a terabyte or two.

The rest is mostly archived research data, most of which hasn't been touched in three or four years; I don't think many people would care if it took a month to restore. I think it's mostly being kept in case a grant provider wants to audit past experiments.

Our budget is a quickly expiring grant with a remaining ~$50k for equipment, so it's neither trivial nor life-changing. While cloud storage would be ideal, that would be a recurring cost not covered by the grant. My boss would much rather put together some beefy machines with that money, but I'm terrified of what will happen in a 7.0 or greater earthquake.

I'd say we're currently generating data at a rate of maybe a terabyte or two a year. It could potentially be as high as 4TB a year, but I honestly think that's at least a year or two out (our institute recently bought some data-intensive equipment, but running experiments on it has very expensive material costs)

Dilbert As gently caress posted:

What are critical services or servers that need to run in the event of a disaster? Customer portals, accounting info, email?
Can you use a virtual backup appliance, or does your company need a dedicated physical appliance hosting the data?
How many of your servers are alike (e.g. Windows/*nix/other)? Many backup products use deduplication or high-level compression, which helps make that 40TB much smaller.
What would be your growth rate so you can size properly?

It's somewhat low-stakes: Our lab runs RHEL, which I've been put in charge of supporting. The rest of the institute is handled by the IT department. The critical stuff I'm in charge of is mostly things like user auth and access to our NAS, web services with active grants, and our publicly visible institute website (it would be embarrassing and potentially bad if the whole institute's site went down).

What do you mean by virtual backup appliances?

I'd say at least $1M went into generating this data, but that includes something like ten years' worth of research data. That data generates grants, but after they expire... I couldn't put a price on what remains.


E: I should also mention, the only 'backup' we have at the moment is a mirror :suicide:. We have two RAID 10 arrays, which are rsync'd daily. (It is a destructive rsync, so accidental deletions are non-recoverable after 24h.)
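
If nothing else I may swap the destructive mirror for rotating hard-link snapshots with rsync --link-dest; here's a rough sketch of what I have in mind (paths are made up and this is untested):

code:
# Nightly rsync into a dated directory, hard-linking unchanged files
# against the previous snapshot, so accidental deletions stay
# recoverable for a couple of weeks. Paths are hypothetical.
import datetime
import pathlib
import subprocess

SRC = "/data/"                       # hypothetical source tree
DEST = pathlib.Path("/backup/nas")   # hypothetical snapshot root
DAYS_TO_KEEP = 14

DEST.mkdir(parents=True, exist_ok=True)
today = DEST / datetime.date.today().isoformat()
snapshots = sorted(p for p in DEST.iterdir() if p.is_dir() and p != today)
previous = snapshots[-1] if snapshots else None

cmd = ["rsync", "-a", "--delete"]
if previous:
    cmd += ["--link-dest", str(previous)]   # unchanged files become hard links
cmd += [SRC, str(today)]
subprocess.run(cmd, check=True)

# Drop snapshots that have aged out of the retention window.
for old in snapshots[:-DAYS_TO_KEEP]:
    subprocess.run(["rm", "-rf", str(old)], check=True)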

MC Cakes fucked around with this message at 00:10 on Feb 26, 2014

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I have to ask, are you part of NEES?

MC Cakes
Dec 30, 2008

FISHMANPET posted:

I have to ask, are you part of NEES?



That would be too ironic.

MrMoo
Sep 14, 2000

Docjowles posted:

This (his, not your) attitude makes me rage so much. "Hurr durr, well, GOOGLE runs just fine without spending a fortune on :airquote: enterprise gear. And if it's good enough for them, it's good enough for us!" Sure, if you have the size, manpower and architecture to take advantage of it. Having me run critical services off a RAID-5 array of 4 ancient SATA disks with no hot spare and an EOL Adaptec RAID card isn't quite the same thing. It "works" but only because I come in one weekend a month to replace the latest dead disk and babysit the rebuild because it fails half the time.

It'll be nice when BTRFS is finally production-ready, as we might start seeing sensible options like that. I guess the NAS vendors are too scared of FreeBSD to produce anything with ZFS. It's a shame TrueNAS devices are currently $10k+.

feld
Feb 11, 2008

Out of nowhere its.....

Feldman

MrMoo posted:

It'll be nice when BTRFS is finally production-ready, as we might start seeing sensible options like that. I guess the NAS vendors are too scared of FreeBSD to produce anything with ZFS. It's a shame TrueNAS devices are currently $10k+.

The $10k is pretty much the cost of the hardware. And their support is phenomenal. Why would you try to hate on iXSystems?

MrMoo
Sep 14, 2000

The hardware is all enterprise-grade SAS equipment, hence the price; it would be nice to see a cheaper SATA version. Forgo all the expensive multi-pathing SAS support and expensive connectors, drastically simplify the entire hardware configuration (and thus the price), but retain the reliability in software.

I'm only looking for 0.25-0.5 TB of storage, but with ZFS protection, dedup, and SSD read speeds. I might drop iXSystems a note soon.

Amandyke
Nov 27, 2004

A wha?

Syano posted:

What can everyone tell me about EMC Vmax? I had a recruiter call me this morning asking about a :yotj: I am working on. They asked me if I had any experience with EMC Vmax. I figure I better bone up on it real fast in case an interview ever happens. I am absorbing all I can online at the moment but figured I'd ask here as well.

VMAX is Symmetrix. Big honking robust block storage. Quite expensive.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

MC Cakes posted:

It's hard to quantify downtime or data value, because we're a small lab at a relatively small research institution. There's no opportunity cost to downtime like there is for a retailer like Amazon, except for our few salaried employees. However, our lab does receive grants for making certain web services available, but I don't think there are downtime stipulations for those grants.

Hopefully the critical sites wouldn't see more than 24 hours of downtime, but I think all the critical stuff might be less than a terabyte or two.

The rest is mostly archived research data, most of which hasn't been touched in three or four years; I don't think many people would care if it took a month to restore. I think it's mostly being kept in case a grant provider wants to audit past experiments.

Our budget is a quickly expiring grant with a remaining ~$50k for equipment, so it's neither trivial nor life-changing. While cloud storage would be ideal, that would be a recurring cost not covered by the grant. My boss would much rather put together some beefy machines with that money, but I'm terrified of what will happen in a 7.0 or greater earthquake.

I'd say we're currently generating data at a rate of maybe a terabyte or two a year. It could potentially be as high as 4TB a year, but I honestly think that's at least a year or two out (our institute recently bought some data-intensive equipment, but running experiments on it has very expensive material costs)

I personally would go with tape backup. Get that archived research data backed up to LTO-5 or LTO-6 tapes and put them in cold storage somewhere in case you ever need the data again. It's an inexpensive solution and would meet your needs.
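
Back-of-the-envelope on media counts, using native (uncompressed) LTO capacities; the per-cartridge prices are rough guesses on my part, not quotes:

code:
# ~40TB onto tape at native capacities (LTO-5 ~1.5TB, LTO-6 ~2.5TB per
# cartridge, before compression). Media prices are rough guesses.
import math

data_tb = 40
for gen, cap_tb, approx_price in (("LTO-5", 1.5, 25), ("LTO-6", 2.5, 35)):
    tapes = math.ceil(data_tb / cap_tb)
    print(f"{gen}: {tapes} cartridges, roughly ${tapes * approx_price} in media")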

AlternateAccount
Apr 25, 2005
FYGM

skipdogg posted:

I personally would go with tape backup. Get that archived research data backed up to LTO-5 or LTO-6 tapes and put them in cold storage somewhere in case you ever need the data again. It's an inexpensive solution and would meet your needs.

And you can pay for literally decades of storage in advance. I wouldn't even pay for cold storage. Find a long term storage facility that does documents and such that runs underground. Their temperature variances across the year will almost certainly be well within tolerances. You can store a couple dozen tapes in a cardboard box that way for literally a dollar or two a month.

Pile Of Garbage
May 28, 2007



AlternateAccount posted:

And you can pay for literally decades of storage in advance. I wouldn't even pay for cold storage. Find a long term storage facility that does documents and such that runs underground. Their temperature variances across the year will almost certainly be well within tolerances. You can store a couple dozen tapes in a cardboard box that way for literally a dollar or two a month.

Iron Mountain are usually the go-to guys for archival storage. Not sure on their pricing though...

AlternateAccount
Apr 25, 2005
FYGM

cheese-cube posted:

Iron Mountain are usually the go-to guys for archival storage. Not sure on their pricing though...

I wouldn't poo poo in a paper bag and trust Iron Mountain with it, honestly. Unless you need one company to handle a nationwide sized account, I'd advise avoiding them.

Thanks Ants
May 21, 2004

#essereFerrari


AlternateAccount posted:

I wouldn't poo poo in a paper bag and trust Iron Mountain with it, honestly. Unless you need one company to handle a nationwide sized account, I'd advise avoiding them.

I have nothing to add other than to say that's a loving great analogy.

NullPtr4Lunch
Jun 22, 2012
Not to mention, Iron Mountain makes even the simplest of tasks way too drat complicated. I hate their SecureSync website.

You can't just be like: "Take this container and keep it for a week. Do that every week as long as we keep paying you..."

Demonachizer
Aug 7, 2004

NullPtr4Lunch posted:

Not to mention, Iron Mountain makes even the simplest of tasks way too drat complicated. I hate their SecureSync website.

You can't just be like: "Take this container and keep it for a week. Do that every week as long as we keep paying you..."

That is pretty much our setup here except it is for 4 weeks. :confused:

AlternateAccount
Apr 25, 2005
FYGM
I think he's referring to how there has to be a big stupid contract and Schedule A and all that paperwork jazz (and that's not unique to IM, but IM is pretty legendary at making even routine tasks extraordinarily difficult and frustrating).

Also, yes, SecureSync is a complete nightmare.

blindjoe
Jan 10, 2001
I'm not sure if this question should go here or in a home NAS thread that I can't find anymore, but I am trying to figure out the best way to store files for a small business, an industrial engineering and controls company with 25 employees.
We currently have 3 TB of data, most of which is archive. We work off of one NAS, and it backs up to another which is in another building.

I have a bunch of ESX servers, and I have four new 4 TB drives I was going to put into one of them. I was then going to run a Windows machine with an 8 TB drive, move all the files over to it, and let it do backups so we would have previous versions to restore to if people move folders around or delete things by accident. I was then going to copy everything over to the NAS every night.

Is there something better I should be using to have user-restorable files through Windows?
I am trying to figure out more IT stuff, but it's tough in the industrial field as we are 10 years behind all the commercial stuff. Plus this is for internal use, and everyone is used to just storing everything locally.

stop, or my mom will post
Mar 13, 2005

MC Cakes posted:


What do you mean by virtual backup appliances?


Dilbert was referring to things like VMware's VDPA (http://www.vmware.com/au/products/vsphere-data-protection-advanced). It's essentially a pre-configured virtual machine image that includes backup software and storage. VMware VDPA is a good product, although it is unlikely to help here: it doesn't sound like you guys are running VMware, and each VDPA instance only scales to 8TB of storage. Even with the deduplication engine used in VDPA, you are unlikely to be able to shrink 40TB of data down to fit into 8TB.
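
Rough arithmetic on that last point (the dedup ratios are ballpark assumptions of mine, not VDPA specs):

code:
# 40TB into an 8TB appliance needs a 5:1 dedup/compression ratio;
# 2-3:1 is more realistic for mixed research data (my assumption).
data_tb, appliance_tb = 40, 8
print(f"required ratio: {data_tb / appliance_tb:.0f}:1")
for ratio in (2, 3, 5):
    print(f"{ratio}:1 -> {data_tb / ratio:.1f} TB of backend storage needed")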

Have a look at tape. As skipdogg said you probably want to get that research data onto tape, offsite and protected.

KillHour
Oct 28, 2007


This thread is probably my best bet because it's storage heavy, even if we aren't using SANs.

The company I work for is looking into OEMing servers to rebrand as storage appliances for video security (NVRs). I've been put in charge of doing research on how they should be built out, and who we should go through. Right now, I'm leaning towards HP, due to their warranty (and because we've been using them for a while).

I've also been looking at Supermicro, due to their lower cost and ease of rebranding. Also, the largest OEM in the industry (BCD Video) uses HP, so another vendor would help differentiate ourselves.

Anyways, I had a few questions.

For people who have worked with Supermicro, how is their support/reliability? We're a 10-man shop, so we really don't want to have to spend a lot of time on support calls, and since this is for security, these things have to be rock-solid.

For people that have done OEM work in the past, who is easiest to work with? I've done some work with Dell in the past when I worked at Ingram, and it didn't go very well.

Secondly, while most of the systems will be 20TB or less (which I could shove in a DL380 12-bay, no problem), we will probably need to accommodate systems as large as 200TB or more. I could either go with external DAS units or use something like the Proliant 1xSL4540 to get the job done. Is there a good reason to go with one over the other other than cost and rack density? What is the densest system out there outside of SANs? I know Supermicro has a 72 LFF disk system, and I've seen them advertise a 90 LFF disk system (but I can't find it on their website, is it new?).

Also, one of the biggest issues with large camera systems is disk throughput. I see systems all the time that use 6 or 8 15k SAS drives in a RAID 10 just for 24 hours of storage so that they can offload the video to the slower 7200 RPM drives at night when there's less recording happening. Milestone (the VMS we're using) actually requires a "live" array for this reason. Is there a reason not to use SSDs instead of 15k SAS for something like this? It seems less expensive, and if I use a PCIe card like this, I can even save some drive bays.
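
For a sense of scale, here's the rough math I've been doing (the camera count and per-camera bitrate are example numbers of mine, not anything from Milestone):

code:
# Sustained write rate and daily footage for the "live" tier.
# 200 cameras at 4 Mbps each are assumed example figures.
cameras = 200
mbps_per_camera = 4

total_mbps = cameras * mbps_per_camera
write_mb_per_s = total_mbps / 8                   # megabits -> megabytes
tb_per_day = write_mb_per_s * 86_400 / 1_000_000  # MB/day -> TB/day

print(f"~{write_mb_per_s:.0f} MB/s sustained writes, ~{tb_per_day:.1f} TB/day")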

Mostly just throwing my ideas out there and wanted to know if anything I've been thinking is way off base. I'm pretty good with designing these systems, but I wanted to make sure I have a flexible and stable base we can build on.

Edit: I just thought of IBM. They could be a good option, too.

KillHour fucked around with this message at 17:54 on Mar 4, 2014

Thanks Ants
May 21, 2004

#essereFerrari


Dell are pretty big OEMs. All the Avigilon stuff is Dell.

Edit: Whoops, just saw you'd tried that already.

Thanks Ants fucked around with this message at 18:21 on Mar 4, 2014

KillHour
Oct 28, 2007


Caged posted:

Dell are pretty big OEMs. All the Avigilon stuff is Dell.

Edit: Whoops, just saw you'd tried that already.

Yeah, SecurePod was Dell R420 servers and MD1200 DAS units. I've worked with the Avigilon stuff, and setting up the R420 wasn't bad for SecurePod. I just didn't like dealing with Dell, and let's face it, I have a hard time trusting them not to take big projects direct.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

KillHour posted:

Secondly, while most of the systems will be 20TB or less (which I could shove in a DL380 12-bay, no problem), we will probably need to accommodate systems as large as 200TB or more. I could either go with external DAS units or use something like the Proliant 1xSL4540 to get the job done. Is there a good reason to go with one over the other other than cost and rack density? What is the densest system out there outside of SANs? I know Supermicro has a 72 LFF disk system, and I've seen them advertise a 90 LFF disk system (but I can't find it on their website, is it new?).

Also, one of the biggest issues with large camera systems is disk throughput. I see systems all the time that use 6 or 8 15k SAS drives in a RAID 10 just for 24 hours of storage so that they can offload the video to the slower 7200 RPM drives at night when there's less recording happening. Milestone (the VMS we're using) actually requires a "live" array for this reason. Is there a reason not to use SSDs instead of 15k SAS for something like this? It seems less expensive, and if I use a PCIe card like this, I can even save some drive bays.

For the higher-end arrays (200-300TB+), 15k drives are being priced out, squeezed on both ends by 10k and SSD. Milestone video surveillance for casinos :ninja: uses two tiers, with 7.2k being the pretty optimal choice for the lower one. The questions are: do your customers require the extra throughput of SSD over 10k drives, and can they afford the additional cost? In my experience the answers are "no" and "no". However, SSD prices are coming down all the time, so YMMV.
