IOwnCalculus
Apr 2, 2003

You need a RAID controller that supports SATA port multiplication. It looks like this is the one they include normally: http://www.newegg.com/Product/Product.aspx?Item=N82E16816115076

OldSenileGuy
Mar 13, 2001
I'm connecting it to my Mac via this:

http://amzn.com/B00EOM2EPE

Not that exact one, but basically the same thing. Everything I've found tells me that:

1) The latest version of OSX removed RAID striping capabilities from Disk Utility.
and
2) Even when Disk Utility still had RAID striping capabilities, they were limited to RAID0 or RAID1.

I do still have a laptop running 10.8.5, so if the old Disk Utility DID actually have the capability to set up an enclosure as RAID5, I could use that, but something tells me it doesn't work that way and I'm stuck with RAID0.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

OldSenileGuy posted:

I don't know a lot on the topic of RAIDs, and I've done a little research and I think I already know the answer to my question, but I just wanted to make sure I haven't missed anything:

I recently bought this RAID secondhand:

http://www.sansdigital.com/towerraid-plus/tr5mp.html

The RAID was sold to me with 5 drives (4x2TB and 1x3TB). But I only got the enclosure, I didn't get the eSata card that it apparently can come bundled with. Since I didn't get this card, does that mean I can only use this raid as RAID0, meaning that the 5 drives within are read as 1x 11TB drive, but if one of them fails then I lose the whole thing?

I was planning on making it into a RAID5 array by replacing the 3TB drive with a 2TB drive, but I didn't realize I needed this card. From what I can tell, without it I'm stuck with RAID0.

I'm using it with OSX if it matters.

You can build a RAID5 setup with the 3tb disk, it's just going to act like a 2tb one.
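The capacity math behind that answer, as a quick illustrative helper (not any particular tool's behavior):

```python
def raid5_usable(disk_sizes_tb):
    """RAID5 treats every member as if it were the smallest disk,
    then spends one disk's worth of capacity on parity."""
    smallest = min(disk_sizes_tb)
    return smallest * (len(disk_sizes_tb) - 1)

# The enclosure above: 4x2TB plus the 3TB drive
print(raid5_usable([2, 2, 2, 2, 3]))  # 8 -- the 3TB member contributes only 2TB
```

Swapping the 3TB drive for a 2TB one changes nothing, which is the point: the 3TB disk just acts like a 2TB one.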

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

g0del posted:

Oh, and his assertion that "gnarly" parity calculations are what kill your performance during a rebuild is just dumb.

To be fair OpenZFS rebuilding of raidz1-3 is slower than it should be, though it doesn't have anything to do with parity calculations. The algorithm they use for deciding the order to write blocks actually results in a large amount of small and effectively random reads and writes. In a pretty recent version of closed source ZFS they improved the algorithm significantly.

Not a concern for home users, and raid 5/raidz1 should be avoided anyway because of the potential for simultaneous disk failures.

Raid 10/01 is something you use when you actually do need those IOPS for a performance critical application. Unless you are trying to stream uncompressed video or you have more than 15-20 computers playing back video from your NAS - both of which would require 10gbit networking - you can do whatever you like and you won't run into problems.
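Rough numbers behind that claim, using assumed example bitrates (a back-of-envelope sketch, not measurements):

```python
GIGABIT_MB_S = 1000 / 8  # a 1 Gbit/s link carries roughly 125 MB/s of payload

# Assumed per-stream rates
BLURAY_REMUX_MB_S = 5  # ~40 Mbit/s, a heavy compressed video stream
# Uncompressed 1080p, 8-bit RGB, 24 fps:
UNCOMPRESSED_1080P_MB_S = 1920 * 1080 * 3 * 24 / 1e6  # ~149 MB/s

print(GIGABIT_MB_S / BLURAY_REMUX_MB_S)        # ~25 remux streams fit in one gigabit link
print(UNCOMPRESSED_1080P_MB_S > GIGABIT_MB_S)  # a single uncompressed stream alone needs >1 Gbit
```

So until you're juggling dozens of compressed streams, or even one uncompressed one, the network saturates long before any sane pool layout does.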

OldSenileGuy
Mar 13, 2001

IOwnCalculus posted:

You need a RAID controller that supports SATA port multiplication. It looks like this is the one they include normally: http://www.newegg.com/Product/Product.aspx?Item=N82E16816115076

Even assuming I had a PCI slot to plug that into (I don't - I'm using an iMac and a Macbook Air), is there some software that comes with that to do the actual drive striping? And then once the drive is striped, is it usable on any machine, but I only need that card again if I need to restripe it or rebuild it due to a drive failure?

It seems like I'm stuck with RAID0 in this instance, but I'm still curious how it all works.

Yaoi Gagarin
Feb 20, 2014

g0del posted:

I wouldn't say it's full of poo poo, but it's definitely not talking to you. His arguments basically come down to "You can totally afford to buy 2 hard drives for every one you use" and "You'll need mirrored drives to wring all the IOPS possible out of your array, especially when resilvering". The second definitely doesn't apply to you, and most home users can't afford to throw money at redundancy the way businesses do, so the first probably doesn't apply either.

As for the danger - I've done dozens of resilvers on RAIDZ1* and RAIDZ2 vdevs, both at home and at work. I've never lost a pool during one. People talk about the dangers of getting an URE while rebuilding a regular RAID5 array and somehow think that applies to ZFS. It doesn't. If you did get an URE during rebuild of a RAIDZ array ZFS would simply mark that particular file as unreadable, it wouldn't immediately kill the whole pool.

Your pool will technically be slower with a failed drive and during a resilver, but for home use that won't be noticeable. And as for IOPS, unless you spent way too much money on 10Gb networking throughout your house, your network will be the bottleneck when watching movies, not pool performance.

Oh, and his assertion that "gnarly" parity calculations are what kill your performance during a rebuild is just dumb.

Just stress test your drives first to weed out any bad ones and then set them up in RAIDZ2 in a machine with plenty of RAM.


* It was put in place before I started working there, I would have designed things differently if I'd been around when it was set up.


Desuwa posted:

To be fair OpenZFS rebuilding of raidz1-3 is slower than it should be, though it doesn't have anything to do with parity calculations. The algorithm they use for deciding the order to write blocks actually results in a large amount of small and effectively random reads and writes. In a pretty recent version of closed source ZFS they improved the algorithm significantly.

Not a concern for home users, and raid 5/raidz1 should be avoided anyway because of the potential for simultaneous disk failures.

Raid 10/01 is something you use when you actually do need those IOPS for a performance critical application. Unless you are trying to stream uncompressed video or you have more than 15-20 computers playing back video from your NAS - both of which would require 10gbit networking - you can do whatever you like and you won't run into problems.

Guess I'll go with a raidz2 then, and just save up until I can swing four or six drives at once.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!
With raidz2 it's "chunkier" if you later want to expand it, but it can be more efficient. With RAID 10 with single redundancy you're always getting 50% of the storage you pay for, but if you were to buy 5 drives now and 5 later for two separate raidz2 vdevs you'd get 60%.

Though 10 drives is a lot and 5 per raidz2 vdev is on the low side. 6-8 is the range you want.

Personally I have 8 drives in raidz2 when I could have gone for 6 or 7. I had planned to increase capacity eventually by replacing drives one at a time but by the time I need more space it'll be time to build an entirely new server.
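The efficiency comparison works out like this (illustrative helper, not a ZFS tool):

```python
def raidz2_efficiency(disks_per_vdev, n_vdevs=1):
    """Fraction of raw capacity usable: each raidz2 vdev
    gives up two disks' worth to parity."""
    usable = (disks_per_vdev - 2) * n_vdevs
    return usable / (disks_per_vdev * n_vdevs)

print(raidz2_efficiency(5, n_vdevs=2))  # 0.6 -- two 5-disk raidz2 vdevs, vs a flat 0.5 for RAID 10
print(raidz2_efficiency(8))             # 0.75 -- the 8-disk vdev mentioned above
```

The wider the vdev, the better the efficiency, which is why 6-8 disks per raidz2 vdev is the sweet spot between parity overhead and rebuild exposure.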

BlankSystemDaemon
Mar 13, 2009



OldSenileGuy posted:

RAID on OSX.
For what it's worth, OpenZFS does exist for OSX - and I seem to recall hearing of at least one commercial production use where it's run continuously for quite a number of years now.

Furism
Feb 21, 2006

Live long and headbang
After browsing various online shops, I find the price they ask for consumer-grade NAS boxes quite high given the performance they offer and the features I don't need. I'm mostly interested in write performance (I just don't like waiting longer than I have to when copying large files from my Linux box or Windows client to the NAS), since reads are mostly streaming video or music (so not more than 20 MB/s or so for read, highest possible for write). I don't know much about filesystems (I'm just a network engineer), but apparently the ideal solution for my problem would be ZFS?

For now I have 4x2TB drives in a RAID5 array on an older Synology NAS. What I'd like to do is back up the data (about 3 TB worth) somewhere, delete the array, and use the drives in a new Linux box with ZFS. Apparently RAID-Z1 is the equivalent of RAID5, except that it's handled by the filesystem instead of a controller? Also it's faster? With my 4x2TB drives, how much data would I effectively be able to store, and how many drives can fail (I think it's 1)? Another question I had: does the CPU have any impact on performance, and would a low-power one do the trick? So far I have my eyes on this mini-ITX motherboard sporting an AMD "Ontario" APU. I obviously don't care about all the bullshit features of the motherboard; I care mostly about the 4x SATA ports and the form factor.

Last but not least, what's a typical way of mounting all this in Linux? Do I create the ZFS pool and mount it in some /home directory, while keeping the rest of the OS on a separate drive, or do I create everything as ZFS? Do I want to just go for FreeNAS or do I go for Linux? I'm really just going to use this as a NAS so I think FreeNAS would be good but I know nothing about FreeBSD and I'd still like to be able to write some scripts for pushing logs or cron whatever I might need in the future.

Sorry for the newbie questions, if there's an article I should read somewhere about that feel free to point it out.
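For what it's worth, the 4x2TB RAID-Z1 math: roughly three disks' worth usable (about 6 TB raw, a bit less in practice) and exactly one drive can fail. Creating such a pool on Linux looks something like the sketch below; the device names and dataset names are placeholders, and you'd normally use /dev/disk/by-id paths instead:

```shell
# Build a raidz1 pool named "tank" from four whole disks
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Datasets mount under /tank by default; point one wherever you like
zfs create -o mountpoint=/srv/media tank/media

# Verify the layout and redundancy state
zpool status tank
```

The common setup is to keep the OS on its own small drive and use the pool purely for data; FreeNAS does essentially the same thing under the hood.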

KinkyJohn
Sep 19, 2002

What are the differences between WD red/green/blue/purple again? I know I would want red for storage, but why are the others so terrible again?

Nulldevice
Jun 17, 2006
Toilet Rascal

KinkyJohn posted:

What are the differences between WD red/green/blue/purple again? I know I would want red for storage, but why are the others so terrible again?

Red: Storage drives, equipped with TLER for RAID
Green: Low-power drives; parks heads aggressively, which shortens lifespan.
Blue: Standard consumer drive. 1 or 2 year warranty.
Purple: Video recording drive. You wouldn't use this in a typical storage situation.
Black: High performance drive. 5 year warranty.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
Head parking can be disabled on the greens btw.

phosdex
Dec 16, 2005

Greens are now blues or some poo poo, so you have to be more careful and look at the specs.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Don Lapre posted:

Head parking can be disabled on the greens btw.
WD used to have a wdidle.exe program that worked on Greens to disable TLER as well as increase the idle timeout, but the utility stopped working around the time WD released the Red drives, presumably to push people toward them. That's why I bought Samsung Spinpoints shortly afterward to round out my storage (they could easily be hacked to do what the Greens used to do, although the settings didn't survive a reboot; I can't quite remember, since I set up init scripts to do everything for me). Unless the firmware behavior has changed, I don't think it's possible on WD's Green drives, unfortunately.

phosdex
Dec 16, 2005

Nah wdidle still worked on greens.

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl
I seem to recall seeing something about RAID-5 arrays working best with odd numbers of drives and RAID-6 arrays working best with even numbers... or maybe it was vice-versa. Anyway, is there any truth to that at all?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I checked, and wdidle3 and wdtler are affected by firmware in totally different ways: the idle timer can still be adjusted fine, but TLER is definitely locked out in firmware now.

To re-iterate an important principle for the thread: TLER matters for everyone here, honestly, because it affects software RAID just the same as hardware RAID ("deep recovery mode" is WD's term for the opposite of TLER). If a drive in an array is experiencing errors, your I/O will straight up block on reads and writes to the affected blocks and potentially tank I/O on the whole array. I'm not sure whether ZFS will simply rebuild the data from parity once a timer is reached, but hitting time-outs is never a recipe for decent performance. Idle, in contrast, is about reducing the number of head parks, an orthogonal problem. "Deep recovery" is kind of BS anyway: if your data can't be read or written within 10 seconds, another 20+ seconds is almost never going to make a difference.
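The TLER knob described above corresponds to SCT Error Recovery Control, which smartctl can query and (on drives whose firmware allows it) set; a sketch, with /dev/sdX as a placeholder:

```shell
# Show the current SCT Error Recovery Control (TLER) timers
smartctl -l scterc /dev/sdX

# Set read/write recovery limits to 7.0 seconds (units are tenths
# of a second), the usual choice for drives in an array
smartctl -l scterc,70,70 /dev/sdX
```

The setting typically resets on power cycle, hence the init-script approach mentioned earlier; locked firmware will simply refuse the command.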


The optimum number of disks varies by RAID implementation, and in practice it matters far less for performance than making sure your drives, array, and partition are aligned correctly with each other.

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl
Yeah WDTLER was locked out years ago.

KinkyJohn
Sep 19, 2002

Well, it looks like my supplier has the 3TB Seagate Barracuda 7200 cheaper than the 3TB WD Blue (really a rebadged Green) 5400. Why should I buy WD again?

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
It's not that you should buy WD, it's that you shouldn't buy 3TB Seagates. Anecdotal Backblaze evidence aside, Seagate's got a class action lawsuit going against them for those drives. They're just flat out not reliable.

Nulldevice
Jun 17, 2006
Toilet Rascal
I'm fairly certain that the 3TB Seagates were limited to a specific model/run. The ones made after the Thailand flooding if I'm remembering correctly. Those drives were absolute poo poo and failed at extremely high rates. This is responsible for the high failure rates seen on the Backblaze graphs. The data has simply been skewed by a bad production run. Remove those drives and we'd see a more realistic view of Seagate's current 3TB drives.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
And that's very much possible, but what data shows the drives that were out of the bad batch(es)? I haven't been able to find any. Which means you're limited to somewhat blind guessing. Now, odds are pretty good that any 3TB drives currently on the market are new stock that aren't from those runs, but there's no guarantees of that. It's worth keeping under consideration that it's possible you're buying brand new drives that are technically old stock from those batches, and that they carry an abnormally high failure risk.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Another fun bit to consider in the Seagate vs WD question is Backblaze's note that the Seagates, despite high failure rates, were also the ones most likely to actually throw SMART errors prior to failure, allowing for far easier preemptive replacement, vice having drives fall over dead with no real warning.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

OldSenileGuy posted:

1) The latest version of OSX removed RAID striping capabilities from Disk Utility.
and
2) Even when Disk Utility still had RAID striping capabilities, they were limited to RAID0 or RAID1.

I do still have a laptop running 10.8.5, so if the old Disk Utility DID actually have the capability to set up an enclosure as RAID5, I could use that, but something tells me it doesn't work that way and I'm stuck with RAID0.

Yes, OS X software raid never did anything but 0/1. I think you could layer them and make 0+1 or 10 but don't quote me on that. The ability to create RAID volumes in Disk Utility has indeed been taken away in 10.11 but you can still create them with the "diskutil" tool from the command line, or use raids created by older versions of OS X. Not sure you'd really want to do that since the removal from Disk Utility might be a sign they're planning to deprecate the soft RAID driver in future OS X versions. There's signs in the open source parts of OS X that this feature hasn't been actively maintained for a few years, so I can believe they might want to get rid of it.

If you want software RAID 5 on the Mac, there is a third party package called SoftRAID which is highly regarded and IIRC supposedly written by the same engineer who originally wrote Apple's built in RAID 0/1 driver. However, the edition which supports RAID 5 is a bit expensive.
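For reference, the command-line route looks something like this; the disk identifiers are placeholders, so check `diskutil list` for yours first:

```shell
# Find the member disk identifiers
diskutil list

# Stripe (RAID 0) two whole disks into an AppleRAID set
diskutil appleRAID create stripe MyArray JHFS+ disk2 disk3

# Or mirror (RAID 1) -- stripe, mirror, and concat are all it offers
diskutil appleRAID create mirror MyMirror JHFS+ disk2 disk3
```

Note there is no parity option anywhere in there, consistent with the RAID 0/1-only limitation above.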

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

DrDork posted:

Another fun bit to consider in the Seagate vs WD question is Backblaze's note that the Seagates, despite high failure rates, were also the ones most likely to actually throw SMART errors prior to failure, allowing for far easier preemptive replacement, vice having drives fall over dead with no real warning.

I think that's reading too much into it. It could just be that the common defects in those drives caused them to fail in a way that was reliably detected by SMART, as opposed to the more random defects drives have in general that may or may not be easily predicted.

Mr Shiny Pants
Nov 12, 2012

BobHoward posted:

Yes, OS X software raid never did anything but 0/1. I think you could layer them and make 0+1 or 10 but don't quote me on that. The ability to create RAID volumes in Disk Utility has indeed been taken away in 10.11 but you can still create them with the "diskutil" tool from the command line, or use raids created by older versions of OS X. Not sure you'd really want to do that since the removal from Disk Utility might be a sign they're planning to deprecate the soft RAID driver in future OS X versions. There's signs in the open source parts of OS X that this feature hasn't been actively maintained for a few years, so I can believe they might want to get rid of it.

If you want software RAID 5 on the Mac, there is a third party package called SoftRAID which is highly regarded and IIRC supposedly written by the same engineer who originally wrote Apple's built in RAID 0/1 driver. However, the edition which supports RAID 5 is a bit expensive.

Why not install OpenZFS? I have it running a RAID-Z1 with an L2ARC SSD and it performs very well. I was pleasantly surprised.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Mr Shiny Pants posted:

Why not install OpenZFS? I have it running a RAID-Z1 with an L2ARC SSD and it performs very well. I was pleasantly surprised.

I didn't know that existed before now, but honestly, after a scan through the forum, there are a couple things that scare me: recent reports of kernel panics, and apparently the integration isn't good enough for Time Machine to back up from a ZFS volume yet.

Mr Shiny Pants
Nov 12, 2012

BobHoward posted:

I didn't know that existed before now, but honestly, after a scan through the forum, there are a couple things that scare me: recent reports of kernel panics, and apparently the integration isn't good enough for Time Machine to back up from a ZFS volume yet.

Don't know what you're using but it is plenty stable on Yosemite.

Josh Lyman
May 24, 2009


Yesterday I picked up a 5TB WD My Book and extracted the drive. A normal WD50EZRZ Blue was inside and the data migration went fine.

HOWEVER, when I try to adjust its power management via CrystalDiskInfo to reduce head parking frequency, both AAM and APM are greyed out - I can't make any changes.

I shucked a 5TB Seagate external about a year and a half ago that had a ST5000DM000 and its APM was available to adjust. What gives?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

AFAIK, external drives often have custom firmware that makes them somewhat different from what their model # would lead you to believe.

ChiralCondensate
Nov 13, 2007

what is that man doing to his colour palette?
Grimey Drawer
Someone posted about the ASRock C2550D4I or the 2750 a while back and it stuck in the back of my mind. Now I want to blow some money, but the reviews on Newegg have a rash of people with dead boards. Anyone here have the 2550 and happy with it?

uhhhhahhhhohahhh
Oct 9, 2012
Are there any HDD Stress Testing packages for Synology? I've got 2 new drives and before I set them up in SHR and copy everything from my 1 JBOD and wipe it I wanted to test to make sure they aren't fuckered first.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
You can do an extended SMART test inside Synology.

uhhhhahhhhohahhh
Oct 9, 2012
Is that going to show up mechanical issues from shipping for example though?

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
Yes, if the head is having problems it should be creating bad sectors which will be detected.

movax
Aug 30, 2008

http://www.servethehome.com/asrock-rack-d1540d4u-2t8r-review-full-featured-xeon-d-platform/

So, these finally may not be vaporware! I see Supermicro has announced some new boards that have PCIe x8 slots on them as well, so maybe I can finally get these to upgrade my machine.

Looks like they are listed on Amazon for around $1K -- I wonder what they are supposed to retail at. That's CPU + mobo, but RAM will cost a pretty penny as well.

Mr Shiny Pants
Nov 12, 2012

movax posted:

http://www.servethehome.com/asrock-rack-d1540d4u-2t8r-review-full-featured-xeon-d-platform/

So, these finally may not be vaporware! I see Supermicro has announced some new boards that have PCIe x8 slots on them as well, so maybe I can finally get these to upgrade my machine.

Looks like they are listed on Amazon for around $1K -- I wonder what they are supposed to retail at. That's CPU + mobo, but RAM will cost a pretty penny as well.

That's one expensive setup, but it really does look like a cool board. :)

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I love the 1540 but I'm thinking a 1520 might be adequate for my NAS needs. They look to be about $500: http://www.amazon.com/ASRock-Rack-D1520D4I-Motherboard/dp/B01B9627DQ


necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The uATX board seems like an interesting approach for a storage + compute server if you're tight on space and want 10GbE (those don't look like SFP+ connections, though, so whether you'll really get 10GbE out of it is iffy). I'm a tad bummed that it didn't come with 8x DIMM slots, because functionally it's not all that different from the mini-ITX Xeon Ds already out there besides the addition of the SAS controller. That SAS controller does free up the primary PCI-E slot for a GPU, though. The trouble is, if you want to pull double duty with file operations and crunching through several TB of data per hour (assuming you can get your block storage to push 1 GB+/sec through something like LXC / Docker / ESXi VMXNet vNICs), you'd probably have been looking at two separate, cheaper mini-ITX systems instead. But then you'd need 10GbE between them, which puts you back into the $560+ Xeon D-1520 range while having to spec two physical servers. Hence the 1540 uATX board may well be the better fit for home setups than the lower-cost Xeon Ds.

It's a bit unfortunate the m.2 connectors are SATA, though; it'd have been nice to get NVMe on this for some seriously high-throughput storage setups. I guess enterprise SSDs supporting NVMe are a bit out of reach for the kinds of places that buy a board like this from an OEM, so cutting that might be reasonable.

I do have to wonder whether Facebook's boards use 10GbE SFP+ or RJ45, because the power savings going with SFP+ are pretty significant, maybe 4-5 watts per connection. It might not be justifiable yet, though, even with that power saved, since cabling and switch costs eat into the savings.
