|
You need a RAID controller that supports SATA port multiplication. It looks like this is the one they include normally: http://www.newegg.com/Product/Product.aspx?Item=N82E16816115076
|
# ? Apr 17, 2016 02:20 |
|
|
I'm connecting it to my Mac via this: http://amzn.com/B00EOM2EPE Not that exact one, but basically the same thing. Everything I've found tells me that: 1) The latest version of OSX removed RAID striping capabilities from Disk Utility. and 2) Even when Disk Utility still had RAID striping capabilities, they were limited to RAID0 or RAID1. I do still have a laptop running 10.8.5, so if the old Disk Utility DID actually have the capability to set up an enclosure as RAID5, I could use that, but something tells me it doesn't work that way and I'm stuck with RAID0.
|
# ? Apr 17, 2016 02:33 |
|
OldSenileGuy posted:I don't know a lot on the topic of RAIDs, and I've done a little research and I think I already know the answer to my question, but I just wanted to make sure I haven't missed anything: You can build a RAID5 setup with the 3tb disk, it's just going to act like a 2tb one.
|
# ? Apr 17, 2016 02:36 |
|
g0del posted:Oh, and his assertion that "gnarly" parity calculations are what kill your performance during a rebuild is just dumb. To be fair OpenZFS rebuilding of raidz1-3 is slower than it should be, though it doesn't have anything to do with parity calculations. The algorithm they use for deciding the order to write blocks actually results in a large amount of small and effectively random reads and writes. In a pretty recent version of closed source ZFS they improved the algorithm significantly. Not a concern for home users, and raid 5/raidz1 should be avoided anyway because of the potential for simultaneous disk failures. Raid 10/01 is something you use when you actually do need those IOPS for a performance critical application. Unless you are trying to stream uncompressed video or you have more than 15-20 computers playing back video from your NAS - both of which would require 10gbit networking - you can do whatever you like and you won't run into problems.
|
# ? Apr 17, 2016 02:42 |
|
IOwnCalculus posted:You need a RAID controller that supports SATA port multiplication. It looks like this is the one they include normally: http://www.newegg.com/Product/Product.aspx?Item=N82E16816115076 Even assuming I had a PCI slot to plug that into (I don't - I'm using an iMac and a Macbook Air), is there some software that comes with that to do the actual drive striping? And then once the drive is striped, is it usable on any machine, but I only need that card again if I need to restripe it or rebuild it due to a drive failure? It seems like I'm stuck with RAID0 in this instance, but I'm still curious how it all works.
|
# ? Apr 17, 2016 02:51 |
|
g0del posted:I wouldn't say it's full of poo poo, but it's definitely not talking to you. His arguments basically come down to "You can totally afford to buy 2 hard drives for every one you use" and "You'll need mirrored drives to wring all the IOPS possible out of your array, especially when resilvering". The second definitely doesn't apply to you, and most home users can't afford to throw money at redundancy they way businesses do so the first probably doesn't apply either. Desuwa posted:To be fair OpenZFS rebuilding of raidz1-3 is slower than it should be, though it doesn't have anything to do with parity calculations. The algorithm they use for deciding the order to write blocks actually results in a large amount of small and effectively random reads and writes. In a pretty recent version of closed source ZFS they improved the algorithm significantly. Guess I'll go with a raidz2 then, and just save up until I can swing four or six drives at once.
|
# ? Apr 17, 2016 06:52 |
|
With raidz2 it's "chunkier" if you later want to expand it, but it can be more efficient. With RAID 10 with single redundancy you're always getting 50% of the storage you pay for, but if you were to buy 5 drives now and 5 later for two separate raidz2 vdevs you'd get 60%. Though 10 drives is a lot and 5 per raidz2 vdev is on the low side. 6-8 is the range you want. Personally I have 8 drives in raidz2 when I could have gone for 6 or 7. I had planned to increase capacity eventually by replacing drives one at a time but by the time I need more space it'll be time to build an entirely new server.
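The efficiency numbers above fall straight out of the arithmetic; a quick sketch, using the drive counts from the post and counting in whole-drive units:

```shell
# RAID 10 with single redundancy: usable space is always half the drives.
echo $(( 10 / 2 ))         # 5 usable out of 10 -> 50%

# Two separate 5-drive raidz2 vdevs: each vdev loses 2 drives to parity.
echo $(( 2 * (5 - 2) ))    # 6 usable out of 10 -> 60%
```

With 6-8 drives per raidz2 vdev the ratio improves further (e.g. 6 of 8 usable = 75%), which is part of why that range is the sweet spot.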
|
# ? Apr 17, 2016 07:22 |
|
After browsing various online shops, I find the prices they ask for consumer-grade NAS boxes quite high given the performance they offer and the features I don't need. I'm mostly interested in write performance (I just don't like waiting longer than I have to when copying large files from my Linux box or Windows client to the NAS); reads are mostly for streaming video or music, so 20 MB/s or so is plenty there, but writes should be as fast as possible.

I don't know much about filesystems (I'm just a network engineer), but apparently the ideal solution for my case would be ZFS? Right now I have 4x2TB drives in a RAID5 array on an older Synology NAS. What I'd like to do is back up the data (about 3 TB worth) somewhere, delete the array, and reuse the drives in a new Linux box using ZFS. Apparently RAID-Z1 is the equivalent of RAID5, except that it's handled by the filesystem instead of a controller? And it's faster? With 4x2TB drives, how much data would I effectively be able to store, and how many drives can fail (I think it's 1)?

Another question: does the CPU have any impact on performance, and would a low-power one do the trick? So far I have my eyes on this mini-ITX motherboard sporting an AMD "Ontario" APU. I obviously don't care about all the bullshit features of the motherboard; I care mostly about the 4x SATA ports and the form factor.

Last but not least, what's a typical way of mounting all this in Linux? Do I create the ZFS pool and mount it in some /home directory while keeping the rest of the OS on a separate drive, or do I create everything as ZFS? And do I just go for FreeNAS, or for Linux? I'm really just going to use this as a NAS, so I think FreeNAS would be good, but I know nothing about FreeBSD and I'd still like to be able to write scripts for pushing logs or cron whatever I might need in the future.

Sorry for the newbie questions; if there's an article I should read somewhere about all that, feel free to point it out.
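For what it's worth, the Linux side of this is only a couple of commands. A rough sketch, assuming the four drives show up as /dev/sdb through /dev/sde (the device names and the pool/dataset names here are made up for illustration):

```shell
# Single-parity raidz1: one drive can fail without data loss.
# Usable space is roughly (4 - 1) x 2 TB:
echo "usable: $(( (4 - 1) * 2 )) TB"

# The actual pool creation, commented out since it needs real disks:
#   zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
#   zfs create tank/media                       # datasets mount under /tank by default
#   zfs set mountpoint=/srv/media tank/media    # or relocate them wherever you like
```

The OS itself usually lives on a separate small drive; only the data pool needs to be ZFS.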
|
# ? Apr 17, 2016 17:34 |
|
What are the differences between WD red/green/blue/purple again? I know I would want red for storage, but why are the others so terrible again?
|
# ? Apr 19, 2016 11:07 |
|
KinkyJohn posted:What are the differences between WD red/green/blue/purple again? I know I would want red for storage, but why are the others so terrible again?

Red: storage/NAS drives, equipped with TLER for RAID use.
Green: low-power drives; parks heads aggressively, which limits lifespan.
Blue: standard consumer drive, 1- or 2-year warranty.
Purple: video recording (surveillance) drive; you wouldn't use it in a typical storage situation.
Black: high-performance drive, 5-year warranty.
|
# ? Apr 19, 2016 14:32 |
|
Head parking can be disabled on the greens btw.
|
# ? Apr 19, 2016 15:11 |
|
Greens are now blues or some poo poo, so you have to be more careful and look at the specs.
|
# ? Apr 19, 2016 15:13 |
|
Don Lapre posted:Head parking can be disabled on the greens btw.
|
# ? Apr 19, 2016 15:17 |
|
Nah wdidle still worked on greens.
|
# ? Apr 19, 2016 18:44 |
|
I seem to recall seeing something about RAID-5 arrays working best with odd numbers of drives and RAID-6 arrays working best with even numbers... or maybe it was vice-versa. Anyway, is there any truth to that at all?
|
# ? Apr 19, 2016 18:47 |
|
I checked, and wdidle3 and wdtler are affected by firmware in totally different ways: the idle timer can still be adjusted fine, but TLER is definitely locked out in firmware now for sure.

To re-iterate an important principle for the thread: TLER matters for basically everyone here, because it affects software RAID just the same as hardware RAID ("deep recovery mode" is another term WD uses for the opposite of TLER). If a drive in an array is experiencing errors, your I/O will straight up block on reading/writing the affected blocks and can tank the performance of the whole array. I'm not sure whether ZFS will simply rebuild the data from parity once a timer is reached, but invoking timeouts is never a recipe for decent performance. The idle timer, in contrast, is about reducing the number of head parks, which is an orthogonal problem. "Deep recovery" is kind of BS anyway, because if your data can't be read or written within 10 seconds, another 20+ seconds is almost never going to make a difference.

The optimum number of disks varies between RAID implementations, and in practice it matters far less for performance than making sure your drives, array, and partitions are aligned correctly with each other.
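On Linux you can check and set the error-recovery limit directly with smartctl; a sketch (the device name is an example, and a drive with TLER locked out will simply refuse the command):

```shell
# SCT Error Recovery Control takes its limits in tenths of a second,
# so the usual 7-second TLER value is written as 70:
echo "$(( 70 / 10 )) second limit"

# The actual commands, commented out since they need a real drive:
#   smartctl -l scterc /dev/sdX          # show current read/write limits
#   smartctl -l scterc,70,70 /dev/sdX    # request 7 s read / 7 s write
```

Note the setting doesn't persist across power cycles on most drives, so it's typically reapplied at boot.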
|
# ? Apr 19, 2016 19:26 |
|
Yeah WDTLER was locked out years ago.
|
# ? Apr 19, 2016 21:34 |
|
Well, it looks like my supplier has the 3TB Seagate Barracuda 7200 cheaper than the 3TB WD "Blue" (really the rebadged Green) 5400. Why should I buy WD again?
|
# ? Apr 20, 2016 12:06 |
|
It's not that you should buy WD, it's that you shouldn't buy 3TB Seagates. Anecdotal Backblaze evidence aside, Seagate's got a class action lawsuit going against them for those drives. They're just flat out not reliable.
|
# ? Apr 20, 2016 13:47 |
|
I'm fairly certain that the 3TB Seagates were limited to a specific model/run. The ones made after the Thailand flooding if I'm remembering correctly. Those drives were absolute poo poo and failed at extremely high rates. This is responsible for the high failure rates seen on the Backblaze graphs. The data has simply been skewed by a bad production run. Remove those drives and we'd see a more realistic view of Seagate's current 3TB drives.
|
# ? Apr 20, 2016 18:59 |
|
And that's very much possible, but what data shows the drives that were out of the bad batch(es)? I haven't been able to find any. Which means you're limited to somewhat blind guessing. Now, odds are pretty good that any 3TB drives currently on the market are new stock that aren't from those runs, but there's no guarantees of that. It's worth keeping under consideration that it's possible you're buying brand new drives that are technically old stock from those batches, and that they carry an abnormally high failure risk.
|
# ? Apr 20, 2016 20:34 |
|
Another fun bit to consider in the Seagate vs WD question is Backblaze's note that the Seagates, despite high failure rates, were also the ones most likely to actually throw SMART errors prior to failure, allowing for far easier preemptive replacement, vice having drives fall over dead with no real warning.
|
# ? Apr 21, 2016 05:12 |
|
OldSenileGuy posted:1) The latest version of OSX removed RAID striping capabilities from Disk Utility. Yes, OS X software raid never did anything but 0/1. I think you could layer them and make 0+1 or 10 but don't quote me on that. The ability to create RAID volumes in Disk Utility has indeed been taken away in 10.11 but you can still create them with the "diskutil" tool from the command line, or use raids created by older versions of OS X. Not sure you'd really want to do that since the removal from Disk Utility might be a sign they're planning to deprecate the soft RAID driver in future OS X versions. There's signs in the open source parts of OS X that this feature hasn't been actively maintained for a few years, so I can believe they might want to get rid of it. If you want software RAID 5 on the Mac, there is a third party package called SoftRAID which is highly regarded and IIRC supposedly written by the same engineer who originally wrote Apple's built in RAID 0/1 driver. However, the edition which supports RAID 5 is a bit expensive.
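For reference, the command-line path mentioned above looks roughly like this (the disk identifiers and set name are examples; run diskutil list first to find yours):

```shell
# appleRAID offers stripe, mirror, and concat (JBOD) -- no parity RAID,
# matching the 0/1 limitation described above:
printf '%s\n' stripe mirror concat

# Creating and inspecting a set, commented out since it needs real disks:
#   diskutil appleRAID create stripe MyRAID JHFS+ disk2 disk3
#   diskutil appleRAID list
```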
|
# ? Apr 21, 2016 23:46 |
|
DrDork posted:Another fun bit to consider in the Seagate vs WD question is Backblaze's note that the Seagates, despite high failure rates, were also the ones most likely to actually throw SMART errors prior to failure, allowing for far easier preemptive replacement, vice having drives fall over dead with no real warning. I think that's reading too much into it. It could just be that the common defects in those drives caused them to fail in a way that was reliably detected by SMART, as opposed to the more random defects drives have in general that may or may not be easily predicted.
|
# ? Apr 22, 2016 09:48 |
|
BobHoward posted:Yes, OS X software raid never did anything but 0/1. I think you could layer them and make 0+1 or 10 but don't quote me on that. The ability to create RAID volumes in Disk Utility has indeed been taken away in 10.11 but you can still create them with the "diskutil" tool from the command line, or use raids created by older versions of OS X. Not sure you'd really want to do that since the removal from Disk Utility might be a sign they're planning to deprecate the soft RAID driver in future OS X versions. There's signs in the open source parts of OS X that this feature hasn't been actively maintained for a few years, so I can believe they might want to get rid of it. Why not install OpenZFS? I have it running a RaidZ1 with ARC2 SSD and it performs very well. I was pleasantly surprised.
|
# ? Apr 22, 2016 12:18 |
|
Mr Shiny Pants posted:Why not install OpenZFS? I have it running a RaidZ1 with ARC2 SSD and it performs very well. I was pleasantly surprised. I didn't know that existed before now, but honestly, after a scan through the forum, there are a couple things that scare me: recent reports of kernel panics, and apparently the integration isn't good enough for Time Machine to back up from a ZFS volume yet.
|
# ? Apr 23, 2016 10:02 |
|
BobHoward posted:I didn't know that existed before now, but honestly, after a scan through the forum, there are a couple things that scare me: recent reports of kernel panics, and apparently the integration isn't good enough for Time Machine to back up from a ZFS volume yet. Don't know what you're using but it is plenty stable on Yosemite.
|
# ? Apr 23, 2016 13:01 |
|
Yesterday I picked up a 5TB WD My Book and extracted the drive. A normal WD50EZRZ Blue was inside and the data migration went fine. HOWEVER, when I try to adjust its power management via CrystalDiskInfo to reduce head parking frequency, both AAM and APM are greyed out - I can't make any changes. I shucked a 5TB Seagate external about a year and a half ago that had a ST5000DM000 and its APM was available to adjust. What gives?
|
# ? Apr 23, 2016 22:19 |
|
AFAIK, external drives often have custom firmware that makes them somewhat different from what their model # would lead you to believe.
|
# ? Apr 24, 2016 00:27 |
|
Someone posted about the ASRock C2550D4I or the 2750 a while back and it stuck in the back of my mind. Now I want to blow some money, but the reviews on Newegg have a rash of people with dead boards. Anyone here have the 2550 and happy with it?
|
# ? Apr 24, 2016 01:50 |
|
Are there any HDD Stress Testing packages for Synology? I've got 2 new drives and before I set them up in SHR and copy everything from my 1 JBOD and wipe it I wanted to test to make sure they aren't fuckered first.
|
# ? Apr 26, 2016 14:50 |
|
You can do an extended smart test inside synology.
|
# ? Apr 26, 2016 15:35 |
|
Is that going to turn up mechanical issues from shipping, for example, though?
|
# ? Apr 26, 2016 22:09 |
|
Yes, if the head is having problems it should be creating bad sectors which will be detected.
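If you'd rather do it by hand over SSH, the usual burn-in is a destructive badblocks pass followed by a long SMART test; a sketch (the device name is an example, and badblocks -w erases everything on the drive):

```shell
# The commands themselves, commented out since they need a real drive:
#   badblocks -wsv /dev/sdX     # destructive write + verify pattern test
#   smartctl -t long /dev/sdX   # extended self-test (includes a surface scan)
#   smartctl -a /dev/sdX        # afterwards, check the attributes below

# SMART attributes worth watching for shipping damage:
printf '%s\n' Reallocated_Sector_Ct Current_Pending_Sector Offline_Uncorrectable
```

Any nonzero reallocated or pending sector count on a brand-new drive is a good reason to RMA it before loading data on.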
|
# ? Apr 26, 2016 23:10 |
|
http://www.servethehome.com/asrock-rack-d1540d4u-2t8r-review-full-featured-xeon-d-platform/ So, these finally may not be vaporware! I see Supermicro has announced some new boards that have PCIe x8 slots on them as well, so maybe I can finally get these to upgrade my machine. Looks like they are listed on Amazon for around $1K -- I wonder what they are supposed to retail at. That's CPU + mobo, but RAM will cost a pretty penny as well.
|
# ? Apr 27, 2016 00:23 |
|
movax posted:http://www.servethehome.com/asrock-rack-d1540d4u-2t8r-review-full-featured-xeon-d-platform/ That's one expensive setup, but it really does look like a cool board.
|
# ? Apr 27, 2016 06:54 |
|
I love the 1540 but I'm thinking a 1520 might be adequate for my NAS needs. They look to be about $500: http://www.amazon.com/ASRock-Rack-D1520D4I-Motherboard/dp/B01B9627DQ
|
# ? Apr 27, 2016 07:25 |
|
|
The uATX board seems like an interesting approach for a storage + compute server if you're tight on aggregate space for a build and want 10GbE (those don't look like SFP+ connections, though, so it's iffy whether you'll really get 10GbE out of them anyway). I'm a tad bummed that it didn't come with 8x DIMM slots, because functionally it's not all that different from the mini-ITX Xeon Ds already out there besides the addition of the SAS controller. With that SAS controller, though, the primary PCI-E slot is freed up for a GPU.

The trouble is, if you want to pull double duty with file operations and crunching through several TB of data per hour (assuming you can get your block storage to output at 1 GB+/sec through something like LXC / Docker / ESXi VMXNET vNICs), you'd probably have been looking at two separate, cheaper mini-ITX systems. But then you'd need 10GbE, which puts you back into the $560+ Xeon D-1520 range again, now needing to spec two physical servers. Hence the 1540 uATX board may well be the better fit for home setups over the lower-cost Xeon Ds.

It's a bit unfortunate the M.2 connectors are SATA; it would have been nice to get NVMe on this for some seriously high-throughput storage setups. I guess enterprise SSDs supporting NVMe are a bit out of reach for the kinds of places that buy a board like this from an OEM, so cutting that might be reasonable.

I do have to wonder whether Facebook's boards use 10GbE SFP+ or RJ45, because the power savings with SFP+ are pretty significant, maybe 4-5 watts per connection. It might not be justifiable yet even so, since cabling and switch costs eat into the power savings.
|
# ? Apr 27, 2016 16:05 |