Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

IOwnCalculus posted:

Holy poo poo, that's slick as hell. I could actually see switching to that from my Nexenta based setup on my next drive swap.

It's pretty cool so far. Installing OpenVPN actually went smoothly. I'm going to rebuild the VM that I'm testing this on tonight to test out some other configurations. I probably need to allocate some more hard drive space to the VM.

Out of the box you can browse to http://hda, which brings up the index screen and the installed applications, or http://hda/movies/ for your movie folder (needs some minor configuration). I also have http://scicalc/, which loads up a web-based scientific calculator (you have to install this).

I'm into the Home Digital Assistant (HDA) concept, and I'm glad I found this. It also has a lot of cool local and external media streaming applications. Of course, having a free URL makes it easy to get to your media externally.

e: while I'm going on about :airquote: cloud :airquote: stuff I'm going to try out eyeOS as well. This gives a virtual desktop for one or more users, which you could use as a GUI-based private storage system. It appears to have a lot of office and communication applications built in.
http://eyeos.org/

Devian666 fucked around with this message at 03:42 on Sep 3, 2012

Longinus00
Dec 29, 2005
Ur-Quan
This talk of Greyhole has reminded me that the btrfs allocator has been improved recently and is now smart enough to spread raid1 chunk replication in such a way as to maximize usable space (requires a recent kernel; in the following example I'm using 3.5). This should be good for anybody who wants the convenience of "just add disks" but would also like to have data checksumming/scrubbing/snapshotting. Currently only raid1 is supported. There are plans for changing parity levels on the fly (e.g. disabling mirroring for specific files you don't care about), but I don't think that work has been merged, and it will probably wait until the higher parity level (raid5/6) work is merged; changing parity levels on a per-block/per-file basis should be possible now, though.

As an example I've created a VM with an OS disk and 5 other disks sized 2, 2, 4, 8, and 16GB. First let's create the btrfs volume and mount it.

code:
$ uname -r
3.5.0-13-generic
$ mkfs.btrfs -m raid1 -d raid1 /dev/sd{b,c,d,e,f}

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

adding device /dev/sdc id 2
adding device /dev/sdd id 3
adding device /dev/sde id 4
adding device /dev/sdf id 5
fs created label (null) on /dev/sdb
        nodesize 4096 leafsize 4096 sectorsize 4096 size 32.00GB
Btrfs Btrfs v0.19
$ btrfs filesystem show
Label: none  uuid: fc42b39f-86b1-4a56-bb5b-4a45a2e45c87
        Total devices 5 FS bytes used 28.00KB
        devid    5 size 2.00GB used 1.01GB path /dev/sdf
        devid    4 size 2.00GB used 8.00MB path /dev/sde
        devid    3 size 16.00GB used 1.00GB path /dev/sdd
        devid    2 size 8.00GB used 1.00GB path /dev/sdc
        devid    1 size 4.00GB used 1.02GB path /dev/sdb

Btrfs Btrfs v0.19
$ mount /dev/sdb /mnt/test/
$ cd /mnt/test/
$ btrfs filesystem df /mnt/test/
Data, RAID1: total=1.00GB, used=0.00
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=24.00KB
Metadata: total=8.00MB, used=0.00
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       7.4G  1.1G  6.0G  15% /
udev            237M  4.0K  237M   1% /dev
tmpfs            99M  264K   99M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            246M   56K  246M   1% /run/shm
/dev/sdb         32G   56K   20G   1% /mnt/test
You may notice that df gives crazy weird numbers; that's, unfortunately, a long-standing bug. Once btrfs is more mainstream and potentially a default disk format in Fedora/Ubuntu, maybe somebody will fix it, but until then btrfs' own df command is what you'll want to use.

Now let's see how much data this "32GB" volume will really accept.

code:
$ dd if=/dev/zero of=test.file bs=4M
dd: writing `test.file': No space left on device
3575+0 records in
3574+0 records out
14990442496 bytes (15 GB) copied, 115.144 s, 130 MB/s
"But wait!" you might say. "That's only 14(28)GB! Where did the other 2(4)GB go?"

code:
$ btrfs filesystem df /mnt/test/
Data, RAID1: total=13.96GB, used=13.96GB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=19.86MB
Metadata: total=8.00MB, used=0.00
By default btrfs always allocates a 1GB metadata chunk to start out with on volumes large enough to fit it; you can see it in the output from the first time I ran btrfs filesystem df. Unlike some other filesystems, which reserve a fixed amount of disk space for metadata when they're created, btrfs allocates sections for data and metadata dynamically. This has some nice advantages, such as having more free space available for data if you have low metadata requirements (e.g. large static files), but it means that the amount of available space on a btrfs volume is workload dependent. Since the metadata is mirrored in this case, it's actually taking up 2GB of space. This leaves a remaining 1GB (2GB mirrored) of space that is unavailable, but that is probably due to the overhead from using so many small disks. If you gave btrfs 1TB disks to play with instead of 2GB ones, losing 2GB would hardly be an issue.
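If you're playing with volumes this tiny, newer mkfs.btrfs also has a mixed mode (-M/--mixed) that stores data and metadata in the same chunks to cut down on that allocation overhead. It's meant for small filesystems only, and I didn't use it for this demo, so consider this a sketch:
code:
$ # --mixed combines data+metadata chunks; small volumes only
$ mkfs.btrfs -M -m raid1 -d raid1 /dev/sd{b,c}
$ mount /dev/sdb /mnt/test/
$ btrfs filesystem df /mnt/test/
The df output should then show combined data+metadata chunks instead of separate ones.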



As an added benefit, btrfs supports removing devices and permanently shrinking the total storage capacity, something that is annoying, painful, or potentially impossible with many other solutions (hello zfs).

code:
$ rm test.file 
$ dd if=/dev/zero of=test.file bs=4M count=1024
1024+0 records in
1024+0 records out
4294967296 bytes (4.3 GB) copied, 28.0684 s, 153 MB/s
Here are the device ids again before we start deleting.
code:
Label: none  uuid: fc42b39f-86b1-4a56-bb5b-4a45a2e45c87
        Total devices 5 FS bytes used 28.00KB
        devid    5 size 2.00GB used 1.01GB path /dev/sdf
        devid    4 size 2.00GB used 8.00MB path /dev/sde
        devid    3 size 16.00GB used 1.00GB path /dev/sdd
        devid    2 size 8.00GB used 1.00GB path /dev/sdc
        devid    1 size 4.00GB used 1.02GB path /dev/sdb
code:
$ btrfs device delete /dev/sdf /mnt/test/
$ btrfs filesystem df /mnt/test/
Data, RAID1: total=11.97GB, used=4.00GB
Data: total=8.00MB, used=0.00
System, RAID1: total=64.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=5.39MB
Metadata: total=8.00MB, used=0.00
So far so good, we've lost 2GB of allocated data. Let's delete some more!

code:
$ btrfs device delete /dev/sde /mnt/test/
$ btrfs device delete /dev/sdb /mnt/test/
ERROR: error removing the device '/dev/sdb' - No space left on device
$ btrfs filesystem df /mnt/test/
Data, RAID1: total=7.00GB, used=4.00GB
System, RAID1: total=32.00MB, used=4.00KB
Metadata, RAID1: total=1.00GB, used=5.39MB
Hmm, that's not good. I should have plenty of space left, so why is btrfs complaining? It just so happens that, to speed up deleting/adding devices, btrfs doesn't automatically force a rebalance (resilver/rebuild in zfs/raid parlance) every time you change something. You should honestly do one every time you change the device layout, but I wanted to show what happens when you don't.

code:
$ btrfs balance start /mnt/test/
Done, had to relocate 8 out of 8 chunks
$ btrfs device delete /dev/sdb /mnt/test/
$ btrfs filesystem df /mnt/test/
Data, RAID1: total=6.00GB, used=4.00GB
System, RAID1: total=64.00MB, used=4.00KB
Metadata, RAID1: total=768.00MB, used=4.39MB
Adding devices is as simple as running btrfs device add /dev/sdxx and balancing afterwards.
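Spelled out, with a hypothetical sixth disk /dev/sdg:
code:
$ # attach the new disk to the mounted volume
$ btrfs device add /dev/sdg /mnt/test/
$ # spread existing chunks across all disks, including the new one
$ btrfs balance start /mnt/test/
$ btrfs filesystem show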

So there you have it. If you're willing to deal with btrfs quirks (why did the btrfs df command output formatting change halfway through this demo?) and understand that it's still not "production" ready (even though Oracle and a few other companies apparently have enough faith in it to sell you support), you now have a checksumming/snapshotting/compressing alternative for mirrored volume management across heterogeneous disks. I've been using btrfs for some volumes where I store large static content and haven't had any issues in the 2 years I've used it.
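Since I keep listing checksumming/scrubbing/snapshotting without demoing them, here's what those look like too (paths made up; snapshots operate on subvolumes, and the top level counts as one):
code:
$ # verify every block against its checksum, runs in the background
$ btrfs scrub start /mnt/test/
$ btrfs scrub status /mnt/test/
$ # snapshot the top-level subvolume
$ btrfs subvolume snapshot /mnt/test /mnt/test/snap-20120903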

Longinus00 fucked around with this message at 07:58 on Sep 3, 2012

kalicki
Jan 5, 2004

Every King needs his jester

Devian666 posted:

This weekend I've been testing Amahi and it's interesting. It includes a drive pooling system like Windows Home Server: you can add whatever sized drives, and you mark any data that you need to be redundant. Other than any redundancy, it makes your collection of drives into one giant hard drive.

It has a heap of other neat features, such as local applications that each get a URL on your local network. Setting it up was pretty easy: install Ubuntu, then just type a few commands to install and configure, then log into the Amahi system to put in a few details. It also gives you a dynamic DNS address at yourhda.com.

It's easily the least effort home server I've ever set up.

Wow, Amahi sounds great. I had thought about WHS, but this sounds better.

Jolan
Feb 5, 2007
I've got a Silverstore 2-bay NAS, and one of the shares on it disappeared. It still shows up in the browser-based config pages, but I can't reach it through either Windows or MacOS (it just throws up a "can't connect, contact systems admin" message or something). I can only get to the files on it through Silverstore's Tonido service, but that is really not designed for transferring millions of files totaling over a hundred gigs. In any case, the data is still there, but I can't get to it, and plugging the drive directly into a computer doesn't work either (because of the formatting of the drive, I guess). One other, minor share seems to have suffered the same fate; I think that one's been gone for a while longer, but both it and the data it contains are unimportant, so I only just noticed it. All other shares are working fine.

Any ideas what I can try to get my data back?

astr0man
Feb 21, 2007

hollyeo deuroga
There's no way to roll back a zfs pool version, is there? I'm currently running OpenIndiana and stupidly upgraded my pools to the feature flags pool version (the one that shows up as 3000 or whatever) without realizing this would break compatibility with all the other available zfs platforms.

complex
Sep 16, 2003

There is no zfs downgrade.

Viktor
Nov 12, 2005

Been looking at the QNAP forums and they have some interesting porting going on. They have successfully reflashed QNAP hardware with Synology DSM, with success also reported on VMs and bare-metal hardware such as Atom system boards.

Jolan
Feb 5, 2007
I'm thinking about buying another NAS and finally getting serious about data redundancy. It'd seem that Synology and QNAP are the qualitative leaders in the field, but I'm having trouble deciding between them. When considering technically similar devices, which manufacturer would you prefer, Synology or QNAP? (More specifically, I'm looking at the Synology DS413j and the QNAP TS-412, which seem fairly similar in specs.)

As an added question, I've noticed that some devices have two gigabit ports. Does this actually have a significant effect on transfer speed?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Jolan posted:

As an added question, I've noticed that some devices have two gigabit ports. Does this actually have a significant effect on transfer speed?
Depends on your usage scenarios. Do you commonly find yourself needing more than 100MB/s of bandwidth? If you're just connecting to it from one computer at a time, no, it will not make any difference (or if you've got a few devices but they're all real low-bandwidth, like a couple computers streaming compressed video/music). If you've got multiple computers connecting to it at once and are asking for a lot of bandwidth, however, it may speed things up if your drives can more than saturate a single GigE connection. You're still only going to be able to get one port's-worth of bandwidth to a single computer, though--having 2xGigE ports will not let you transfer your porn at 200MB/s--you'll be stuck at 100MB/s.
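If you've got the multi-client workload to justify it, the usual way to actually use both ports is link aggregation (LACP/802.3ad), which also needs a switch that supports it. Roughly what that looks like on a Linux box (Debian-style /etc/network/interfaces with the ifenslave package; interface names and addresses are made up, so treat this as a sketch):
code:
# bond eth0+eth1 into one logical interface
auto bond0
iface bond0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
Even then a single TCP stream rides one physical link, which is why one client still tops out at a single port's worth of bandwidth.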

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Do you guys think the 2TB & 3TB drives will go below $50/TB any time soon?

kalicki
Jan 5, 2004

Every King needs his jester

fletcher posted:

Do you guys think the 2TB & 3TB drives will go below $50/TB any time soon?

http://pcpartpicker.com/parts/internal-hard-drive/#sort=a5

A couple drives already are

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

fletcher posted:

Do you guys think the 2TB & 3TB drives will go below $50/TB any time soon?
If you're ok with the Green (or equivalent budget-level) versions, then yes, especially with coupons or sales. If you're talking more a performance level, like a Black or something, no. And the WD Red NAS versions are having too many stock issues to really settle on a price point just yet.

mik
Oct 16, 2003
oh
I want to pick up one of the new Synology DS413J's + 4x2TB drives. Which are the appropriate drives to get? Are EARX drives fine? The Compatibility Page has a giant list, but I don't want any of the head parking nonsense or drives being dropped from the RAID or whatever. The 2TB Reds don't appear to be available anywhere yet; I guess to avoid any worries I can just wait for them to show up in stock again.

mik fucked around with this message at 20:06 on Sep 6, 2012

Jolan
Feb 5, 2007

DrDork posted:

Depends on your usage scenarios. Do you commonly find yourself needing more than 100MB/s of bandwidth? If you're just connecting to it from one computer at a time, no, it will not make any difference (or if you've got a few devices but they're all real low-bandwidth, like a couple computers streaming compressed video/music). If you've got multiple computers connecting to it at once and are asking for a lot of bandwidth, however, it may speed things up if your drives can more than saturate a single GigE connection. You're still only going to be able to get one port's-worth of bandwidth to a single computer, though--having 2xGigE ports will not let you transfer your porn at 200MB/s--you'll be stuck at 100MB/s.

I'm probably going to use 5400rpm WD Greens and when I'm using two computers, at least one of them is using WiFi, so I don't think the second port will do much for me speed-wise.

Then the question remains: QNAP TS-412 Turbo or Synology DS413j? I've noticed that the TS-412 supports Raid10, which theoretically provides the same security as Raid1. How does this work in practice? If one of the four drives dies, can I just replace it and everything will correct itself or does the entire volume need to be remade on all four disks or... what? What effect does a bad sector on one disk have on the rest of the array? (I'm really new to Raid.)

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Jolan posted:

I'm probably going to use 5400rpm WD Greens and when I'm using two computers, at least one of them is using WiFi, so I don't think the second port will do much for me speed-wise.

Then the question remains: QNAP TS-412 Turbo or Synology DS413j? I've noticed that the TS-412 supports Raid10, which theoretically provides the same security as Raid1. How does this work in practice? If one of the four drives dies, can I just replace it and everything will correct itself or does the entire volume need to be remade on all four disks or... what? What effect does a bad sector on one disk have on the rest of the array? (I'm really new to Raid.)

Are those green drives on their compatibility list?

Jolan
Feb 5, 2007

Moey posted:

Are those green drives on their compatibility list?

They're on the unofficial list, and I can't find real issues with the TS412 and those disks on the boards (lots of people warning about them, but with little to back up their statements). I can always just pick up four Seagate ST3000DM001's instead; they're on the official list and cost pretty much the same (though I haven't found out how they compare power consumption-wise yet).

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I've started to run into a strange speed problem with my setup. I'm running NAS4Free on a N40L with 8GB RAM, 4x2TB drives in RAID-Z1, and an Intel Pro NIC. The drives, while similar, are not all exactly the same (2x Seagate Greens, 1x different Seagate Green, 1x Seagate Barracuda). The array is about half full, mostly with music and movies, but also with several thousand image files from the fiancée's photography business. Everything's all in one pool and then shared out over CIFS.

About half of my files (no pattern that I can discern) transfer to my computer at what I consider to be the correct rate--between 60 and 90MB/s. The other half only transfer at 15-25MB/s. While I have no idea what separates the two categories, if it's a "fast" file it'll reliably be fast, and same with "slow" files. Similarly, writing files sees wild swings in performance, though it seems that in general things stay closer to the 100MB/s mark right after a NAS reboot, and over the next 12-24 hours slowly degrade down to around 15MB/s.

I tried this on FreeNAS 8 and had similar issues. Any ideas?

DEAD MAN'S SHOE
Nov 23, 2003

We will become evil and the stars will come alive
ZFS raid is rather poor with lots of small files due to overheads such as checksumming, but I doubt that's the whole story.

I'm going to have to run some tests myself because I'm transferring all files from my Ubuntu Raid5 box over gigabit ethernet to an NFS-shared FreeNAS N40L (original BIOS), 8GB RAM, RAIDZ2 (5th drive on the ODD SATA port), and holy poo poo is it slow. 300GB in 24 hours slow. At times like this I wish I could afford a turnkey solution.
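Before pinning it all on ZFS it's worth splitting network from disks. A quick sketch of the tests I have in mind (assuming iperf on both ends; hostnames and the file path are made up):
code:
nas$ iperf -s                                 # NAS listens for the throughput test
desktop$ iperf -c nas                         # ~940 Mbit/s means the wire is fine
nas$ dd if=/tank/bigfile of=/dev/null bs=1M   # local read, network out of the picture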

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

DEAD MAN'S SHOE posted:

At times like this I wish I could afford a turnkey solution.
Until you get to the $800+ ones, most of the turnkey solutions are sub-50MB/s, too.

And, yeah, I'd expect ZFS to not be too keen on handling all the little files and whatnot, but I'm talking big files here. Like 10GB videos, some of which fly along at 90MB/s, others of which slug it out at 15MB/s. No idea why.

b0lt
Apr 29, 2005

DrDork posted:

I've started to run into a strange speed problem with my setup. I'm running NAS4Free on a N40L with 8GB RAM, 4x2TB drives in RAID-Z1, and an Intel Pro NIC. The drives, while similar, are not all exactly the same (2x Seagate Greens, 1x different Seagate Green, 1x Seagate Barracuda). The array is about half full, mostly with music and movies, but also with several thousand image files from the fiancée's photography business. Everything's all in one pool and then shared out over CIFS.

About half of my files (no pattern that I can discern) transfer to my computer at what I consider to be the correct rate--between 60 and 90MB/s. The other half only transfer at 15-25MB/s. While I have no idea what separates the two categories, if it's a "fast" file it'll reliably be fast, and same with "slow" files. Similarly, writing files sees wild swings in performance, though it seems that in general things stay closer to the 100MB/s mark right after a NAS reboot, and over the next 12-24 hours slowly degrade down to around 15MB/s.

I tried this on FreeNAS 8 and had similar issues. Any ideas?

Do you have compression or dedup enabled on that pool?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
A'nope. Don't have enough redundant data for dedup to make much sense, and half the point was to get performance over max storage, so I never even considered compression.
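(If anyone wants to double-check instead of trusting memory, zfs will tell you outright; pool name here is made up:)
code:
$ zfs get compression,dedup tank
NAME  PROPERTY     VALUE  SOURCE
tank  compression  off    default
tank  dedup        off    default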

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I can safely assume that a Seagate suddenly starting to squeal/buzz at a high pitch means the thing is dying, right? --edit: I guess so, because I'm backing my poo poo up now and the transfer rate goes to zero when it happens.

Also, I see that Western Digital has a red series now, drives meant for 24/7. Are they worth anything?

Combat Pretzel fucked around with this message at 13:10 on Sep 8, 2012

evil_bunnY
Apr 2, 2003

Reds are good for sequential workloads and are presumably designed without retarded head parking and absurdly long timeouts. Only time will tell whether reliability is any better/worse than other WD drives.

Availability is still kinda spotty

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The big drives in my computer are essentially data dumps, so from that perspective, it'd do.

I'm however pissed as hell that this Seagate Barracuda is making a fuss, despite being under low load. It has 18000 hours of power-on time and shits the bed. My old WD RE2 system drive has like 35000 hours and a higher load, and it keeps on trucking like it's nothing.

Nam Taf
Jun 25, 2005

I am Fat Man, hear me roar!

2 questions about my raid-z server:

#1: Is this drive (Hitachi 5k3000) rooted and due for warranty? I believe it's the source of the grief that I keep getting in the zpool insofar as it tells me it has 6558 unreadable (pending) sectors and this thing occasionally locks up when I read the zpool.

code:
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   136   136   054    Pre-fail  Offline      -       100
  3 Spin_Up_Time            0x0007   173   173   024    Pre-fail  Always       -       409 (Average 399)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       273
  5 Reallocated_Sector_Ct   0x0033   032   032   005    Pre-fail  Always       -       1169
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   132   132   020    Pre-fail  Offline      -       32
  9 Power_On_Hours          0x0012   100   100   000    Old_age   Always       -       1390
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       273
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       289
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       289
194 Temperature_Celsius     0x0002   187   187   000    Old_age   Always       -       32 (Min/Max 19/42)
196 Reallocated_Event_Count 0x0032   044   044   000    Old_age   Always       -       1170
197 Current_Pending_Sector  0x0022   001   001   000    Old_age   Always       -       6558
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0
As per the Reallocated_Sector_Ct line, it's not at the threshold of 5, but 32% remaining doesn't seem good to me when the drive is about 3 weeks from coming out of its 1-year warranty. Should I just RMA it?

edit: It also failed a long test:
code:
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       60%      1376         2214667281
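(For reference, these readouts come from smartmontools; assuming the drive is at /dev/sdb, the commands would be something like:)
code:
$ smartctl -a /dev/sdb           # full attribute + health dump
$ smartctl -t long /dev/sdb      # kick off an extended self-test
$ smartctl -l selftest /dev/sdb  # show the self-test log above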
and #2:
Is a WD30EZRX a stupid drive to replace it with, given WD's fetish for accelerated head parking and the like? The red drives are currently sold out and the price is $159 for the WD30EZRX vs $205 for the WD Red 3TB drive :( Is the price difference worth it?

Nam Taf fucked around with this message at 02:23 on Sep 9, 2012

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Seagate 3TB are $150 on NewEgg.

Nam Taf
Jun 25, 2005

I am Fat Man, hear me roar!

Should've clarified: I'm in Australia, so prices are different here.

lenitic
Jun 11, 2012
I have a Mac which is chock full of photos. I am thinking of building a small Linux/*BSD/etc fileserver which will let me do the following:
  • RAID-1 array of say 2x2TB disks as redundant storage
  • Another biggish disk to act as a Time Machine destination for the mac's main drive
  • Allow me to mount filesystems remotely (I want to say "over VPN" but I don't know if this is the right technology).
  • host an iTunes library off the RAID (so I can stream stuff to AppleTV - not a deal breaker)
The idea is that I'd have another small fileserver sitting at my parents' place to which I have an automatic backup (only between midnight and 6am, as I get unlimited bandwidth in those times).

Hardware is probably going to be some all-in-one Atom mobo in a case with a bunch of SATA disks and some RAM.

Software - I am thinking that FreeNAS will be able to handle all of this with appropriate tweaking. I'll probably have to use rsync rather than Time Machine on the Mac, but I can live with that.
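The overnight sync to the parents' box would just be a cron job; a minimal sketch (hostnames and paths made up):
code:
# crontab on the home server: push photos at 00:30, inside the free window
30 0 * * * rsync -avz --delete /tank/photos/ parentsbox:/tank/photos/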

Criticisms?

BnT
Mar 10, 2006

lenitic posted:

I have a mac which is chock full of photos.
...
Another biggish disk to act as a Time Machine destination for the mac's main drive

You should be able to cover Time Machine with FreeNAS. I have it working with OpenIndiana, although it took a bit of reading to get it running.

My only suggestion is to go with a true backup solution for your irreplaceable photos. One copy on your FreeNAS and one on whatever cloud-backup product. You'd only have to spend the backup bandwidth once per picture, and you're very unlikely to lose all your copies if something bad happens.

Gangringo
Jul 22, 2007

In the first age, in the first battle, when the shadows first lengthened, one sat.

He chose the path of perpetual contentment.

I have just finished a NAS build (with parts that have been hanging around for a long time waiting for me to finish). It's a dual-core 1.6GHz Atom with 3GB of DDR2 RAM and 4x2TB SATA drives. I have it running NAS4Free with the drives set up in a RAID-Z1 array.

A few questions:

Is running ZFS with so little RAM a good idea, even though only 1-2 users will be accessing the data at a time?

My goal is to have this system handle media storage and torrenting onboard; would I be better off getting Windows Home Server or sticking with NAS4Free?

Jolan
Feb 5, 2007
Hmm, I just discovered Raid5, which would give me 6TB of storage from 4x2TB drives, where Raid1/10 would need 4x3TB drives for the same capacity. Which means that for only about €200 more, I could get WD RE4s that have a 5-year warranty and should stand up to punishment a lot better.

So I guess my question is: how does Raid5 compare to Raid1/10 in practice? I know it might be a bit slower for reading/writing and there's about the same room for failure, but are there any other differences between the two in terms of outcome? And if one drive were to fail and I plugged in a new one, would it simply "recreate" the data on the lost drive without touching the other drives, or does everything need to be backed up somewhere else, the entire Raid array re-initialized, and then copied back over?

evil_bunnY
Apr 2, 2003

Any modern system will recreate the failed drive's dataset.

r u ready to WALK
Sep 29, 2001

Yes, replacing a drive in raid5 puts the array in "degraded" mode, then you have to rebuild the missing data to return to normal status. NAS boxes and hotswap enterprise stuff tend to start the process on their own as soon as the bad disk is replaced, while software raid tends to do nothing until you tell it which drive is the new one to use.
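With Linux software raid, for example, that manual step looks something like this (mdadm, device names made up):
code:
$ mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # drop the dead disk
$ mdadm /dev/md0 --add /dev/sdc1                       # rebuild kicks off from here
$ cat /proc/mdstat                                     # watch the resync progress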

The thing that makes raid5 dangerous for large drives though, is that once a drive breaks and you replace it, recreating the data on it means reading all blocks from every other drive in the raid to calculate the missing data.

With huge 2 or 3TB drives you're very likely to hit a read error on another drive while doing this rebuild process. (Back of the envelope: consumer drives are typically spec'd at one unrecoverable read error per 10^14 bits, and rebuilding a 4x2TB raid5 means reading about 6TB, or roughly 4.8x10^13 bits, from the surviving drives, so you can expect to eat an error on something like every other rebuild.)

If you have a good raid controller it will just go "oh, this block is going to be corrupt, sorry about that" and carry on rebuilding the rest. You'll just have some garbled data in one or more of your files, if you're lucky.

Sadly, the reality is that once most raid controllers hit a double disk read error, they go "FATAL ERROR! ABORT! ABORT!", fail the second drive they had trouble reading from, and disable the entire array.

The problem is made worse with consumer drives like the WD Green and others, where the drive might lock up for a very long period trying to read a block, causing the raid controller to think the drive is completely dead when there's really just a tiny bit of inaccessible data on it.

Statistically your odds are a little better with raid1 since you're just copying the missing data from a single drive to rebuild, but really the one you want for good data protection is raid6 + TLER capable drives.

(or working backups)

evil_bunnY
Apr 2, 2003

error1 posted:

With huge 2 or 3tb drives you're very likely to hit a read error on another drive while doing this rebuild process.
And unless your FS checksums you'll miss half of them too.

BlankSystemDaemon
Mar 13, 2009



error1 posted:

(or working backups)
Always have this regardless of what you're doing.

UndyingShadow
May 15, 2006
You're looking ESPECIALLY shadowy this evening, Sir

error1 posted:

Yes, replacing a drive in raid5 puts the array in "degraded" mode, then you have to rebuild the missing data to return to normal status. NAS boxes and hotswap enterprise stuff tend to start the process on their own as soon as the bad disk is replaced, while software raid tends to do nothing until you tell it which drive is the new one to use.

The thing that makes raid5 dangerous for large drives though, is that once a drive breaks and you replace it, recreating the data on it means reading all blocks from every other drive in the raid to calculate the missing data.

With huge 2 or 3tb drives you're very likely to hit a read error on another drive while doing this rebuild process.

If you have a good raid controller it will just go "oh, this block is going to be corrupt, sorry about that" and carry on rebuilding the rest. You'll just have some garbled data in one or more of your files, if you're lucky.

Sadly, the reality is that once most raid controllers hit a double disk read error, they go "FATAL ERROR! ABORT! ABORT!", fail the second drive they had trouble reading from, and disable the entire array.

The problem is made worse with consumer drives like the WD Green and others, where the drive might lock up for a very long period trying to read a block, causing the raid controller to think the drive is completely dead when there's really just a tiny bit of inaccessible data on it.

Statistically your odds are a little better with raid1 since you're just copying the missing data from a single drive to rebuild, but really the one you want for good data protection is raid6 + TLER capable drives.

(or working backups)

This is mostly a non-issue on good software raid systems, like NAS4Free, FreeNAS, etc. They won't die easily on green drives, and if they do run into something they can't read, they'll continue anyway. Since they checksum, at least you'll know about it.

That said, on my home NAS, I am running RAID-6, just in case.
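(In zfs terms that "at least you'll know about it" is the scrub plus the per-device error counters; pool name made up:)
code:
$ zpool scrub tank        # re-read and verify every block against its checksum
$ zpool status -v tank    # READ/WRITE/CKSUM columns point at the lying disk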

arbybaconator
Dec 18, 2007

All hat and no cattle

Anyone here familiar with the Western Digital Sentinel?

Seems like a good deal for 12TB of enterprise HDDs in a box running WHS.

http://www.newegg.com/Product/Product.aspx?Item=N82E16822236014&nm_mc=KNC-GoogleAdwords&cm_mmc=KNC-GoogleAdwords-_-pla-_-NA-_-NA

I'm basically just looking for something to store my large media collection and serve it throughout my house, fwiw.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I'd be a bit wary of the Atom processor on these because they're normally paired with sub-par network cards/chips that can be an issue with more than a couple of clients. The HP Microserver can support 3TB+ disks, so a Microserver with a few enterprise-class 3TB drives should be cheaper by a few hundred and still get great performance.

With that said, the 3TB drives that WD has in "enterprise class" are rather pricey, and $350 is a bit steep for a 3TB drive for home use even if it's "enterprise class". If you're willing and able to pay that much for drives, you should probably be buying some big-name vendor's DAS or SAN solution for another few thousand more, not some dinky consumer-CPU based NAS device whose reliability itself isn't very enterprise-class if you ask me. I work in enterprise software and IT; don't ever buy anything labeled for home or home-like usage scenarios, you're wasting your money.

Longinus00
Dec 29, 2005
Ur-Quan

necrobobsledder posted:

I'd be a bit wary of the Atom processor on these because they're normally paired with sub-par network cards/chips that can be an issue with more than a couple of clients. The HP Microserver can support 3TB+ disks, so a Microserver with a few enterprise-class 3TB drives should be cheaper by a few hundred and still get great performance.

With that said, the 3TB drives that WD has in "enterprise class" are rather pricey, and $350 is a bit steep for a 3TB drive for home use even if it's "enterprise class". If you're willing and able to pay that much for drives, you should probably be buying some big-name vendor's DAS or SAN solution for another few thousand more, not some dinky consumer-CPU based NAS device whose reliability itself isn't very enterprise-class if you ask me. I work in enterprise software and IT; don't ever buy anything labeled for home or home-like usage scenarios, you're wasting your money.

Are you basing your experience of Atom+network cards on NAS systems or on consumer Atom boards?

While I agree with you that this is overkill for merely serving pirated legally acquired media through your house, the price per drive is less than $350 because the total price includes the NAS itself; it's closer to $250. Not too bad a deal if you compare it to the 4x2TB model, which is selling for almost the same price. Of course, if you just buy cheaper consumer 3TB drives and stick them in a Synology/HP Micro, your total cost would still be less than $1000. (Comedy option: 4x4TB in that $250 Patriot NAS.)

On a more serious note, anything with dual NICs (and especially dual power supplies) is likely overkill for what you want. Save yourself some money.

Wait, this thing runs Windows Server? Yeah, this is aimed at small businesses, not at home users who don't care about AD features.

Longinus00 fucked around with this message at 23:26 on Sep 9, 2012

arbybaconator
Dec 18, 2007

All hat and no cattle

I could do a Synology DS512+ with 5x3TB Caviar Greens for $1594. I have heard that the Caviar Greens aren't great for NASes, but will it really matter in my case? I will rarely ever have more than one user connected at a time.
