Use ECC for system availability (the A in RAS, and RAS platforms use ECC extensively) and because ZFS cannot prevent memory corruption on its own (it was never designed to, since the platform ZFS comes from doesn't ship without ECC). Note that this doesn't mean ZFS will write bad data from memory to disk; see Matt Ahrens himself explain why that can't happen. And no: no filesystem, not even ZFS, can prevent catastrophic hardware failure from causing lost data. So if you take away one thing from this post that is meant to end this ceaseless discussion, here it is: back up everything you care about, regardless of whatever else you do with it. BlankSystemDaemon fucked around with this message at 00:16 on Dec 18, 2017 |
|
# ? Dec 18, 2017 00:11 |
|
ZFS is better than mdraid in pretty much every way other than the ability to expand by single drives. Given that you're familiar with *nix OSes, the only advantage FreeNAS offers you is that it's a quick and easy way to have a system designed to boot from USB. If you're willing to commit a small boot drive (I use SSDs), I've been much happier with running full-fledged Ubuntu server than with FreeNAS.
|
# ? Dec 18, 2017 01:32 |
|
apropos man posted:Has the ZFS/ECC RAM thing been debunked now? Not really "debunked" so much as a recognition that enterprise best practices aren't necessarily appropriate for home users who just want to run a Plex/file server. That said, ECC is a great addition to any server, ZFS or not. If you're building a whitebox out of random junk parts you have lying around and are just gonna sit porn and movies or other easily replaceable stuff on it, it's probably not worth the cost. If you're building a box with all new parts, the price delta between non-ECC and ECC isn't all that much (unless you're trying to build the cheapest box you possibly can and stuffing it with all of 2 HDDs or whatever), so you might as well. It's not like ZFS is gonna murder your dog if you don't, though. But yeah, RAID =/= backups still applies either way, as has been pointed out.
|
# ? Dec 18, 2017 03:53 |
|
Thanks for the info. I'm not critically bothered about using desktop RAM, since, as plenty of people have said, a backup strategy is what really matters. I'll do ZFS snapshots and a weekly backup. That should cover it. Cheers.
|
# ? Dec 18, 2017 12:11 |
|
With ZFS, when should I upgrade from 16GB of RAM? I just plopped 20TB into a server (5x4TB) and I still have lots of bays open (5 of 14 used).
|
# ? Dec 18, 2017 18:53 |
|
Are devices like the Seagate Personal Cloud any good? I have a 1TB 3.5" drive that I use primarily as a torrent/media/emulator drive and wanted to put it on a NAS so a) I could sync up roms and saves across all my devices and b) I'm planning on moving to a miniITX setup that won't fit a 3.5" drive. Streaming video from it would be cool I guess, but since my main PC uses my TV for a monitor already it doesn't matter that much. I was looking at some of the enclosures for $150ish, but it seems silly to put my old lovely 5200 RPM 1TB drive in an enclosure when, for the same price, I could get a new device with 3TB of storage. Is there something horribly lovely about them or would they be fine for my purposes?
|
# ? Dec 18, 2017 19:41 |
|
Ziploc posted:With ZFS, when should I upgrade from 16GB of RAM? I just plopped 20TB into a server (5x4TB) and I still have lots of bays open (5 of 14 used). When you have performance problems caused by ZFS starving for RAM. It depends heavily on your usage and on how much performance degradation you're willing to accept.
|
# ? Dec 18, 2017 19:48 |
|
1) NEVER ENABLE ZFS DEDUP 2) With the pricing on RAM these days, if I didn't already have stupid amounts of RAM in my boxes, I'd definitely try getting away with less for now. Crashplan seemed to be a worse memory hog than ZFS anyway.
|
# ? Dec 18, 2017 20:00 |
|
Oh yeah, also, I ran 30-40TB of pools for years on a machine with 12GB of RAM. It was fine. I only upgraded to 32GB because of other services I wanted to run on that machine.
|
# ? Dec 18, 2017 20:03 |
|
Is it true you can't upgrade a ZFS pool's size with a single hard drive? I don't get it.
|
# ? Dec 18, 2017 20:23 |
|
More or less. Just use Unraid (an OS) or SnapRAID (software); both support JBOD. I don't get the love here for ZFS for home use. It's insane for most use-cases and users.
|
# ? Dec 18, 2017 20:37 |
|
Mr. Crow posted:More or less. As someone who has been using ZFS at home for like 5 years, I fully support this message. Thanks to WD Easystores I've now got enough free space to move like 75% of my data off of ZFS, and I'm trying to decide if I hate using ZFS enough to just say "gently caress it" to the other 25% and move to SnapRAID. I mean, ZFS does what it says on the tin and it does it well, but it just doesn't support things that a lot of home users want... for me that would mainly be incremental pool storage upgrades. Of course, now they're working on that, so that complicates my decision... Thermopyle fucked around with this message at 21:09 on Dec 18, 2017 |
# ? Dec 18, 2017 21:06 |
|
Unraid owns and I'm really glad I skipped the ZFS. It's awesome technology but overkill for home use.
|
# ? Dec 18, 2017 21:14 |
|
Oh ok cool, thanks for that information. There is just no way I could deal with that; even though I am a business user, I still need to upgrade my drives! Unraid seems best.
|
# ? Dec 18, 2017 21:16 |
|
It's less of an issue for us True Hoarders who won't bother with silly things like throwing a single additional disk into an array--we'll wait for deals like the 8TB Easystore to come along and replace entire banks of drives at once.
|
# ? Dec 18, 2017 21:24 |
|
This is probably the place to ask... Is there an application like FileBot that keeps a DB of files it's handled? E.g. I want to move and rename files to the NAS while creating a symlink with the original filenames. I have a script that does this, but it would be nice to have some mechanism for keeping track of the mappings (e.g. to recreate all the symlinks in the event of needing to restore a backup).
|
# ? Dec 18, 2017 21:41 |
|
redeyes posted:Oh ok cool, thanks for that information. There is just no way I could deal with that, even though I am a business user, I still need to upgrade my drives! Unraid seems best. Just to be clear, you can still upgrade your drives in ZFS. You just have to replace them one at a time. If you have three 2TB drives, you replace one of them with a larger drive, let the array resilver, then replace the next, let the array resilver, and then do it one more time. Then your pool capacity is increased, but only after you've replaced all the drives in the pool.
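A toy model of why the capacity only jumps after the last replacement (my own sketch, not ZFS code; it ignores metadata and slop-space overhead): usable RAID-Z space is roughly the smallest drive's size times the number of data drives.

```python
# Simplified RAID-Z capacity model: the smallest member drive caps
# every stripe column, so usable space is roughly
# min(sizes) * (n_drives - parity). Real ZFS reserves extra space.
def raidz_capacity(sizes_tb, parity=1):
    return min(sizes_tb) * (len(sizes_tb) - parity)

# Replacing three 2TB drives with 8TB drives, one resilver at a time:
steps = [[2, 2, 2], [8, 2, 2], [8, 8, 2], [8, 8, 8]]
for sizes in steps:
    print(sizes, "->", raidz_capacity(sizes), "TB usable")
# Stays at 4TB usable until the last 2TB drive is gone,
# then jumps to 16TB.
```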
|
# ? Dec 18, 2017 21:55 |
|
As someone who started out on FreeNAS and then went to SnapRAID, I've become marginally saner when it comes to hard drive upgrades.
|
# ? Dec 18, 2017 22:08 |
|
garfield hentai posted:Are devices like the Seagate personal cloud any good? I have a TB 3.5" drive that I use primarily as a torrent/media/emulator drive and wanted to put it on a NAS so a) I could sync up roms and saves across all my devices and b) I'm planning on moving to a miniITX setup that won't fit a 3.5". Streaming video from it would be cool I guess but since my main PC uses my TV for a monitor already it doesn't matter that much. I was looking at some of the enclosures for $150ish but it seems silly to put my old lovely 5200 RPM 1 TB drive in an enclosure when, for the same price, I could get a new device with 3TB of storage. Is there something horribly lovely about them or would they be fine for my purposes? I haven't used it myself but the reviews seem pretty bad: https://smile.amazon.com/Seagate-Personal-Storage-Device-STCR3000101/dp/B00PZZZMQC In that situation I'd either get a 2TB laptop disk to put in your case (if there's room for one and your SSD), or just put your old HD in an external USB 3.0 case. They don't cost $150. https://smile.amazon.com/Seagate-FireCuda-Gaming-2-5-Inch-ST2000LX001/dp/B01M1NHCZT/ https://smile.amazon.com/Sabrent-External-Lay-Flat-Docking-EC-DFLT/dp/B00LS5NFQ2/ Alternatively get a real NAS for bulk storage since you want to do backups anyway. More costly but it's a one time thing. My N40L microserver is still chugging along 5 years later and I don't expect to replace it soon.
|
# ? Dec 18, 2017 22:17 |
|
I've just realised I have way less than 6TB of data as well as a spare 6TB drive so moving off FreeNAS is actually a possibility. No idea why that never occurred to me. $60 for unRAID for a Microserver is a bargain.
|
# ? Dec 18, 2017 22:27 |
|
Thermopyle posted:Just to be clear you can still upgrade your drives in ZFS. So they all have to be the same upgraded size basically? And you can't add more drives than the original pool?
|
# ? Dec 18, 2017 22:42 |
|
If you still want to use FreeNAS but don’t want to jump through RAID-Z hoops, just use mirrored vdevs. Yeah, per disk you lose some space compared to RAID-Z, but resilvering disks goes a lot faster, and upgrading the size of your pool is as easy as adding more mirrored vdevs. bobfather fucked around with this message at 22:45 on Dec 18, 2017 |
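A toy sketch of why mirrored vdevs grow so painlessly (my own illustration, not a ZFS API): a pool's usable space is roughly the sum of its vdevs', and a mirror vdev contributes the size of its smallest member, so adding a pair grows the pool immediately with no resilver of existing data.

```python
# Toy model: pool capacity is the sum over vdevs; a mirror vdev
# only yields the capacity of its smallest member drive.
def mirror_capacity(sizes_tb):
    return min(sizes_tb)

def pool_capacity(mirror_vdevs):
    return sum(mirror_capacity(v) for v in mirror_vdevs)

pool = [[4, 4], [4, 4]]    # two mirrored pairs
print(pool_capacity(pool))  # 8 TB usable out of 16 TB raw
pool.append([8, 8])         # add another mirrored pair
print(pool_capacity(pool))  # 16 TB usable, immediately
```

The 50% raw-space cost versus RAID-Z is visible in the first line: 16TB of drives yields 8TB usable.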
# ? Dec 18, 2017 22:43 |
|
redeyes posted:So they all have to be the same upgraded size basically? And you can't add more drives than the original pool? RAID-Z can’t be changed after the fact. Both in terms of the number of drives in the array, and in the parity level you chose at the outset (i.e., Z1, Z2, Z3).
|
# ? Dec 18, 2017 22:44 |
|
Enterprise tech for sure. Not great for a small fry that wants to shove disks into a box for TBs.
|
# ? Dec 18, 2017 22:51 |
|
Everyone knows it is literally impossible to have more than one ZFS pool. The only one in the world belongs to me, and let me tell you the licensing checks from Oracle are very lucrative.
|
# ? Dec 18, 2017 23:03 |
|
redeyes posted:So they all have to be the same upgraded size basically? And you can't add more drives than the original pool? They can be different sizes, but the capacity of the pool is based on the size of the smallest drive. From my example earlier you could change one 2TB drive to a 3TB drive and the others to 8TB drives and your pool will be the same size as if you had three 3TB drives. You can't add more drives. You can have as many pools as you want, but that's not much better than not having that ability at all (for home use) since you still have to add storage in chunks of multiple drives.
|
# ? Dec 18, 2017 23:25 |
|
Pretty sure you can make a pool of a single drive if you want. ZFS is no different from ext4 or anything else you would use; in fact it's strictly superior. The only limitation is that you can't restripe a RAIDZ to include more disks (yet). All the other solutions you would use with a normal disk apply: just mount your new pool into the filesystem somewhere and go. Ideally you should be breaking your data into smaller datasets for snapshots/etc anyway, which makes the distinction of a pool even more meaningless. Also, you don't even need to use RAIDZ in the first place. You can just make a simple spanned pool and it will act like a normal LVM logical volume, including the part where losing a disk kills the array, of course. If you don't like that, you can use mirror vdevs and then you can grow your array arbitrarily, two disks at a time. Or if you want better space efficiency you can add RAIDZ vdevs. But even in simple one-disk usage you get checksums and scrubbing out of the deal. It's literally only that one use-case, restriping a RAIDZ, that doesn't work (yet). Paul MaudDib fucked around with this message at 23:54 on Dec 18, 2017 |
# ? Dec 18, 2017 23:38 |
|
Yeah I was surprised when I had 2x 2TB disks and adding a third didn't just increase the amount of storage I had. I'll fully admit that was because I did no research at all though and just worked off a bunch of preconceptions.
|
# ? Dec 18, 2017 23:42 |
|
Paul MaudDib posted:Pretty sure you can make a pool of a single drive if you want. Well yeah, but I don't see any point to it for the users we're talking about. Home usage type of people want to use RAID-ish solutions because of the redundancy without having to give up half of their storage like you would in a mirroring situation.
|
# ? Dec 18, 2017 23:55 |
|
Can Unraid do snapshots with an easy GUI? FreeNAS and ZFS may be overkill but god drat if at least 3 or 4 times a year I don’t need to mount a snapshot for some reason or another.
|
# ? Dec 19, 2017 00:37 |
|
Thermopyle posted:Well yeah, but I don't see any point to it for the users we're talking about. Home usage type of people want to use RAID-ish solutions because of the redundancy without having to give up half of their storage like you would in a mirroring situation. That is exactly why I just gave in and RAID1'd all my data. Cheaper to buy a couple more HDs than deal with RAID.
|
# ? Dec 19, 2017 00:44 |
|
Hughlander posted:Can Unraid do snapshots with an easy GUI? FreeNAS and ZFS may be overkill but god drat if at least 3 or 4 times a year I don’t need to mount a snapshot for some reason or another. I haven't seen it and any mention of it is from old posts as a feature request that is undoubtedly at the bottom of the totem pole.
|
# ? Dec 19, 2017 00:53 |
|
Snapshots are my one reason for wanting to move off of Stablebit Drive pool. :/
|
# ? Dec 19, 2017 01:25 |
|
Unraid doesn't do snapshots.
|
# ? Dec 19, 2017 01:37 |
Paul MaudDib posted:Pretty sure you can make a pool of a single drive if you want. You forgot to mention that the only way to make checksums, scrubbing and more importantly self-healing have any effect on a single-disk ZFS setup is to set copies to 2 or higher. Ditto blocks are what make it possible, at the cost of effectively doubling (or tripling, quadrupling, et cetera, depending on what you set copies to) diskspace requirements. redeyes posted:Snapshots are my one reason for wanting to move off of Stablebit Drive pool. :/ Thanks Ants posted:Yeah I was surprised when I had 2x 2TB disks and adding a third didn't just increase the amount of storage I had. I'll fully admit that was because I did no research at all though and just worked off a bunch of preconceptions.
|
|
# ? Dec 19, 2017 10:40 |
|
Thermopyle posted:Well yeah, but I don't see any point to it for the users we're talking about. Home usage type of people want to use RAID-ish solutions because of the redundancy without having to give up half of their storage like you would in a mirroring situation. Again, the increment you grow the ZFS pool by can be a single drive vdev, a mirror vdev, or a RAIDZ vdev. So you can still hit good space efficiency/reliability, you just need to grow the array by a couple drives at a time. But I suppose you're fixated on the case where someone has a 4-8 drive RAID5/RAID6 pool but also is morally opposed to adding more than one drive at once and also demands redundancy on the single extra drive they're adding. So yes, ZFS does not do that use-case well (yet). (going wider than 8 drives is a stupid idea anyway, if you have a failure your resilver times are going to be rear end. Not even enterprises do this, just use a series of 4-8 drive pools/vdevs like a normal person, and if you are at the point of filling more than one 4-drive array then you should probably be planning your future expansion a little more carefully than adding drives on an ad-hoc basis.) This is pretty much a weakness of all logical-volume-manager type systems though. LVM/mdraid requires the same behavior (you need to expand the array by NUM_STRIPES disks at a time, which is equivalent to adding a RAIDZ# vdev). btrfs tries to restripe, but it doesn't actually work yet and you'll lose all your data. Paul MaudDib fucked around with this message at 10:57 on Dec 19, 2017 |
# ? Dec 19, 2017 10:42 |
|
D. Ebdrup posted:You forgot to mention that the only way to make checksums, scrubbing and more importantly self-healing have any effect on a single-disk ZFS setup is to set copies to 2 or higher. Checksums and scrubbing will work, metadata is stored separately from the data (twice, by default). It'll be able to tell you that you've lost data and what data you lost, it just can't do anything about it without copies. Which is still a big benefit over traditional filesystems. Again, even if you aren't using RAIDZ or mirroring, ZFS is still strictly better than something like ext4. Paul MaudDib fucked around with this message at 11:03 on Dec 19, 2017 |
# ? Dec 19, 2017 10:59 |
Paul MaudDib posted:Checksums and scrubbing will work, metadata is stored separately from the data (twice, by default). It'll be able to tell you that you've lost data and what data you lost, it just can't do anything about it without copies. Which is still a big benefit over traditional filesystems. The self-healing feature is, I would argue, a much more important feature of ZFS than the ability to tell you what data you've lost if no mirror, parity or ditto blocks exist. Notice the ditto blocks; that's what copies=N (for N ≥ 2) is for, notably because it works for all types of pools, as it's a dataset property. I've mostly seen it used on striped mirrors of SSDs for databases where, in the event that a bit read error happens on a vdev where a disk is being replaced, it's used to try and avoid any chance of vdev degradation; however, it works equally well for dealing with bit errors on pools with a single disk. Scrubbing is just the mechanism ZFS uses to go through every single checksum+block pair to ensure that they match. It needs this because simply reading the files or dd'ing the whole disk to /dev/null doesn't guarantee that all mirror, parity or ditto blocks and their corresponding checksums will get read. BlankSystemDaemon fucked around with this message at 12:42 on Dec 19, 2017 |
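The checksum-plus-ditto-copies idea can be sketched in a few lines. This is a simplified model for illustration only, nothing like ZFS's actual on-disk format: each block is stored with N copies and a checksum, and a "scrub" rewrites any copy that fails its checksum from a surviving good copy.

```python
# Toy model of ZFS-style self-healing with ditto blocks (copies=2):
# a scrub verifies every copy against the stored checksum and
# repairs bad copies from a good one. With copies=1 a scrub can
# still detect corruption, but has nothing to heal from.
import hashlib

def checksum(data):
    return hashlib.sha256(data).hexdigest()

def write_block(store, key, data, copies=2):
    store[key] = {"sum": checksum(data), "copies": [data] * copies}

def scrub(store):
    repaired, lost = 0, 0
    for rec in store.values():
        good = [c for c in rec["copies"] if checksum(c) == rec["sum"]]
        bad = len(rec["copies"]) - len(good)
        if good:
            repaired += bad
            rec["copies"] = [good[0]] * len(rec["copies"])  # self-heal
        elif bad:
            lost += 1  # detected, but unrecoverable
    return repaired, lost

store = {}
write_block(store, "blk0", b"important data")
store["blk0"]["copies"][0] = b"bit-flipped!"  # simulate corruption
print(scrub(store))  # (1, 0): one copy repaired, nothing lost
```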
|
# ? Dec 19, 2017 12:38 |
|
I don't disagree with any of that, but in each possible configuration ZFS's data integrity features exceed those of any other filesystem. You are better off running ZFS than anything else even if you are not using its capabilities to the fullest, it's a few additional layers in your defenses. Things like "single disk configurations are susceptible to drive failures" are not a ZFS-only problem (and copies don't fix them either). Single-disk solutions are just inherently not resilient to data loss, although copies help a little (and note that metadata is redundant by default). Not sure why you're trying to tell me what scrubbing does, either - especially when you're the one who claimed that checksums and scrubbing didn't work on a single disk. Single disks still have the same metadata structure as any other ZFS filesystem and you can absolutely use all the normal tools and settings. Paul MaudDib fucked around with this message at 13:11 on Dec 19, 2017 |
# ? Dec 19, 2017 13:00 |
|
Paul MaudDib posted:I don't disagree with any of that, but in each possible configuration ZFS's data integrity features exceed those of any other filesystem. You are better off running ZFS than anything else even if you are not using its capabilities to the fullest, it's a few additional layers in your defenses. I'm not sure I made that claim, so maybe I phrased something wrong? I just don't know where.
|
|
# ? Dec 19, 2017 13:12 |