DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Crunchy Black posted:

Moving from Sandy Bridge (v2) to Haswell (v3) or Broadwell (v4) will net you a decent increase in PPW/$ and introduces a bunch of forward-looking virtualization features that improve performance and security. If you're not already bought into the ecosystem, I'd be hard-pressed to justify not going v3 or v4, especially because you can get a platform-lighting CPU like the 2603-v3 for ~$20 on eBay, and any CPU from those families will drop in if you need to upgrade. Remember, Sandy Bridge socket 2011 is not the same as Haswell/Broadwell 2011! At this rate, DDR3 is only going to get more expensive and DDR4 is only going to get cheaper.

You might get the chorus of "you can't run FreeNAS virtualized" here or elsewhere but look up the caveats and ensure you're up for the risks, if you'd like. It can be done but it's not officially supported and if something breaks, you're up poo poo creek.

Yeah, the availability of silly-cheap v3 CPUs is part of the draw, frankly. E5-2620v3's are <$30 all day long, or an E5-2630Lv3 for the 55W TDP power savings at ~$50. v4's are nice, but not nearly cheap enough to bother with--an E5-2620v4 is $250! Even the 2603v4 is a pointless $180.

DDR4 might be cheaper in the future, but I'm not buying in the future, I'm buying now with the intent of filling enough RAM slots to not have to think about it for a few years. 64GB (4x16) DDR3 1600 RDIMM's go for ~$100, while DDR4 is twice that. I guess the X9 motherboards are only ~$50 less than the X10 ones, so maybe you're right and I should just go for v3/DDR4.

I'm familiar with the fun of virtualizing FreeNAS--it's honestly not as bad as some people make it out to be, though I ended up moving away from it because I couldn't get large-batch file transfers over SMB to reliably saturate the GigE link while virtualized, whereas bare metal does it without any fiddling.

The other bit is I'm now sitting on a small pile of 10Gig fiber parts that I'm itching to throw into my systems...because I can.


DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

movax posted:

Interesting, I have been very behind on keeping up with this thread and didn’t realize that was a thing. I have an ESXi box that I’ve been meaning to put FreeNAS and other OSes on (probably a Fedora VM to run Usenet / other SW / things that want Linux, not BSD) and was going to use HW pass through of my HBAs. No longer a good idea, wasn’t ever a good idea?

According to the ZFS/FreeNAS greybeards, it was Never A Good Idea. The reality is that, as long as you can pass the ENTIRE storage controller through to the FreeNAS VM, it mostly Just Works and acts the way you'd expect it to. You won't get a lot of sympathy on the forums if you go crying to them with issues, but you're unlikely to run into trouble in the first place if you're just using it as a purpose-built file-server, and not dicking around with some of the more advanced stuff that breaks all the damned time (like jails). So pretty much fine for home use with the eternal understanding that RAID(z) Is Not A Backup, but highly not recommended for production/enterprise.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money
There are zero issues with virtualizing FreeNAS with an HBA passed through. It behaves just like a bare metal install.

hitze
Aug 28, 2007
Give me a dollar. No, the twenty. This is gonna blow your mind...

That's how I run unraid on my Proxmox host: just pass the card through.

The NPC
Nov 21, 2010


Hughlander posted:

As above, I'm in the middle of moving everything off my fileserver due to performance. But what I had done before was run ESXi, passing the LSI controllers through to FreeNAS. I got tired of paying the memory tax there, so I redid it as Proxmox, keeping the same ZFS pools. In that I ran LXCs for things that needed them:
Plex - Didn't want its performance / uptime to be tied to anything else.
AirPrint/Google Cloud Print - Passed the USB device through to it
Minecraft worlds - Each saved world is its own LXC since they have well-known ports that the LAN uses.

Then one catch-all docker server (also an LXC, not a VM) that had ~50 containers running through 3-4 compose files. I was really sloppy with this, and anything that needed an outside connection went in the same compose file, since that's where the nginx reverse proxy was.

My new machine I mentioned above is also Proxmox, but I'm installing Docker directly on it so I can use the docker ZFS volume driver and get fine-grained control over snapshots.

With Proxmox, the only time I do VMs is if it's not Linux; otherwise it's either a docker container (preferred) or an LXC (if something forces it to be).

Thanks. I'm thinking of running Ubuntu as the host with ZoL and KVM. Then setting up a VM to host the containers.

What I was concerned about was having the extra layer of virtual networking in there, and figuring out how to share data between multiple services. But that's just something that comes with the territory when the point is isolation. Also, half of this project is to get familiar with containerization.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

The NPC posted:

Thanks. I'm thinking of running Ubuntu as the host with ZoL and KVM. Then setting up a VM to host the containers.

What I was concerned about was having the extra layer of virtual networking in there, and figuring out how to share data between multiple services. But that's just something that comes with the territory when the point is isolation. Also, half of this project is to get familiar with containerization.

Why not run docker directly on Ubuntu?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

The NPC posted:

For people running other services on their file servers, do you segregate them at all? What about containers? Do you run those on the host or make a VM to be the docker host?

My home server has been running services installed right on the host OS for probably over a decade now. I'm currently in the process of moving everything into containers orchestrated by docker-compose.

The impetus behind this is that I really just need to wipe my OS and reinstall fresh, so by moving everything into containers I'll have a much easier time getting back to a working state now and in the future.

It's actually very nice to have a complete record of how everything on the server is installed and linked together, all in one file (i.e., docker-compose.yml). A problem any programmer or sysadmin knows well is keeping documentation in sync with the actual state of the system. Docker Compose almost makes the documentation and the system the same thing, all in one place.

movax
Aug 30, 2008

DrDork posted:

According to the ZFS/FreeNAS greybeards, it was Never A Good Idea. The reality is that, as long as you can pass the ENTIRE storage controller through to the FreeNAS VM, it mostly Just Works and acts the way you'd expect it to. You won't get a lot of sympathy on the forums if you go crying to them with issues, but you're unlikely to run into trouble in the first place if you're just using it as a purpose-built file-server, and not dicking around with some of the more advanced stuff that breaks all the damned time (like jails). So pretty much fine for home use with the eternal understanding that RAID(z) Is Not A Backup, but highly not recommended for production/enterprise.

Ok, that's kind of what I thought; my philosophy/idea was basically giving FreeNAS everything it needs bare metal (HBA + a NIC just for funsies), and then keeping stuff like sabnzbd, Plex, etc. on a separate VM (Fedora) that maps to FreeNAS over vmnet or iSCSI.

Too bad my hardware has been sitting mostly unused since 2017; it's probably not super cost-effective now (especially for disk density) but should still last a good while (Skylake Xeon, 64GB DDR4, and 8x 8TB Reds).

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Thermopyle posted:

My home server has been running services installed right on the host OS for probably over a decade now. I'm currently in the process of moving everything into containers orchestrated by docker-compose.

The impetus behind this is that I really just need to wipe my OS and reinstall fresh, so by moving everything into containers I'll have a much easier time getting back to a working state now and in the future.

It's actually very nice to have a complete record of how everything on the server is installed and linked together, all in one file (i.e., docker-compose.yml). A problem any programmer or sysadmin knows well is keeping documentation in sync with the actual state of the system. Docker Compose almost makes the documentation and the system the same thing, all in one place.

You can take this another step further by doing all OS config in Ansible. You can even test beforehand in a Vagrant instance or VM.

Hughlander
May 11, 2005

Matt Zerella posted:

You can take this another step further by doing all OS config in Ansible. You can even test beforehand in a Vagrant instance or VM.

For me as a hobbyist that was really hard to maintain. I tried doing that a while back with Chef, but the discipline of hacking something together to see how the parts fit, then going back and reverse engineering it into a recipe, was too much. It was about 80% automated and then 20% black magic. For some reason I don't feel the same with Dockerfiles, maybe because I can just run history and paste it into the Dockerfile with a RUN in front of it.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Hughlander posted:

For me as a hobbyist that was really hard to maintain. I tried doing that a while back with Chef, but the discipline of hacking something together to see how the parts fit, then going back and reverse engineering it into a recipe, was too much. It was about 80% automated and then 20% black magic. For some reason I don't feel the same with Dockerfiles, maybe because I can just run history and paste it into the Dockerfile with a RUN in front of it.

Tbf, Chef can get massively more complicated than a simple Ansible routine. Ansible is agentless, and all you need is an SSH login with sudo. If you stick to "all OS config goes in Ansible" then you have your config as code. Back up your docker data dir, media, and docker files, and all of a sudden you can be back up and running relatively quickly, depending on restore time.

Hughlander
May 11, 2005

Matt Zerella posted:

Tbf, Chef can get massively more complicated than a simple Ansible routine. Ansible is agentless, and all you need is an SSH login with sudo. If you stick to "all OS config goes in Ansible" then you have your config as code. Back up your docker data dir, media, and docker files, and all of a sudden you can be back up and running relatively quickly, depending on restore time.

Maybe I'll give it a try. I've used Chef and Salt professionally but never Ansible, and I'm only configuring two machines at the moment, so...

That said, I did have to rebuild my main Proxmox server a few months ago and it was mostly install duplicY and restore, etc.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Hughlander posted:

Maybe I'll give it a try. I've used Chef and Salt professionally but never Ansible, and I'm only configuring two machines at the moment, so...

That said, I did have to rebuild my main Proxmox server a few months ago and it was mostly install duplicY and restore, etc.

Do whatever makes sense to you. I'm in no way saying you should be running a devops-style home NAS for Linux ISO's and Plex. But it is nice to be able to automate setup and keep track of your OS changes in a sane way.

Hughlander
May 11, 2005

Matt Zerella posted:

Do whatever makes sense to you. I'm in no way saying you should be running a devops-style home NAS for Linux ISO's and Plex. But it is nice to be able to automate setup and keep track of your OS changes in a sane way.

Yar, I know. Right now it's an Evernote doc with bash history and links to the blogs where I got the data from originally.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Crunchy Black posted:

I've never had a resilver take more than 12 hours but I also run pretty beefy hardware.
My RAIDZ2 8-disk vdev of 8TB Easystores took about 7 days straight at roughly 140 MBps. I suspect the number of allocated blocks is the primary factor in rebuild times.

movax
Aug 30, 2008

Hughlander posted:

Yar, I know. Right now it's an Evernote doc with bash history and links to the blogs where I got the data from originally.

I copy/paste the blog text and take screenshots ever since losing some ZFS/OpenSolaris-specific tuning knowledge to the Internet gods when blogs went down. Mailing lists are usually safe, but some folks' personal blogs understandably go poof eventually.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Speaking of deployments, has anyone ever managed to get a FOG server running on unraid? I had a look for a docker image and there was a janky option, but it didn't work for me. I suppose just using a VM would be fine.

Are there better options than FOG as well? We deploy dev systems a lot at work and any smooth way to do it would be handy.

FOG is a Ghost-type thing that works over network boot.

The NPC
Nov 21, 2010


Matt Zerella posted:

Why not run docker directly on Ubuntu?

Mostly because I'm more familiar with VMs, and would appreciate the ability to snapshot/roll back as I'm working on figuring this stuff out. If I'm looking at clustering anything too, they would end up in VMs at that point as I only have the one server. Once I get everything to a steady state or set up a deployment pipeline, then it wouldn't really matter would it?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

The NPC posted:

Mostly because I'm more familiar with VMs, and would appreciate the ability to snapshot/roll back as I'm working on figuring this stuff out. If I'm looking at clustering anything too, they would end up in VMs at that point as I only have the one server. Once I get everything to a steady state or set up a deployment pipeline, then it wouldn't really matter would it?

That's a fine plan, but since part of the whole point of docker is isolation from the host system and easy, ephemeral containers, there's no reason you can't just create and destroy containers at will while figuring things out, with no fear of messing up your host system. The VM won't gain you anything in this particular area.

KS
Jun 10, 2003
Outrageous Lumpwad

movax posted:

Interesting, I have been very behind on keeping up with this thread and didn’t realize that was a thing. I have an ESXi box that I’ve been meaning to put FreeNAS and other OSes on (probably a Fedora VM to run Usenet / other SW / things that want Linux, not BSD) and was going to use HW pass through of my HBAs. No longer a good idea, wasn’t ever a good idea?

Chiming in that I've been passing a 9211-8i through to a FreeNAS VM for 6 years without a single issue. OK, one--long ago I couldn't give the VM more than a single CPU without BSD kernel panics, but that was fixed years ago in a FreeNAS upgrade. I have a half dozen Linux VMs mapping the ZFS pool via NFS and doing Plex/backups/Usenet/etc. It's been outstanding.



The guts were salvaged out of a DL380 and are in a silent case with Noctua coolers. I need to upgrade, as my CPU isn't supported by new versions of ESXi anymore, but :effort: on researching a cost-effective replacement.

KKKLIP ART
Sep 3, 2004

So a while back I went buck wild and now have 8x3TB WD Reds sitting in a FreeNAS box, with 2 of the drives as parity drives. All the drives were purchased in January 2016 and the build was put together just after that, so it's about 4 years old in total. One of the drives is starting to throw a lot of bad sector errors, and no guide I can find that is supposed to zero out the sectors is working, so I'm thinking I eventually (sooner rather than later) need to think about replacing my pool. I currently use 4 of the 13.5TB usable, and think that my 8x3TB setup is crazy overkill for what I use it for (local backup of pictures, Time Machine, movies, and music). What I would eventually like to do is move to an SSD-based solution for noise reasons, and eventually pick up a 10gbit card next year. Are there any decently priced SSDs that I can toss in there and still end up with 8-10 or so TB of space, so I have room to keep backing up my laptops and images? Movies and music are less important in the Netflix/Disney+/Apple Music era.

Maybe a few of the bigger shuckable drives with an SSD cache might work too, but considering this sits in my living room, I kind of just want silent-ish.

KKKLIP ART fucked around with this message at 15:26 on Dec 31, 2019

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Going full SSD is an option if you hate money. 2TB SSDs can be had for about $200, and since your current setup is indeed complete overkill, maybe you'd want 4 of them in Raidz1, giving you 6TB usable. To me that seems reasonable given you've gone 4 years and only filled 4TB. Total cost is about $800.

The other option is, yes, a few HDDs with a fat SSD cache, maybe a 512GB or 1TB. 3x6TB drives would get you 12TB usable for $300 (less on sale) and toss in another $100 for whatever SSD cache you want.

Also consider using sound deadening materials to line the case the NAS is in. A good case with quiet fans and some of that can really cut down the noise.
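
A quick back-of-the-envelope sketch of that math in Python, assuming usable space is simply (drives - parity) x drive size (ignoring ZFS overhead) and using the approximate prices above:

```python
# Sketch of the two options above: all-SSD RAIDZ1 vs. HDDs plus an SSD cache.
# Prices are the rough figures mentioned in the post, not current quotes.

def raidz_usable_tb(drives: int, drive_tb: float, parity: int = 1) -> float:
    """Approximate usable capacity: (N - parity) * drive size, ignoring ZFS overhead."""
    return (drives - parity) * drive_tb

# Option 1: 4x 2TB SSDs in RAIDZ1 at ~$200 per drive
ssd_usable = raidz_usable_tb(4, 2)   # 6 TB usable
ssd_cost = 4 * 200                   # ~$800

# Option 2: 3x 6TB HDDs in RAIDZ1 at ~$100 per drive, plus ~$100 for an SSD cache
hdd_usable = raidz_usable_tb(3, 6)   # 12 TB usable
hdd_cost = 3 * 100 + 100             # ~$400

print(f"All-SSD:     {ssd_usable:.0f} TB usable for ~${ssd_cost}")
print(f"HDD + cache: {hdd_usable:.0f} TB usable for ~${hdd_cost}")
```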

redeyes
Sep 14, 2002

by Fluffdaddy
Decided to look at my 8x4TB all-Hitachi NAS rig. The oldest drive is five and a half years old. Flawless run: no errors, no bad sectors. Even the boot Samsung 840 Pro is at 89TB written and 92% wear leveling. It's also used as an SSD write cache for the pool.
I miss Hitachi already.

KKKLIP ART
Sep 3, 2004

DrDork posted:

Going full SSD is an option if you hate money. 2TB SSDs can be had for about $200, and since your current setup is indeed complete overkill, maybe you'd want 4 of them in Raidz1, giving you 6TB usable. To me that seems reasonable given you've gone 4 years and only filled 4TB. Total cost is about $800.

The other option is, yes, a few HDDs with a fat SSD cache, maybe a 512GB or 1TB. 3x6TB drives would get you 12TB usable for $300 (less on sale) and toss in another $100 for whatever SSD cache you want.

Also consider using sound deadening materials to line the case the NAS is in. A good case with quiet fans and some of that can really cut down the noise.

I think I'm going to go with the shucking option plus an SSD cache. I'm guessing I just look for the relevant WD drives to see if they're what I need, and then any decent-brand SSD? Might be worth also moving FreeNAS to a small SSD install instead of running off a USB stick.

Crunchy Black
Oct 24, 2017

by Athanatos
I never said he couldn't do it, just that it wasn't officially supported!

movax posted:

Interesting, I have been very behind on keeping up with this thread and didn’t realize that was a thing. I have an ESXi box that I’ve been meaning to put FreeNAS and other OSes on (probably a Fedora VM to run Usenet / other SW / things that want Linux, not BSD) and was going to use HW pass through of my HBAs. No longer a good idea, wasn’t ever a good idea?

It's only a feasible idea if you know what you're doing, which you seem to; as DrDork said, as long as you're doing as low-level a pass-through as you can, like handing the entire controller over in PCIe space.

necrobobsledder posted:

My RAIDZ2 8-disk vdev of 8TB Easystores took about 7 days straight at roughly 140 MBps. I suspect the number of allocated blocks is the primary factor in rebuild times.

:stonk:

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire
I'm about to upgrade my 5x 2TB drives with 5x 12TB drives.

Should I just replace them all and remake the pool from scratch, or let each one resliver in sequence?

I already copied the data off / backed up so reslivering should be quick.

BTW Is that the term? Resliver? I have no idea why that term popped up in my head, as isn't that some monster race from Magic the Gathering from my middle school days? I know "convergence" is more of a dynamic routing protocol thing so I don't think it is that.

KKKLIP ART
Sep 3, 2004

Are WD Reds still the go-to NAS drive, or is there another brand worth mentioning?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

KKKLIP ART posted:

Are WD Reds still the go-to NAS drive, or is there another brand worth mentioning?

You buy WD My Book and EasyStore external 8TB, 10TB, 12TB, or 14TB drives and shuck the drives out of them. Best Buy is perennially running good sales on them.

https://reddit.com/r/DataHoarder/comments/ed6grj/wd_easystore_8_12_and_14tb_for_130_180_and_200/

They are either Reds or white-label HGST/Hitachi helium drives.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

jeeves posted:

I'm about to upgrade my 5x 2TB drives with 5x 12TB drives.

Should I just replace them all and remake the pool from scratch, or let each one resliver in sequence?

I already copied the data off / backed up so reslivering should be quick.

BTW Is that the term? Resliver? I have no idea why that term popped up in my head, as isn't that some monster race from Magic the Gathering from my middle school days? I know "convergence" is more of a dynamic routing protocol thing so I don't think it is that.

I mean, you could resilver them all, but why? Unless you have things pegged to the actual pool ID, you're not gaining anything by resilvering them instead of just making the new pool and pushing the data from your backups.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

DrDork posted:

I mean, you could resilver them all, but why? Unless you have things pegged to the actual pool ID, you're not gaining anything by resilvering them instead of just making the new pool and pushing the data from your backups.

That is what I did with my 8 TBs. Worked fine.

KKKLIP ART
Sep 3, 2004

Paul MaudDib posted:

You buy WD My Book and EasyStore external 8TB, 10TB, 12TB, or 14TB drives and shuck the drives out of them. Best Buy is perennially running good sales on them.

https://reddit.com/r/DataHoarder/comments/ed6grj/wd_easystore_8_12_and_14tb_for_130_180_and_200/

They are either Reds or white-label HGST/Hitachi helium drives.

Thanks. Seems like 3x8TB might be in budget and give me what I need space-wise, with some money left over for an SSD for cache purposes. Thanks!

eames
May 9, 2009

Every few months I open this thread and leave surprised that so many people are still running some form of parity RAID with these relatively gigantic drives, even though the nominal unrecoverable error rates are staying constant at best.

Does the risk acceptance of suffering a complete data loss go up over time, or do manufacturers silently improve UREs without updating the specs?

BlankSystemDaemon
Mar 13, 2009



If you look at the MTBDL calculator I linked on the last page, it's not as bad as all that with RAIDz2 and RAIDz3.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

eames posted:

Every few months I open this thread and leave surprised that so many people are still running some form of parity RAID with these relatively gigantic drives, even though the nominal unrecoverable error rates are staying constant at best.

Does the risk acceptance of suffering a complete data loss go up over time, or do manufacturers silently improve UREs without updating the specs?

The alternative to parity raid being replication?

If you get a read error while rebuilding on zfs, won't you only lose the block that can't be read?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Twerk from Home posted:

The alternative to parity raid being replication?

If you get a read error while rebuilding on zfs, won't you only lose the block that can't be read?

Correct. This is "The consumer NAS/storage megathread", and presumably everyone running large ZFS arrays is willing to risk a single bad block per drive in exchange for the increased storage density and price efficiency such arrays bring. Hell, given that most of what people here are storing is video, a bad block or two might not even be noticeable during playback.

Basically no one is running traditional RAID and risking their entire array to a single error anymore.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Is any consumer/hobbyist here using these huge array sizes storing anything other than Linux ISO's? I can't really think of any other purpose...

Parity just makes it less likely I'll have to re-download one file.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
If published 10^14 URE rates are per bit, then the odds are against you being able to write 12.5TB of data to a 14TB or 16TB drive and then read it back without an error?

If they're per sector, then the spec would work out to 1 read error per 51.2 PB read (512 bytes/sector * 10^14 sectors), assuming you're reading back whole sectors at a time, which changes the math considerably.
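
A minimal sketch of both readings in Python, assuming the published figure means one unrecoverable error per 10^14 units read on average:

```python
# Two readings of a "1 per 10^14" URE spec: per bit vs. per 512-byte sector.
URE_RATE = 1e-14

# Per bit: expected data between errors, and the chance of a clean 12.5 TB read.
tb_per_error = (1 / URE_RATE) / 8 / 1e12   # 12.5 TB between errors
data_bits = 12.5e12 * 8                    # reading 12.5 TB back
p_clean = (1 - URE_RATE) ** data_bits      # ~exp(-1), roughly 0.37

# Per 512-byte sector: expected data between errors.
pb_per_error = (512 * 1e14) / 1e15         # 51.2 PB between errors

print(f"Per bit: one error per {tb_per_error:.1f} TB, "
      f"P(clean 12.5 TB read) ~= {p_clean:.2f}")
print(f"Per 512-byte sector: one error per {pb_per_error:.1f} PB")
```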

eames
May 9, 2009

DrDork posted:

Basically no one is running traditional RAID and risking their entire array to a single error anymore.

Neat, that answers my question then. I wasn't aware that ZFS is the default standard for raid arrays now.

Twerk from Home posted:

If published 10^14 URE rates are per bit, then the odds are against you being able to write 12.5TB of data to a 14TB or 16TB drive and then read it back without an error?

They're indeed bits, and my understanding is that the 12.5TB figure is correct.
If the manufacturer specs are accurate, you're unlikely to successfully rebuild a regular parity RAID array with the drive sizes discussed in this thread.

On the other hand, that would also mean that ZFS scrubs should regularly return errors. Since that's not the case, I assume the actual unrecoverable error rates are much lower than advertised.

BlankSystemDaemon
Mar 13, 2009



I think the only reason the word 'redundant' is used in the acronym is that the alternative back then was Single Large Expensive Disks - but it firmly belongs in the availability section of the RAS spectrum.
All of it, apparently, also has Bill Joy of BSD fame to blame: at a presentation at ISSCC a few years prior, he had predicted that the problem would only continue to get bigger.

They also end the paper with a whole bunch of open questions; some of them have since been answered, but I think a few are still open.
My favorite one is "Will disk controller design limit RAID performance?", to which the answer is 100% yes, given how fast software RAID is compared to the paltry rebuild performance of the ~500MHz PPC processors used on RAID cards (10MBps for hardware RAID versus 200+MBps on spinning rust using sequential I/O and ZFS dRAID).
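
For a sense of scale, a quick sketch of what those two rebuild rates mean in wall-clock time, assuming an 8TB drive (a size that keeps coming up in this thread) has to be streamed end to end:

```python
# Rebuild time at the two throughput figures mentioned above:
# ~10 MB/s for an old hardware RAID controller vs. ~200 MB/s sequential under ZFS.

def rebuild_hours(drive_tb: float, mb_per_s: float) -> float:
    """Hours to stream one drive's worth of data at a constant rate."""
    return drive_tb * 1e12 / (mb_per_s * 1e6) / 3600

for rate in (10, 200):
    print(f"8 TB at {rate:>3} MB/s: ~{rebuild_hours(8, rate):.0f} hours")
# 8 TB at  10 MB/s: ~222 hours (over nine days)
# 8 TB at 200 MB/s: ~11 hours
```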

EDIT: Naturally, none of this covers plain disk mirroring, which predates RAID by many years and was first used outside of mainframes by Tandem, who more people should know about, because they were one of the first non-IBM companies to take availability seriously.
For comparison, in 1970 Unix was still basically just a filesystem and couldn't stay up for 24 hours, and even when Tandem's NonStop series launched in 1976, Unix System 6, which incorporated BSD bits, still couldn't stay up for a week.

BlankSystemDaemon fucked around with this message at 18:18 on Jan 1, 2020


Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

eames posted:

On the other hand, that would also mean that ZFS scrubs should regularly return errors. Since that's not the case, I assume the actual unrecoverable error rates are much lower than advertised.

Yeah, this has always puzzled me.

I've got one raidz2 of 20+ TB that has been scrubbed around 200 times and never gotten an error.
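
For what it's worth, a quick sketch of what the nominal spec would predict for that pool, assuming the 10^14 figure is per bit and each scrub reads roughly the full 20 TB:

```python
# Expected unrecoverable read errors over 200 scrubs of a ~20 TB pool,
# if the nominal 1-per-1e14-bits URE spec matched real-world behavior.
URE_PER_BIT = 1e-14
bits_per_scrub = 20e12 * 8   # ~20 TB read per scrub
scrubs = 200

expected_errors = URE_PER_BIT * bits_per_scrub * scrubs
print(f"Expected errors over {scrubs} scrubs: ~{expected_errors:.0f}")  # ~320

# Seeing zero across 200 scrubs would be wildly unlikely at the nominal rate,
# which supports the suspicion that real-world URE rates are far lower than advertised.
```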

  • Reply