jammyozzy
Dec 7, 2006

Is that a challenge?

Eletriarnation posted:

You could also just get an i3 and a normal board and have as many ports as you want - it's not that much more expensive, or that much more idle power.

This is where I'm at: for just a little more than the price of an N100 + an HBA, I can step up to an i3 and not have to worry about an HBA causing weird power-state issues. Plus I get a much more capable & expandable box, and I can still throw expansion cards in if it suits me in the future.


BlankSystemDaemon
Mar 13, 2009



A Bag of Milk posted:

For your use-case I'd definitely recommend at least looking into TrueNAS Scale. It's free, it's arguably the most mature platform for RAIDZ, has a Plex app built in, and has easy backups to Backblaze with server-side encryption built right into the UI.

And I gotta give a thumbs down to QNAP in general. Their multiple recent security fuckups have been outrageous. All is forgiven and they're trustworthy again? I'm not sure they have proven that...
It really isn't the most mature, considering both Illumos and FreeBSD have existed for longer.
Also, since it's Linux, you can't use all your memory for ARC, for reasons described up-thread.

As for security fuckups: well, everybody gets them sooner or later, it's just a question of attack surface. Don't expose things to the WAN that are not designed to be resistant to attack (and remember that things like Plex et cetera will often tunnel into your network from their cloud services, so once Plex gets attacked, your network is open to the attacker).
Better yet, only expose ssh and IPsec/Wireguard, and make sure to always keep those two up-to-date at all costs, with connection-frequency limiting and passphrase-protected keyfiles/PKI certificates as applicable.
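For reference, a minimal sketch of that kind of connection-frequency limiting with pf on FreeBSD (interface name, port, and thresholds are placeholders you'd adjust for your own setup):
code:
# /etc/pf.conf fragment - assumes em0 is the WAN interface
ext_if = "em0"
table <bruteforce> persist
block in quick from <bruteforce>
# allow ssh, but ban any source that opens more than 3 connections in 60 seconds
pass in on $ext_if proto tcp to port 22 keep state \
    (max-src-conn-rate 3/60, overload <bruteforce> flush global)
# WireGuard on its default UDP port
pass in on $ext_if proto udp to port 51820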

BlankSystemDaemon fucked around with this message at 16:00 on Jan 5, 2024

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Eletriarnation posted:

The ASRock N100DC-ITX has a PCIe 3.0 x2 slot - not a lot of bandwidth, but enough for a few hard drives especially if your HBA is also 3.0.

You could also just get an i3 and a normal board and have as many ports as you want - it's not that much more expensive, or that much more idle power.

Clearly this is what Intel wants, and the sane answer with the current availability and pricing of hardware. It's worth noting that the N100 and friends are astoundingly efficient under load as well, not just at idle, but that's not something that matters in a home environment for a personal server, especially with spinning disks. In a perfect world where I had unlimited resources I'd build a cheap commodity ARM-on-Linux NAS with a bunch of SATA, or an N100 or similar, but I'm not a megacorp that can get the config I want designed, so we have to work with off-the-shelf stuff. I guess one could find something to do with all the extra CPU power that an i3 offers, or just lower its power limit to 35W (or lower, if the platform will let you?). Hell, I wish I could buy the -T parts as a consumer, but I can't.

Passmark isn't perfect, but at the N100's 6-watt TDP it benchmarks at more than 1/3 the total performance of an i3-13100 at 60W: https://www.cpubenchmark.net/compare/5157vs5295vs5170vs5154/Intel-N100-vs-Intel-i3-13100T-vs-Intel-i3-13100-vs-Intel-i9-13900T
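If you do want to try capping an i3 without BIOS support, the Linux intel_rapl powercap interface can do it at runtime; a rough sketch (the sysfs paths are the standard powercap ones, the 35 W value is just an example, and BIOS PL1/PL2 settings are the more usual route):
code:
# cap the long-term package power limit at 35 W (value is in microwatts)
RAPL=/sys/class/powercap/intel-rapl:0
cat "$RAPL/name"                                      # should read "package-0"
echo 35000000 > "$RAPL/constraint_0_power_limit_uw"   # constraint_0 is the long-term (PL1-style) limit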





movax posted:

There was a dude recently on HN who posted his experience with various cards / chipsets + ASPM / ALPM / etc settings on getting to a <10 W CPU setup; I forget if I read that in this thread or happened to stumble onto it.

E: here it was: https://mattgadient.com/7-watts-idle-on-intel-12th-13th-gen-the-foundation-for-building-a-low-power-server-nas/

Oh no. They're using SMR disks in a RAID-Z2 array? Have they posted anywhere about rebuilds? SMR disks perform horribly for big sustained writes, which I understand makes a rebuild take an incredibly long time and possibly fail completely. They do say they updated the OS to allow several minutes for a disk to attempt a read, which has got to be a pretty bad experience overall if you ask a disk to read a sector, and it gets back to you 2 minutes later.

quote:

SMR (Shingled Magnetic recording). Reads are fine, but write performance absolutely plummets when random writes take place – it acts like a QLC SSD without an SLC cache that also doesn’t have TRIM.
Low rated workload (55TB/year vs 550TB/year for 3.5″ Exos drives).
No configurable error recovery time (SCT ERC), and these drives can hang for minutes if they hit an error while they relentlessly try to re-read the problematic sector. Ubuntu needs to be configured to wait instead of trying to reset the drive after 30 seconds.
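For anyone wanting to replicate that on their own drives, this is roughly what the knobs look like on Linux (sdX is a placeholder, and the first two commands only work on drives that actually support SCT ERC):
code:
smartctl -l scterc /dev/sdX              # show the drive's current error-recovery timeout
smartctl -l scterc,70,70 /dev/sdX        # set 7.0s read/write recovery (value is in tenths of a second)
# for drives without SCT ERC (like these SMR ones), raise the kernel's command timeout instead
echo 180 > /sys/block/sdX/device/timeout # seconds before the kernel gives up and resets the drive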

wibble
May 20, 2001
Meep meep
I backed this on kickstarter:
https://www.kickstarter.com/projects/icewhaletech/zimacube-personal-cloud-re-invented?ref=user_menu
It's a six-disk + SSD NAS; I'm not interested in their OS so I'll just stick TrueNAS on it. But a full system for $400 that you can just put disks and your own OS in and boot up is something the market needs.

I hope I get it.

evil_bunnY
Apr 2, 2003

Thanks Ants posted:

My Qnap complaint is that their product managers seem to hate their customers and they have about 60 active models at any one time, each with a unique set of compromises.
LOL yeah. They're cheaper than Syno but not by much, and there's always something wacky with each model I look at.

Fozzy The Bear
Dec 11, 1999

Nothing much, watching the game, drinking a bud
I probably posted this in the wrong thread. I have three hard drives now, and I can easily see going up to 8 in the future. What do you think:

Fozzy The Bear posted:

Trying to build a movie/PLEX storage server to replace my unreliable 4 disk USB DAS.

I've googled and it looks like I can do this:

I have an older Acer Predator G3-710, looks like a standard ATX motherboard.
i7-6700 CPU
32 GB
GTX 1070

Take it out and move the motherboard and components into an ATX server case.
Buy https://www.newegg.com/p/14G-009S-00030 10 Port SATA III to PCIe 3.0 x1 NON-RAID Expansion Card
or would I need the more expensive https://www.newegg.com/p/14G-000G-00087? Just one or two 4k movie streams at a time.

Take hard drives out of the DAS and install into new server case. Install TrueNAS... profit?

Case like this
https://www.newegg.com/rosewill-rsv-r4100u-black/p/11-147-332?Item=11-147-332

IOwnCalculus
Apr 2, 2003





I would do neither of those SATA controllers and get an LSI HBA instead. Something like this SAS2308 HBA will be much more reliable and widely compatible than either option from Newegg. You will also need some fanout SAS-to-SATA cables but that should be considered a bonus, not a negative. Cable management of individual SATA cables sucks.

Fozzy The Bear
Dec 11, 1999

Nothing much, watching the game, drinking a bud

IOwnCalculus posted:

I would do neither of those SATA controllers and get an LSI HBA instead. Something like this SAS2308 HBA will be much more reliable and widely compatible than either option from Newegg. You will also need some fanout SAS-to-SATA cables but that should be considered a bonus, not a negative. Cable management of individual SATA cables sucks.

I don't know what this "cable management" thing you speak of is :lol:

Had to google most of these terms, but this looks great, thanks.

Kibner
Oct 21, 2008

Acguy Supremacy

Fozzy The Bear posted:

I don't know what this "cable management" thing you speak of is :lol:

Had to google most of these terms, but this looks great, thanks.

The big thing to understand is that the LSI line of HBAs has native support in the Linux kernel (iirc). Any other similar piece of hardware is a crapshoot on whether it has native support or not. Going with LSI will just save you a bunch of headaches.
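If you want to sanity-check that on your own box, something like this works (assuming Linux and a SAS2308-family card, which recent kernels handle with the in-tree mpt3sas driver):
code:
lspci | grep -i -e lsi -e broadcom -e sas2308   # the HBA should show up here
modinfo mpt3sas | head                          # confirms the driver ships with your kernel
dmesg | grep -i mpt3sas                         # should list the controller and attached drives at boot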

Kibner
Oct 21, 2008

Acguy Supremacy
Also, if you currently have an odd number of drives, if they are mismatched in size, or if you might expand later but aren't going to right now, you may be better off with Unraid instead of TrueNAS. TrueNAS only really kinda supports expansion if you are doing mirrored drives, and even then you are limited to expanding in pairs (or triplets, or however you handle the mirroring).

Yaoi Gagarin
Feb 20, 2014

raidz expansion is in upstream zfs so truenas will eventually get that too

IOwnCalculus
Apr 2, 2003





Fozzy The Bear posted:

I don't know what this "cable management" thing you speak of is :lol:

Believe me, I've been there, done that; I think I had a similar total drive count back when I was still doing individual SATA connections to each drive. Just physically plugging them in becomes problematic at that kind of density at the controller. The odds of having a flaky SATA cable, or just a poorly-seated connection, go up dramatically when you're trying to work in that small of a space.

Fozzy The Bear
Dec 11, 1999

Nothing much, watching the game, drinking a bud

Kibner posted:

The big thing to understand is that the LSI line of HBAs has native support in the Linux kernel (iirc). Any other similar piece of hardware is a crapshoot on whether it has native support or not. Going with LSI will just save you a bunch of headaches.

I'm using TrueNAS on FreeBSD, so it should be in the kernel there too.

BlankSystemDaemon
Mar 13, 2009



Kibner posted:

Also, if you currently have an odd number of drives, if they are mismatched in size, or if you might expand later but aren't going to right now, you may be better off with Unraid instead of TrueNAS. TrueNAS only really kinda supports expansion if you are doing mirrored drives, and even then you are limited to expanding in pairs (or triplets, or however you handle the mirroring).
This is still as untrue as it has been ever since autoexpand was added in 2009.
With that zpool property enabled (it needs to be enabled before you start attempting to use it), you can just replace the smallest drive(s) in the pool one at a time, and your pool will expand automatically.

If you also use the autoreplace property on FreeBSD or Illumos (which both understand persistent device paths from SCSI/SAS Enclosure Services, also exposed via sesutil), ZFS will even do all of the administrative work for you, so you only have to physically remove the right drive and insert a new one.
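For the curious, the whole thing is just a couple of zpool commands (pool and device names are placeholders):
code:
zpool set autoexpand=on tank
zpool set autoreplace=on tank        # only really useful with persistent device paths / enclosure services
zpool get autoexpand,autoreplace tank
# without autoreplace, growing the pool is still just repeated rounds of:
zpool replace tank old-small-disk new-bigger-disk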

Kibner
Oct 21, 2008

Acguy Supremacy
!

I appreciate that correction.

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done



That's a different kind of expansion, though. Autoexpand refers to replacing the devices and then growing the vdev to fill unused space. RAIDZ expansion refers to adding a brand new device to an existing vdev and having ZFS reshuffle all your parity/etc. to make use of it. It may technically be merged upstream as Vostok noted, but it's not fully vetted yet. That's where Unraid still has a leg up.
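For reference, the upstream feature works by attaching a new disk to an existing raidz vdev; roughly what it's expected to look like once it lands in a release (pool, vdev, and device names are placeholders, and this was merged upstream but unreleased at the time of this thread):
code:
zpool attach tank raidz2-0 /dev/sdX   # grow the existing raidz2 vdev by one disk
zpool status tank                     # shows the expansion progress while data is reflowed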

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
Raidz expansion is my most-hoped-for feature, mostly because I was somewhat dumb when I built my first pool, and did a raidz1 with 4 disks, then added a second raidz1 of 4 more disks to the pool later. I rather wish I'd done one larger raidz2/3 pool instead, but to redo at this point would entail me buying 8 12-16TB disks and building basically a second NAS in order to transfer all the data to the new pool.

Eventually this old i3-4130 machine will be put out to pasture, and I'll build something more modern and redo my pool structure at that point.

Beve Stuscemi
Jun 6, 2001




Is there an easy way to DIY a DAS? I have a little stack of 4TB drives hanging around and it would be kinda nice to be able to raid them together and hook them up over USB.

Basically, can you home build something like a Lacie?

Computer viking
May 30, 2011
Now with less breakage.

Beve Stuscemi posted:

Is there an easy way to DIY a DAS? I have a little stack of 4TB drives hanging around and it would be kinda nice to be able to raid them together and hook them up over USB.

Basically, can you home build something like a Lacie?

Huh, good question. The least plug-and-play solution would be to export them with iSCSI and hook it up with a USB NIC, but bleh.

All the parts for what you want to do are actually available, with various degrees of polish.
- A USB port that can be put in device mode. With USB-C I think that's more common?
- The Linux Mass Storage Gadget kernel module does the main work: given a list of devices (or backing files), it exports each as a mass storage device on any and all device-mode ports. I think.
- Some way to bolt those disks together to a single block device, like mdraid or ZFS

The smoothest solution seems to be something like the Kobol Helios64 (which I had never heard of five minutes ago) - take a look at the "USB under Linux" section of their documentation.

e: Ha, they shut down in 2021. At least they kept the documentation up, and it shows that it's possible and not even that hard?
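For the curious, the configfs side of the mass storage gadget looks roughly like this (the vendor/product IDs are the stock Linux Foundation composite-gadget ones, /dev/md0 is a placeholder backing device, and it only works on a controller that can run in device/peripheral mode):
code:
modprobe libcomposite
cd /sys/kernel/config/usb_gadget && mkdir -p diydas && cd diydas
echo 0x1d6b > idVendor && echo 0x0104 > idProduct
mkdir -p strings/0x409 configs/c.1 functions/mass_storage.0
echo "DIY DAS" > strings/0x409/product
echo /dev/md0 > functions/mass_storage.0/lun.0/file   # the mdraid/ZFS zvol block device to export
ln -s functions/mass_storage.0 configs/c.1/
ls /sys/class/udc | head -n1 > UDC                    # bind to the first device-mode controller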

Computer viking fucked around with this message at 01:50 on Jan 10, 2024

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Isn't it going to be pretty hard to beat just buying a solution? A RAID box is $160, and if you don't need the hardware RAID you can get a box for 4 hard disks and do it in software for $100 flat.

https://www.amazon.com/Mediasonic-HFR2-SU3S2-PRORAID-Drive-Enclosure/dp/B003YFHEAC?ref_=ast_sto_dp

It looks like you might pay more per disk for an 8 bay one, though.

Beve Stuscemi
Jun 6, 2001




That Mass Storage Gadget looks promising. I think by connecting it with a PCIe->USB-C card you'd be pretty good to go on most PCs.

There is of course the problem of running it on a real PC, shutting it down cleanly, etc. The Amazon $100 disk box is probably the way to go, even though building one would be more fun.

Computer viking
May 30, 2011
Now with less breakage.

The big problem seems to be that all desktop and laptop USB-C controllers appear to only do host mode (except specifically for power delivery to laptops); having a controller that can be switched over to device mode seems to be only halfway common on ARM boards. I don't really get how this works with USB-C, since there are vague hints of "this is more of a software thing with USB-C". The only thing I can say is that, playing with the FreeBSD install on my laptop and my Windows desktop, I had zero luck getting the laptop to appear as a USB device. Though that may be me misunderstanding the FreeBSD documentation for this.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Beve Stuscemi posted:

Is there an easy way to DIY a DAS? I have a little stack of 4TB drives hanging around and it would be kinda nice to be able to raid them together and hook them up over USB.

Basically, can you home build something like a Lacie?

I'm working on this currently, also very slowly.

I have a handful of these that I snagged to play around with.

https://a.co/d/7xkunco

For the price they are fine, but the orientation of the power/data connections is backwards from what I envisioned.

Then these popped up on sale for like $40/pop, so I'm currently planning on using them.
https://www.newegg.com/rosewill-rsv-sata-cage-34-hard-disk-drive-cage/p/N82E16816132037

This will somehow be fixed to a rackmount shelf, with a little external DC PSU powering 'em. Then an LSI 9206-16e with SFF-8644-to-SATA breakout cables.

It will either be glorious, or burn down my house.

Not long after getting way too deep into this, QNAP released nearly exactly what I was looking for: the TL-D400S and TL-D800S. Off the shelf, compact, external SAS connections. But a little more than I wanted to spend per drive bay: $300 for the 4-bay, $600 for the 8-bay.

https://www.servethehome.com/qnap-tl-d800s-review-an-8-bay-sata-jbod-das/

powderific
May 13, 2004

Grimey Drawer
I'm curious what the deal is with RAID levels, large drives, and SSDs. I've been reading that with current drive sizes you can't really do RAID 5 safely and need to do RAID 6 or equivalent. Does that mean with a 4-bay DAS thing you'd be better off just doing RAID 10 or something? And for SSDs, would a RAID 5 with 4 NVMe drives in an enclosure be OK, or is that also an issue? I swear I read somewhere that RAID 4 could be better for SSDs but now I can't find it.

For context, I do freelance video production and have about 50TB of footage I need to store and back up, with ongoing needs. Right now it's local on lots of external disk pairs, with the backup copies on Dropbox. But two things have happened: the unlimited Dropbox gravy train is going away, and my current camera's bitrate is too high for a single platter drive to play back smoothly. It's around 200 MB/s, up to 400 MB/s, and rarely pushing more like 800, but I likely wouldn't be that worried about perfect playback in those rare cases.

I'd like to have something like an 8-bay Synology with, say, 16TB disks as bulk main storage, some way to back it up smoothly locally, and then some kind of fast DAS to work off of. Or if the NAS is fast enough over 10G LAN I guess I could do that? Good USB-C SSDs are fast enough for everything I do, but they aren't quite big enough for some projects, hence some kind of 4x NVMe array seems like it could be a good option on that end.

Kibner
Oct 21, 2008

Acguy Supremacy
From what I saw in a video the other week, 10G is enough for up to 4K footage. You mentioned that your needs could go up to 800 MB/s, which works out to about 6.4 Gbps. Well within 10G.
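Spelled out: 800 MB/s × 8 bits/byte = 6,400 Mbit/s ≈ 6.4 Gbit/s, so even the worst-case bitrate leaves a few gigabits of a 10G link free before protocol overhead.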

BlankSystemDaemon
Mar 13, 2009



Cenodoxus posted:

That's a different kind of expansion, though. Autoexpand refers to replacing the devices and then growing the vdev to fill unused space. RAIDZ expansion refers to adding a brand new device to an existing vdev and having ZFS reshuffle all your parity/etc. to make use of it. It may technically be merged upstream as Vostok noted, but it's not fully vetted yet. That's where Unraid still has a leg up.
I know the difference, I've used both.

I was responding to the claim that raidz expansion is the only form of expansion, and I could also have brought up that you can expand a zpool by adding more vdevs of the same configuration.
The latter is how striped mirrors with thousands of disks were used to provide high-IOPS storage before NVMe and flash existed.

Beve Stuscemi posted:

Is there an easy way to DIY a DAS? I have a little stack of 4TB drives hanging around and it would be kinda nice to be able to raid them together and hook them up over USB.

Basically, can you home build something like a Lacie?
You can get direct-attached storage USB devices - just search for "USB DAS".

powderific posted:

I'm curious what the deal is with RAID levels, large drives, and SSDs. I've been reading that with current drive sizes you can't really do RAID 5 safely and need to do RAID 6 or equivalent. Does that mean with a 4-bay DAS thing you'd be better off just doing RAID 10 or something? And for SSDs, would a RAID 5 with 4 NVMe drives in an enclosure be OK, or is that also an issue? I swear I read somewhere that RAID 4 could be better for SSDs but now I can't find it.

For context, I do freelance video production and have about 50TB of footage I need to store and back up, with ongoing needs. Right now it's local on lots of external disk pairs, with the backup copies on Dropbox. But two things have happened: the unlimited Dropbox gravy train is going away, and my current camera's bitrate is too high for a single platter drive to play back smoothly. It's around 200 MB/s, up to 400 MB/s, and rarely pushing more like 800, but I likely wouldn't be that worried about perfect playback in those rare cases.

I'd like to have something like an 8-bay Synology with, say, 16TB disks as bulk main storage, some way to back it up smoothly locally, and then some kind of fast DAS to work off of. Or if the NAS is fast enough over 10G LAN I guess I could do that? Good USB-C SSDs are fast enough for everything I do, but they aren't quite big enough for some projects, hence some kind of 4x NVMe array seems like it could be a good option on that end.
The reason that RAID5 (read: any striped data with distributed parity and one disk's worth of redundancy) isn't a good idea today is that despite spinning rust having ballooned in size for the past 20 years (with no signs of slowing down), the actual command rate (in terms of IOPS) as well as the access times haven't meaningfully changed.
Since those two things are what the bandwidth is a product of, the net effect is that even with 3 disks of 20TB in what used to be a "simple" configuration, a conservative estimate puts the Mean Time To Data Loss below the time it takes to read or write a full array's worth of data.
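To put some back-of-the-envelope numbers on that (assuming the commonly quoted 1-in-10^14 unrecoverable-read-error spec for consumer drives): rebuilding a 3x20TB RAID5 after one failure means reading both survivors in full, 2 × 20 TB × 8 ≈ 3.2×10^14 bits, so the expected number of UREs hit during the rebuild is about 3.2 - classic RAID5 should be expected to die mid-rebuild, not merely risk it. Drives rated at 1-in-10^15 improve the odds tenfold, but the trend is the same.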

Yaoi Gagarin
Feb 20, 2014

does having 2 disks of redundancy solve that or do you need 3.

IOwnCalculus
Apr 2, 2003





VostokProgram posted:

does having 2 disks of redundancy solve that or do you need 3.

Two disks helps a lot, three helps more.

It's also worth noting that when the first round of "RAID5 is dead" articles came out, pretty much every RAID solution most people could get their hands on was the type that would fail the entire array if one drive is dead and even one read error occurs from the remaining drives during the rebuild. ZFS, by comparison, can identify the data it cannot recover when that happens, but will still do its best to recover the array. You'll still lose some data, but now you only need to recover a few files from the backups that you definitely have, not the whole array.

It can't do anything if you have two complete drive failures concurrently, though... but at least in my own experience, I've had "single read error during a rebuild" happen far more often, and ZFS has saved my array every time.
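In practice the cleanup when that happens is something like this (pool name is a placeholder):
code:
zpool status -v tank   # lists the files with permanent errors at the bottom of the output
# restore those specific files from backup, then:
zpool clear tank       # reset the error counters
zpool scrub tank       # re-verify the pool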

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
How are people doing modern flash NASes? I'd assume that parity-based RAID would be a bottleneck that limits writing to the array, and honestly striping would seem to add extra complexity that you don't really need, because read/write speeds are already fast enough to saturate a 25Gbit connection.

Couldn't you just pool the disks with LVM? SSD failure is less common than spinning disk failure, and SSDs are so expensive that you probably have a backup somewhere else anyway, so why not run a flash NAS without any redundancy?
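For what it's worth, the no-redundancy LVM version really is only about this much work (device names are placeholders, and you lose the whole volume if any one SSD dies):
code:
pvcreate /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
vgcreate flash /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
lvcreate -l 100%FREE -n data flash    # linear by default; add "-i 3" to stripe across all three
mkfs.xfs /dev/flash/data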

Twerk from Home fucked around with this message at 01:27 on Jan 11, 2024

Computer viking
May 30, 2011
Now with less breakage.

Twerk from Home posted:

How are people doing modern flash NASes? I'd assume that parity-based RAID would be a bottleneck that limits writing to the array, and honestly striping would seem to add extra complexity that you don't really need, because read/write speeds are already fast enough to saturate a 25Gbit connection.

Couldn't you just pool the disks with LVM? SSD failure is less common than spinning disk failure, and SSDs are so expensive that you probably have a backup somewhere else anyway, so why not run a flash NAS without any redundancy?

Eeeeh. SSDs die, especially when they get a lot of lifetime IO. I've got four M.2 NVME drives on a 4x card, and landed on raidz. The server only has a 2.5Gbit link anyway, it's more than fast enough.

On the other hand, you absolutely have a point - when you get to "2U server stacked full of enterprise NVME drives" that's a lot of parity calculation bandwidth. No idea how it stacks up to a modern server CPU.

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

does having 2 disks of redundancy solve that or do you need 3.
Well, part of why having two disks of redundancy makes things a little bit better is that the resilver rate is a lot quicker.

A lot of hardware RAID5 will max out at like 15MBps during a rebuild.

IOwnCalculus posted:

Two disks helps a lot, three helps more.

It's also worth noting that when the first round of "RAID5 is dead" articles came out, pretty much every RAID solution most people could get their hands on was the type that would fail the entire array if one drive is dead and even one read error occurs from the remaining drives during the rebuild. ZFS, by comparison, can identify the data it cannot recover when that happens, but will still do its best to recover the array. You'll still lose some data, but now you only need to recover a few files from the backups that you definitely have, not the whole array.

It can't do anything if you have two complete drive failures concurrently, though... but at least in my own experience, I've had "single read error during a rebuild" happen far more often, and ZFS has saved my array every time.
Yup, ZFS is pretty great - but it can also saturate spinning rust when it comes to rebuild rate, so that's another reason to like it.

Another reason to like it is that if you use snapshots + zfs send|receive for one of your backup strategies, you can restore data identified by ZFS as corrupt using corrective receive.

Twerk from Home posted:

How are people doing modern flash NASes? I'd assume that parity-based RAID would be a bottleneck that limits writing to the array, and honestly striping would seem to add extra complexity that you don't really need, because read/write speeds are already fast enough to saturate a 25Gbit connection.

Couldn't you just pool the disks with LVM? SSD failure is less common than spinning disk failure, and SSDs are so expensive that you probably have a backup somewhere else anyway, so why not run a flash NAS without any redundancy?
A modern CPU can do the XOR operation in 1 instruction, and that's what's used for RAIDz1.
RAIDz2 and RAIDz3 use Galois fields + XOR, but there are AVX2, AVX512BW, and AVX512F implementations to accelerate it - so in practice, you're spending more CPU time on things like LZ4 (which is fast enough for NVMe).
In the future it might be possible to take advantage of GFNI (or something similar that isn't vendor-specific) if it's ever available on more than the 3rd-gen Xeon Scalable CPUs.
Likewise, offloading the compression, checksumming, and RAIDz calculations to QAT devices may also be possible in the future:
https://www.youtube.com/watch?v=mKDDKG0yVRg
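On Linux you can actually see which of those implementations OpenZFS settled on, and pin one to compare (module parameter names are the current OpenZFS ones; the listed values are illustrative):
code:
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl    # e.g. [fastest] original scalar sse2 ssse3 avx2 ...
cat /sys/module/zfs/parameters/zfs_fletcher_4_impl    # same idea for checksums
echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl   # force one implementation for benchmarking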

BlankSystemDaemon fucked around with this message at 02:14 on Jan 11, 2024

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

BlankSystemDaemon posted:

A modern CPU can do the XOR operation in 1 instruction, and that's what's used for RAIDz1.
RAIDz2 and RAIDz3 use Galois fields + XOR, but there are AVX2, AVX512BW, and AVX512F implementations to accelerate it - so in practice, you're spending more CPU time on things like LZ4 (which is fast enough for NVMe).
In the future it might be possible to take advantage of GFNI once it's available on more than the 3rd-gen Xeon Scalable CPUs.

Neat, so that should mean that NVME NASes on ZFS RAIDz1/z2 could write at 10gbps line rate? I see very few actual benchmarks published about homebuilt systems, the last decent one that everyone points to is https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-disks-two-filesystems-one-winner/, which shows that raidz2 on an 8 disk array writes at about 3x a single disk (for big writes) but only reads a little faster than a single disk, which also would mean that putting disks into a raidz2 vdev wouldn't help you read faster for things like "I need to scrub through this video quickly."

You can scale ZFS performance by adding more vdevs, but it's going to take a ton of disks to do raidz2 with lots of vdevs.
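Rather than extrapolating from a 2020 spinning-rust article, it's not much work to get numbers off your own pool; a rough fio sketch (dataset name and sizes are placeholders, and primarycache=metadata keeps ARC from inflating the read numbers):
code:
zfs create -o primarycache=metadata tank/bench
fio --name=seqwrite --directory=/tank/bench --rw=write --bs=1M --size=16G \
    --numjobs=4 --ioengine=psync --group_reporting
fio --name=seqread --directory=/tank/bench --rw=read --bs=1M --size=16G \
    --numjobs=4 --ioengine=psync --group_reporting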

BlankSystemDaemon
Mar 13, 2009



Twerk from Home posted:

Neat, so that should mean that NVME NASes on ZFS RAIDz1/z2 could write at 10gbps line rate? I see very few actual benchmarks published about homebuilt systems, the last decent one that everyone points to is https://arstechnica.com/gadgets/2020/05/zfs-versus-raid-eight-ironwolf-disks-two-filesystems-one-winner/, which shows that raidz2 on an 8 disk array writes at about 3x a single disk (for big writes) but only reads a little faster than a single disk, which also would mean that putting disks into a raidz2 vdev wouldn't help you read faster for things like "I need to scrub through this video quickly."

You can scale ZFS performance by adding more vdevs, but it's going to take a ton of disks to do raidz2 with lots of vdevs.
Jim explains quite well why the read performance is the way it is - but that was all done on spinning rust, so it doesn't apply to flash arrays.
ZFS ARC would still help you scrub through a video file, as the blocks you're hitting both recently and frequently will be kept there - and I'm kinda surprised Jim doesn't mention this, since he's one of the go-to folks for ZFS advice that I recommend others pay attention to.

It sucks that he's no longer writing for Ars, as he never really finished all the topics he wanted to cover.

Yaoi Gagarin
Feb 20, 2014

Perhaps someone can recommend a case for me? I've done a lot of googling but cannot seem to find anything that meets all these requirements:
1. supports at least micro-ATX motherboards
2. has 8 3.5" hot swap bays
3. can use a normal ATX power supply
4. can keep the 8 hard drives at safe temperature
5. quiet
6. not a heavy rackmount box

The closest thing I have found is the Silverstone CS381, but I see lots of people on the internet saying their drives run hot in that thing. They also make a CS382 and it has fans directly on the drive cage, but they are half blocked by its SAS backplane. I think the latter is new enough that I can't find any info about how hot it runs, maybe half-blocked fans are OK?

Coxswain Balls
Jun 4, 2001

Well, it finally happened. Started hearing some grinding noises and checked my TrueNAS alerts to see this.



About nine years of uptime so I was expecting it sooner or later, although nothing came up in smart tests until now. It's one drive in a 4x3TB RAIDZ2 setup so theoretically the data on the NAS should still be safe. I've got offsite backups of the important stuff like photos and documents so I'm not extra worried. My local Memory Express doesn't have any 3TB WD Reds, so should I just pick one of these up and add it to the pool as a 3TB drive for now? As much as I'd love to do a full overhaul and replace all the drives at once, it's not really in the budget right now. I know I still have one more disk of redundancy left, but the rest of the drives aren't getting any younger.

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

Perhaps someone can recommend a case for me? I've done a lot of googling but cannot seem to find anything that meets all these requirements:
1. supports at least micro-ATX motherboards
2. has 8 3.5" hot swap bays
3. can use a normal ATX power supply
4. can keep the 8 hard drives at safe temperature
5. quiet
6. not a heavy rackmount box

The closest thing I have found is the Silverstone CS381, but I see lots of people on the internet saying their drives run hot in that thing. They also make a CS382 and it has fans directly on the drive cage, but they are half blocked by its SAS backplane. I think the latter is new enough that I can't find any info about how hot it runs, maybe half-blocked fans are OK?
All that's really needed with the CS382 is some fans with higher static pressure, and you can still get ones like that which are quiet - check out the 120 or 140mm Noctua Industrial fans.

They'll be more audible than quiet, but it's not like those 40-80mm screaming fans you find in servers.

Coxswain Balls posted:

Well, it finally happened. Started hearing some grinding noises and checked my TrueNAS alerts to see this.



About nine years of uptime so I was expecting it sooner or later, although nothing came up in smart tests until now. It's one drive in a 4x3TB RAIDZ2 setup so theoretically the data on the NAS should still be safe. I've got offsite backups of the important stuff like photos and documents so I'm not extra worried. My local Memory Express doesn't have any 3TB WD Reds, so should I just pick one of these up and add it to the pool as a 3TB drive for now? As much as I'd love to do a full overhaul and replace all the drives at once, it's not really in the budget right now. I know I still have one more disk of redundancy left, but the rest of the drives aren't getting any younger.
Well, the specifications list that model as CMR so it's probably fine?

If you set the autoexpand property to on, you can buy one bigger drive and replace the broken one, and then start replacing other drives as your budget permits, and once you've replaced all the drives, the pool should expand.

And yeah, running an array with a known-faulty drive is just asking for trouble.

BlankSystemDaemon fucked around with this message at 13:03 on Jan 11, 2024

Coxswain Balls
Jun 4, 2001

I've already offlined it and pulled it out, I just want to make sure I'm following best practices moving forward. I'm tempted to just bite the bullet and grab an 8TB model and phase in replacements for the rest over the next year or two since they're already pretty ancient. Is shucking still something people do? I haven't followed any storage stuff since before the pandemic so I'm not sure what I should be looking at other than "SMR bad".

BlankSystemDaemon
Mar 13, 2009



Coxswain Balls posted:

I've already offlined it and pulled it out, I just want to make sure I'm following best practices moving forward. I'm tempted to just bite the bullet and grab an 8TB model and phase in replacements for the rest over the next year or two since they're already pretty ancient. Is shucking still something people do? I haven't followed any storage stuff since before the pandemic so I'm not sure what I should be looking at other than "SMR bad".
Shucking still happens, yeah - but you gotta remember that shucked drives exist because the USB DAS disks that people are after are disks that are discarded for QA reasons when binning enterprise drives.

You may get lucky and get ones that fail the QA binning to become enterprise drives because of a minor flaw, but you can also get ones that'll constantly fail or misbehave in unpredictable ways, making the experience of using them an absolute nightmare and the rootcausing of the issue without something like dtrace even worse.

Basically, disks don't like any external movement - and it doesn't matter if they're good vibrations or not:
https://www.youtube.com/watch?v=tDacjrSCeq4

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice
Anecdotal but I've been running shucked Easy Store drives for probably 1.5-2 years now without any issues.


Beve Stuscemi
Jun 6, 2001




Every day I open palm slam my hand into the basement door, walk over by my NAS and scream at the top of my lungs. 2 hours, including wind down
