LRADIKAL
Jun 10, 2001

Fun Shoe
It's just a whatever-dimension barrel plug. Any power supply rated high enough is fine (yes, yes, scam garbage exists).

Hadlock
Nov 9, 2004

Yeah, typically I'd be like "meh, whatevs," but I'm not super excited about losing all our baby photos (and the only real backup of family photos on both sides of our families) because I saved $60 and bought the $17 option from some rando vendor on Amazon, and it turned out to be a brick full of sand that just output 120V on three pins and ground on the fourth, or whatever. It's a stupid amount of money, but it's worth the peace of mind in this particular instance.

L33t_Kefka
Jul 16, 2000

My 1337 littl3 magic us3r, put 0n this cr0wn, bitch! H4W H4W! I 0wn j00!!!!
Can anyone recommend a "good" DAS? I was using a Drobo 5D that died, and looking around I can only find this QNAP, which does not look good per the reviews.

I would prefer a DAS over a NAS (as I have a "server" and just wanted a box to shove HDs into), but I am also open to NASes (such as this) if no decent/cheap DAS can be found.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Too late now, but if you know the specs of the supply, you can buy something from digikey.com (or Mouser or Arrow) and be about as certain as possible that it's the real deal and fit for purpose. They generally don't carry any outright crap, and they're very careful about avoiding counterfeits.

some kinda jackal
Feb 25, 2003

 
 
Well I'm glad I opted for cloud backup for my Synology because the external drive I have also just conked out (not unexpectedly as it's a few years old) so I'm down to one backup from two.

I have two approaches I can take here. I can either replace the external drive, or I can try to source a second small 2-bay Synology, set it up as a mirror to tolerate a single drive failure, and just do a basic device-to-device backup. A little more up front, but probably a good option. Theoretically I could have it power down and come back up on a schedule, but I'm not sure the power savings are worth the wear and tear of drives spinning up and down every day.

Thinking of the bare minimum setup I'd need to get this done -- presumably I'd need something capable of taking DSM7, if that's what I'm on with my main Syno.

I could achieve the same thing with some DIY setup and two drives, but if I can find a suitable 2-bay Synology for under a hundred bucks used, I might take the leap. I still need to get two drives either way, and worst case I can repurpose those for whatever "better" solution might be out there.

Functionally I'm pretty sure the Synology C2 service is enough for me, but in a restore situation I'd rather have faster access to just re-sync everything back, and leave C2 as insurance against.. I don't know.. my house burning down.

Hadlock
Nov 9, 2004

Looking at the invoice for my Synology DS418, it was purchased in June 2018, which means 2/4 disks are approaching their 4th birthday.

What kind of replacement schedule should I be looking at for my oldest disks? I think they're 6TB WD Reds, mfg ~2018. Kind of thinking I should go ahead and replace one of the oldest drives with a 10TB. I think I'm using Synology SHR-2 or whatever their more advanced proprietary RAID is, with a 3-way mirror. I have an AWS Glacier backup, but I like to pretend it doesn't exist because I've never tested it.

BlankSystemDaemon
Mar 13, 2009



The new Xeon D-1700 boards from AsrockRack could be interesting for anyone building a DIY NAS.

Klyith
Aug 3, 2007

GBS Pledge Week

Hadlock posted:

Looking at the invoice for my Synology DS418, it was purchased in June 2018, which means 2/4 disks are approaching their 4th birthday.

What kind of replacement schedule should I be looking at for my oldest disks? I think they're 6TB WD Reds, mfg ~2018. Kind of thinking I should go ahead and replace one of the oldest drives with a 10TB. I think I'm using Synology SHR-2 or whatever their more advanced proprietary RAID is, with a 3-way mirror. I have an AWS Glacier backup, but I like to pretend it doesn't exist because I've never tested it.

Going by this, it seems like replacing after just 4 years is over-cautious.



So 5 years = "I am paranoid about my data & want to replace before failure"
6 years = "they're mirrored, I trust redundancy"
7+ years = "DEATH IS CERTAIN, everything is backed up anyways"

And that's not even touching on how HDDs probably last longer in a personal PC/NAS situation than in enterprise use.

some kinda jackal posted:

Theoretically I could have it power down and come back up on a schedule, but I'm not sure the power savings are worth the wear and tear of drives spinning up and down every day.

Spinning up and down once or twice a day is almost certainly less wear than running 24/7, especially if you use consumer drives.

Thanks Ants
May 21, 2004

#essereFerrari


Spin them down to save power. If cycling them surfaces a failure earlier, that could even be viewed as a positive, rather than having a bunch of disks that seem fine until you power the box down and lose half the members of an array.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

BlankSystemDaemon posted:

The new Xeon D-1700 boards from AsrockRack could be interesting for anyone building a DIY NAS.

:swoon: I have no pressing reason to get these but I love me some Xeon-D.

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home

BlankSystemDaemon posted:

The new Xeon D-1700 boards from AsrockRack could be interesting for anyone building a DIY NAS.

Those are some compelling packages. I wonder what they'd go for at retail.

Shumagorath
Jun 6, 2001
Is the Xeon attractive for any reason besides running ECC?

Astro7x
Aug 4, 2004
Thinks It's All Real
So in doing my research… it seems like the best way to serve Dropbox from a server to 3 Mac computers is to run an external RAID (looking at the OWC ThunderBay 8) over Thunderbolt 3 into a Mac Mini, and then connect to that computer over 10GbE, using some switches to create an internal high-speed network. There is a bit more to it than that, but those are the basics.

Is there any reason not to do this or a better way? Dropbox doesn’t run on a NAS, and I need to get Dropbox running on a server. We are doing video editing with Premiere, if that matters.

Klyith
Aug 3, 2007

GBS Pledge Week

Astro7x posted:

Is there any reason not to do this or a better way? Dropbox doesn’t run on a NAS, and I need to get Dropbox running on a server.

https://kb.synology.com/en-in/DSM/help/CloudSync/cloudsync?version=7

quote:

Cloud Sync
With Cloud Sync, you can seamlessly sync and share files among your Synology NAS and multiple public cloud services, including:

Dropbox (including Dropbox for Business. However, Dropbox Team Folder is excluded)

Your post doesn't have enough information to know whether this works for how you use Dropbox, but syncing with Dropbox from a NAS is doable.

BlankSystemDaemon
Mar 13, 2009



Seems like Supermicro also has a couple of boards in a few different form factors, like Mini-ITX, Micro-ATX, and FlexATX (that last one should fit in cases that can fit ATX).

Their new site is so bad, it's loving impossible to find anything on it easily, and you can't link to anything but specific products. :mad:

Shumagorath posted:

Is the Xeon attractive for any reason besides running ECC?
There are models that act as a complete SoC, with both integrated LAN for 10G/25G SFP+/SFP28 and QuickAssist, meaning you get a low-power CPU+motherboard that can be used as a very efficient NAS, since it can offload SHA2 for checksumming and LZS/DEFLATE for compression, both of which combine well with a filesystem like ZFS.
It's a pity that the Xeon D series doesn't integrate GFNI, which would let it offload the raidz1-3 calculations, but at least those can be done using SIMD/AVX2/AVX512 vectorization, so they're faster and use less power.
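(If you want to see what that vectorization actually buys you: OpenZFS benchmarks its checksum and raidz implementations when the module loads, and on Linux it exposes the results as kstats. A quick sketch, assuming OpenZFS on Linux with the zfs module loaded:)

code:
# Shows every fletcher4 implementation benchmarked at module load
# and which one (e.g. avx2, avx512f) was selected as fastest
cat /proc/spl/kstat/zfs/fletcher_4_bench

# Same idea for the raidz parity generation/reconstruction routines
cat /proc/spl/kstat/zfs/vdev_raidz_bench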

Astro7x posted:

So in doing my research… it seems like the best way to serve Dropbox from a server to 3 Mac computers is to run an external RAID (looking at the OWC ThunderBay 8) over Thunderbolt 3 into a Mac Mini, and then connect to that computer over 10GbE, using some switches to create an internal high-speed network. There is a bit more to it than that, but those are the basics.

Is there any reason not to do this or a better way? Dropbox doesn’t run on a NAS, and I need to get Dropbox running on a server. We are doing video editing with Premiere, if that matters.
I mean, it's doable.
There are a couple of issues, though: the external bays are going to be using some form of hardware RAID that's not battery-backed, so you're better off using software RAID like ZFS (which does work on macOS, and I know a goon who's using a Thunderbolt JBOD chassis plus a Mac Mini to host his storage).
I'm not sure what any of it has to do with Dropbox, though; I thought the point of that was that it's stored in the butt.

BlankSystemDaemon fucked around with this message at 11:19 on Apr 14, 2022

some kinda jackal
Feb 25, 2003

 
 
huh, good thoughts on the spin up/down stuff. Somewhere I remember reading or being told that a spin up/down cycle is more wear than just keeping a disk running, but that may have been BS from the get-go, true only of older equipment, or honestly such an insignificant factor that I shouldn't even consider it in the year 2022.

I'll try to find myself a 2-bay, DSM7-compatible Syno and just set it up in a different corner of my house as insurance. Spin it up an hour before the scheduled backup and have it spin down three or four hours later, just to give it headroom to do any maintenance it wants to, etc., in addition to being an actual replication target.

jawbroken
Aug 13, 2007

messmate king

Astro7x posted:

So in doing my research… it seems like the best way to serve Dropbox from a server to 3 Mac computers is to run an external RAID (looking at the OWC ThunderBay 8) over Thunderbolt 3 into a Mac Mini, and then connect to that computer over 10GbE, using some switches to create an internal high-speed network. There is a bit more to it than that, but those are the basics.

Is there any reason not to do this or a better way? Dropbox doesn’t run on a NAS, and I need to get Dropbox running on a server. We are doing video editing with Premiere, if that matters.

I have a Mac Studio (previously a Mac Mini) and two ThunderBay 8s running OpenZFS and it works great for me. You might want to test OpenZFS vs SoftRAID (which comes with the enclosures) for your particular workload, I have no experience with SoftRAID.

Astro7x
Aug 4, 2004
Thinks It's All Real

BlankSystemDaemon posted:

I mean, it's doable.
There are a couple of issues, though: the external bays are going to be using some form of hardware RAID that's not battery-backed, so you're better off using software RAID like ZFS (which does work on macOS, and I know a goon who's using a Thunderbolt JBOD chassis plus a Mac Mini to host his storage).
I'm not sure what any of it has to do with Dropbox, though; I thought the point of that was that it's stored in the butt.

The Thunderbay 8 does use SoftRAID.

Here is the diagram of how we're thinking of connecting it all.



So there are some other things going on: we have some older 1 GbE workstations we want to connect to it, as well as a machine that pretty much just runs our LTO equipment. We also have our older NAS that we want to essentially use as cold storage for old projects before they are archived.

The reason for mentioning Dropbox is that you can't run Dropbox off a NAS. It needs to run off of direct-attached storage, which is why the Mac Mini setup is necessary to run Dropbox with a JBOD chassis attached to it. From what I understand, direct-attached storage is the only way that Dropbox gets the incremental file updates it needs for the service to work as intended.

We have employees hired over the past two years who are completely remote and work off Dropbox, and it's worked pretty flawlessly so far, so we'd like to keep it in place. We also like the disaster recovery it provides for our active jobs, since a RAID is not a backup. And Dropbox is also going to let us do some sort of hybrid workflow as well, so I can work from home or the office, since everything will also live in Dropbox.


jawbroken posted:

I have a Mac Studio (previously a Mac Mini) and two ThunderBay 8s running OpenZFS and it works great for me. You might want to test OpenZFS vs SoftRAID (which comes with the enclosures) for your particular workload, I have no experience with SoftRAID.

Are the ThunderBay 8s directly attached or running through a network? Curious what your actual read/write speeds are to those things.


Our Small Tree NAS currently runs on ZFS. I don't know... none of us like the Small Tree NAS. We got it in 2018, and since day 1 using Finder has been sluggish compared to working with our older RAID, a DAS setup shared over 1 GbE, which is what we want to replace it with. Like, if I am opening up folders that I have not recently opened, it will just sit there for way too long as it pulls up the file structure.

I am also not a fan of the Snapshots feature on ZFS, which is a loving pain to recover anything from. Dropbox has protected us from things like people overwriting a project file and losing data, since it has versioning for 30 days, I believe.

Less Fat Luke
May 23, 2003

Exciting Lemon

Astro7x posted:

I am also not a fan of the Snapshots feature on ZFS, which is a loving pain to recover anything from. Dropbox has protected us from things like people overwriting a project file and losing data, since it has versioning for 30 days, I believe.
Really? I find browsing into the .zfs folder and just exploring through snapshots amazing, what don't you like about it?
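(For anyone following along at home: snapshots show up as a plain read-only directory tree, so recovery is just a copy. A minimal sketch, assuming a hypothetical dataset tank/data mounted at /tank/data and a snapshot named daily-2022-04-14:)

code:
# List the snapshots of the dataset
zfs list -t snapshot tank/data

# Every snapshot appears read-only under the hidden .zfs directory
ls /tank/data/.zfs/snapshot/

# Recovering an overwritten file is just copying it back out
cp /tank/data/.zfs/snapshot/daily-2022-04-14/project.prproj /tank/data/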

BlankSystemDaemon
Mar 13, 2009



some kinda jackal posted:

huh, good thoughts on the spin up/down stuff. Somewhere I remember reading or being told that a spin up/down cycle is more wear than just keeping a disk running, but that may have been BS from the get-go, true only of older equipment, or honestly such an insignificant factor that I shouldn't even consider it in the year 2022.

I'll try to find myself a 2-bay, DSM7-compatible Syno and just set it up in a different corner of my house as insurance. Spin it up an hour before the scheduled backup and have it spin down three or four hours later, just to give it headroom to do any maintenance it wants to, etc., in addition to being an actual replication target.
Head parking is a problem on some disks that are rated for a fraction of the load-unload cycles you see on high-end drives, and there was a particular line of WD Greens rated for several orders of magnitude fewer load-unload cycles.
If you buy drives that aren't poo poo, you should be fine to idle them if they're only being spun up twice a day - but it all comes down to RPO and RTO, and what you're willing to pay/tolerate.
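(If you want to know whether your drives are chewing through their rated cycles, SMART attribute 193 tracks exactly this; a quick sketch, with the device node being whatever your disk actually shows up as:)

code:
# Load_Cycle_Count (attribute 193) counts head load/unload cycles;
# compare the raw value against the rating in the drive's datasheet
smartctl -A /dev/ada0 | grep -i load_cycle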

Astro7x posted:

The Thunderbay 8 does use SoftRAID.

Here is the diagram of how we're thinking of connecting it all.



So there are some other things going on: we have some older 1 GbE workstations we want to connect to it, as well as a machine that pretty much just runs our LTO equipment. We also have our older NAS that we want to essentially use as cold storage for old projects before they are archived.

The reason for mentioning Dropbox is that you can't run Dropbox off a NAS. It needs to run off of direct-attached storage, which is why the Mac Mini setup is necessary to run Dropbox with a JBOD chassis attached to it. From what I understand, direct-attached storage is the only way that Dropbox gets the incremental file updates it needs for the service to work as intended.

We have employees hired over the past two years who are completely remote and work off Dropbox, and it's worked pretty flawlessly so far, so we'd like to keep it in place. We also like the disaster recovery it provides for our active jobs, since a RAID is not a backup. And Dropbox is also going to let us do some sort of hybrid workflow as well, so I can work from home or the office, since everything will also live in Dropbox.
As you may discover if you read my post history ITT, I'm not terribly trusting of any RAID that isn't also a filesystem doing transactional copy-on-write with checksummed hash trees, because there are just too many ways things can fail and end up throwing away your data or causing silent data corruption.

What are you hoping to achieve with Dropbox? They don't really do much in the way of retention so your RTO is gonna be terrible unless you get to restoring immediately after you've had any kind of an issue, which can lead to issues if things are metaphorically (or demonstrably) on fire - and while your RPO with Dropbox may be good, paying for it in RTO isn't ideal.

Also, remember: If your data doesn't exist in three places, and if you don't test recovery regularly, automatically, and programmatically, it isn't backed up.

BlankSystemDaemon fucked around with this message at 15:47 on Apr 14, 2022

Astro7x
Aug 4, 2004
Thinks It's All Real

Less Fat Luke posted:

Really? I find browsing into the .zfs folder and just exploring through snapshots amazing, what don't you like about it?

The interface is not intuitive enough for anybody at my company to actually use it. Like I don't even know how you're exploring the .zfs folder... maybe I am not using it right.

BlankSystemDaemon
Mar 13, 2009



Astro7x posted:

The interface is not intuitive enough for anybody at my company to actually use it. Like I don't even know how you're exploring the .zfs folder... maybe I am not using it right.
If you're exporting your ZFS via Samba, the snapshots under .zfs can be made accessible in Windows under the Previous Versions tab of the Properties window when you right-click a file.
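(Concretely, that's Samba's vfs_shadow_copy2 module; a minimal smb.conf sketch, where the share name, path, and especially the shadow:format string are assumptions that have to match your actual snapshot naming scheme:)

code:
[media]
    path = /tank/media
    vfs objects = shadow_copy2
    # ZFS exposes snapshots under <share root>/.zfs/snapshot
    shadow:snapdir = .zfs/snapshot
    shadow:sort = desc
    # Must match how your snapshots are actually named
    shadow:format = auto-%Y-%m-%d_%H.%M
    shadow:localtime = yes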

Shumagorath
Jun 6, 2001
I'm continuing to price out solutions for a home NAS for after I've sold my old desktop. Is it still best practice to buy mixed lots of drives (say, 2x WD Red Plus NAS and 2x Seagate IronWolf NAS for a RAIDZ2 configuration) to avoid an entire pool failing in quick succession?

BlankSystemDaemon
Mar 13, 2009



Shumagorath posted:

I'm continuing to price out solutions for a home NAS for after I've sold my old desktop. Is it still best practice to buy mixed lots of drives (say, 2x WD Red Plus NAS and 2x Seagate IronWolf NAS for a RAIDZ2 configuration) to avoid an entire pool failing in quick succession?
It's never a bad idea to buy disks from different vendors, especially if they don't have (near-)successive serial numbers.
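(This is easy to check before you build the pool; a quick sketch, assuming the disks show up as /dev/sda through /dev/sdd:)

code:
# Print model and serial for each disk, to spot same-batch
# (near-successive) serial numbers before committing to a layout
for d in /dev/sd[a-d]; do smartctl -i "$d" | grep -E 'Model|Serial'; done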

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Feels like I need a whole weekend studying the manual of that Xeon-D board, because of all the lane sharing.

Astro7x
Aug 4, 2004
Thinks It's All Real

BlankSystemDaemon posted:

What are you hoping to achieve with Dropbox? They don't really do much in the way of retention so your RTO is gonna be terrible unless you get to restoring immediately after you've had any kind of an issue, which can lead to issues if things are metaphorically (or demonstrably) on fire - and while your RPO with Dropbox may be good, paying for it in RTO isn't ideal.

The cost of $75/mo for Dropbox is a non-issue.

I want our 3 video editors to be able to come into the office and work off a RAID on iMac Pros connected over 10 GbE, with all project files being uploaded to Dropbox. Each editor will then have a home computer setup running off an external HD, where all that data is copied via Dropbox. So you can come into the office or work at home, and you have all the data needed for the active jobs.

We also have 2 GFX designers working remotely who don't need all of the footage data for editorial, so their storage needs are much smaller. They work completely on Dropbox as well. So whether an editor is working at the office or at home, they need access to the files created by the two remote GFX designers at all times. All the GFX design files are also backed up to Dropbox.

So the active jobs for Editorial are actually, in their entirety, in 5 places:
-In the cloud on Dropbox
-Office Thunderbay 8
-Editor 1's home computer
-Editor 2's home computer
-Editor 3's home computer

The Design projects are in 7 places, which includes the designers' home hard drives. So the data is being copied to multiple places and tested regularly, since we are working with these files on a daily basis.

Dropbox also works great because we have remote freelancers. It is dummy-proof. We share a folder with them, everyone gains access to it at all locations, and all their project files are backed up as soon as they are saved. When they are no longer working for us, I can remove access to the folder and even wipe the project files from their drive if I want to.

Honestly, I haven't found a solution for all of this that works better than Dropbox. I don't care about cost; it does what I need it to do. Having renders upload incrementally for syncing really speeds up file sharing between everyone's locations, which can change on any given day. Being able to rewind a file in an interface that is intuitive to everyone is great. The ability to get freelancers up and running quickly is wonderful. Having everything in the cloud makes sharing large files with clients via Dropbox Transfer speedy, because I don't need to upload all of the data to a second file-sharing service, which is slow and an added cost. I've literally had clients request their footage, and without Dropbox I'd have to get a hard drive, copy it over, and ship it out. With Dropbox I just create some Transfer links to the footage on the server and I'm done in a few minutes.

When jobs are finished, they are written to two LTO tapes. One stays on site at the office; one goes into offsite storage. The active job then gets moved from the Dropbox folder on the ThunderBay 8 to an 80TB NAS for cold storage. All data on the NAS is in 3 locations (NAS, onsite LTO tape, offsite LTO tape). Since jobs can have revisions months or years down the road, I want all the data accessible so that I can quickly get a job up and running again by moving it back from the NAS to the ThunderBay 8, in Dropbox. That also means I don't need to physically go into the office to unarchive an old job from LTO; I can just move it back remotely. When the NAS is full, I will purge old projects, and that data will then be in 2 locations, an onsite and an offsite LTO tape.

powderific
May 13, 2004

Grimey Drawer
I'm just starting to use Dropbox at a smaller scale in relation to a video editing workflow and really like it so far. I'm a solo DP who hires freelance editors on occasion, so it's smaller scale, and it's all direct-attached storage for now. The big draw for me is that it's unlimited storage for $75 in a fairly easy-to-use package. It gives me a much easier centralized archive of old footage and makes the cloud backup during a project much more useful, since I can share the files out easily too. Got 40TB up so far and it has been working fairly well!

jawbroken
Aug 13, 2007

messmate king

Astro7x posted:

Are the ThunderBay 8s directly attached or running through a network? Curious what your actual read/write speeds are to those things.

Our Small Tree NAS currently runs on ZFS. I don't know... none of us like the Small Tree NAS. We got it in 2018, and since day 1 using Finder has been sluggish compared to working with our older RAID, a DAS setup shared over 1 GbE, which is what we want to replace it with. Like, if I am opening up folders that I have not recently opened, it will just sit there for way too long as it pulls up the file structure.

I am also not a fan of the Snapshots feature on ZFS, which is a loving pain to recover anything from. Dropbox has protected us from things like people overwriting a project file and losing data, since it has versioning for 30 days, I believe.

They're directly attached to the Mac Studio fileserver, and then everything else just accesses them over the network. Read/write speeds are great, limited by the drives themselves rather than the Thunderbolt connection. I really love the combination of Thunderbolt drive enclosures attached to a separate small server. I've had this setup for about 8 years now, and it's been a lot more flexible, modular, and nicer to use than the OpenSolaris servers I used to build, etc.

SoftRAID sounds fine for your purposes and is more Mac-user-friendly (you can just create APFS volumes, etc.). It sounds like you're combining it with a couple of backups anyway. You're talking to a thread full of filesystem nerds, so they're going to have strong opinions, but it sounds good for your use case and probably works better with Dropbox.

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Feels like I need a whole weekend studying the manual of that Xeon-D board, because of all the lane sharing.
:same:

Astro7x posted:

:words: about needing commercial/enterprise storage in a thread for people doing consumer/prosumer level storage at home
If you're doing professional work, you should probably not skimp on things like storage and backup by using flat-fee services with no retention and limited ability to restore at any meaningful rate; instead, do things properly by hiring or contracting a storage engineer.

Klyith
Aug 3, 2007

GBS Pledge Week

some kinda jackal posted:

huh, good thoughts on the spin up/down stuff. Somewhere I remember reading or being told that a spin up/down cycle is more wear than just keeping a disk running, but that may have been BS from the get-go, true only of older equipment, or honestly such an insignificant factor that I shouldn't even consider it in the year 2022.

It's relatively common unsupported internet advice; you see it all over. There's a basic bit of truth -- a spin up & down cycle is vastly more stress & wear than the same 30-second period of normal operation. But the internet telephone game somehow generalized that to "spin up/down is more wear than 24 hours of operation," which isn't true.

tldr: over-aggressive power saving of HDDs is bad; rational profiles that power cycle them a few times a day are fine.
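(On a bare Linux box, that kind of rational profile is one hdparm setting per drive; a minimal sketch, where the device node is an assumption, and note that NAS appliances like DSM or TrueNAS generally want this configured through their own UI instead:)

code:
# -S 242 = (242 - 240) * 30 min: spin down after 1 hour of idle
# (values 1-240 count in 5-second units, 241-251 in 30-minute units)
hdparm -S 242 /dev/sda

# Check whether the drive is currently active/idle or in standby
hdparm -C /dev/sda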


edit: The other thing that encouraged people to never power-save drives is that this advice dates from back when everything was on HDDs, and waiting for a spin-up whenever you did anything on your PC was annoying. That may be why the advocates exaggerated the wear issue so much.


BlankSystemDaemon posted:

As you may discover if you read my post history ITT, I'm not terribly trusting of any RAID that isn't also a filesystem doing transactional copy-on-write with checksummed hash trees, because there are just too many ways things can fail and end up throwing away your data or causing silent data corruption.

Here's a great video for anyone looking for a 2nd opinion:
https://www.youtube.com/watch?v=l55GfAwa8RI&t=3s

In short, very few things these days besides ZFS and Btrfs do verification of data on reads, instead trusting the drives to catch & report errors.

Klyith fucked around with this message at 17:33 on Apr 14, 2022

Shumagorath
Jun 6, 2001
Just pushed my spin-down time from 20min to 120min in Windows on a drive that's already been in service for five and a half years :ohdear:

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

Here's a great video for anyone looking for a 2nd opinion:
https://www.youtube.com/watch?v=l55GfAwa8RI&t=3s

In short, very few things these days besides ZFS and Btrfs do verification of data on reads, instead trusting the drives to catch & report errors.
So, there's a bit more to it than what Wendell is getting into here.
The GF(2^8) operation used for RAID6 is so computationally expensive that, if you were to do it naively in software, you'd need a supercomputer.
Luckily for us, the Reed-Solomon codes built on this exact bit of finite-field mathematics have also been used for CDs, DVDs, BluRays, digital video broadcast, QR codes, and many other things, so systems come with on-die circuitry that can offload the actual operations for us, and it's further possible to vectorize the calculations with SSE2, AVX2, or AVX512 (which is what ZFS does).
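(For the curious, the two RAID6 syndromes are easy to state even if the arithmetic is exotic; following H. Peter Anvin's "The mathematics of RAID-6" paper, with data blocks D_0 through D_{n-1} and all multiplication done in GF(2^8):)

\[
P = D_0 \oplus D_1 \oplus \cdots \oplus D_{n-1},
\qquad
Q = g^0 D_0 \oplus g^1 D_1 \oplus \cdots \oplus g^{n-1} D_{n-1}
\]

(where \(g = \{02\}\) is a generator of the field and \(\oplus\) is XOR; losing any two blocks leaves a pair of equations you can solve for the missing data.)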

As for btrfs, it still requires you to manually balance an array if you've lost part of it at any point (a manual scrub doesn't do this, and on every other RAID implementation it's part of the resilvering process), and it won't even mount properly unless you give it the right flags when a device is missing but there's still enough mirroring or distributed parity to reconstruct the information.
This assumes, of course, that the RAID5/6 implementation has been provably fixed after it was broken; apparently it's still subject to write holes, the btrfs developers think you don't need checksums with distributed RAID, and there's no SCSI DISCARD/ATA TRIM support for SSDs.
The policy-based availability per directory that Wendell talks about in the video sounds interesting, though. Do you happen to know what it's called? It's not well-documented enough to show up in my searches (which is another problem with the whole resilver-also-needs-rebalancing issue I mentioned, because it's not mentioned anywhere in the official documentation).

If you think back over a decade, you might remember headlines about how disk capacity was getting too big for traditional RAID5 (also known as P distributed parity) to deal with, given the stagnant number of operations per second a disk is capable of (bandwidth being a function of the number and size of operations).
The number of operations per second on drives hasn't gotten any better since then, and we're getting to the point where RAID6 (also known as P+Q distributed parity) can no longer reasonably be expected to work, so what happens when disk capacities on spinning rust grow by 10x or 100x within the next 20 years?
Well, RAID7 (which nominally doesn't exist, but can be thought of as P+Q+R distributed parity) is going to have exactly the same problem, and we're going to need a whole new class of erasure codes that can handle S+T+U+V+W distributed parity on top of the existing P+Q+R, and we're going to need to vectorize that, because our CPUs sure as gently caress can't handle the mathematics behind it without making a single disk feel fast by comparison.
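(The back-of-the-envelope version of that capacity-vs-throughput squeeze: a rebuild has to read or write the entire disk, so the absolute best case is sequential throughput. For a hypothetical 20TB drive at 250MB/s:)

\[
t_{\text{rebuild}} \ge \frac{20 \times 10^{12}\,\text{B}}{250 \times 10^6\,\text{B/s}} = 8 \times 10^4\,\text{s} \approx 22\,\text{hours}
\]

(and that's the ideal case; a rebuild on a pool still serving random I/O can take days, which is the window in which a second or third failure has to not happen.)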

EDIT: Also, here's a slightly interesting fact about the FreeBSD Handbook: when Allan Jude first wrote the documentation on ZFS, he included examples of how to use truncate(1) and dd(1) to demonstrate exactly the kind of self-healing that Wendell demonstrates in this video.
I'd argue that's part of why ZFS is so popular with FreeBSD folks even nowadays.
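(A minimal sketch of that Handbook-style demo, assuming you're playing with throwaway file-backed vdevs rather than real disks; the pool and file names are made up:)

code:
# Two 1 GB sparse files standing in for a mirrored pair of disks
truncate -s 1G /tmp/disk0 /tmp/disk1
zpool create healdemo mirror /tmp/disk0 /tmp/disk1
cp ~/important.file /healdemo/

# Corrupt one side in place (seek past the front so the vdev labels survive)
zpool export healdemo
dd if=/dev/urandom of=/tmp/disk0 bs=1M seek=64 count=16 conv=notrunc
zpool import -d /tmp healdemo

# The scrub finds the checksum errors and repairs them from the intact mirror
zpool scrub healdemo
zpool status healdemo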

BlankSystemDaemon fucked around with this message at 20:05 on Apr 14, 2022

Shrimp or Shrimps
Feb 14, 2012


So, reading the last couple of pages, it occurred to me that I hadn't even considered the power usage / wear of the 24/7-on TrueNAS box I put together at home with consumer-level drives. I am a total noob and don't know anything about HDD power states or whether TrueNAS has this functionality built in. Searching turns up a lot of 4-year-old TrueNAS forum posts about people writing scripts to spin down their drives, and I don't know what that even means.

Could somebody here please give me an ELI5 for my use case? Thank you in advance. I only access the drives in the evening (media) and overnight (backup); during the day I basically never need them. Would it be better to just switch the system off when I'm not using it during the day and switch it back on before I use it, instead of leaving it on 24/7?

Better meaning, I suppose, in terms of the health of the drives.

Crime on a Dime
Nov 28, 2006

BlankSystemDaemon posted:

So, there's a bit more to it than what Wendell is getting into here.
The GF(2^8) operation used for RAID6 is so computationally expensive that, if you were to do it naively in software, you'd need a supercomputer.
Luckily for us, the Reed-Solomon codes built on this exact bit of finite-field mathematics have also been used for CDs, DVDs, BluRays, digital video broadcast, QR codes, and many other things, so systems come with on-die circuitry that can offload the actual operations for us, and it's further possible to vectorize the calculations with SSE2, AVX2, or AVX512 (which is what ZFS does).

As for btrfs, it still requires you to manually balance an array if you've lost part of it at any point (a manual scrub doesn't do this, and on every other RAID implementation it's part of the resilvering process), and it won't even mount properly unless you give it the right flags when a device is missing but there's still enough mirroring or distributed parity to reconstruct the information.
This assumes, of course, that the RAID5/6 implementation has been provably fixed after it was broken; apparently it's still subject to write holes, the btrfs developers think you don't need checksums with distributed RAID, and there's no SCSI DISCARD/ATA TRIM support for SSDs.
The policy-based availability per directory that Wendell talks about in the video sounds interesting, though. Do you happen to know what it's called? It's not well-documented enough to show up in my searches (which is another problem with the whole resilver-also-needs-rebalancing issue I mentioned, because it's not mentioned anywhere in the official documentation).

If you think back over a decade, you might remember headlines about how disk capacity was getting too big for traditional RAID5 (also known as P distributed parity) to deal with, given the stagnant number of operations per second a disk is capable of (bandwidth being a function of the number and size of operations).
The number of operations per second on drives hasn't gotten any better since then, and we're getting to the point where RAID6 (also known as P+Q distributed parity) can no longer reasonably be expected to work, so what happens when disk capacities on spinning rust grow by 10x or 100x within the next 20 years?

Sounds like you know a lot about this stuff. I have a NAS with 6 drives using btrfs and RAID6, so I opted for 2 parity (or whatever it's called in RAID6) drives.

One of the drives is getting some errors; it hasn't failed yet, but I'd like to replace it before it does... what's the go with this manual balancing you mention?

Klyith
Aug 3, 2007

GBS Pledge Week

Shrimp or Shrimps posted:

So, reading the last couple of pages, it occurred to me that I hadn't even considered the power usage / wear of the 24/7-on TrueNAS box I put together at home with consumer-level drives. I am a total noob and don't know anything about HDD power states or whether TrueNAS has this functionality built in. Searching turns up a lot of 4-year-old TrueNAS forum posts about people writing scripts to spin down their drives, and I don't know what that even means.

Could somebody here please give me an ELI5 for my use case? Thank you in advance. I only access the drives in the evening (media) and overnight (backup); during the day I basically never need them. Would it be better to just switch the system off when I'm not using it during the day and switch it back on before I use it, instead of leaving it on 24/7?

Better meaning, I suppose, in terms of the health of the drives.

TrueNAS is made for professional & server environments more than as a home NAS, so it doesn't do power saving. Possibly TrueNAS SCALE will add more of this in the future, since Linux has more power management built in than BSD.

Honestly, don't worry too much about it -- plenty of people run drives 24/7 and they still have a healthy lifetime. Servers don't sleep their drives, and while enterprise drives are better than consumer ones, they're not made of unobtainium. Pushing back on "power saving kills HDDs" does not equal "running 24/7 kills HDDs".


If I were you I wouldn't bother shutting the thing down every day; it sounds like a pain in the butt to have to manage that manually. Unless the mobo you're using has an option for automatic scheduled power-on from hard off, or IPMI stuff that would make it easy to trigger.

Power consumption: each drive is around 5-10 watts when idle, depending on type and RPM. Not enough to break your power bill, but not nothing. This is why OOTB consumer PCs, default settings on Windows, etc. sleep drives at an IMO too-aggressive timeout of 10 or 20 minutes. It's for the eco ratings (and battery life on laptops).
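(Putting numbers on it, assuming a hypothetical 8W average idle draw and $0.15/kWh:)

\[
8\,\text{W} \times 8760\,\text{h/yr} \approx 70\,\text{kWh/yr} \approx \$10.50\ \text{per drive per year}
\]

(so a 4-bay box of spinning drives is on the order of $40/yr before you count the rest of the system.)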

Hadlock
Nov 9, 2004


I don't have anything nice to say about that title card, but I will say that is exactly what I'd imagine someone making a video on that topic would look like

BlankSystemDaemon
Mar 13, 2009



Crime on a Dime posted:

Sounds like you know a lot about this stuff. I have a NAS with 6 drives using btrfs and RAID6, so I opted for 2 parity (or whatever it's called in RAID6) drives.

One of the drives is getting some errors; it hasn't failed yet, but I'd like to replace it before it does... what's the go with this manual balancing you mention?
To be clear, I know about FreeBSD and ZFS, and I avoid Linux and BTRFS like the plague.

Going purely by the documentation (which, as I mentioned previously, is full of holes), what you want is probably btrfs-replace(8), but you'll want to talk to someone who's been through the process often enough to know it by heart.
I can't claim that for BTRFS, but I can for ZFS.
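(Going purely by the btrfs manpages, the shape of the operation is below; a sketch only, and given the caveats above you'll want to verify every step against the docs for your kernel before touching a degraded array. Device nodes and mountpoint are made up:)

code:
# Replace the failing device while the filesystem stays mounted
btrfs replace start /dev/sdc /dev/sdg /mnt/nas
btrfs replace status /mnt/nas

# Then scrub, and (per the rebalancing caveat earlier in the thread)
# run a balance so data is actually redistributed onto the new device
btrfs scrub start /mnt/nas
btrfs balance start /mnt/nas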

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

Shrimp or Shrimps posted:

So, reading the last couple of pages, it occurred to me that I hadn't even considered the power usage / wear of the 24/7-on TrueNAS box I put together at home with consumer-level drives. I am a total noob and don't know anything about HDD power states or whether TrueNAS has this functionality built in. Searching turns up a lot of 4-year-old TrueNAS forum posts about people writing scripts to spin down their drives, and I don't know what that even means.

Could somebody here please give me an ELI5 for my use case? Thank you in advance. I only access the drives in the evening (media) and overnight (backup); during the day I basically never need them. Would it be better to just switch the system off when I'm not using it during the day and switch it back on before I use it, instead of leaving it on 24/7?

Better meaning, I suppose, in terms of the health of the drives.

I ran my last set of drives on TrueNAS with no modifications or special scripts for 7-8 years and I only replaced my drives recently to increase my storage capacity. Anecdotal of course but it's probably not a big deal.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

withoutclass posted:

I ran my last set of drives on TrueNAS with no modifications or special scripts for 7-8 years and I only replaced my drives recently to increase my storage capacity. Anecdotal of course but it's probably not a big deal.

I am coming up on 5 years on a set of five 6TB WD Red Pro drives. Not a single error yet.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I've been eyeing a Xeon D-1700 board to replace my NAS, given that expecting my workstation's X570 and 3900X combo to last the next 10 years the way my Haswell i3 has seems like a bad idea. Buying a Broadwell Xeon D-1500 in 2022 seems like a dumb idea when the 1700s just came out. The triple-channel memory layout is certainly something I didn't expect. Bumping up to micro-ATX seems like a requirement, given that I'll need either more PCIe onboard devices or physical slots to support the SAS connectors on my DAS. The PCIe 4.0 lanes being used for the SAS controllers is a little interesting, but I have to keep in mind these boards are intended for all-SSD edge compute layouts, where all that bandwidth for storage is a Good Idea.

I know the CPU pricing on ARK puts the lowest-tier Xeon D-1700 at only around $130, with the next step up at about $300. I wouldn't be surprised if these boards hit around $600 for entry level, similar to the Xeon D-1500 series back in 2016.
