Hadlock
Nov 9, 2004

Is it still best practice to use at least two different kinds of disks in an array to minimize chances of multiple drive failure? If I were buying 8 disks I think I'd buy 4 of X @ $130 and 4 of Y @ $145 or whatever, or a 3-3-2 mixture


DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Hadlock posted:

Is it still best practice to use at least two different kinds of disks in an array to minimize chances of multiple drive failure? If I were buying 8 disks I think I'd buy 4 of X @ $130 and 4 of Y @ $145 or whatever, or a 3-3-2 mixture

They don't necessarily have to be different types (unless you want to get really paranoid), but if you're buying more than like 4 it's not a terrible idea to buy some from different vendors to all but ensure you get different production batches, just in case they had a bad week or something when making them.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Hadlock posted:

Is it still best practice to use at least two different kinds of disks in an array to minimize chances of multiple drive failure? If I were buying 8 disks I think I'd buy 4 of X @ $130 and 4 of Y @ $145 or whatever, or a 3-3-2 mixture

last time I bought drives WD was ahead of everyone in reliability so I figured it was better to roll the dice with an easier target

dutchbstrd
Apr 28, 2004
Think for Yourself, Question Authority.
Thanks guys, will try the hammer method.

corgski
Feb 6, 2007

Silly goose, you're here forever.

Charles posted:

Isn't there firmware in the chips that's somehow unique to each drive, set at the factory? I watched a Linus Tech Tips video about a data recovery service. I think they had to reconstruct that data.

I haven’t done the board swap on any drive larger than 120GB so it’s possible newer drives have more stringent requirements but back in the day it was as simple as “find the exact same model number and revision and it’ll probably spin up”

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal

corgski posted:

I haven’t done the board swap on any drive larger than 120GB so it’s possible newer drives have more stringent requirements but back in the day it was as simple as “find the exact same model number and revision and it’ll probably spin up”

I'll have to watch the LTT video again and see if I'm remembering or just conflating it with something different.

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal
https://www.youtube.com/watch?v=eyr14_B230o&t=319s

You made me watch 3 LTT videos :P At least I did it at 2x speed. Here it is, at 5:20. The factory calibration is stored on an 8kb chip.

Hadlock
Nov 9, 2004

Random datapoint: I decided to convert my Synology 3-disk NAS from 3-disk SHR1 (1-drive fault tolerance) to .... SHRX* (2-drive fault tolerance, 4 disks) at 4pm last Wednesday by adding one drive

Currently only up to 67% checking parity consistency on 2.0TB of data after 5.X days of conversion

I was expecting it to take 1-3 days to complete the process, probably closer to one. A little surprised it's taking this long, but progress seems to be steady and number go up, so

Anyways, when the Synology tech says it might be faster to do it X way rather than Y, faster might be days or weeks. I can imagine this might take a full month if I had to rebuild a 10TB array. Jesus :psyduck:

*not sure if it's "SHR1 with 2-drive fault tolerance" or if SHR2 simply means SHR1 with 2-drive fault tolerance, but the UI simply calls it SHR with no number, with 2-drive fault tolerance

corgski
Feb 6, 2007

Silly goose, you're here forever.

Charles posted:

https://www.youtube.com/watch?v=eyr14_B230o&t=319s

You made me watch 3 LTT videos :P At least I did it at 2x speed. Here it is, at 5:20. The factory calibration is stored on an 8kb chip.
Oh you didn’t need to put yourself through that torture. :v:

That said, I'll keep that detail in mind if I somehow find myself needing to recover anything newer than 2005-ish. Swapping a single SMD flash chip isn't that hard either, and it's unlikely to be destroyed by claw hammering alone - catastrophically failing power supplies, on the other hand, will wreck those chips.

But seriously, a zero wipe is (nearly always) sufficient, and if it isn't, you should be paying for shredding.

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

I just drive a nail through it. Way faster and easier.
With most modern OSes on a modern processor you get entirely software-based FDE for free, and considering FDE is designed for data at rest, I don't know why you wouldn't just do that.
Then you just have to pick the software that you trust does the best job.

Hadlock posted:

Is it still best practice to use at least two different kinds of disks in an array to minimize chances of multiple drive failure? If I were buying 8 disks I think I'd buy 4 of X @ $130 and 4 of Y @ $145 or whatever, or a 3-3-2 mixture
Usually, just mixing drives from different serial-number ranges tilts the odds in your favour of not losing an entire array to a bad batch - but if your budget allows for it, I would definitely grab some disks from different vendors.

Hadlock posted:

Random datapoint: I decided to convert my Synology 3-disk NAS from 3-disk SHR1 (1-drive fault tolerance) to .... SHRX* (2-drive fault tolerance, 4 disks) at 4pm last Wednesday by adding one drive

Currently only up to 67% checking parity consistency on 2.0TB of data after 5.X days of conversion

I was expecting it to take 1-3 days to complete the process, probably closer to one. A little surprised it's taking this long, but progress seems to be steady and number go up, so

Anyways, when the Synology tech says it might be faster to do it X way rather than Y, faster might be days or weeks. I can imagine this might take a full month if I had to rebuild a 10TB array. Jesus :psyduck:

*not sure if it's "SHR1 with 2-drive fault tolerance" or if SHR2 simply means SHR1 with 2-drive fault tolerance, but the UI simply calls it SHR with no number, with 2-drive fault tolerance
If they're doing erasure codes for P+Q striping with distributed parity, that'll mean that any two drives can fail before a single URE can take out the entire array.
Which makes it a mystery why they aren't doing P+Q+R, since it uses exactly the same Galois-field matrix transformation as P+Q.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Degaussing is the way to destroy data

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Bob Morales posted:

Degaussing is the way to destroy data

Sure, and those are a lot of fun, but I also don't have one at my house, so....

D. Ebdrup posted:

With most modern OSes on a modern processor you get entirely software-based FDE for free, and considering FDE is designed for data at rest, I don't know why you wouldn't just do that.
Then you just have to pick the software that you trust does the best job.

Mostly because encrypting ZFS natively is still a reasonably new feature, and frankly it adds another way to lose all my data while providing me zero protection against any sort of actual threat I'm likely to face. For other people it might make more sense.

A nail is going in my old drives, regardless.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

One day I will have access to an industrial shredder

KOTEX GOD OF BLOOD
Jul 7, 2012

Peter Gutmann, namesake of the 35-pass "Gutmann method":

quote:

In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques. As a result, they advocate applying the voodoo to PRML and EPRML drives even though it will have no more effect than a simple scrubbing with random data. In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do. As the paper says, "A good scrubbing with random data will do about as well as can be expected". This was true in 1996, and is still true now.

Or, you know.
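Gutmann's "good scrubbing with random data" is a few lines anywhere you have Python. This is a sketch that overwrites a scratch file in place; aiming it at an actual block device (e.g. /dev/sdX, with the alignment and caution that entails) is left to the reader:

```python
import os

def scrub(path: str, passes: int = 2) -> None:
    """Overwrite a file in place with random data, syncing each pass.

    Demo on an ordinary file only; real drive scrubbing targets the
    raw device and should be double-checked before you hit enter.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())

# Demo on a scratch file:
with open("victim.bin", "wb") as f:
    f.write(b"secret data" * 100)
scrub("victim.bin", passes=2)
with open("victim.bin", "rb") as f:
    print(b"secret" in f.read())  # False (with overwhelming probability)
```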

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
As someone that's worked on MOSSAD-level security infrastructure: not everything that's done is done for technical reasons so much as for theatrical ones, and the distraction that causes nation states to waste time is worth the comparatively small investment. The calculus of security isn't just about the technical parts; when nation-state money and teams are involved, you're not looking at the stuff people typically get PhDs in computer security for anymore so much as at what people with political science degrees do.

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry

Hadlock posted:

Random datapoint: I decided to convert my Synology 3-disk NAS from 3-disk SHR1 (1-drive fault tolerance) to .... SHRX* (2-drive fault tolerance, 4 disks) at 4pm last Wednesday by adding one drive

Currently only up to 67% checking parity consistency on 2.0TB of data after 5.X days of conversion

I was expecting it to take 1-3 days to complete the process, probably closer to one. A little surprised it's taking this long, but progress seems to be steady and number go up, so

Anyways, when the Synology tech says it might be faster to do it X way rather than Y, faster might be days or weeks. I can imagine this might take a full month if I had to rebuild a 10TB array. Jesus :psyduck:

*not sure if it's "SHR1 with 2-drive fault tolerance" or if SHR2 simply means SHR1 with 2-drive fault tolerance, but the UI simply calls it SHR with no number, with 2-drive fault tolerance

Behind the scenes, Synology is using LVM on top of an mdadm RAID 6, and reshaping that is notoriously slow. I do believe Synology is usually the most conservative when it comes to this, but yeah, most Synology techs will say recovering from a backup is much faster.

With the new WD SMR reveal I can see a lot of people biting their nails while stuff like this happens. :ohdear:

BlankSystemDaemon
Mar 13, 2009



What the heck-rear end kind of algorithm does mdadm use for P+Q if it's so slow as to make restoring from backup faster!?
The one in ZFS is not as fast as mirroring or XOR, of course, but it's not exactly a slouch either, because the operations themselves are offloaded to the CPU.

The kind of supercomputer you'd need to do Galois-field matrix computations naively in software is the kind you could simulate nuclear physics or global weather on, so since CDs and DVDs use the same fundamental error correction in the form of Reed-Solomon encoding, it's good that it can be done by a tiny ASIC.

EDIT: It turns out it uses the same calculations, so if it's as slow as that suggests, there's a bottleneck somewhere else.
Only notable thing is that ZFS takes it one step further and does P+Q+R striping with distributed parity.

BlankSystemDaemon fucked around with this message at 21:54 on Apr 28, 2020
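To make the P+Q talk concrete, here's a toy sketch of RAID-6-style parity over GF(2^8) - a bitwise multiply and a generator-weighted Q. This is nothing like the table-driven/SIMD code mdadm and ZFS actually ship, just the math in miniature:

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with the polynomial x^8+x^4+x^3+x^2+1 (0x11D)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
    return p

def pq_parity(stripes: list) -> tuple:
    """P is plain XOR; Q weights drive i by g^i with generator g=2."""
    n = len(stripes[0])
    P, Q = bytearray(n), bytearray(n)
    for i, d in enumerate(stripes):
        g_i = 1
        for _ in range(i):
            g_i = gf_mul(g_i, 2)
        for j, byte in enumerate(d):
            P[j] ^= byte
            Q[j] ^= gf_mul(g_i, byte)
    return bytes(P), bytes(Q)

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
P, Q = pq_parity(data)
# Losing any single data drive is recoverable from P alone;
# Q is what buys you the second failure:
recovered = bytes(a ^ b ^ c for a, b, c in zip(P, data[1], data[2]))
print(recovered == data[0])  # True
```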

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Isn't there some switch that limits repair speed?

here we go, not sure if synology exposes this or has changed it from defaults or whatever

quote:

The /proc/sys/dev/raid/speed_limit_min is a config file that reflects the current "goal" rebuild speed for times when non-rebuild activity is happening on an array. The speed is in kibibytes per second (1 kibibyte = 2^10 bytes = 1024 bytes), and is a per-device rate, not a per-array rate. The default is 1000.

The /proc/sys/dev/raid/speed_limit_max is a config file that reflects the current "goal" rebuild speed for times when no non-rebuild activity is happening on an array. The default is 100,000.

To see current limits, enter:
# sysctl dev.raid.speed_limit_min
# sysctl dev.raid.speed_limit_max

from https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html
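If Synology exposes the stock md knobs, bumping them mid-reshape looks like this - a sketch with made-up values, assuming a standard md stack underneath and root access:

```shell
# Illustrative values, not Synology defaults. Raise the floor so the
# rebuild keeps its pace even while the array is busy serving Plex:
sysctl -w dev.raid.speed_limit_min=50000   # KiB/s, per device

# Equivalent via procfs:
echo 50000 > /proc/sys/dev/raid/speed_limit_min

# Watch progress:
cat /proc/mdstat
```

Worth dropping back afterward, since a high floor will happily starve foreground I/O.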

Yaoi Gagarin
Feb 20, 2014

Maybe when bcachefs gets mainlined Linux will finally have a Good Native Filesystem™

BlankSystemDaemon
Mar 13, 2009



taqueso posted:

Isn't there some switch that limits repair speed?

here we go, not sure if synology exposes this or has changed it from defaults or whatever


from https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html
Ah yeah, ZFS has an equivalent of that which tunes scrubbing and resilvering(1) - and more importantly, it's affected by the kernel's tick frequency.
Linux and Solaris default to 100Hz whereas FreeBSD is tickless and defaults to 1000Hz with some device drivers supporting polling instead of interrupts.

One side-effect of this is that without tweaking, ZFS on FreeBSD is a lot faster at scrubbing and resilvering.

(1): In FreeBSD, they're defined via sysctls under the vfs.zfs.scrub_delay and resilver_delay OIDs respectively. I imagine Linux has some /proc that takes a magic bit-value/-mask?

VostokProgram posted:

Maybe when bcachefs gets mainlined Linux will finally have a Good Native Filesystem™
I'm one of the people who're hoping ZFS gets some actual competition, because that's where open source stands to gain the most - but it's not like bcachefs will reach maturity overnight, and even then it'll still be missing a LOT of the features that any modern ZFS admin takes for granted.
Plus, people might still be slightly burned by btrfs.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

I was looking at the code last weekend; it's at the point where it's working but missing all the fancy features. FEC 'should be coming real soon', which means it's absolutely going to be a while.

I think people that got burned on btrfs are excited to see something reasonable being developed, though

Yaoi Gagarin
Feb 20, 2014

D. Ebdrup posted:



I'm one of the people who're hoping ZFS gets some actual competition, because that's where open source stands to gain the most - but it's not like bcachefs will reach maturity overnight, and even then it'll still be missing a LOT of the features that any modern ZFS admin takes for granted.
Plus, people might still be slightly burned by btrfs.

That's my hope too. Also I think that since bcache is a fairly popular program to begin with and bcachefs is just a posix API over that same storage layer that people won't approach it with the same apprehension as btrfs.

It reminds me of a talk I saw once where the presenter had tried to build a filesystem on top of an RDBMS. So like tables for inodes, directories, etc. It was super slow, but it did work.
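That talk's idea fits in a few lines of sqlite - a hypothetical schema, purely to show the shape of it (and why it crawls: every path lookup becomes one query per component):

```python
import sqlite3

# Toy "filesystem on an RDBMS": one table of inodes, one of directory
# entries. Schema and helpers are invented for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE inodes (ino INTEGER PRIMARY KEY, is_dir INTEGER, data BLOB);
    CREATE TABLE dirents (parent INTEGER, name TEXT, ino INTEGER,
                          PRIMARY KEY (parent, name));
""")
db.execute("INSERT INTO inodes VALUES (1, 1, NULL)")  # ino 1 = root dir

def create(parent_ino: int, name: str, data: bytes) -> int:
    cur = db.execute("INSERT INTO inodes (is_dir, data) VALUES (0, ?)", (data,))
    ino = cur.lastrowid
    db.execute("INSERT INTO dirents VALUES (?, ?, ?)", (parent_ino, name, ino))
    return ino

def lookup(path: str) -> bytes:
    ino = 1                                           # walk from the root
    for name in path.strip("/").split("/"):
        row = db.execute("SELECT ino FROM dirents WHERE parent=? AND name=?",
                         (ino, name)).fetchone()
        ino = row[0]
    return db.execute("SELECT data FROM inodes WHERE ino=?",
                      (ino,)).fetchone()[0]

create(1, "notporn.avi", b"\x00\x01")
print(lookup("/notporn.avi"))  # b'\x00\x01'
```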

Trastion
Jul 24, 2003
The one and only.
I want to buy a NAS to replace a bunch of random drives for storage, mainly for my Plex server. What is a good NAS/drive combo to go with? I currently have around 14TB of data, so I will need that plus some room for the future.

6-bay NAS with 8TB disks? What RAID version? Nothing on it is really important, so losing it would just mean the pain of downloading things again. Important stuff is backed up to a couple of places already.

Should I worry about an "expandable" NAS, or if I need more space later just add a 2nd one? That is my thinking right now: get one that will handle things now plus some, and later buy a 2nd when they will probably be cheaper and drives will be bigger/cheaper.

BlankSystemDaemon
Mar 13, 2009



VostokProgram posted:

That's my hope too. Also I think that since bcache is a fairly popular program to begin with and bcachefs is just a posix API over that same storage layer that people won't approach it with the same apprehension as btrfs.

It reminds me of a talk I saw once where the presenter had tried to build a filesystem on top of an RDBMS. So like tables for inodes, directories, etc. It was super slow, but it did work.
One of the projects explored at Berkeley, and which fortunately didn't have as much success as BSD and Ingres/Postgres, was a filesystem built on top of a database.
I've always thought the world didn't deserve that much of a punishment, personally.

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry

taqueso posted:

Isn't there some switch that limits repair speed?

here we go, not sure if synology exposes this or has changed it from defaults or whatever

hardware or actual settings :ssh: You can actually make it rebuild faster:


By default, it is set for lower impact, since they assume people will want to keep using it for Plex or whatever while it rebuilds.

The more disks, though, the longer the rebuild - that part, I think, is mostly down to hardware.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Trastion posted:

I want to buy a NAS to replace a bunch of random drives for storage, mainly for my Plex server. What is a good NAS/drive combo to go with? I currently have around 14TB of data, so I will need that plus some room for the future.

6-bay NAS with 8TB disks? What RAID version? Nothing on it is really important, so losing it would just mean the pain of downloading things again. Important stuff is backed up to a couple of places already.

Should I worry about an "expandable" NAS, or if I need more space later just add a 2nd one? That is my thinking right now: get one that will handle things now plus some, and later buy a 2nd when they will probably be cheaper and drives will be bigger/cheaper.

Depends entirely on how comfortable you are with a DIY system. If you are, we can give some recommendations, and in that case a 6-drive setup isn't crazy.

If not, and you're thinking of a Synology or whatnot, note that a 6-bay is gonna be like $800, and then you add the disks.

6x8TB is also a ton of storage if you're only at 14TB. Even if you wanted to be overly protective with RAIDZ2/SHR2, you'd still have 32TB of usable space, or 40TB if you were ok with 1-disk redundancy. If you don't expect to expand rapidly, I might consider a 4x8TB single-redundancy setup. That'd get you 24TB of usable space, and a 4-bay Synology that can do Plex, like a DS918+, is more like $550. Then if you wanted to expand in a few years, you could either replace the 4 drives with 16TB ones (or whatever is the price:size sweet spot at the time) or get an expansion unit to add another 4+ drives.
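Those capacity figures are just (drives - parity) x size, back-of-envelope:

```python
def usable_tb(drives: int, size_tb: int, parity: int) -> int:
    """Raw usable space for a simple parity layout: (n - parity) * size.

    Ignores filesystem overhead and TB-vs-TiB slippage, which in
    practice shave off a chunk more.
    """
    return (drives - parity) * size_tb

print(usable_tb(6, 8, 2))  # 32 -> the RAIDZ2/SHR2 figure
print(usable_tb(6, 8, 1))  # 40 -> single redundancy
print(usable_tb(4, 8, 1))  # 24 -> the 4x8TB suggestion
```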

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer
Synology just announced their 2020 4-bay models, FYI.

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal

Smashing Link posted:

Synology just announced their 2020 4-bay models, FYI.

Link to the announcement?

Is this one of them?
https://www.synology.com/en-us/products/DS420j

Also is the DS418 play a good buy?

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
Depends on what you are using the DS418 for; always better to overestimate than under.

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast

Charles posted:

Link to the announcement?

Is this one of them?
https://www.synology.com/en-us/products/DS420j

Also is the DS418 play a good buy?

It would be, going by the model number

DS (DiskStation) - # of max possible disks - Year of Release - Feature suffix

so DS 4-disks 2020 j (value model)
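That naming scheme is regular enough to sketch as a toy parser. Note the suffix meanings below are informal guesses rather than an official Synology table, and multi-digit disk counts (DS1819+ and friends) would need a smarter regex:

```python
import re

# Toy decoder for "DS + max possible disks + two-digit year + suffix",
# per the scheme described above. Suffix labels are assumptions.
SUFFIXES = {"j": "value", "play": "media", "+": "plus/performance"}

def decode(model: str) -> dict:
    m = re.fullmatch(r"(DS)(\d)(\d\d)(\+|[a-z]+)?", model)
    if not m:
        raise ValueError(f"doesn't look like a DiskStation model: {model}")
    family, disks, year, suffix = m.groups()
    return {
        "family": family,
        "max_disks": int(disks),
        "year": 2000 + int(year),
        "suffix": SUFFIXES.get(suffix, suffix),
    }

print(decode("DS420j"))
# {'family': 'DS', 'max_disks': 4, 'year': 2020, 'suffix': 'value'}
print(decode("DS918+"))
# {'family': 'DS', 'max_disks': 9, 'year': 2018, 'suffix': 'plus/performance'}
# (9 = 4 native bays plus 5 more via an expansion unit)
```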

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal
I saw the 20 so I figured as much, but I don't know what the rest mean.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

Charles posted:

Link to the announcement?

Is this one of them?
https://www.synology.com/en-us/products/DS420j

Also is the DS418 play a good buy?

https://nascompares.com/2020/04/27/synology-ds920-ds220-ds720-and-ds420-nas-revealed/amp/

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer
Quote=/=edit

Hadlock
Nov 9, 2004

Axe-man posted:

hardware or actual settings :ssh: You can actually make it rebuild faster:


By default, it is set for lower impact, since they assume people will want to keep using it for Plex or whatever while it rebuilds.

The more disks, though, the longer the rebuild - that part, I think, is mostly down to hardware.

Oh interesting

I bumped the numbers up and CPU/memory went from ~5% utilization to ~15%. Hopefully this means it'll finish updating before I move on Friday now

Kia Soul Enthusias
May 9, 2004

zoom-zoom
Toilet Rascal

Ah, it's a leak, not an announcement - that's why I couldn't find it.

2 questions:
Do existing models' prices usually drop?
Do existing models get good support?

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

Charles posted:

Ah, it's a leak, not an announcement - that's why I couldn't find it.

2 questions:
Do existing models' prices usually drop?
Do existing models get good support?

Not sure about price drops but I have 2 units from 2015 that still get updates. I have heard of 10 year old models still in use. The downside is the premium you pay for the support and ecosystem, like Apple. For that reason my last build was Unraid with eBay server parts.

Discussion Quorum
Dec 5, 2002
Armchair Philistine
I'm working on setting up my NAS, and in planning everything out, the following question came to me. I'm sure the answer is out there, but I can't seem to find the magic Google phrase.

If I have a large file on a Samba share and want to move or copy it to another point on the same share (say I want to move \\nas\linux_isos\notporn.avi to \\nas\freebsd_isos\), is the Samba protocol and/or implementation smart enough to move the file in-place? Or is it all handled by the client like a regular file move/copy, meaning it will get round-tripped over the network?

Less Fat Luke
May 23, 2003

Exciting Lemon
If it’s the same share it’ll be a server-side move.

eames
May 9, 2009

Digitimes reports that WD is planning to increase the price of enterprise drives. You know, the ones they recommended to consumers who don’t want SMR.

https://seekingalpha.com/news/3564710-western-digital-raising-hdd-prices-report


SCheeseman
Apr 23, 2003

I threw together a Ryzen 3 1200-based Linux server a few years ago that I've been using for Plex etc., as well as a QNAP NAS that I want to retire. The way storage is set up is a bit of a mess at the moment, so the idea is to set up a software RAID5 with 9x8TB hard drives on the server (eventually expanding to 10 or 11 drives). I already have 6x8TB drives that aren't in an array (four in the NAS and two in the server) and I'm going to buy another 3x8TB drives, create the initial array, then transfer stuff over, adding drives to the array as I empty them. They're SMR drives, so I imagine this will be slow as hell, and I understand there will be a speed penalty during RAID rebuilds, though the NAS is only really used to store video files for streaming so speed requirements aren't high. Is this a terrible idea?

Tips on filesystems and chunk sizes etc would be nice too.

SCheeseman fucked around with this message at 09:57 on Apr 29, 2020
