|
Is it still best practice to use at least two different kinds of disks in an array to minimize chances of multiple drive failure? If I were buying 8 disks I think I'd buy 4 of X @ $130 and 4 of Y @ $145 or whatever, or a 3-3-2 mixture
|
# ? Apr 28, 2020 04:14 |
|
|
# ? Jun 6, 2024 05:39 |
|
Hadlock posted:Is it still best practice to use at least two different kinds of disks in an array to minimize chances of multiple drive failure? If I were buying 8 disks I think I'd buy 4 of X @ $130 and 4 of Y @ $145 or whatever, or a 3-3-2 mixture They don't necessarily have to be different types (unless you want to get really paranoid), but if you're buying more than like 4 it's not a terrible idea to buy some from different vendors to all but ensure you get different production batches, just in case they had a bad week or something when making them.
|
# ? Apr 28, 2020 04:18 |
|
Hadlock posted:Is it still best practice to use at least two different kinds of disks in an array to minimize chances of multiple drive failure? If I were buying 8 disks I think I'd buy 4 of X @ $130 and 4 of Y @ $145 or whatever, or a 3-3-2 mixture last time I bought drives WD was ahead of everyone in reliability so I figured it was better to roll the dice with an easier target
|
# ? Apr 28, 2020 04:31 |
|
Thanks guys, will try the hammer method.
|
# ? Apr 28, 2020 04:46 |
|
Charles posted:Isn't there firmware in the chips that's somehow unique to each drive set at the factory? I watched a Linus Tech Tips about a data recovery service. I think they had to reconstruct that data. I haven’t done the board swap on any drive larger than 120GB so it’s possible newer drives have more stringent requirements but back in the day it was as simple as “find the exact same model number and revision and it’ll probably spin up”
|
# ? Apr 28, 2020 05:01 |
|
corgski posted:I haven’t done the board swap on any drive larger than 120GB so it’s possible newer drives have more stringent requirements but back in the day it was as simple as “find the exact same model number and revision and it’ll probably spin up” I'll have to watch the LTT video again and see if I'm remembering or just conflating it with something different.
|
# ? Apr 28, 2020 05:55 |
|
https://www.youtube.com/watch?v=eyr14_B230o&t=319s You made me watch 3 LTT videos :P At least I did it at 2x speed. Here it is, at 5:20. The factory calibration is stored on an 8kb chip.
|
# ? Apr 28, 2020 06:24 |
|
Random datapoint: I decided to convert my Synology 3-disk NAS from 3-disk SHR1 (1-drive fault tolerance) to 4-disk SHRX* (2-drive fault tolerance) at 4pm last Wednesday by adding one drive. It's currently only up to 67% checking parity consistency on 2.0TB of data after 5.x days of conversion. I was expecting it to take 1-3 days to complete the process, probably closer to one; a little surprised it's taking this long, but progress seems to be steady and the number goes up, so. Anyways, when the Synology tech says it might be faster to do it X way rather than Y way, "faster" might be days or weeks. I can imagine this taking a full month if I had to rebuild a 10TB array. Jesus *not sure if it's SHR1 with 2-drive fault tolerance, or SHR2 simply means SHR1 with 2-drive fault tolerance, but the UI simply calls it SHR with no number, with 2-drive fault tolerance
|
# ? Apr 28, 2020 08:25 |
|
Charles posted:https://www.youtube.com/watch?v=eyr14_B230o&t=319s That said, I'll keep that detail in mind if I somehow find myself needing to recover anything newer than 2005-ish. Swapping a single SMD flash chip isn't that hard either, and it's unlikely to be destroyed by claw hammering alone - catastrophically failing power supplies, on the other hand, will wreck those. But seriously, a zero wipe is (nearly always) sufficient, and if it isn't you should be paying for shredding.
|
# ? Apr 28, 2020 10:20 |
DrDork posted:I just drive a nail through it. Way faster and easier.
Then you just have to pick the software that you trust does the best job.

Hadlock posted:Is it still best practice to use at least two different kinds of disks in an array to minimize chances of multiple drive failure? If I were buying 8 disks I think I'd buy 4 of X @ $130 and 4 of Y @ $145 or whatever, or a 3-3-2 mixture

Hadlock posted:Random datapoint, I decided to convert my synology 3 disk nas from 3 disk SHR1 1 drive fault tolerance, to .... SHRX* 2 drive fault tolerance 4 disk at 4pm last wednesday by adding one drive
Which makes it a mystery why they aren't doing P+Q+R since that's exactly the same finite field Galois matrix transformation that's used for P+Q.
|
|
# ? Apr 28, 2020 12:07 |
|
Degaussing is the way to destroy data
|
# ? Apr 28, 2020 12:35 |
|
Bob Morales posted:Degaussing is the way to destroy data Sure, and those are a lot of fun, but I also don't have one at my house, so.... D. Ebdrup posted:With most modern OS' on a modern processor you get entirely software-based FDE for free, and considering FDE is designed for data at rest, I don't know why you wouldn't just do that. Mostly because encrypting ZFS natively is still a reasonably new feature, and frankly it adds another way to lose all my data while providing me zero protection against any sort of actual threat I'm likely to face. For other people it might make more sense. A nail is going in my old drives, regardless.
|
# ? Apr 28, 2020 14:36 |
|
One day I will have access to an industrial shredder
|
# ? Apr 28, 2020 15:48 |
|
Peter Gutmann, namesake of the 35-pass "Gutmann method":quote:In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques. As a result, they advocate applying the voodoo to PRML and EPRML drives even though it will have no more effect than a simple scrubbing with random data. In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do. As the paper says, "A good scrubbing with random data will do about as well as can be expected". This was true in 1996, and is still true now. Or, you know.
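For what it's worth, the "good scrubbing with random data" Gutmann describes is simple to sketch. This is a toy version against a plain file, not a substitute for real tools like shred or nwipe (which handle raw block devices, verification, and so on):

```python
# A toy sketch of a random-data scrub: overwrite a file in place with a
# couple of passes of os.urandom output, syncing after each pass.
import os

def scrub(path, passes=2, chunk=1 << 16):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            left = size
            while left:
                n = min(chunk, left)
                f.write(os.urandom(n))  # overwrite in place with random bytes
                left -= n
            f.flush()
            os.fsync(f.fileno())  # push the pass to disk before the next one
```

Note this only scrubs the file's current blocks; it says nothing about remapped sectors or old copies the filesystem left elsewhere, which is why the "pay for shredding" advice stands for anything sensitive.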
|
# ? Apr 28, 2020 15:54 |
|
As someone who's worked on MOSSAD-level security infrastructure: not everything that's done is done for technical reasons so much as for theatrical ones, and the distraction that can cause nation states to waste time is worth the comparatively small investment. The calculus of security isn't just about the technical parts; when nation-state kinds of money and teams are involved, you're not looking at the stuff people typically get computer security PhDs for anymore so much as at what people with political science degrees do.
|
# ? Apr 28, 2020 19:40 |
|
Hadlock posted:Random datapoint, I decided to convert my synology 3 disk nas from 3 disk SHR1 1 drive fault tolerance, to .... SHRX* 2 drive fault tolerance 4 disk at 4pm last wednesday by adding one drive Behind the scenes, Synology is using LVM on top of an mdadm RAID 6, which is notoriously slow. I do believe Synology is usually the most conservative when it comes to this, but yeah, most Synology techs will say recovering from backups is much faster. With the new WD SMR reveal I can see a lot of people biting their nails while stuff like this happens.
|
# ? Apr 28, 2020 21:27 |
What the heck-rear end kind of algorithm does mdadm use for P+Q if it's so slow as to render backup faster!? The one in ZFS is not as fast as mirroring or XOR of course, but it's not exactly a slouch either, because the operations themselves are offloaded on the CPU. The kind of supercomputer you need to do Galois matrix computations using finite fields in software is the kind you could simulate nuclear physics or global weather on, so since CDs and DVDs use the same fundamental error correction in the form of Reed-Solomon encoding, it's good that it can be done by a tiny ASIC. EDIT: It turns out it uses the same calculations, so if it's as slow as that suggests, there's a bottleneck somewhere else. Only notable thing is that ZFS takes it one step further and does P+Q+R striping with distributed parity. BlankSystemDaemon fucked around with this message at 21:54 on Apr 28, 2020 |
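For the curious, the P+Q arithmetic being argued about fits in a few lines. This is a toy illustration of RAID-6-style parity over GF(2^8) using the common 0x11d polynomial and generator 2 - not mdadm's or ZFS's actual code, which vectorizes all of this:

```python
def gf_mul(a, b):
    # Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11d)
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1D  # reduce by the field polynomial
        b >>= 1
    return p

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def pq_parity(blocks):
    # P is a plain XOR across data blocks; Q weights block i by g^i, g = 2
    n = len(blocks[0])
    P, Q = bytearray(n), bytearray(n)
    for i, d in enumerate(blocks):
        g = gf_pow(2, i)
        for j, byte in enumerate(d):
            P[j] ^= byte
            Q[j] ^= gf_mul(g, byte)
    return bytes(P), bytes(Q)
```

P alone rebuilds any one lost block by XOR; P and Q together let you solve for two lost blocks, which is the same Reed-Solomon machinery the post mentions for CDs and DVDs.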
|
# ? Apr 28, 2020 21:47 |
|
Isn't there some switch that limits repair speed? here we go, not sure if synology exposes this or has changed it from defaults or whatever quote:The /proc/sys/dev/raid/speed_limit_min is config file that reflects the current “goal” rebuild speed for times when non-rebuild activity is current on an array. The speed is in Kibibytes per second (1 kibibyte = 2^10 bytes = 1024 bytes), and is a per-device rate, not a per-array rate. The default is 1000. from https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html
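As a quick sketch of checking those throttles (the paths are the stock Linux md sysctls; whether Synology's DSM exposes them unchanged is an assumption):

```python
# Read the md rebuild throttles (KiB/s, per device); fall back to the
# documented defaults if md isn't loaded on this machine.
from pathlib import Path

def read_limit(name, default):
    p = Path("/proc/sys/dev/raid") / name
    try:
        return int(p.read_text())
    except (OSError, ValueError):
        return default

print(read_limit("speed_limit_min", 1000))    # default floor
print(read_limit("speed_limit_max", 200000))  # default ceiling
# As root you could raise the floor so rebuilds aren't throttled, e.g.:
#   Path("/proc/sys/dev/raid/speed_limit_min").write_text("50000")
```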
|
# ? Apr 28, 2020 21:58 |
|
Maybe when bcachefs gets mainlined Linux will finally have a Good Native Filesystem™
|
# ? Apr 28, 2020 22:08 |
taqueso posted:Isn't there some switch that limits repair speed?
In ZFS, the scrub and resilver delays(1) are expressed in kernel ticks, and Linux and Solaris default to 100Hz whereas FreeBSD is tickless and defaults to 1000Hz, with some device drivers supporting polling instead of interrupts. One side-effect of this is that without tweaking, ZFS on FreeBSD is a lot faster at scrubbing and resilvering. (1): In FreeBSD, they're defined via sysctls under the vfs.zfs.scrub_delay and resilver_delay OIDs respectively. I imagine Linux has some /proc that takes a magic bit-value/-mask?

VostokProgram posted:Maybe when bcachefs gets mainlined Linux will finally have a Good Native Filesystem™
Plus, people might still be slightly burned from btrfs.
|
|
# ? Apr 28, 2020 22:25 |
|
I was looking at the code last weekend; it's at the point where it's working but missing all the fancy features. FEC 'should be coming real soon', but it's absolutely going to be a while. I think people that are burned on btrfs are excited to see something reasonable being developed, though
|
# ? Apr 28, 2020 22:30 |
|
D. Ebdrup posted:
That's my hope too. Also I think that since bcache is a fairly popular program to begin with, and bcachefs is just a POSIX API over that same storage layer, people won't approach it with the same apprehension as btrfs. It reminds me of a talk I saw once where the presenter had tried to build a filesystem on top of an RDBMS. So like tables for inodes, directories, etc. It was super slow, but it did work.
|
# ? Apr 28, 2020 22:34 |
|
I want to buy a NAS to replace a bunch of random drives for storage, mainly for my Plex server. What is a good NAS/drive combo to go with? I currently have around 14TB of data so I will need that plus some for the future. A 6-bay NAS with 8TB disks? What RAID version? Nothing on it is real important so it would just be the pain of downloading things again. Important stuff is backed up to a couple places already. Should I worry about an "expandable" NAS, or if I need more space later just add a 2nd one? That is my thinking right now: get one that will handle things now plus some, and later buy a 2nd when they will probably be cheaper and drives will be bigger/cheaper.
|
# ? Apr 28, 2020 22:38 |
VostokProgram posted:That's my hope too. Also I think that since bcache is a fairly popular program to begin with and bcachefs is just a posix API over that same storage layer that people won't approach it with the same apprehension as btrfs. I've always thought the world didn't deserve that much of a punishment, personally.
|
|
# ? Apr 28, 2020 22:54 |
|
taqueso posted:Isn't there some switch that limits repair speed?
Hardware, or actual settings. You can actually make it rebuild faster: by default it's set for lower impact, since they assume people will want to use it to also play Plex or whatever while it rebuilds. The more disks, though, the longer the rebuild; that, I think, is mostly down to hardware.
|
# ? Apr 28, 2020 23:12 |
|
Trastion posted:I want to buy a NAS to replace a bunch of random drives for storage, mainly for my Plex server. What is a good NAS/drive combo to go with? I currently have around 14TB of data so I will need that plus some for future. Depends entirely on how comfortable you are with a DIY system. If you are, we can give some recommendations, and in that case a 6-drive setup isn't crazy. If not, and you're thinking of a Synology or whatnot, note that a 6-bay is gonna be like $800, and then you add the disks. 6x8TB is also a ton of storage if you're only at 14TB. Even if you wanted to be overly protective with RAIDZ2/SHR2, you'd still have 32TB of usable space. 40TB if you were ok with 1-disk redundancy. If you don't expect to expand rapidly, I might consider a 4x8TB single-redundancy setup. That'd get you 24TB usable space, and a 4 bay Synology that can do Plex like a DS918+ is more like $550. Then if you wanted to expand in a few years, you could either do something like replace the 4 drives with 16TB ones (or whatever is the price:size sweet spot at the time) or get a expansion unit to add another 4+ drives.
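The capacity figures above are just (drives - parity) × drive size. A throwaway helper to compare layouts (my own function name; assumes equal-size drives and ignores filesystem overhead, the TB-vs-TiB difference, and SHR's mixed-size pooling):

```python
# Back-of-the-envelope usable capacity for an N-drive array with
# a given number of parity/redundancy drives.
def usable_tb(drives, tb_each, parity_drives):
    return (drives - parity_drives) * tb_each

print(usable_tb(6, 8, 2))  # 6x8TB RAIDZ2/SHR2
print(usable_tb(6, 8, 1))  # 6x8TB, one-disk redundancy
print(usable_tb(4, 8, 1))  # 4x8TB, one-disk redundancy
```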
|
# ? Apr 28, 2020 23:26 |
|
Synology just announced their 2020 4-bay models FYI.
|
# ? Apr 28, 2020 23:35 |
|
Smashing Link posted:Synology just announced their 2020 4 bays models FYI. Link to the announcement? Is this one of them? https://www.synology.com/en-us/products/DS420j Also is the DS418 play a good buy?
|
# ? Apr 29, 2020 00:06 |
|
Depends what you are using the DS418 for; always better to overestimate than under.
|
# ? Apr 29, 2020 00:28 |
|
Charles posted:Link to the announcement? It would be, going by the model number scheme: DS (DiskStation) - max possible disks - year of release - feature suffix. So DS420j = DS, 4 disks, 2020, j (value model).
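Treating that naming scheme as a spec, a hypothetical decoder (the regex and field names are my own guesses; note that "max possible disks" counts expansion units, which is why a 4-bay DS918+ starts with a 9):

```python
import re

def decode_model(model):
    # DS + max possible disks + 2-digit year + optional feature suffix
    m = re.fullmatch(r"DS(\d+?)(\d{2})(\D*)", model)
    if not m:
        return None
    disks, year, suffix = m.groups()
    return {"disks": int(disks), "year": 2000 + int(year), "suffix": suffix}
```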
|
# ? Apr 29, 2020 01:12 |
|
I saw the 20 so I figured, but don't know what the rest are.
|
# ? Apr 29, 2020 01:16 |
|
Charles posted:Link to the announcement? https://nascompares.com/2020/04/27/synology-ds920-ds220-ds720-and-ds420-nas-revealed/amp/
|
# ? Apr 29, 2020 01:31 |
|
Quote=/=edit
|
# ? Apr 29, 2020 01:32 |
|
Axe-man posted:hardware or actual settings You can actually make it rebuild faster:
Oh interesting. I bumped the numbers up and CPU/memory went from ~5% utilization to ~15%. Hopefully this means it'll finish updating before I move on Friday now
|
# ? Apr 29, 2020 02:32 |
|
Smashing Link posted:https://nascompares.com/2020/04/27/synology-ds920-ds220-ds720-and-ds420-nas-revealed/amp/ Ah, it's a leak, not an announcement; that's why I couldn't find it. 2 questions: do existing models' prices usually drop? Do existing models get good support?
|
# ? Apr 29, 2020 03:18 |
|
Charles posted:Ah, it's a leak, not an announcement, that's why I couldn't find it: Not sure about price drops but I have 2 units from 2015 that still get updates. I have heard of 10 year old models still in use. The downside is the premium you pay for the support and ecosystem, like Apple. For that reason my last build was Unraid with eBay server parts.
|
# ? Apr 29, 2020 03:51 |
|
I'm working on setting up my NAS, and in planning everything out, the following question came to me. I'm sure the answer is out there, but I can't seem to find the magic Google phrase. If I have a large file on a Samba share and want to move or copy it to another point on the same share (say I want to move \\nas\linux_isos\notporn.avi to \\nas\freebsd_isos\), is the Samba protocol and/or implementation smart enough to move the file in-place? Or is it all handled by the client like a regular file move/copy, meaning it will get round-tripped over the network?
|
# ? Apr 29, 2020 04:23 |
|
If it’s the same share it’ll be a server-side move.
|
# ? Apr 29, 2020 04:25 |
|
Digitimes reports that WD is planning to increase the price of enterprise drives. You know, the ones they recommended to consumers who don’t want SMR. https://seekingalpha.com/news/3564710-western-digital-raising-hdd-prices-report
|
# ? Apr 29, 2020 07:58 |
|
|
# ? Jun 6, 2024 05:39 |
|
I threw together a Ryzen 3 1200-based Linux server a few years ago that I've been using for Plex etc, as well as a QNAP NAS that I want to retire. The way storage is set up is a bit of a mess at the moment, so the idea is to set up a software RAID5 with 9x8TB hard drives on the server (eventually expanding to 10 or 11 drives). I already have 6x8TB drives that aren't in an array (four in the NAS and 2 in the server) and I'm going to buy another 3x8TB drives, create the initial array, then transfer stuff over, adding drives to the array as I empty them. They're SMR drives, so I imagine this will be slow as hell and I understand there will be a speed penalty during RAID rebuilds, though the NAS is only really used to store video files for streaming so speed requirements aren't high. Is this a terrible idea? Tips on filesystems and chunk sizes etc would be nice too. SCheeseman fucked around with this message at 09:57 on Apr 29, 2020 |
# ? Apr 29, 2020 09:49 |