Crunchy Black
Oct 24, 2017

by Athanatos

BSD, I just want to reiterate that having someone with your knowledge of the low-level and communication ability is always amazingly helpful, insightful and fun. Thanks for hanging out.

mom and dad fight a lot posted:

I hope it's okay to ask this. I was redirected here from the PC Building Thread. Basically:

- I'm wanting to throw 2 HDD's in my desktop PC for long-term storage
- HDD's would be in Raid1 (well...mirrored volumes in Win10) for when one of them fails.
- they wouldn't be used that often; surely read more often than written. Mainly just a place to dump photos/videos/documents/etc.
- chose HDDs because they're like, half the cost of SSDs

Problem is WD's most bang-for-buck HDD's are shingled magnetic recording. Will it make a significant difference if I stick with SMR vice CMR for this use case? Am I gonna regret it?

SMR for this purpose will be fine. Basically, if you're not writing a lot and it's not a NAS, it'll be okay. There's a lot more technical stuff behind that, but that should be all you need to know.


freeasinbeer
Mar 26, 2015

by Fluffdaddy
And it sounds like SMR might be fine for some NAS; just not ZFS.

Edit: this should not be read as an endorsement to use it for any consumer related NAS stuff. It offers some theoretical benefits for specific workloads; and I’d be interested to see if something like rocksdb or some newer exotic file systems make it an interesting proposition.

freeasinbeer fucked around with this message at 15:34 on Dec 30, 2021

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Just seen some Youtube video where a guy goes on about combining Mergerfs with SnapRAID, because ZFS is "too inflexible" for his taste (mainly the RAIDZ expansion argument, which I hope will finally get settled in OpenZFS 3.0), combined with some cron script using rsync to implement a cache SSD (also via Mergerfs).

That certainly seems to be an odd way to go about things. Why the third party solutions, when you have mdadm and bcache? Which probably are also more robust due to higher install base highlighting issues with these?

Nulldevice
Jun 17, 2006
Toilet Rascal

Combat Pretzel posted:

Just seen some Youtube video where a guy goes on about combining Mergerfs with SnapRAID, because ZFS is "too inflexible" for his taste (mainly the RAIDZ expansion argument, which I hope will finally get settled in OpenZFS 3.0), combined with some cron script using rsync to implement a cache SSD (also via Mergerfs).

That certainly seems to be an odd way to go about things. Why the third party solutions, when you have mdadm and bcache? Which probably are also more robust due to higher install base highlighting issues with these?

MergerFS/Snapraid is very flexible and similar to what unraid does. Largest disk(s) are parity, and you can mix and match drive sizes. Merger just puts them in one big pool. There's some technical stuff that you have to do to Samba and NFS, and merger will allow you to spread writes across the pool, or fill one disk at a time and reserve an amount you specify. You can also add drives with data already on them and not lose anything. In the end it's just one large mount point with data protection for as many disks as you supply for parity. Default config file specifies up to six parity disks.
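For the curious, a minimal snapraid.conf along those lines might look like this — paths, disk names, and options are illustrative, not a recommended config:

```
# Hypothetical SnapRAID layout: one parity disk, three mixed-size data disks.
# The parity disk must be at least as large as the largest data disk.
parity /mnt/parity1/snapraid.parity

# Content files track the state of the array; keep copies on several disks.
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
content /mnt/disk2/.snapraid.content

disk d1 /mnt/disk1/
disk d2 /mnt/disk2/
disk d3 /mnt/disk3/
```

mergerfs would then pool /mnt/disk1-3 into the single mount point, e.g. via an /etc/fstab line like `/mnt/disk* /mnt/pool fuse.mergerfs defaults,allow_other,category.create=mfs 0 0` (options again illustrative — `category.create` is what controls whether writes spread across the pool or fill one disk at a time).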

edit: I should clarify something with the merger/snapraid setup that I did have pop up a few times, the filesystem loop. If I had a TV show that spanned two disks, I would end up in a loop and had to manually repair the issue. This happened only a few times, but it happened more than I'd like. Snapraid didn't detect it, so I'm pretty sure it was a problem with mergerfs.

Nulldevice fucked around with this message at 16:39 on Dec 31, 2021

Crunchy Black
Oct 24, 2017

by Athanatos
That's certainly cool but flexibility isn't my main design target when I'm talking about high speed local large-scale data storage.

Nulldevice
Jun 17, 2006
Toilet Rascal

Crunchy Black posted:

That's certainly cool but flexibility isn't my main design target when I'm talking about high speed local large-scale data storage.

It is cool for some purposes, but I prefer TrueNAS or a ZFS based system. I retired the snapraid system years ago. I still help a friend maintain one however.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

Crunchy Black posted:

BSD, I just want to reiterate that having someone with your knowledge of the low-level and communication ability is always amazingly helpful, insightful and fun. Thanks for hanging out.

agreed, teaching us something new every day.

mom and dad fight a lot
Sep 21, 2006

If you count them all, this sentence has exactly seventy-two characters.

BlankSystemDaemon posted:

Drive-managed SMR works for traditional filesystems like NTFS, so you should be fine.
No guarantees though.

Have you thought about shucking? You can often get drives for cheaper than the SMR drives.
I...didn't know this was a thing. I'll have to look into that. Thanks!
Edit: Oh right, some of them are drive managed, and some aren't. I remember hearing about that. Eh, maybe using mechanical drives ain't a good idea, it's so loving complicated now.

Crunchy Black posted:

SMR for this purpose will be fine. Basically, if you're not writing a lot and it's not a NAS, it'll be okay. There's a lot more technical stuff behind that, but that should be all you need to know.
Glad to hear, thanks! I don't know much beyond "here's how SMR works and also it can take forever to write sometimes". Had no idea how Win10 mirrored drives would handle it. I've got a 500GB M.2 drive doing most everything else, but when that starts becoming insufficient, I'll hopefully have enough $$$ budgeted for a decent 2.5" SSD, and not have to lean on the HDD's temporarily. :ohdear:

freeasinbeer posted:

And it sounds like SMR might be fine for some NAS; just not ZFS.

Edit: this should not be read as an endorsement to use it for any consumer related NAS stuff. It offers some theoretical benefits for specific workloads; and I’d be interested to see if something like rocksdb or some newer exotic file systems make it an interesting proposition.
Good to know, thanks! One of these days I'll join the sexy exciting world of NAS, but for now 2 disks in RAID1 is good enough.

mom and dad fight a lot fucked around with this message at 18:01 on Dec 30, 2021

Motronic
Nov 6, 2009
Probation
Can't post for 9 hours!
So I grabbed 4 6TB WD Red+ drives (WD60EFZX-68B) to replace a pool of 4 WD Reds that had 5-6 years on them in my Free/TrueNAS box. Was going to upgrade the size but everything was really expensive and these were on sale on Newegg. One of the first 4 simply wasn't recognized. It never spun up all the way. So I returned it and bought another one. Another one of the first batch ended up throwing a bunch of unrecoverable errors during resilvering. Also, it turns out that my "return" was actually a replacement, because a few days later I got a shipping notification and a new drive showed up.

I'm not going to mess with this pool until the holidays are over, but two questions: What the hell WD? and What's the best way to test/do a warranty claim on the one that threw a bunch of errors. Is there still some utility they have you run on it? I've got an external USB drive sled and various windows/macos/linux boxes I can use to run something on. I'm hoping that's good enough because it will be a huge pain in the rear end to actually install that drive in something. (or maybe newegg will just take it back and this is unnecessary)

CopperHound
Feb 14, 2012

mom and dad fight a lot posted:

I...didn't know this was a thing. I'll have to look into that. Thanks!
https://shucks.top/

BlankSystemDaemon
Mar 13, 2009



Crunchy Black posted:

BSD, I just want to reiterate that having someone with your knowledge of the low-level and communication ability is always amazingly helpful, insightful and fun. Thanks for hanging out.
Thank you for saying so - although I'm not so sure about my communication ability. I struggle between being too concise and being too verbose. Someone should really invent a flag between -q and -v. :v:

freeasinbeer posted:

And it sounds like SMR might be fine for some NAS; just not ZFS.
It depends on the writing patterns (including for ZFS). It is theoretically possible to configure a zpool such that it'll function with ZFS, even during scrub. But even if you're using hardware RAID, random I/O of any kind is going to be significantly worse with SMR disks than it is for regular spinning rust.
However, the hoops you have to jump through to use ZFS on SMR mean keeping several gigabytes' worth of dirty data in memory (data that's lost if the system crashes, loses power, or gets an uncorrectable or transient error without ECC), no synchronous I/O, every single write issued asynchronously, and never reading and writing at the same time. It's functionally indistinguishable from the people who insist on setting ZFS up in such a way that it's practically guaranteed to lose their data, because they know just enough to be dangerous but not enough to spot the danger.

It's also worth mentioning that SMR disks can work as a kind of sequential-access replacement, ie. you use da(4) like you would sa(4): where da0 is an SMR disk connected via USB, all you do is zfs send … > /dev/da0, and when you want to restore you do cat /dev/da0 | zfs receive ….
That's how I've been using a few SMR drives I mistakenly bought, to have an extra backup that I can use as a last-ditch effort once corrective receive lands in OpenZFS.

Combat Pretzel posted:

Just seen some Youtube video where a guy goes on about combining Mergerfs with SnapRAID, because ZFS is "too inflexible" for his taste (mainly the RAIDZ expansion argument, which I hope will finally get settled in OpenZFS 3.0), combined with some cron script using rsync to implement a cache SSD (also via Mergerfs).

That certainly seems to be an odd way to go about things. Why the third party solutions, when you have mdadm and bcache? Which probably are also more robust due to higher install base highlighting issues with these?
I have to confess that I don't understand the appeal of flexibility when it comes to storage - because none of the solutions that offer the amount of flexibility people want show any sign of caring about availability or things like checksums, and they mostly seem designed around the idea that you can trust disks.
I know I'm repeating myself here, but that's not been true for over 20 years and it's still not true today.

Nulldevice posted:

It is cool for some purposes, but I prefer TrueNAS or a ZFS based system. I retired the snapraid system years ago. I still help a friend maintain one however.
But what does it teach you? How quickly you can lose data? That you can lose data without knowing it? Those are things NTFS, EXT2-4 and UFS can teach you.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Flexibility for me is being able to add a drive at a time of any size and expand as I go.

And I don't care about bitrot or data loss as what's important is backed up remotely. I touch linux/cloud stuff in my day job, I just want something that's easy and works for my plex/isos/a few self hosted services for my home.

For me, and I can only speak for me, UnRAID fits that perfectly and was worth every penny. Especially when you can repurpose old hardware that's more powerful than a store bought NAS.

Enos Cabell
Nov 3, 2004


Matt Zerella posted:

Flexibility for me is being able to add a drive at a time of any size and expand as I go.

And I don't care about bitrot or data loss as what's important is backed up remotely. I touch linux/cloud stuff in my day job, I just want something that's easy and works for my plex/isos/a few self hosted services for my home.

For me, and I can only speak for me, UnRAID fits that perfectly and was worth every penny. Especially when you can repurpose old hardware that's more powerful than a store bought NAS.

Yup, this is exactly why I went with UnRAID. Everything that I can't afford to lose is backed up offsite, and I love being able to add drives as my needs and budget require. Over the past 3 1/2 years I've expanded from 5 to 9 drives, and the only problems I've had were due to a crappy USB port on my motherboard that likes to fry USB drives.

YerDa Zabam
Aug 13, 2016



Motronic posted:

So I grabbed 4 6TB WD Red+ drives (WD60EFZX-68B) to replace a pool of 4 WD Reds that had 5-6 years on them in my Free/TrueNAS box. Was going to upgrade the size but everything was really expensive and these were on sale on Newegg. One of the first 4 simply wasn't recognized. It never spun up all the way. So I returned it and bought another one. Another one of the first batch ended up throwing a bunch of unrecoverable errors during resilvering. Also, it turns out that my "return" was actually a replacement, because a few days later I got a shipping notification and a new drive showed up.

I'm not going to mess with this pool until the holidays are over, but two questions: What the hell WD? and What's the best way to test/do a warranty claim on the one that threw a bunch of errors. Is there still some utility they have you run on it? I've got an external USB drive sled and various windows/macos/linux boxes I can use to run something on. I'm hoping that's good enough because it will be a huge pain in the rear end to actually install that drive in something. (or maybe newegg will just take it back and this is unnecessary)

You just had bad luck. Well, a bad batch more like.
I doubt they will test them. The practicalities of testing drives for a few bad sectors make it very unlikely.
I used to work for Amazon and we tested zero. 99% of the time we wouldn't even open the box. Either back to the manufacturer or trashed.
Newegg might be different though, but if they are brand new, and likely from a bad batch they should just straight refund you.

If you are worried about new drives then maybe put a trial of Unraid on USB and run preclear on it. Most components fail at the beginning, or end of their lifetime and preclear gives it a good workout and shows if there are any issues. Takes a while though. Took me about 9 hours for an 8TB drive iirc
I can't think of anything that will use a USB sled and be low-level enough though. There's definitely others in the thread that'll know for sure though
-e- badblocks works fine with USB drives, my assumption was wrong

E: Ah, were you meaning to do a test and return the replacements if they throw errors? I think I got confused there. The preclear thing should work, and the return/replace/refund still stands.
The one that's already throwing up errors though, gently caress that straight back to them for sure.
Well, as long as it's definitely not something at your end. I had a missing single cap on the backplane that was loving my poo poo up in many weird ways

YerDa Zabam fucked around with this message at 14:13 on Dec 31, 2021

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

Enos Cabell posted:

Yup, this is exactly why I went with UnRAID. Everything that I can't afford to lose is backed up offsite, and I love being able to add drives as my needs and budget require. Over the past 3 1/2 years I've expanded from 5 to 9 drives, and the only problems I've had were due to a crappy USB port on my motherboard that likes to fry USB drives.

Same. I have what little irreplaceable data I have backed up to Google Drive and my media library is easily replaceable if I ever can't rebuild from parity. For someone with no budget and no Linux experience that just had a lot of old computer hardware laying around and wanted to try out a home server it's been great. I think I started in 2016 with a pair of old 500gb drives and now I have 8 3-4tb drives in my array acquired whenever I had the money and when the existing drives filled up.

It's also great not having Plex on my gaming machine anymore, now if that computer gets messed up I can just nuke Windows and start fresh instead of combing through forum posts from 2007 and digging through the registry for days. Obviously that's not exclusive to unRAID though, that's just a nice benefit of having a home server.

Scruff McGruff fucked around with this message at 19:54 on Dec 30, 2021

Motronic
Nov 6, 2009
Probation
Can't post for 9 hours!

Adolf Glitter posted:

You just had bad luck. Well, a bad batch more like.
I doubt they will test them. The practicalities of testing drives for a few sectors makes it very unlikely .
I used to work for Amazon and we tested zero. 99% of the time we wouldn't even open the box. Either back to the manufacturer or trashed.
Newegg might be different though, but if they are brand new, and likely from a bad batch they should just straight refund you.

If you are worried about new drives then maybe put a trial of Unraid on USB and run preclear on it. Most components fail at the beginning, or end of their lifetime and preclear gives it a good workout and shows if there are any issues. Takes a while though. Took me about 9 hours for an 8TB drive iirc

E: Ah, were you meaning to do a test and return the replacements if they throw errors? I think I got confused there. The preclear thing should work, and the return/replace/refund still stands.
The one that's already throwing up errors though, gently caress that straight back to them for sure.
Well, as long as it's definitely not something at your end. I had a missing single cap on the backplane that was loving my poo poo up in many weird ways

I can't think of anything that will use a USB sled and be low level enough though. There's definitely others in the tread that'll know for sure though

Cool thanks. And no, I just meant making sure they'd take it back. The one that wouldn't even power up was pretty obvious.

Since I accidentally bought an extra one I'll just toss it in as an online spare. I'm 99.9% sure there's nothing wrong with my backplane/ports/etc. The one that wouldn't power on got replaced with something in the same bay and it's fine. The one that threw errors was the second one replaced in the pool and I put the remaining drive I had on hand at the time in there and it was fine. I just need to throw the warranty replacement in to complete this pool and then I guess I'll warranty the other one and wait for it to come back as my spare.

r u ready to WALK
Sep 29, 2001

i powered my old decommissioned fileserver back up to play around with zfs 2.1.2 and possibly upgrading my new server





now taking bets on whether the pool will magically repair itself or explode spectacularly
it's 12x 3TB WD Red drives from early 2013 that's been shut down since 2019 until now, resilvering already restarted from 0% twice due to disk io hanging and the lsi sas2008 controller resetting the links
i already swapped out one drive that appeared completely dead with a slightly newer spare

BlankSystemDaemon
Mar 13, 2009



r u ready to WALK posted:

i powered my old decommissioned fileserver back up to play around with zfs 2.1.2 and possibly upgrading my new server





now taking bets on whether the pool will magically repair itself or explode spectacularly
it's 12x 3TB WD Red drives from early 2013 that's been shut down since 2019 until now, resilvering already restarted from 0% twice due to disk io hanging and the lsi sas2008 controller resetting the links
i already swapped out one drive that appeared completely dead with a slightly newer spare
It's raidz3 with two disks that experienced read errors, which can either be because the disks don't have TLER and have a very long error recovery time (above 30 seconds), or because the disks have experienced UREs and have tried to relocate the data and failed, or some of the many other weird things harddisk firmware gets up to.

Your system log is likely full of messages where the second line looks something like CAM status: SCSI Read Error and where the other lines can help determine where the error is (although for all practical purposes, since you're using ZFS, it doesn't matter since it knows the on-disk state of the data as well as the LBA mappings for them).

You still have one disk's worth of distributed parity left, and in the case of a URE happening on one of the good disks, you're still only going to suffer minor data loss (ie. only the file where the URE happens will be affected, unlike hardware RAID, which would just throw up its hands and say all your data was kaput).

You might want to set kern.cam.ada.retry_count=0, kern.cam.cd.retry_count=0, and kern.cam.da.retry_count=0 via sysctl(8), as that can sometimes help avoid triggering the too many errors condition.
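For anyone following along on FreeBSD, those can be set at runtime with sysctl(8), or persisted across reboots in /etc/sysctl.conf — a sketch, using exactly the knobs mentioned above:

```
# /etc/sysctl.conf (FreeBSD): don't let CAM retry failing commands;
# surface errors immediately so ZFS can handle them instead.
kern.cam.ada.retry_count=0
kern.cam.cd.retry_count=0
kern.cam.da.retry_count=0
```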

If all else fails and you have a couple of disks to spare, you can try using recoverdisk(1) to image the disks onto a bigger disk (an 8TB shucked disk, for example), then if it completes (this can take a long time; I had it running for two years at one point, and phk (the author) told me about someone who had it running for three years), you might be able to reimport the disk by using ggatel(8) to make the image files appear as GEOM devices.

BlankSystemDaemon fucked around with this message at 22:14 on Dec 30, 2021

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.
Well I got my TrueNAS Core server up and running a few days ago and woo boy, I'm in way more over my head than I thought I was going to be when I started. I'm fully on board with deep diving into documentation and such, but I'd really like to get my Plex server up and running - something that I think should be possible even if I don't fully comprehend this operating system yet. I got my Plex plugin installed and running and in 'up' status with its own jail and such, but I'm not sure why, when I try to access the admin portal, I get the 'url not found/refused connect' error in my browser.

A Bag of Milk fucked around with this message at 05:25 on Dec 31, 2021

BlankSystemDaemon
Mar 13, 2009



A Bag of Milk posted:

Well I got my TrueNAS Core server up and running a few days ago and woo boy I'm in way more over my head than I thought I was going to be when I started. I'm fully on board with deep diving into documentation and such, but I'd really like to get my Plex server up and running - something that I think should be possible even if I don't fully comprehend this operating system yet. I got my Plex plugin installed and running and in 'up' status with it's own jail and such, but I'm not sure why when I try to access the admin portal I get the 'url not found/refused connect' error in my browser.
I'm pretty sure TrueNAS uses vnet, which is a form of network virtualization used with jails - what it does is ensure that each jail has its own network stack (ie. separate IP, MAC address, but also the in-memory buffers for sockets and the like) - so since you're getting connection refused (which means that the port you're connecting to doesn't have a daemon running on it), I'd suggest that you ensure you're connecting to the right IP (ie. the IP of the jail, not the host).

YerDa Zabam
Aug 13, 2016



Motronic posted:

Cool thanks. And no, I just meant making sure they'd take it back. The one that wouldn't even power up was pretty obvious.

Since I accidentally bought an extra one I'll just toss it in as an online spare. I'm 99.9% sure there's nothing wrong with my backplane/ports/etc. The one that wouldn't power on got replaced with something in the same bay and it's fine. The one that threw errors was the second one replaced in the pool and I put the remaining drive I had on hand at the time in there and it was fine. I just need to throw the warranty replacement in to complete this pool and then I guess I'll warranty the other one and wait for it to come back as my spare.

Ignore my rash assumption earlier about USB being a barrier. You can run badblocks in truenas (and elsewhere) on USB connected drives just fine. Good to know for any shuckers out there too I guess

Nulldevice
Jun 17, 2006
Toilet Rascal

BlankSystemDaemon posted:


But what does it teach you? How quickly you can lose data? That you can lose data without knowing it? Those are things NTFS, EXT2-4 and UFS can teach you.

One of the things I learned from it is how loops can develop in the file system between disks. It can be corrected by moving files from the full drive over to the drive with the rest of the data on it, but it's a rather large pain in the rear end. It's one of the things I noticed that the snapraid scrub didn't catch - another reason I got off the system. Generally speaking, the loops were a rare occurrence; I had it happen maybe two or three times over a couple of years. And yes, data loss was very possible depending on how often you calculated parity. If you delete a file and want to recover it, you can do so as long as parity hasn't been recalculated already. If it has, your file is gone if you don't have backups. If one of your member disks dies before parity calculation, all new writes are gone; you restore from parity and reacquire the missing data if possible. Not what I'd call reliable. I'd say I learned a great deal about proper file storage in this scenario. Mind you, merger doesn't have to be a part of this situation - you can keep the disks separate and just have a fuckton of file shares, but that's inconvenient.

I ended up building a TrueNAS system that met my needs for performance and space. Transferred the data over to that system, and moved Plex and associated services to another system. I use the TrueNAS servers I have strictly for storage and that's about it.

What did I learn overall? Keep your storage servers for storage. Use other machines to handle the applications, and don't use wingnut half baked solutions.

r u ready to WALK
Sep 29, 2001

I played around with smartmontools and https://github.com/AnalogJ/scrutiny on that failing raidz3 pool, I think I'm starting to see the problem




pretty sure all the drives are just yearning for the sweet release of death after 7 years power on time

BlankSystemDaemon
Mar 13, 2009



They're definitely not happy campers, and are doing considerably worse than the 2TB HD204UIs that I had in my old server before retiring it after having more than 90000 power-on hours (just over 10¼ years).

cruft
Oct 25, 2007

It's time for me to replace my three "archive" USB hard drives with a JBOD enclosure. The main concern is low noise, I don't even care much about performance, since it's just streaming video.

Any recommendations?

BlankSystemDaemon
Mar 13, 2009



Are we talking about an actual JBOD chassis via SFF-8088 or JBOD DAS via USB3?

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

cruft posted:

It's time for me to replace my three "archive" USB hard drives with a JBOD enclosure. The main concern is low noise, I don't even care much about performance, since it's just streaming video.

Any recommendations?

If you don't mind paying the premium QNAP makes some nice 4 bay expansion enclosures in both USB and SFF-8088 formats.

SFF-8088 would provide better performance and is a more robust solution but it requires that you also have a free PCIe x4 (or larger) slot in your computer in order to add a HBA card.

Since you were previously using external USB drives you're probably good sticking with a USB enclosure.

Sheep
Jul 24, 2003

cruft posted:

It's time for me to replace my three "archive" USB hard drives with a JBOD enclosure. The main concern is low noise, I don't even care much about performance, since it's just streaming video.

Any recommendations?

Depending on how much space you need, 1TB NVME drives are silent and also not terribly expensive anymore. SSDs are also dropping in price like rocks.

You can get PCI cards that will let you mount up to two NVME drives on them as well if you don't want to deal with external enclosures, but even an external enclosure is only like $10.

cruft
Oct 25, 2007

Right, sorry, should have realized what thread I was posting in.

I'm connecting this to a Raspberry Pi 4. I currently have 2TB+2TB+4TB USB drives on a powered hub. I need this to be something I can expect my niece to maintain, so I have to stick with USB.

My plan is to give the two 2TB disks to my father's system, replace them with 4TB disks in an enclosure, and eventually replace the leftover 4TB USB disk with 6-12TB.

By 2022 standards this is laughable. I console myself with the knowledge that by 1997 standards this is worth millions.

So the thing I'm looking for would make each disk look like a USB mass storage device, and be pretty quiet.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.
Sabrent has some good options but you pay for the name there.

Mediasonic has a 4-bay for a good price, I've used their internal drive bay adapters and they've been solid but as a company they're a bit of a mystery, appearing and disappearing every so often.

Scruff McGruff fucked around with this message at 01:05 on Jan 2, 2022

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
hey, psa:

in the windows thread I recently asked a question about random system hangs where the answer ended up being that it was happening when a network drive was unreachable. Even if what you're doing has nothing to do with that particular share (or any share at all), Windows will helpfully go make sure it's reachable. If the share is offline, explorer will hang until it times out, that's why it hangs.

one corollary here is that I've heard some USB drives that I have on a network share (via windows) spinning up and down. I was upstairs in my living room just walking by when I heard it, there was nothing running on it. I'm wondering if that's related - of course if it can connect, windows on the other end is probably going to spin up the drive, right?

If you've got external drives directly mapped as shares on your pc, you may want to keep an eye on your load/unload cycles. mine look to have about 6k cycles on 6k hours which isn't great.

it's much better to shuck and run them as internal and force them to always spin, I think

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
6K load cycles in 6K hours is OK. Better than smartctl temperature reads triggering a load cycle every 5 minutes.

kri kri
Jul 18, 2007

I recently purchased 3x 14TB easy stores. Two out of the three failed preclear in their enclosures. Test your drives folks!!

Also the recent posts about unraid match mine. I have been using it since 2016 and it’s been virtually bulletproof. Sitting at 104TB with one more 14tb to swap in, it’s such a nice solution.

Just started to get into Borg backup, playing around with borgmatic and Vorta2. Vorta is a nice gui layer over borg. I have set up 4 external drives so far and it’s working well.

BlankSystemDaemon
Mar 13, 2009



"Virtually bulletproof" is doing a lot of work in that sentence, since you have absolutely no way of knowing whether any of your data is corrupt, short of programmatically maintaining out-of-tree checksums that can be compared automatically at regular intervals.
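To make that concrete, here's a minimal sketch of what "out-of-tree checksums" could look like — build a SHA-256 manifest of a directory tree once, then re-verify it later to detect silent corruption. Function names and the manifest layout are mine, not part of any NAS product:

```python
import hashlib
from pathlib import Path

def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 hex digest."""
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest

def verify(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return the relative paths whose current digest no longer matches."""
    return [
        rel for rel, digest in manifest.items()
        if hashlib.sha256((root / rel).read_bytes()).hexdigest() != digest
    ]
```

Run build_manifest once after writing data, store the result somewhere off the array, and have cron run verify against it periodically — any path it returns has changed (or rotted) since the manifest was taken. It's no substitute for a checksumming filesystem, but it's the only way to detect bitrot on one that doesn't checksum.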

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

BlankSystemDaemon posted:

"Virtually bulletproof" is doing a lot of work in that sentence, since you have absolutely no way of knowing if any of your data is corrupt short of maintaining out-of-tree checksums programmatically that can be automatically compared at regular intervals.

I don't care about this at home.

E: also, unraid does checksumming on a scheduled basis

Matt Zerella fucked around with this message at 17:42 on Jan 3, 2022

BlankSystemDaemon
Mar 13, 2009



Matt Zerella posted:

I don't care about this at home.

E: also, unraid does checksumming on a scheduled basis
I don't understand how you can not care, when we know that hard disks deliberately lie to keep you from returning them and/or from avoiding the vendor in the future.

If you mean via the plugin that has to be enabled, and then the option that also has to be enabled, I'm not sure that's the big win you think it is - unless you enabled it before you put any files on a brand-new setup (and not, say, something upgraded from unRAID 5, where it wasn't available), you can't prove you don't already have bitrot, since checksumming has to be in place before anything is ever written.

Also, the choice of checksumming primitives isn't exactly comforting - three of the options are very, very expensive in both CPU time and wallclock time (one of which may optionally be offloaded, on some platforms, if you have a CPU that's new enough or special enough, like an AMD Zen 2 or some Intel CPUs, including part of the Atom lineup).
The fourth option? MD5. Which... yeah, no, that's not an option in the year of gently caress 2022. There's no crc32c or fletcher (both of which have excellent copyfree implementations and work well for detecting bitrot).
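Since fletcher came up: it's cheap enough to write down in a few lines. A hedged Python sketch of fletcher-32 (same family as the fletcher variants ZFS uses, though ZFS's fletcher2/fletcher4 work on bigger words), using the common little-endian 16-bit-word formulation with zero padding:

```python
def fletcher32(data: bytes) -> int:
    """Fletcher-32 over 16-bit little-endian words, zero-padded.
    The second running sum makes it position-sensitive, unlike a
    plain additive checksum - enough to catch typical bitrot."""
    if len(data) % 2:
        data += b"\x00"
    sum1 = sum2 = 0
    for i in range(0, len(data), 2):
        word = data[i] | (data[i + 1] << 8)
        sum1 = (sum1 + word) % 65535
        sum2 = (sum2 + sum1) % 65535
    return (sum2 << 16) | sum1

# Standard test vector: fletcher32(b"abcde") == 0xF04FC729
```

It's just two adds and two mods per word, which is why it's so much cheaper than a cryptographic hash for this job - you don't need collision resistance to detect a flipped bit.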

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

BlankSystemDaemon posted:

I don't understand how you can not care, when we know that hard disks deliberately lie to keep you from returning them and/or from avoiding the vendor in the future.

If you mean via the plugin that has to be enabled, and then the option that also has to be enabled, I'm not sure that's the big win you think it is - unless you enabled it before you put any files on a brand-new setup (and not, say, something upgraded from unRAID 5, where it wasn't available), you can't prove you don't already have bitrot, since checksumming has to be in place before anything is ever written.

Also, the choice of checksumming primitives isn't exactly comforting - three of the options are very, very expensive in both CPU time and wallclock time (one of which may optionally be offloaded, on some platforms, if you have a CPU that's new enough or special enough, like an AMD Zen 2 or some Intel CPUs, including part of the Atom lineup).
The fourth option? MD5. Which... yeah, no, that's not an option in the year of gently caress 2022. There's no crc32c or fletcher (both of which have excellent copyfree implementations and work well for detecting bitrot).

Most of my storage is linux ISOs. I don't give a crap about checking for corruption or backing it up. I have Sonarr/Radarr backing up to google drive. In the event of a catastrophe I restore my docker directories and spin everything back up and let them handle getting the files back. What little other data I have stored on it is not very important. I don't even worry about vendor because unraid lets me mix and match as needed. If a drive dies, I drill a hole in it and send it to ewaste.

All my documents are stored in iCloud and don't even touch the NAS.

ECC/ CoW / bitrot, whatever, doesn't mean a thing to me. So yeah, unraid is bulletproof for that.

I lost a drive once and the parity rebuilt it fine. I don't want to import my day job into my NAS/Media Server.

BlankSystemDaemon
Mar 13, 2009



"UnRAID is bulletproof if you don't care about your data at all" sounds right, but that's not what anyone's said.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

BlankSystemDaemon posted:

"UnRAID is bulletproof if you don't care about your data at all" sounds right, but that's not what anyone's said.

It's bulletproof as in it's low maintenance. I really think you've got a different definition of that (and that's OK!).

Of course it isn't if you want enterprise level features at home.

But in the context of the original statement, it's well-put-together software with a good community, great tutorials, and actual support.

If you want something in between an off-the-shelf NAS unit with a lovely CPU and janitoring something you'd see in your day job, Unraid is bulletproof - that's how I read the original statement. It even saves you a ton of money if you're repurposing old hardware.

I appreciate your posts in here and attention to detail but a whole lot of people don't want to deal with that.

Raymond T. Racing
Jun 11, 2019

Unraid parity != checksumming

but also, who cares? I don't wanna buy 5 drives at a time to use ZFS - I don't have the money for that
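To illustrate the parity != checksumming point, here's a toy Python sketch of a two-data-disk XOR parity stripe. Parity can rebuild a dead disk, and a scrub can tell you *something* in the stripe is inconsistent, but without per-block checksums it can't tell you which copy went bad:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings, like RAID parity does."""
    return bytes(x ^ y for x, y in zip(a, b))

d1 = b"\x0f\xaa"              # data disk 1
d2 = b"\x33\x55"              # data disk 2
parity = xor_bytes(d1, d2)    # parity disk

# A dead disk is fully recoverable: d2 == parity XOR d1
rebuilt = xor_bytes(parity, d1)
assert rebuilt == d2

# Now silently flip one bit on d1 (bitrot - no I/O error reported)
d1_rotted = bytes([d1[0] ^ 0x01]) + d1[1:]

# A scrub sees the stripe no longer adds up...
assert xor_bytes(d1_rotted, d2) != parity
# ...but d1, d2, and parity are all equally plausible suspects.
# Only a per-block checksum (what ZFS keeps) identifies the culprit.
```

That's the whole distinction in miniature: parity gives you redundancy, checksums give you a verdict.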
