|
BlankSystemDaemon posted:Goddamn BSD, I just want to reiterate that having someone with your knowledge of the low-level and communication ability is always amazingly helpful, insightful and fun. Thanks for hanging out. mom and dad fight a lot posted:I hope it's okay to ask this. I was redirected here from the PC Building Thread. Basically: SMR for this purpose will be fine. Basically, if you're not writing a lot and it's not a NAS, it'll be okay. There's a lot more technical stuff behind that, but that should be all you need to know.
|
# ? Dec 30, 2021 15:22 |
|
|
|
And it sounds like SMR might be fine for some NAS; just not ZFS. Edit: this should not be read as an endorsement to use it for any consumer related NAS stuff. It offers some theoretical benefits for specific workloads; and I’d be interested to see if something like rocksdb or some newer exotic file systems make it an interesting proposition. freeasinbeer fucked around with this message at 15:34 on Dec 30, 2021 |
# ? Dec 30, 2021 15:28 |
|
Just seen some Youtube video where a guy goes on about combining Mergerfs with SnapRAID, because ZFS is "too inflexible" for his taste (mainly the RAIDZ expansion argument, which I hope will finally get settled in OpenZFS 3.0), combined with some cron script using rsync to implement a cache SSD (also via Mergerfs). That certainly seems to be an odd way to go about things. Why the third-party solutions, when you have mdadm and bcache, which are probably also more robust since their larger install base surfaces issues sooner?
|
# ? Dec 30, 2021 15:34 |
|
Combat Pretzel posted:Just seen some Youtube video where a guy goes on about combining Mergerfs with SnapRAID, because ZFS is "too inflexible" for his taste (mainly the RAIDZ expansion argument, which I hope will finally get settled in OpenZFS 3.0), combined with some cron script using rsync to implement a cache SSD (also via Mergerfs). MergerFS/Snapraid is very flexible and similar to what unraid does. Largest disk(s) are parity, and you can mix and match drive sizes. Merger just puts them in one big pool. There's some technical stuff that you have to do to Samba and NFS, and merger will allow you to spread writes across the pool, or fill one disk at a time and reserve an amount you specify. You can also add drives with data already on them and not lose anything. In the end it's just one large mount point with data protection for as many disks as you supply for parity. Default config file specifies up to six parity disks. edit: I should clarify something that did pop up a few times with my merger/snapraid setup: the filesystem loop. If I had a TV show that spanned two disks, I would end up in a loop and had to manually repair the issue. This happened only a few times, but it happened more than I'd like. Snapraid didn't detect it, so I'm pretty sure it was a problem with mergerfs. Nulldevice fucked around with this message at 16:39 on Dec 31, 2021 |
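For anyone curious what a setup like that actually looks like on disk, here's a minimal sketch — all paths, mount points, and the 50G reserve are hypothetical and would depend on the real drive layout:

```text
# /etc/snapraid.conf (sketch; paths are made up)
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# /etc/fstab entry pooling the data disks with mergerfs (sketch)
# category.create=mfs sends new writes to the disk with the most free space
# (ff would fill one disk at a time instead); minfreespace is the per-disk reserve
/mnt/disk1:/mnt/disk2 /mnt/storage fuse.mergerfs allow_other,category.create=mfs,minfreespace=50G 0 0
```

Parity is then recomputed with `snapraid sync` on a schedule, which is also where the window for losing writes between syncs comes from.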
# ? Dec 30, 2021 15:56 |
|
That's certainly cool but flexibility isn't my main design target when I'm talking about high speed local large-scale data storage.
|
# ? Dec 30, 2021 16:03 |
|
Crunchy Black posted:That's certainly cool but flexibility isn't my main design target when I'm talking about high speed local large-scale data storage. It is cool for some purposes, but I prefer TrueNAS or a ZFS based system. I retired the snapraid system years ago. I still help a friend maintain one however.
|
# ? Dec 30, 2021 16:09 |
|
Crunchy Black posted:BSD, I just want to reiterate that having someone with your knowledge of the low-level and communication ability is always amazingly helpful, insightful and fun. Thanks for hanging out. agreed, teaching us something new every day.
|
# ? Dec 30, 2021 16:09 |
|
BlankSystemDaemon posted:Drive-managed SMR works for traditional filesystems like NTFS, so you should be fine. Edit: Oh right, some of them are drive managed, and some aren't. I remember hearing about that. Eh, maybe using mechanical drives ain't a good idea, it's so loving complicated now. Crunchy Black posted:SMR for this purpose will be fine. Basically, if you're not writing a lot and it's not a NAS, it'll be okay. There's a lot more technical stuff behind that, but that should be all you need to know. freeasinbeer posted:And it sounds like SMR might be fine for some NAS; just not ZFS. mom and dad fight a lot fucked around with this message at 18:01 on Dec 30, 2021 |
# ? Dec 30, 2021 17:58 |
|
So I grabbed 4 6TB WD Red+ drives (WD60EFZX-68B) to replace a pool of 4 WD Reds that had 5-6 years on them in my Free/TrueNAS box. Was going to upgrade the size but everything was really expensive and these were on sale on Newegg. One of the first 4 simply wasn't recognized. It never spun up all the way. So I returned it and bought another one. Another one of the first batch ended up throwing a bunch of unrecoverable errors during resilvering. Also, it turns out that my "return" was actually a replacement, because a few days later I got a shipping notification and a new drive showed up. I'm not going to mess with this pool until the holidays are over, but two questions: What the hell WD? and What's the best way to test/do a warranty claim on the one that threw a bunch of errors. Is there still some utility they have you run on it? I've got an external USB drive sled and various windows/macos/linux boxes I can use to run something on. I'm hoping that's good enough because it will be a huge pain in the rear end to actually install that drive in something. (or maybe newegg will just take it back and this is unnecessary)
|
# ? Dec 30, 2021 18:02 |
|
mom and dad fight a lot posted:I...didn't know this was a thing. I'll have to look into that. Thanks!
|
# ? Dec 30, 2021 18:02 |
Crunchy Black posted:BSD, I just want to reiterate that having someone with your knowledge of the low-level and communication ability is always amazingly helpful, insightful and fun. Thanks for hanging out. freeasinbeer posted:And it sounds like SMR might be fine for some NAS; just not ZFS. However, the kinds of hoops you have to jump through to make ZFS usable on SMR mean keeping several gigabytes' worth of dirty data in memory (meaning data is lost if the system crashes, loses power, gets an uncorrectable error or a transient error without ECC), having no synchronous I/O so that every single write is issued asynchronously, and never reading and writing at the same time. It's functionally indistinguishable from the people who insist on setting ZFS up in such a way that it's practically guaranteed to lose them data, because they know just enough to be dangerous but not enough to spot the danger. It's also worth mentioning that SMR disks can work as a type of serial-access replacement, ie. you use da(4) like you would sa(4), where da0 is an SMR disk connected via USB, and all you do is zfs send … > /dev/da0 then when you want to restore you do cat /dev/da0 | zfs receive …. That's how I've been using a few SMR drives I mistakenly bought, to have an extra backup that I can use as a last-ditch effort once corrective receive lands in OpenZFS. Combat Pretzel posted:Just seen some Youtube video where a guy goes on about combining Mergerfs with SnapRAID, because ZFS is "too inflexible" for his taste (mainly the RAIDZ expansion argument, which I hope will finally get settled in OpenZFS 3.0), combined with some cron script using rsync to implement a cache SSD (also via Mergerfs). I know I'm repeating myself here, but that's not been true for over 20 years and it's still not true today. Nulldevice posted:It is cool for some purposes, but I prefer TrueNAS or a ZFS based system. I retired the snapraid system years ago. I still help a friend maintain one however.
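The tape-style pattern described there, sketched end to end — pool, dataset, and snapshot names are made up, and da0 stands in for the USB-attached SMR disk:

```sh
# Hypothetical names throughout; da0 is the USB-attached SMR disk.
zfs snapshot tank/media@offsite-2021-12-30

# Write the replication stream straight onto the raw device, tape-style;
# the disk only ever sees one long sequential write, which SMR handles fine.
zfs send tank/media@offsite-2021-12-30 > /dev/da0

# Restore: read the stream back off the device and pipe it into receive.
cat /dev/da0 | zfs receive tank/restored
```

The pipe on the restore side matters: zfs receive reads the stream from stdin, so a `>` redirect would just clobber a file named "zfs receive"'s target rather than replaying the stream.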
|
|
# ? Dec 30, 2021 18:05 |
|
Flexibility for me is being able to add a drive at a time of any size and expand as I go. And I don't care about bitrot or data loss as what's important is backed up remotely. I touch linux/cloud stuff in my day job, I just want something that's easy and works for my plex/isos/a few self hosted services for my home. For me, and I can only speak for me, UnRAID fits that perfectly and was worth every penny. Especially when you can repurpose old hardware that's more powerful than a store bought NAS.
|
# ? Dec 30, 2021 18:18 |
|
Matt Zerella posted:Flexibility for me is being able to add a drive at a time of any size and expand as I go. Yup, this is exactly why I went with UnRAID. Everything that I can't afford to lose is backed up offsite, and I love being able to add drives as my needs and budget require. Over the past 3 1/2 years I've expanded from 5 to 9 drives, and the only problems I've had were due to a crappy USB port on my motherboard that likes to fry USB drives.
|
# ? Dec 30, 2021 18:29 |
|
Motronic posted:So I grabbed 4 6TB WD Red+ drives (WD60EFZX-68B) to replace a pool of 4 WD Reds that had 5-6 years on them in my Free/TrueNAS box. Was going to upgrade the size but everything was really expensive and these were on sale on Newegg. One of the first 4 simply wasn't recognized. It never spun up all the way. So I returned it and bought another one. Another one of the first batch ended up throwing a bunch of unrecoverable errors during resilvering. Also, it turns out that my "return" was actually a replacement, because a few days later I got a shipping notification and a new drive showed up. You just had bad luck. Well, a bad batch more like. I doubt they will test them. The practicalities of testing drives for a few sectors makes it very unlikely. I used to work for Amazon and we tested zero. 99% of the time we wouldn't even open the box. Either back to the manufacturer or trashed. Newegg might be different though, but if they are brand new, and likely from a bad batch, they should just straight refund you. If you are worried about new drives then maybe put a trial of Unraid on USB and run preclear on it. Most components fail at the beginning or end of their lifetime, and preclear gives it a good workout and shows if there are any issues. Takes a while though. Took me about 9 hours for an 8TB drive iirc. e: badblocks works fine with USB drives, my assumption was wrong. e2: Ah, were you meaning to do a test and return the replacements if they throw errors? I think I got confused there. The preclear thing should work, and the return/replace/refund still stands. The one that's already throwing up errors though, gently caress that straight back to them for sure. Well, as long as it's definitely not something at your end. I had a missing single cap on the backplane that was loving my poo poo up in many weird ways YerDa Zabam fucked around with this message at 14:13 on Dec 31, 2021 |
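For anyone without an Unraid USB stick handy, badblocks gives a new drive roughly the same workout — the device name below is a stand-in, and the -w pass is destructive:

```sh
# DESTRUCTIVE: -w writes and reads back four test patterns across the whole
# disk. Only run on a drive with no data you care about; /dev/sdX is a stand-in.
badblocks -b 4096 -wsv /dev/sdX

# Gentler alternative: kick off a long SMART self-test, then check the verdict
# after it finishes (smartctl -t long prints an estimated completion time).
smartctl -t long /dev/sdX
smartctl -a /dev/sdX
```

Like preclear, expect this to take many hours on a multi-TB drive, since every sector gets touched multiple times.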
# ? Dec 30, 2021 19:27 |
|
Enos Cabell posted:Yup, this is exactly why I went with UnRAID. Everything that I can't afford to lose is backed up offsite, and I love being able to add drives as my needs and budget require. Over the past 3 1/2 years I've expanded from 5 to 9 drives, and the only problems I've had were due to a crappy USB port on my motherboard that likes to fry USB drives. Same. I have what little irreplaceable data I have backed up to Google Drive and my media library is easily replaceable if I ever can't rebuild from parity. For someone with no budget and no Linux experience that just had a lot of old computer hardware laying around and wanted to try out a home server it's been great. I think I started in 2016 with a pair of old 500gb drives and now I have 8 3-4tb drives in my array acquired whenever I had the money and when the existing drives filled up. It's also great not having Plex on my gaming machine anymore, now if that computer gets messed up I can just nuke Windows and start fresh instead of combing through forum posts from 2007 and digging through the registry for days. Obviously that's not exclusive to unRAID though, that's just a nice benefit of having a home server. Scruff McGruff fucked around with this message at 19:54 on Dec 30, 2021 |
# ? Dec 30, 2021 19:39 |
|
Adolf Glitter posted:You just had bad luck. Well, a bad batch more like. Cool thanks. And no, I just meant making sure they'd take it back. The one that wouldn't even power up was pretty obvious. Since I accidentally bought an extra one I'll just toss it in as an online spare. I'm 99.9% sure there's nothing wrong with my backplane/ports/etc. The one that wouldn't power on got replaced with something in the same bay and it's fine. The one that threw errors was the second one replaced in the pool and I put the remaining drive I had on hand at the time in there and it was fine. I just need to throw the warranty replacement in to complete this pool and then I guess I'll warranty the other one and wait for it to come back as my spare.
|
# ? Dec 30, 2021 21:14 |
|
i powered my old decommissioned fileserver back up to play around with zfs 2.1.2 and possibly upgrading my new server now taking bets on whether the pool will magically repair itself or explode spectacularly it's 12x 3TB WD Red drives from early 2013 that's been shut down since 2019 until now, resilvering already restarted from 0% twice due to disk io hanging and the lsi sas2008 controller resetting the links i already swapped out one drive that appeared completely dead with a slightly newer spare
|
# ? Dec 30, 2021 21:33 |
r u ready to WALK posted:i powered my old decommissioned fileserver back up to play around with zfs 2.1.2 and possibly upgrading my new server Your system log is likely full of messages where the second line looks something like CAM status: SCSI Read Error and where the other lines can help determine where the error is (although for all practical purposes, since you're using ZFS, it doesn't matter since it knows the on-disk state of the data as well as the LBA mappings for them). You still have one disk's worth of distributed parity left, and in the case of a URE happening on one of the good disks, you're still only going to suffer minor dataloss (ie. only the file where the URE happens will be affected, unlike hardware RAID which would just throw up its hands and say all your data were kaput). You might want to set kern.cam.ada.retry_count=0, kern.cam.cd.retry_count=0, and kern.cam.da.retry_count=0 via sysctl(8), as that can sometimes help avoid triggering the "too many errors" condition. If all else fails and you have a couple of disks to spare, you can try using recoverdisk(1) to image the disks onto a bigger disk (an 8TB shucked disk, for example), then if it completes (this can take a long time; I had it running for two years at one point, and phk (the author) told me about someone who had it running for three years), you might be able to reimport the pool by using ggatel(8) to make the image files appear as GEOM devices. BlankSystemDaemon fucked around with this message at 22:14 on Dec 30, 2021 |
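Roughly what that last-ditch rescue path looks like on FreeBSD — device names, image paths, and the ggate unit number are all hypothetical:

```sh
# Image the failing disk onto a file on a bigger, healthy disk; recoverdisk
# keeps retrying with smaller read sizes around bad spots instead of giving up.
recoverdisk /dev/ada3 /backup/ada3.img

# Once each member is imaged, expose the image file as a GEOM device
# (this example creates /dev/ggate0); repeat with other units per image.
ggatel create -u 0 /backup/ada3.img

# Then let ZFS scan for importable pools on the image-backed devices.
zpool import -d /dev
```

The point of going through ggatel is that ZFS sees ordinary block devices, so the pool imports exactly as if the original (now-imaged) disks were attached.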
|
# ? Dec 30, 2021 22:12 |
|
Well I got my TrueNAS Core server up and running a few days ago and woo boy I'm in way more over my head than I thought I was going to be when I started. I'm fully on board with deep diving into documentation and such, but I'd really like to get my Plex server up and running - something that I think should be possible even if I don't fully comprehend this operating system yet. I got my Plex plugin installed and running and in 'up' status with its own jail and such, but I'm not sure why when I try to access the admin portal I get the 'url not found/refused connect' error in my browser.
A Bag of Milk fucked around with this message at 05:25 on Dec 31, 2021 |
# ? Dec 31, 2021 05:05 |
A Bag of Milk posted:Well I got my TrueNAS Core server up and running a few days ago and woo boy I'm in way more over my head than I thought I was going to be when I started. I'm fully on board with deep diving into documentation and such, but I'd really like to get my Plex server up and running - something that I think should be possible even if I don't fully comprehend this operating system yet. I got my Plex plugin installed and running and in 'up' status with it's own jail and such, but I'm not sure why when I try to access the admin portal I get the 'url not found/refused connect' error in my browser.
|
|
# ? Dec 31, 2021 12:09 |
|
Motronic posted:Cool thanks. And no, I just meant making sure they'd take it back. The one that wouldn't even power up was pretty obvious. Ignore my rash assumption earlier about USB being a barrier. You can run badblocks in truenas (and elsewhere) on USB connected drives just fine. Good to know for any shuckers out there too I guess
|
# ? Dec 31, 2021 14:20 |
|
BlankSystemDaemon posted:
One of the things I learned from it is how loops can develop in the file system between disks. It can be corrected by moving files from the full drive over to the drive with the rest of the data on it, but is a rather large pain in the rear end. It's one of the things I noticed that the snapraid scrub didn't catch. Another reason I got off the system. Generally speaking, the loops were a rare occurrence and I had it happen maybe two or three times over a couple of years. And yes, data loss was very possible depending on how often you calculated parity. If you delete a file and want to recover it, you can do so, so long as parity hasn't been recalculated already. If it has, your file is gone if you don't have backups. If one of your member disks dies before parity calculation, all new writes are gone. You restore from parity and reacquire missing data if possible. Not what I'd call reliable. I'd say I learned a great deal about proper file storage in this scenario. Mind you, merger doesn't have to be a part of this situation, you can keep the disks separate and just have a fuckton of file shares, but that's inconvenient. I ended up building a TrueNAS system that met my needs for performance and space. Transferred the data over to that system, and moved Plex and associated services to another system. I use the TrueNAS servers I have strictly for storage and that's about it. What did I learn overall? Keep your storage servers for storage. Use other machines to handle the applications, and don't use wingnut half baked solutions.
|
# ? Dec 31, 2021 16:24 |
|
I played around with smartmontools and https://github.com/AnalogJ/scrutiny on that failing raidz3 pool, I think I'm starting to see the problem pretty sure all the drives are just yearning for the sweet release of death after 7 years power on time
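For anyone wanting to run the same check by hand, smartctl alone gets you most of the way — the device name is a stand-in for each pool member:

```sh
# Power-on hours plus reallocated/pending/uncorrectable sector counts are the
# usual tells on an aging drive; /dev/ada0 is a stand-in, repeat per disk.
smartctl -A /dev/ada0 | egrep 'Power_On_Hours|Reallocated_Sector|Current_Pending|Offline_Uncorrectable'

# Or kick off a short self-test and read the verdict a few minutes later.
smartctl -t short /dev/ada0
smartctl -l selftest /dev/ada0
```

Scrutiny is essentially a dashboard over this same data, collected from smartd across machines.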
|
# ? Dec 31, 2021 19:59 |
They're definitely not happy campers, and are doing considerably worse than the 2TB HD204UIs that I had in my old server before retiring it after having more than 90000 power-on hours (just over 10¼ years).
|
|
# ? Dec 31, 2021 22:40 |
|
It's time for me to replace my three "archive" USB hard drives with a JBOD enclosure. The main concern is low noise, I don't even care much about performance, since it's just streaming video. Any recommendations?
|
# ? Jan 1, 2022 16:11 |
Are we talking about an actual JBOD chassis via SFF-8088 or JBOD DAS via USB3?
|
|
# ? Jan 1, 2022 17:59 |
|
cruft posted:It's time for me to replace my three "archive" USB hard drives with a JBOD enclosure. The main concern is low noise, I don't even care much about performance, since it's just streaming video. If you don't mind paying the premium QNAP makes some nice 4 bay expansion enclosures in both USB and SFF-8088 formats. SFF-8088 would provide better performance and is a more robust solution but it requires that you also have a free PCIe x4 (or larger) slot in your computer in order to add a HBA card. Since you were previously using external USB drives you're probably good sticking with a USB enclosure.
|
# ? Jan 1, 2022 18:37 |
|
cruft posted:It's time for me to replace my three "archive" USB hard drives with a JBOD enclosure. The main concern is low noise, I don't even care much about performance, since it's just streaming video. Depending on how much space you need, 1TB NVME drives are silent and also not terribly expensive anymore. SSDs are also dropping in price like rocks. You can get PCI cards that will let you mount up to two NVME drives on them as well if you don't want to deal with external enclosures, but even an external enclosure is only like $10.
|
# ? Jan 1, 2022 18:49 |
|
Right, sorry, should have realized what thread I was posting in. I'm connecting this to a Raspberry Pi 4. I currently have 2TB+2TB+4TB USB drives on a powered hub. I need this to be something I can expect my niece to maintain, so I have to stick with USB. My plan is to give the two 2TB disks to my father's system, replace them with 4TB disks in an enclosure, and eventually replace the leftover 4TB USB disk with 6-12TB. By 2022 standards this is laughable. I console myself with the knowledge that by 1997 standards this is worth millions. So the thing I'm looking for would make each disk look like a USB mass storage device, and be pretty quiet.
|
# ? Jan 1, 2022 19:18 |
|
Sabrent has some good options but you pay for the name there. Mediasonic has a 4-bay for a good price, I've used their internal drive bay adapters and they've been solid but as a company they're a bit of a mystery, appearing and disappearing every so often. Scruff McGruff fucked around with this message at 01:05 on Jan 2, 2022 |
# ? Jan 2, 2022 01:02 |
|
hey, psa: in the windows thread I recently asked a question about random system hangs, where the answer ended up being that they happen when a network drive is unreachable. Even if what you're doing has nothing to do with that particular share (or any share at all), Windows will helpfully go make sure it's reachable, and if the share is offline, explorer hangs until it times out. one corollary here is that I've heard some USB drives that I have on a network share (via windows) spinning up and down. I was upstairs in my living room just walking by when I heard it, and there was nothing running on it. I'm wondering if that's related - of course if it can connect, windows on the other end is probably going to spin up the drive, right? If you've got external drives directly mapped as shares on your pc, you may want to keep an eye on your load/unload cycles. mine look to be about 6k cycles in 6k hours, which isn't great. it's much better to shuck them, run them as internal, and force them to always spin, I think
|
# ? Jan 2, 2022 10:47 |
|
6K load cycles in 6K hours is OK. Better than smartctl temperature reads triggering a load cycle every 5 minutes.
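The arithmetic behind both rates, as a sanity check — the 600k figure below is an assumed load/unload cycle rating (the ballpark many NAS drives quote), not something from either post:

```shell
# 6000 cycles over 6000 power-on hours is 1 cycle/hour.
rate_ok=$((6000 / 6000))

# A temperature poll that parks the heads every 5 minutes would be 12/hour.
rate_bad=$((60 / 5))

# Hours to exhaust a hypothetical 600k cycle rating at each rate:
echo "at ${rate_ok}/hour:  $((600000 / rate_ok)) hours"   # 600000 hours, decades
echo "at ${rate_bad}/hour: $((600000 / rate_bad)) hours"  # 50000 hours, ~5.7 years
```

So 1/hour never threatens the rating within a drive's service life, while a 5-minute poll that triggers parking burns through it in a few years of continuous spinning.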
|
# ? Jan 2, 2022 14:44 |
|
I recently purchased 3x 14TB easy stores. Two out of the three failed preclear in their enclosures. Test your drives folks!! Also the recent posts about unraid match mine. I have been using it since 2016 and it’s been virtually bulletproof. Sitting at 104TB with one more 14tb to swap in, it’s such a nice solution. Just started to get into Borg backup, playing around with borgmatic and Vorta2. Vorta is a nice gui layer over borg. I have set up 4 external drives so far and it’s working well.
|
# ? Jan 2, 2022 23:14 |
"Virtually bulletproof" is doing a lot of work in that sentence, since you have absolutely no way of knowing if any of your data is corrupt short of maintaining out-of-tree checksums programmatically that can be automatically compared at regular intervals.
|
|
# ? Jan 3, 2022 12:24 |
|
BlankSystemDaemon posted:"Virtually bulletproof" is doing a lot of work in that sentence, since you have absolutely no way of knowing if any of your data is corrupt short of maintaining out-of-tree checksums programmatically that can be automatically compared at regular intervals. I don't care about this at home. E: also, unraid does checksumming on a scheduled basis Matt Zerella fucked around with this message at 17:42 on Jan 3, 2022 |
# ? Jan 3, 2022 17:20 |
Matt Zerella posted:I don't care about this at home. If you mean via the plugin that has to be enabled, and then the option that also has to be enabled, I'm not sure that's the big win that you think it is - because unless you enable it before you put any files on a brand new setup (and say, not something that's upgraded from unRAID5 where it wasn't available), you can't prove you don't already have bitrot, since checksumming has to be enabled before anything's ever written. Also, the choice of checksumming primitives isn't exactly comforting - you get three options that are very expensive in both CPU time and wallclock time (one of which may optionally be offloaded on some platforms, if you have a CPU that's new enough or special enough, like an AMD Zen 2 or some Intel CPUs including part of the Atom lineup). The fourth option? MD5. Which... yeah, no, that's not an option in the year of gently caress 2022. There's no crc32c or fletcher (both of which have excellent copyfree implementations and work well for detecting bitrot).
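The out-of-tree manifest approach mentioned upthread is simple enough to sketch with coreutils — the paths are hypothetical, and sha256sum stands in for whatever hash you trust:

```shell
# Stand-in for a pool mount; in real use you'd point find at /mnt/storage etc.
mkdir -p /tmp/demo && cd /tmp/demo
printf 'hello\n' > a.txt
printf 'world\n' > b.txt

# Record checksums once, right after the data is written (manifest kept
# outside the tree so it doesn't checksum itself)...
find . -type f -print0 | xargs -0 sha256sum > /tmp/manifest.sha256

# ...then on a schedule, re-verify: -c exits non-zero and names any file
# whose contents no longer match, i.e. silent corruption since the last run.
sha256sum -c --quiet /tmp/manifest.sha256 && echo "no bitrot detected"
```

The catch BSD describes still applies: the manifest only proves anything about files hashed while they were known-good, so it has to exist before the data has a chance to rot.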
|
|
# ? Jan 3, 2022 20:29 |
|
BlankSystemDaemon posted:I don't understand how you can not care, when we know that harddisks lie deliberately to prevent you from returning them and/or not buying from the vendor in the future. Most of my storage is linux ISOs. I don't give a crap about checking for corruption or backing it up. I have Sonarr/Radarr backing up to google drive. In the event of a catastrophe I restore my docker directories and spin everything back up and let them handle getting the files back. What little other data I have stored on it is not very important. I don't even worry about vendor because unraid lets me mix and match as needed. If a drive dies, I drill a hole in it and send it to ewaste. All my documents are stored in iCloud and don't even touch the NAS. ECC / CoW / bitrot, whatever, doesn't mean a thing to me. So yeah, unraid is bulletproof for that. I lost a drive once and the parity rebuilt it fine. I don't want to import my day job into my NAS/Media Server.
|
# ? Jan 3, 2022 20:44 |
"UnRAID is bulletproof if you don't care about your data at all" sounds right, but that's not what anyone's said.
|
|
# ? Jan 3, 2022 20:52 |
|
BlankSystemDaemon posted:"UnRAID is bulletproof if you don't care about your data at all" sounds right, but that's not what anyone's said. It's bulletproof as in it's low maintenance. I really think you've got a different definition of that (and that's ok!). Of course it isn't if you want enterprise level features at home. But in the context of the original statement, it's well put together software with a good community, great tutorials and actual support. If you want something in-between an off the shelf NAS unit with a lovely CPU and janitoring something you'd see in your day job, Unraid is bulletproof, that's how I read the original statement? It even saves you a ton of money if you're repurposing old hardware. I appreciate your posts in here and attention to detail but a whole lot of people don't want to deal with that.
|
# ? Jan 3, 2022 21:01 |
|
|
|
Unraid parity != checksumming but also who cares, I don't wanna buy 5 drives at a time to use ZFS, I don't have the money for that
|
# ? Jan 3, 2022 21:01 |