|
I also have 4 Toshiba non-NAS 3TBs and 2 6TBs, with 4.5 and 2.1 years on them respectively, with no issues. When I bought the 6TB drives, one of the 2 died about halfway through my burn-in tests, but that could be due to some poor packing for shipping more than anything (Walmart sale with the retail box just sliding around inside a much bigger shipping box and zero packing material). Exchanged it in-store for the one running today. I do know that their warranty program was a mess a couple years back, so if that's something you care about it might be good to check on it. At one point they would issue you Toshiba store credit to buy a new drive but no drives were in stock so you were SOL, then later a visa gift card for what you paid... No idea what it looks like now.
|
# ¿ Jan 16, 2019 17:30 |
|
|
"Media" is a Share on my Unraid 6.7.2 server. From Windows 10: code:
|
# ¿ Dec 9, 2019 19:03 |
|
I moved to Unraid over a year ago, but before that was on DrivePool and scheduling this to run daily solved the "How do I know what's located on what drive if I need to restore from backup" issue then. https://community.covecube.com/index.php?/topic/1865-howto-file-location-catalog/
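If you'd rather roll the same thing yourself on Unraid instead of using the DrivePool script from that link, a minimal sketch in Python (the /mnt/diskN mount points and output path are assumptions, not anything Unraid ships - adjust for your setup and drop it in a nightly cron/User Script):

```python
import csv
import os

# Hypothetical mount points - on Unraid the individual array disks
# show up as /mnt/disk1, /mnt/disk2, etc. Adjust to taste.
DISK_MOUNTS = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]

def catalog(mounts, out_path):
    """Write a disk,path CSV so you know which physical disk holds what."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["disk", "path"])
        for mount in mounts:
            for root, _dirs, files in os.walk(mount):
                for name in files:
                    # Store paths relative to the mount so entries
                    # from different disks are comparable.
                    rel = os.path.relpath(os.path.join(root, name), mount)
                    writer.writerow([os.path.basename(mount), rel])

# catalog(DISK_MOUNTS, "/mnt/user/backups/file-catalog.csv")
```

Point the output at a share that gets backed up off-box, obviously, or the catalog dies with the server.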
|
# ¿ Jan 19, 2020 15:29 |
|
Unraid's cache drive is really more akin to a (no smarts) tiered storage model, although it's not actually that, and it deals entirely with whole files - so, in theory, you can lose either the primary array or the cache and the data on the remaining one is intact. You can pin a workload entirely to the primary array so it never touches the cache, prefer the cache, pin to the cache, or simply use the cache as a write location that gets batch-moved to the primary array on an overnight schedule (and/or once high water marks are hit, and so on). Common usage is to pin VM/docker files to the cache array (some flavor of performant storage) and have your Linux ISOs go to the primary array (spinning rust, using either of the cache options above). It helps alleviate some of the performance issues the primary array has with the dedicated parity drives being a bottleneck, keeps the stuff that needs speed on the disks that can handle it, and works pretty well, near as I can tell, for the ISO server use case. I wouldn't recommend enabling Unraid's cache array with fewer than 2 disks in it. Spend the extra 60 bucks and pick up another 500GB SSD, or just don't use it - it's entirely optional. If you aren't going to run the primary array without a parity, don't use the cache array without a second disk, for the exact same reasons.
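The batch-move idea is roughly this (not Unraid's actual mover, just an illustration of moving whole files once a high-water mark is hit; the paths and threshold are made up):

```python
import os
import shutil

HIGH_WATER = 0.80  # hypothetical threshold: kick off a move at 80% full

def cache_usage(path):
    """Fraction of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def move_whole_files(cache_root, array_root):
    """Batch-move whole files from cache to array, overnight-mover style.
    Because entire files move (no block-level striping across the two),
    either side remains readable on its own if the other is lost."""
    for root, _dirs, files in os.walk(cache_root):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, cache_root)
            dst = os.path.join(array_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)

# if cache_usage("/mnt/cache") >= HIGH_WATER:
#     move_whole_files("/mnt/cache", "/mnt/disk1")
```

The whole-files property is the important bit: it's why a dead cache pool costs you only what hadn't been moved yet, not the whole array.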
|
# ¿ Mar 20, 2020 20:02 |
|
Hughlander posted:Tangentially NAS related...

About a year ago I had some similar symptoms on my Unraid box - spontaneous reboots, generally under some decent I/O load, clean logs. While attempting to reduce variables during troubleshooting, I found that I couldn't reproduce the issue if I spun down 2 (unused) hard disks and ejected their trays. It didn't matter what position/adapter the 2 were attached to, so that eliminated a lot of variables for me. My root cause was ultimately a power supply that had been fine for about 2 years but appeared to be in the process of flaking out - replaced that and all has been well. That whole thing was a giant pain in the rear end to figure out, so best of luck...
|
# ¿ Apr 9, 2020 19:28 |
|
eames posted:I can only assume that people are so mad at WD because the Red drives have/had a very good reputation in this community

I think this is the root of it. Seagate can't do a whole lot to make their rep much worse for many folks that do this stuff and have been for a while. WD bought HGST (the other good one), and Toshiba has the poo poo warranty issues. Now they are all confirmed problematic for their own reasons. I've always been one to pick up drives that have some reasonable level of expected reliability, but are also as good as I can pull on the $/TB scale, so I don't expect my buying habits will change much next time it's needed. It's certainly annoying, though.
|
# ¿ Apr 24, 2020 16:38 |
|
disaster pastor posted:I'm past the point of frustration with my Unraid server backup, and I don't even know if there's a better way to be doing this.

It's not a super popular opinion around here I don't think, but I'm still using Crashplan with my Unraid server.

Downsides:
- It's not going to help your speed issues any (understatement)
- Their app is a poo poo show and accounts for about 1/4 - 1/3 of the memory usage on my Unraid server; just be aware of that if you have a significant amount/size of files you are backing up.
- When there is a compatibility-breaking update for the Linux app, there can be a lag for the container to update. Twice in ~2 years I've gotten the email from Crashplan that backups aren't happening, and this was the cause.

Upsides:
- Aside from that container update lag, I've not had any notable issues with stability of the backups using the docker in the apps store. No real babysitting required.
- I haven't seen anything remotely cost competitive and usable once you get into the double digit TBs of backups (sitting at ~18TB local marked for backup, ~19TB actually stored in my backups currently, including deleted/changed data).

I wish there was a better option, and if there is I'd love to hear it; I'm just not willing to move from $10/month to 7-10 times that amount with something like B2...
|
# ¿ Jul 22, 2020 18:51 |
|
TraderStav posted:Uhhh... is Crashplan too good to be true? I just signed up for the 30-day trial and it's unlimited data for $10/mo. I just downloaded a docker on my UnRaid and pointed at the account and am currently uploading shares to it. Has anyone here actually backed up their whole array, including ISOs? At some point do they come and bother you about using up too much of the 'unlimited' space? Or is there some other drawback like what happens if you go get the data out of them?

I've been using them for years. Started with the personal account on a Windows system, got migrated to the business account for the low cost of twice the price when they discontinued the service, then migrated that data over to Unraid and successfully adopted the backup into the docker container there so I didn't have to re-upload.

The downsides are the slowish uploads (mostly an issue on the initial seed if your daily change rate is fairly low) and the poo poo-tacular client that just gobbles CPU/RAM when it is working - especially once you get into double digit TBs of backups. I've not personally had any issues with restores or download speed, although I haven't attempted a restore larger than 300GB or so. I'm at around 20TB protected currently and never heard a peep out of them - it's just worked.

Basically, if you can find versioned backups of anywhere near that much data for $10/month, I've never seen it. I haven't been tracking it, but at least at one point you could get unlimited cloud data with google business accounts for a similar price point; the software options to leverage them for backups were a disaster when I briefly looked, though, if you care about versioning. Possible that something has changed since then.

One tip: Use your own encryption key and not their generic master key. If you don't do it from the start, you have to restart your backup from scratch to change it. Start it off correctly (and back that sucker up a few places that don't rely on your server in case the poo poo hits the fan).
|
# ¿ Oct 15, 2020 23:11 |
|
TraderStav posted:That may have been easier than I thought, I just hit the generate key button and used the randomly generated long string, saved it in my 1password. Will find a few other places to store it.

It's been years since I did it, and I don't think the client offered an option to generate one back then, but I can't imagine that won't work. Get a few things backed up, log in to Crashplan's website, and perform a recovery on something that's backed up using your new key. As long as the backup opens I'd think you are good to go, but might as well give the web recovery a short test while you are in there!
|
# ¿ Oct 16, 2020 21:43 |
|
When I was using Windows + DrivePool, I had a copy of Hard Disk Sentinel that I'd use for burn-in on a new drive. The UI looked straight out of w2k, but it had the option to do various types of writes with read verification. This is from memory, but I believe I used to do a quick SMART test, extended SMART test, full sequential random-data write/read, a butterfly write/read, then an all-zeros-in-random-order write/read. On 3ish TB drives those tests would take over a week if memory serves, but I do specifically remember that one of the last drives I tested, a 6TB Toshiba, died during the butterfly test and I was able to do a return with the store on it.

Just looked, and it seems the software got an update at the start of this year, although the UI still looks like the same lovable dumpster fire: https://www.hdsentinel.com/

Edit: If anyone cares, these days I'm on UnRAID and typically do a 3-cycle preclear with post validation to burn in - and about half the time I get annoyed at how long it is taking and stop it sometime in the middle of the second cycle.

Fancy_Lad fucked around with this message at 16:54 on Feb 4, 2022 |
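For anyone wondering what a "butterfly" pattern is: it alternates between the low and high ends of the disk, converging on the middle, so nearly every step is a long seek - much harder on the mechanics than a sequential pass. A rough sketch of the block order (this is just the access pattern, not a real surface test):

```python
def butterfly_order(num_blocks):
    """Yield block indices alternating head/tail: 0, N-1, 1, N-2, ...
    A surface test visiting blocks in this order forces near-full-stroke
    seeks on every step, which is how it shakes out marginal drives."""
    low, high = 0, num_blocks - 1
    while low < high:
        yield low
        yield high
        low += 1
        high -= 1
    if low == high:  # odd count: one block left over in the middle
        yield low

# list(butterfly_order(6)) -> [0, 5, 1, 4, 2, 3]
```

Every block still gets visited exactly once, so a write/read-verify pass in this order covers the full surface like a sequential one does.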
# ¿ Feb 4, 2022 16:47 |
|
e.pilot posted:yeah unraid doesn’t care what the hardware is so long as the USB drive doesn’t change, I’ve got a second unraid install I tinker with that I’ve moved between 3 completely different hardware configs and it didn’t give a poo poo

If you are moving to relatively recent hardware, you want to make sure you are running the latest released Unraid first. I pulled a couple thousand during the early Chia days with my spare space, so I did a full hardware refresh on the server, and it wouldn't boot until I booted the USB drive in my laptop, upgraded Unraid, then plugged it back in to the server. Whoops.
|
# ¿ Mar 22, 2022 16:58 |
|
Astro7x posted:Are there any recommended HDDs for a setup like that that also isn't overkill and paying more than for what I need?

Klyith posted:Big drives 14TB and up are expensive, and they're all pretty much server-grade. The cheap way to buy them is sales on external drives -- which would mean you can forget buying a special 2-bay enclosure and just buy externals.

Assuming US, this is a good resource to find external drives on sale: https://shucks.top/
|
# ¿ Jan 25, 2023 17:38 |
|
Kibner posted:I need to do some case modification to let my HDD drive bays fit properly (it has some little tabs that stick out that I either need to bend flat or remove).

As someone who has dealt with this a couple times before, I'm going to suggest bending the tabs. Even on a good quality case it was less work overall vs a Dremel, and you don't have to deal with metal shaving cleanup in your case, nor with filing down the sharp edges and the blood sacrifice that will eventually be extracted if you don't. There are probably better ways to do it, but a harbor freight c-clamp and elbow grease did the trick on my last one with minimal fuss.
|
# ¿ Nov 6, 2023 21:45 |
|
wolrah posted:But then you have a period of time where the data could potentially have been silently corrupted and your parity will be built from that corrupted data.

With UnRaid's standard arrays, if data were silently corrupted on a data disk, I'm pretty sure that even if the parity still holds the 'uncorrupted' information, the parity would simply get rewritten from the corrupted data when the next correcting parity check occurs. UnRaid's standard arrays don't protect from bit rot. If that's a concern, you'll probably want ZFS. For noncritical data like Linux ISOs, I'm not sure it's a big deal. It isn't for me, at least.
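A toy illustration of why (single-parity XOR shown for simplicity; real arrays work on sectors, and Unraid's second parity uses different math): a correcting check or sync recomputes parity from whatever the data disks currently contain, so a silently flipped bit just becomes part of the new "correct" state.

```python
from functools import reduce

def compute_parity(data_disks):
    """XOR corresponding blocks across all data disks - single parity."""
    return [reduce(lambda a, b: a ^ b, blocks) for blocks in zip(*data_disks)]

# Three data disks, one small int standing in for a block each.
disks = [[0b1010], [0b0110], [0b0011]]
parity = compute_parity(disks)   # parity matches the good data

disks[1][0] ^= 0b0100            # silent bit flip on disk 2

# A correcting parity check (or sync) trusts the data disks:
parity = compute_parity(disks)   # corrupted data is now canon
# Nothing flags the flipped bit - there's no per-block checksum to say
# which side was right, which is why this isn't bitrot protection.
```

Parity can still rebuild a dead disk from the others just fine; what it can't do is tell you that a surviving disk quietly changed its mind about a bit.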
|
# ¿ Nov 7, 2023 18:21 |
|
Nitrousoxide posted:IMO just don't use a NAS OS and file system that can suffer silent bitrot. Synology (BTRFS) and TrueNAS (ZFS) both have hash checks and can self-repair as long as the damage is within the limits allowed by the parity for the array.

Everything is tradeoffs. Synology's hardware isn't cheap, especially once you get to a certain scale. Being able to use whatever size drives you have sitting around, or whatever is on sale whenever you need them, can be a massive cost savings vs having to replace an entire array at once.

I have 8 array disks + 2 parity, in sizes from 4TB to 12TB, with 68TB online usable currently. If that same set of disks were a ZFS pool, I'd be looking at 32TB - less than half. Funny enough, cutting out my single 4TB would bring me back to 56TB, although that would require shrinking the ZFS array in a real world situation. Since I've shrunk my array in several stages from a peak of 16 disks, I can say that's fairly trivial in UnRAID - how does TrueNAS handle it?

With UnRAID you can cover both bases by using ZFS for what you care about and the array for stuff that isn't worth paying the additional level of protection for. Is it for everyone? No. But for the use case of a home linux iso server with several docker apps and maybe a VM or two, it's pretty compelling IMO.
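The capacity math, sketched out. The individual drive sizes below are a hypothetical mix that fits my numbers (the post only gives the 4TB-12TB range), and ZFS usable is approximated as a single raidz2 vdev, which is capped by its smallest member:

```python
def unraid_usable(data_disks_tb):
    """Unraid: every data disk contributes its full size; the parity
    disks (not listed here) just have to be >= the largest data disk."""
    return sum(data_disks_tb)

def raidz_usable(disks_tb, parity=2):
    """Single raidz vdev, roughly: (n - parity) * smallest member."""
    return (len(disks_tb) - parity) * min(disks_tb)

# Hypothetical 8-data-disk mix totaling 68TB, plus 2x 12TB parity
data = [4, 8, 8, 8, 8, 10, 10, 12]

print(unraid_usable(data))                    # -> 68
print(raidz_usable(data + [12, 12]))          # 10-wide raidz2 -> 32
print(raidz_usable([8, 8, 8, 8, 10, 10, 12, 12, 12]))  # drop the 4TB -> 56
```

The gap closes as your drives get more uniform in size; the mixed-size win is largest when you're throwing whatever's on sale into the array.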
|
# ¿ Nov 8, 2023 00:01 |
|
|
Scruff McGruff posted:Given the cost in the US of any case that comes with hot-swap bays, it might be worth considering the piecemeal approach. Try to find an old case on Marketplace with a lot of 5.25" bays for like and then get some hot swap adapters (you may need to swap in some Noctua fans on them for quiet).

Just be aware that anything with a backplane like this is adding another potential failure point. I had 3 similar Icy Dock 5-bay units that trucked along just fine for many years with just one fan replacement needed. Then all of a sudden, over the course of 5 days about 6 months ago, I had 3 disks drop out of the array, all in one of the units. It killed one drive outright, one is operational with bad sectors now, and the 3rd just dropped out of the array but isn't showing any warnings in SMART and appears to be working correctly after stress testing it for a week (and the last 5 months of service since I reused it). I'm suspecting something power related, and while I'm not 100% positive, I am fairly confident in saying that it was the Icy Dock dying on me that caused it and not just stupidly bad luck.

Eletriarnation posted:Honestly, unless there's some data out there showing that shucked drives fail at a substantially higher rate than normal NAS models then this feels like FUD to me.

I'd also love to see this data, because last time I went poking it sure seemed like it was r/datahoarder's version of an old wives' tale.
|
# ¿ Jan 11, 2024 22:34 |