e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
app backup just saved my rear end this week when btrfs did its thing and corrupted


e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
got everything from the beelink gk55 migrated to the ML30, so far so good, once the parity finishes syncing I’m going to move the plex container to it from the gen10+ so it can take advantage of the p400

dug out my old stack of 4tb drives to replace the single 12tb external I had on the gk55

it lives in my shed as a faux offsite, it’s currently 0°F outside and the processor is idling at 5°C :madmax:

Klyith
Aug 3, 2007

GBS Pledge Week

Flyndre posted:

I've set up OneDrive to sync to my NAS using Synology's Cloud Sync package. I plan a workflow where I might keep working files on my Mac in OneDrive (photos due to be edited for instance), before I transfer them to a mounted SMB folder on the NAS.

Does anybody know if this setup might somehow interfere with snapshots and Hyper Backup? I guess my worry is that the NAS might interpret the files being removed from the cloud as deletions while being added to another folder as being new files, making snapshots and backups balloon in size.

Hyper Backup: definitely no interference. Hyper Backup is de-duplicating (via file hashes), so as long as the process of moving them between your Mac, the OneDrive cloud, and the NAS does not modify them,* only one copy of the data will be kept.

*No guarantee that said process will not modify files. Some Apple software has, in my not-at-all-recent experience, been lovely about loving with metadata without asking.

NAS snapshots: I think yes, it will store data twice in snapshots. Only expensive high-end units support de-duplication on the drives apparently. I doubt this is a big enough deal to care about.
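
As an aside, the file-hash de-duplication idea is easy to demo with plain shell tools (made-up filenames; Hyper Backup keeps its own hash database, this is just the concept):

```shell
# Three copies of a file, two with identical contents: hash-based dedup
# only needs to store one payload per unique hash.
mkdir -p /tmp/dedup-demo
printf 'home video' > /tmp/dedup-demo/from-mac.mov
printf 'home video' > /tmp/dedup-demo/from-onedrive.mov
printf 'home vide0' > /tmp/dedup-demo/edited.mov   # one byte differs
sha256sum /tmp/dedup-demo/* | awk '{print $1}' | sort -u | wc -l
# -> 2 : only two unique payloads would be kept
```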

Captain Apollo
Jun 24, 2003

King of the Pilots, CFI
Is my processor load on this ml30 supposed to be routinely between 80 - 100%? What's the most common culprit?

Wibla
Feb 16, 2011

No. Have you run "top" to see what's using so much cpu? Assuming you're on *nix.
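
For reference, a non-interactive snapshot of the top CPU consumers on Linux (GNU ps flags; BSD ps spells these differently):

```shell
# top 5 processes by CPU usage, without the interactive top UI
ps aux --sort=-%cpu | head -n 6

# or a single batch iteration of top itself
top -b -n 1 | head -n 15
```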

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Captain Apollo posted:

Is my processor load on this ml30 supposed to be routinely between 80 - 100%? What's the most common culprit?

it’s a 7 year old dual core processor

while the E3-1270 v5 is just as old, it’s at least a quad core with hyper threading, and a basically free upgrade

is your parity drive still syncing?

Captain Apollo
Jun 24, 2003

King of the Pilots, CFI
Weird it's all quiet on the western front now. It was doing some serious hard work earlier but seems to be good and normal now.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
$7 key from ebay to get full ILO access? don’t mind if I do

SpartanIvy
May 18, 2007
Hair Elf

e.pilot posted:

$7 key from ebay to get full ILO access? don’t mind if I do

What benefits does ILO access provide? I pretty much disabled it in the BIOS as soon as I received the machine.

E: also which key did you buy? For $7 I'd like to tinker with it too.

SpartanIvy
May 18, 2007
Hair Elf
Double post because my HDDs come in tomorrow. What's the current standard for initial testing for hard drives before setting up in a NAS? I see lots of talk about burn in when I Google it but I can't find much talking about the actual process.

wolrah
May 8, 2006
what?

SpartanIvy posted:

What benefits does ILO access provide? I pretty much disabled it in the BIOS as soon as I received the machine.

E: also which key did you buy? For $7 I'd like to tinker with it too.
Remote management without needing to hook a keyboard/mouse/monitor directly to the thing.

I have a stack of servers sitting behind me on which I have fully set up the BIOS, installed the OS, and configured the software, having never connected anything more than one power cable and one ethernet cable.

For a home NAS it basically means you can shove the thing wherever it's convenient without having to worry about easy access to connect devices.

If the thing is sitting right next to your main PC it might not matter to you.

Klyith
Aug 3, 2007

GBS Pledge Week

SpartanIvy posted:

Double post because my HDDs come in tomorrow. What's the current standard for initial testing for hard drives before setting up in a NAS? I see lots of talk about burn in when I Google it but I can't find much talking about the actual process.

Write some data. Do a speed test with CrystalDiskMark or ATTO to see that performance is as expected and fly the heads around a bit. If you want to be super-thorough you could do a surface scan.

IMO a burn in like doing a zero write to the whole drive is useless. Backblaze stats don't show a bathtub curve of high 1st-year failures in drives these days, drives pretty much are either DOA or work. So the idea that you could weed out a bad drive with burn in seems unlikely.
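
For what it's worth, the quick-and-dirty Linux equivalent of that speed test is a timed raw read with dd (/dev/sdX is a placeholder; reading the device is safe, but double-check the name before doing anything destructive):

```shell
# sequential read: pull 1 GiB off the drive and let dd report throughput
sudo dd if=/dev/sdX of=/dev/null bs=1M count=1024 status=progress
```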

BlankSystemDaemon
Mar 13, 2009



If you're running something based on FreeBSD, diskinfo -cit on a disk that's not got any data written to it (preferably a disk that's completely uninitialized) can be quite revealing.
Especially if you have an idea of how it should perform, from having run it on similar drives that you know are good.

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

SpartanIvy posted:

Double post because my HDDs come in tomorrow. What's the current standard for initial testing for hard drives before setting up in a NAS? I see lots of talk about burn in when I Google it but I can't find much talking about the actual process.

I did a burn in of my 18TB Exos from Server Part Deals. Generally I saw badblocks recommended, but it's REAL old and I couldn't get it to read my drives. I think if you mess with the block size you might be able to, but the commands I saw didn't work (mostly here). I ended up going with these commands from the Arch Wiki
code:
sudo shred -v -n 0 -z /dev/mapper/test-drive
sudo cmp -b /dev/zero /dev/mapper/test-drive
At the end, if everything matched, you'll get an error that looks like the following, because /dev/zero is infinite:
code:
cmp: EOF on /dev/mapper/test-drive after byte 18000207937536, in line 1
I had these in a Sabrent SATA to USB caddy and it took ~5 days. The rebuild in my Synology DS918+ running SHR-1 with ~12TB took only like half a day. I'm on my second one for testing. Should know by Wednesday if it's good
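
If you want to rehearse that shred/cmp pattern before aiming it at a real device, it works the same on a throwaway file (path is made up):

```shell
# 16 MiB practice target instead of a real drive
truncate -s 16M /tmp/fake-drive.img
shred -v -n 0 -z /tmp/fake-drive.img   # one zeroing pass, no random passes
cmp -b /dev/zero /tmp/fake-drive.img   # a clean result still ends in the EOF "error"
```

Because the file is all zeros, cmp reads to the end and reports EOF on /tmp/fake-drive.img, just like the full-drive run.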

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

SpartanIvy posted:

What benefits does ILO access provide? I pretty much disabled it in the BIOS as soon as I received the machine.

E: also which key did you buy? For $7 I'd like to tinker with it too.

HP’s flavor of IPMI, in other words a PiKVM but built in

e: sorry it was $8, lol
https://www.ebay.com/itm/254154313587


also got the p400 up and running, 4x4k transcodes without missing a beat, I ran out of things to stream to, this thing is a budget beefcake

e.pilot fucked around with this message at 22:25 on Jan 30, 2023

IOwnCalculus
Apr 2, 2003





Klyith posted:

Write some data. Do a speed test with crystaldisk or atto to see that performance is as expected and fly the heads around a bit. If you want to be super-thorough you could do a surface scan.

IMO a burn in like doing a zero write to the whole drive is useless. Backblaze stats don't show a bathtub curve of high 1st-year failures in drives these days, drives pretty much are either DOA or work. So the idea that you could weed out a bad drive with burn in seems unlikely.

I think SpartanIvy is getting a batch of refurb drives anyway.

I run nwipe on any new-to-me drives with a DoD Short test, most failures that I uncover this way show up within the first pass - often pretty early because throughput will drop to a crawl.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

SpartanIvy posted:

What benefits does ILO access provide? I pretty much disabled it in the BIOS as soon as I received the machine.

E: also which key did you buy? For $7 I'd like to tinker with it too.

iLO access can be quite useful, especially for the Integrated Management Log and console access. For TrueNAS and Unraid use, the iLO Advanced license isn't that necessary; HPE allows console access during POST for configuring BIOS and devices. But at those eBay prices, might as well get it.

Of course, if you enable iLO remember to update it regularly.
https://support.hpe.com/connect/s/softwaredetails?language=en_US&softwareId=MTX_a9afd91f006a44f8b1b5d0e09b

Updating is a bit tricky. Extract the "ilo4_281.bin" from the .exe with 7-Zip and upload it through the iLO, "Administration - Firmware".

Update the BIOS to at least version 2.82. It's a critical update, so downloading it doesn't require a support agreement.
https://support.hpe.com/connect/s/softwaredetails?softwareId=MTX_ee67dcd89ef74e2da195fdff53&language=en_US

Other hardware components have firmware updates too, but I think most of them can't be updated through iLO, so they would be more tricky.

SpartanIvy
May 18, 2007
Hair Elf

Saukkis posted:

iLO access can be quite useful, especially for the Integrated Management Log and console access. For TrueNAS and Unraid use, the iLO Advanced license isn't that necessary; HPE allows console access during POST for configuring BIOS and devices. But at those eBay prices, might as well get it.

Of course, if you enable iLO remember to update it regularly.
https://support.hpe.com/connect/s/softwaredetails?language=en_US&softwareId=MTX_a9afd91f006a44f8b1b5d0e09b

Updating is a bit tricky. Extract the "ilo4_281.bin" from the .exe with 7-Zip and upload it through the iLO, "Administration - Firmware".

Update the BIOS to at least version 2.82. It's a critical update, so downloading it doesn't require a support agreement.
https://support.hpe.com/connect/s/softwaredetails?softwareId=MTX_ee67dcd89ef74e2da195fdff53&language=en_US

Other hardware components have firmware updates too, but I think most of them can't be updated through iLO, so they would be more tricky.

Thanks for this. I knew there was a BIOS update available but hadn't gotten around to installing it. I didn't realize we were past the days of using bootable media to apply firmware fixes. Using a web interface from my PC feels like the future.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

SpartanIvy posted:

Thanks for this. I knew there was a BIOS update available but hadn't gotten around to installing it. I didn't realize we were past the days of using bootable media to apply firmware fixes. Using a web interface from my PC feels like the future.

Well, bootable media could be convenient, but you would need to get your hands on the Service Pack for ProLiant, and that requires a support contract or new hardware. If you have an RPM-based distribution you can probably install the firmware from the OS. If neither applies it may be tricky. My solution for updating Dell servers with Ubuntu installs was to install CentOS 7 on a USB stick and run Dell System Update from it.

AlternateAccount
Apr 25, 2005
FYGM
Well, if you put a PCIe card in an ML30, it's going to complain about missing the front fan and then if you disable that warning it will run the other fan at 100% perpetually. Fun.

BlankSystemDaemon
Mar 13, 2009



Saukkis posted:

Well bootable media could be convenient, but you would need to get your hands on Service Pack for ProLiant and that requires support contract or new hardware. If you have an RPM-based distribution you can probably install the firmware from OS. If neither applies it may be tricky. My solution for updating Dell servers with Ubuntu installs was to install CentOS 7 on a USB stick and run Dell System Update from it.
The service pack is usually not required - it's a way to get a full disc that contains everything you'll need for a system, but if you know your way around their website, you can usually find the firmware packages for download.
One exception appears to be the Microserver Gen10+, where you need an active service contract in order to download the firmware.

AlternateAccount posted:

Well, if you put a PCIe card in an ML30, it's going to complain about missing the front fan and then if you disable that warning it will run the other fan at 100% perpetually. Fun.
That's fairly normal, unfortunately - even Supermicro and TYAN will up fanspeed if one fan is missing and there's a daughterboard inserted, because the daughterboards depend entirely on the airflow of the fans to keep the components cooled.

HPE is special though.
My HPE DL380p Gen8 will run the fans at 30% with no daughterboards inserted, and 30% if it's a HPE branded daughterboard.
If you insert a daughterboard that's not HPE branded (even if it runs the exact same firmware as a HPE branded card), it'll boost the fans to 60%.

Astro7x
Aug 4, 2004
Thinks It's All Real

BlankSystemDaemon posted:

I snipped this because most of it has been addressed, but just want to add that the key term you're looking for is: JBOD DAS - unabbreviated, it means just a bunch of disks (via) direct attached storage.

Ah yes, sometimes it's just not knowing the terminology I am looking for.

So what I ended up doing is getting this OWC Mercury Elite Pro Dual with 3-Port Hub, and then bought 2 of these Seagate Exos 12TB renewed drives on Amazon. No problem getting both to show up as individual drives and using Time Machine to back up one to the other.

Still a little skeptical about getting renewed drives, but the whole thing only cost $400. Since it's DAS and I can mount Dropbox on it, I plan on using Dropbox to backup my most important data on there (home videos/photos), and not backup Plex Library type media. So worst case scenario if both drives die at the exact same time I just lose my Plex library, which is no big deal. And then my most important data follows the 3-2-1 rule. So I think I'm good!

Audax
Dec 1, 2005
"LOL U GOT OWNED"
Pardon my newbie question; there's just a lot of information in the thread and I've tried searching for a bit but couldn't find a recent answer. I currently have an Ubuntu PC with a Plex install where I create a Plex username for family to log in and view media. I'm interested in expanding that to an entry-level NAS with Plex still being the main use case. Looking at either the Synology DS420+ or QNAP TS-464.

I see that both Synology and QNAP have histories of being hit with ransomware attacks (QNAP as recent as Aug 2022). Is there any way to somewhat securely open up the server to outside-the-house Plex streaming or is this just like a don't loving do it kinda thing? Would compartmentalizing the data/users in some way work?

If that can be safely done, I'd also be interested in using the NAS to back up ~2TB of personal data, but that wouldn't be the main use case. I wouldn't care about accessing the PC backup items outside the house.

Has my dumb rear end just been lucky keeping a Plex server running raw on my Ubuntu box without issue?

Comatoast
Aug 1, 2003

by Fluffdaddy
What's wrong with your current Ubuntu/Plex setup? IMO that is a superior config to a proprietary nas.

Regarding outside access to plex, the recommended solution is a reverse proxy or a cloudflare tunnel. Though, cloudflare doesn't want you streaming video on their service. There is some docker for reverse-proxy on nginx that I keep seeing mentioned, but I can't recall the name.

FAT32 SHAMER
Aug 16, 2012



I was under the impression (which may be entirely wrong) that if you invite a plex user to your library, they handle all of the tunneling and stuff and you don’t have to configure that nor open your device to the internet.

If I have to do a bunch of network janitoring I think my brother is about to be real bummed lmao

Audax
Dec 1, 2005
"LOL U GOT OWNED"

Comatoast posted:

What's wrong with your current Ubuntu/Plex setup? IMO that is a superior config to a proprietary nas.

Regarding outside access to plex, the recommended solution is a reverse proxy or a cloudflare tunnel. Though, cloudflare doesn't want you streaming video on their service. There is some docker for reverse-proxy on nginx that I keep seeing mentioned, but I can't recall the name.

It's not too great of a PC and wasn't really originally constructed with the forethought to be a media server. It is a 6 year old Intel G4600 inside of an ASRock Deskmini 110W. The Deskmini has a known issue with integrated graphics which causes it to freeze occasionally. I've been unable to find a fix for it; sometimes it'll stay up for 3 weeks with no issue and sometimes it'll crash 3 times in one evening. It kind of evolved into the family server since I kept it plugged into my TV due to the small form factor. I'd like to get something more stable and repurpose it for other tasks that won't annoy me as much. If I could go back in time I'd probably wait for another revision, but otherwise I love the form factor.

FAT32 SHAMER posted:

I was under the impression (which may be entirely wrong) that if you invite a plex user to your library, they handle all of the tunneling and stuff and you don’t have to configure that nor open your device to the internet.

If I have to do a bunch of network janitoring I think my brother is about to be real bummed lmao

I thought so too, but then I started reading about the malware & how/why to set up docker & my brain went uhhh what am I getting into.

Trapick
Apr 17, 2006

I've had good luck with Plex working almost entirely out-of-the-box on a few installs (desktop, TrueNAS Core, TrueNAS Scale) - but it'll depend on your router; as long as it supports UPnP you're probably fine. Plex is also nice in that it reports in settings whether it's accessible from the internet-at-large, which makes it easy to troubleshoot.

Comatoast
Aug 1, 2003

by Fluffdaddy
There are a couple decisions that will guide your nas journey: how many drives do you need, do you insist on ZFS, and do you mind janitoring your own general purpose OS.

Personally, I find Ubuntu to be just as set-it-forget-it as a synology. You’re going to have to learn how to configure something, may as well be linux software instead of the proprietary synology gui.

I run an HP S01. It can only hold a single 3.5” drive, but that’s all I need. I did throw 16GB of memory, a cheap NVMe drive and an Intel 10400T processor in it. Roughly $300 all in. If Dell or HP ever have an excellent deal (like they were doing around 2013) on a micro-ATX tower with ECC memory, then I’ll upgrade to that.

Comatoast fucked around with this message at 22:57 on Jan 31, 2023

Corb3t
Jun 7, 2003

Trapick posted:

I've had good luck with Plex working almost entirely out-of-the-box on a few installs (desktop, TrueNAS Core, TrueNAS Scale) - but it'll depend on your router, as long as it supports UPnP you're probably fine. Plex is also nice that it reports in settings if it's accessible from the internet-at-large, makes it easy to troubleshoot.

Isn't keeping UPnP on a much bigger security risk for your network than forwarding Plex's port outside your network?

Trapick
Apr 17, 2006

Corb3t posted:

Isn't keeping UPnP on a much bigger security risk for your network than forwarding Plex's port outside your network?
I don't think either one would increase the risk of a Synology or QNAP box getting owned (much). Would depend on what else you've got locally though, UPnP is certainly a bad idea if you're going to connect random untrusted junk to your network.

edit: ok, I'll stand slightly corrected - QNAP especially seems to have a bunch of weird services opening up holes and getting wrecked, be very careful.

Trapick fucked around with this message at 23:23 on Jan 31, 2023

IOwnCalculus
Apr 2, 2003





FAT32 SHAMER posted:

I was under the impression (which may be entirely wrong) that if you invite a plex user to your library, they handle all of the tunneling and stuff and you don’t have to configure that nor open your device to the internet.

If I have to do a bunch of network janitoring I think my brother is about to be real bummed lmao

Plex does have an option for this, called Plex Relay. The downside is that quality is extremely limited, 1Mbps if you don't have Plex Pass, 2Mbps if you do.

The much more sensible option is to simply open port 32400 TCP to the internet, without exposing anything else.
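
Once the port is forwarded, it's easy to verify from outside; Plex answers unauthenticated on its /identity endpoint (hostname is a placeholder for your public IP or DNS name):

```shell
# should print a small XML blob with the server's machine identifier
curl -fsS http://your-public-ip.example.com:32400/identity
```

If curl times out, the port forward (or your ISP) is blocking it.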

BlankSystemDaemon
Mar 13, 2009



It's not exactly difficult to set up headscale, which lets you self-host a beacon for tailscale.

It's even available in FreeBSD Ports, so it can be installed in a jail.

Dyscrasia
Jun 23, 2003
Give Me Hamms Premium Draft or Give Me DEATH!!!!

Astro7x posted:

Ah yes, sometimes it's just not knowing the terminology I am looking for.

So what I ended up doing is getting this OWC Mercury Elite Pro Dual with 3-Port Hub, and then bought 2 of these Seagate Exos 12TB renewed drives on Amazon. No problem getting both to show up as individual drives and using Time Machine to back up one to the other.

Still a little skeptical about getting renewed drives, but the whole thing only cost $400. Since it's DAS and I can mount Dropbox on it, I plan on using Dropbox to backup my most important data on there (home videos/photos), and not backup Plex Library type media. So worst case scenario if both drives die at the exact same time I just lose my Plex library, which is no big deal. And then my most important data follows the 3-2-1 rule. So I think I'm good!

I'm nearly complete with a 4x4TB to 5x10TB drive migration using all refurb drives. I like it so far: 3 out of 4 had 0 hours, one has 2.5 years, and the last is in transit. I staggered purchases over about 4 months. The cost is so much less that I'm willing to play the numbers game with them. I think I'd prefer replacing a dead refurb compared to dealing with consumer level RMAs.

I'm not saying those 0 hour drives are in fact accurate values, but they've been working well so far.

Dyscrasia fucked around with this message at 00:55 on Feb 1, 2023

AlternateAccount
Apr 25, 2005
FYGM

BlankSystemDaemon posted:

That's fairly normal, unfortunately - even Supermicro and TYAN will up fanspeed if one fan is missing and there's a daughterboard inserted, because the daughterboards depend entirely on the airflow of the fans to keep the components cooled.

HPE is special though.
My HPE DL380p Gen8 will run the fans at 30% with no daughterboards inserted, and 30% if it's a HPE branded daughterboard.
If you insert a daughterboard that's not HPE branded (even if it runs the exact same firmware as a HPE branded card), it'll boost the fans to 60%.

Yeah but it suuuuuucks. Hopefully whoever's testing that $5 Delta gets a good result.

Astro7x
Aug 4, 2004
Thinks It's All Real

Dyscrasia posted:

I'm nearly complete with a 4x4tb to 5x10tb drive migration using all refurb drives. I like it so far, 3 out of 4 had 0 hours, one has 2.5 years and the last is in transit. I staggered purchases over about 4 months. The cost is so much less that I'm willing to play the numbers game with them. I think I'd prefer replacing a dead refurb compared to dealing with consumer level RMAs.

I'm not saying those 0 hour drives are in fact accurate values, but they've been working well so far.

How do you check the hours on a drive? I am on a Mac if that matters
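
One way that should work on a Mac is smartmontools (brew install smartmontools; the disk path is a placeholder, check diskutil list first):

```shell
# power-on hours are attribute 9 in the SMART table on most drives
sudo smartctl -a /dev/disk2 | grep -i 'Power_On_Hours'
```

Caveat: some USB enclosures don't pass SMART commands through, in which case smartctl sees nothing.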

FAT32 SHAMER
Aug 16, 2012



IOwnCalculus posted:

Plex does have an option for this, called Plex Relay. The downside is that quality is extremely limited, 1Mbps if you don't have Plex Pass, 2Mbps if you do.

The much more sensible option is to simply open port 32400 TCP to the internet, without exposing anything else.

Oh, I thought all that required janitoring like this:

BlankSystemDaemon posted:

It's not exactly difficult to setup headscale which lets you self-host a beacon for tailscale.

It's even available in FreeBSD Ports so can be installed in a jail.

So instead of using relay, your friends have to connect to a VPN that you’re hosting for your network to then be technically on the same network and not be rate limited to 1-2Mbps? I didn’t read all the documentation yet, but that sounds like a lot of effort unless this lets them stream at whatever your upload/their download speed is

What does port 32400 TCP do? Does it somehow surpass the 1Mbps from relay, or is it an easier way for them to do the same as a VPN?

Sorry for the somewhat off topic posts, I’ve avoided network janitoring since it didn’t really benefit me and had a whole lot of ways to harm me so a lot of this I’m just learning :shobon:

Trapick
Apr 17, 2006

Plex Relay means all traffic is going through one of their servers (the relay), they limit the traffic because they don't want to be paying for all that bandwidth. Either the stream goes server->relay->client or server->client; if the second, you need to either be on the same network (physically or virtually) or have a port open somehow.

Opening a port (e.g. 32400) is easier than setting up a VPN, yah. On your side either is probably fine, but your friends will have to do nothing if you open a port (they log in to Plex, you've shared access with them, bingo bango) or go through a pain in the rear end for a VPN. If they're watching on a computer, it's a minor pain in the rear end; if on a Roku stick or Xbox, much more of a pain.

Open a port, keep plex updated/watch for news of exploits, don't worry too much about it.

edit: oh and wherever Plex is running, limit it - have it run under a user with minimal permissions, read-only access to your media, write access to its own metadata and stuff only, etc.

Trapick fucked around with this message at 07:35 on Feb 1, 2023
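
That last bit might look something like this on a plain Linux host (user name and media path are made up; Synology/QNAP packages manage their own service users):

```shell
# dedicated no-login account for Plex; it can read media but not write it
sudo useradd --system --no-create-home --shell /usr/sbin/nologin plexrun
sudo chown -R root:plexrun /srv/media
sudo chmod -R u=rwX,g=rX,o= /srv/media   # group gets read/traverse only

# give it one writable spot for its own metadata
sudo mkdir -p /var/lib/plexrun
sudo chown plexrun:plexrun /var/lib/plexrun
```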

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

It's not exactly difficult to setup headscale which lets you self-host a beacon for tailscale.

It's even available in FreeBSD Ports so can be installed in a jail.

I feel like we're getting close to the point where it'll be possible to buy a cheap device that runs a wireguard endpoint and send it to your non-technical friends and family, tell them to plug it in, and it automatically bridges to your network and everything is super easy

Morbus
May 18, 2004

Klyith posted:

IMO a burn in like doing a zero write to the whole drive is useless. Backblaze stats don't show a bathtub curve of high 1st-year failures in drives these days, drives pretty much are either DOA or work. So the idea that you could weed out a bad drive with burn in seems unlikely.

It's a waste of time mostly. Most drive failure modes are not going to be accelerated by just doing a bunch of write operations. A drive that fails after however many days of whatever "stress test" you put it through is in many cases a drive that would have failed by that time anyway. And just writing to every single sector isn't going to cause a drive destined to fail in 6 months to instead fail during your test. The most that can be said about tools like badblocks is that they may uncover SMART errors that otherwise wouldn't be apparent until you've filled the drive with data. But a large fraction of drives that fail do so without ever having any SMART errors...and SMART error signals themselves don't have great predictive value for drive failure, especially for individual drives.

Like, the most common read/write failure modes for HDDs are (in roughly descending order, and excluding extrinsic things like vibration or shock):

1.) Adjacent track interference--where (many) repeated writes to a given sector degrade the pattern written on adjacent tracks. Fundamentally this is a thermally activated process and is not going to be accelerated by just writing to every sector of the drive like badblocks does.

2.) Reader or writer degradation or instability. When not caught by in-factory tests, these are often a consequence of media or head defects that manifest simply over time, like corrosion or tribology problems. They will cause a failure when they do, and just writing a bunch of data won't speed things up.

3.) Unrecoverable defect on disk--again, if not caught during factory self-test, this is often the result of a slowly-building corrosion or contamination problem.

4.) Off-track write. The writer wasn't able to follow the track correctly when writing. Often caused by a poorly written servo or degradation to the servo pattern. Nothing you do in software can read or write to the servo pattern, so, again, just writing a bunch won't accelerate these problems

At their core most of these are "ticking time bomb" problems--which is why they are hard to catch during factory test and therefore why they are the most common. If you really want to do an accelerated stress test, just writing to every sector is pretty pointless. Running at sufficiently elevated temperature *is* an effective accelerated stress test, but it's not advisable since it will reduce the lifetime even of perfectly good drives. Anyway, that's my TED talk.


Wibla
Feb 16, 2011

Basic burn in tests are mostly just suitable for finding DOA drives. I've experienced one DOA drive since I started building computers in the 90s. My Seagate LP based array had failures when temps got high; that's something others have documented as well.
