G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
Well, let's start with the fact that plugins are just pre-built jails, so they can leave the discussion entirely. The important thing to note is that jails are entirely about isolation and security. They're the precursors to modern virtual machines, more or less. They're functionally designed to be treated as a virtual machine that has direct access to the same hardware as the host (almost like IOMMU for literally every device on the system, but not really). They're running the same kernel as the host though, so they're not really VMs.

Docker containers are 100% about providing a stable, consistent deployment of a package to any system, regardless of what the host environment looks like. As long as the host is running an appropriate kernel, and is of the correct architecture for the binaries in the container, it Just Works. You pull down an image, and it contains all of the software (libraries, binary executables, configuration, etc) you need to run the container, and you just hit run. Some containers are more full featured and can source in external configuration, or persist their configs outside the container, or whatever else, but that's extra functionality.

Beyond that, Docker containers are designed to be ephemeral. They function off a layered filesystem where the top layer is the actual running container with any changes that are live in it, but the image you pulled down is the next layer down. In practice, the top layer gets destroyed when you start a fresh copy of the image, because it isn't a part of the actual image. This means that every time you run a given container image, you start in exactly the same base state.

Jails, on the other hand, are designed to be logged into and have changes made, and they persist. Again, functionally, they're the same as VMs, in terms of how you use them.

What this boils down to is that when you go to upgrade a jail (especially in plugin form), there need to be migration procedures in place, because you're effectively automating what a human would otherwise do: log in and upgrade the software inside the jail. With a Docker container, especially the kind that has no external configuration, you literally pull down a new version of the image and go. If it has external configuration, you just source that into the new version of the container and go, and the container itself should do that for you if it's set up that way. It's really an entirely different paradigm.
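To make that concrete, here's a rough sketch of the pull-new-image-and-go flow using the Docker SDK for Python. The image name, container name, environment variables, and config path below are placeholders, not anything FreeNAS-specific:

import docker

client = docker.from_env()

# Grab the new version of the image; the old container's top layer is
# disposable, so "upgrading" is just recreating from the newer image.
client.images.pull("linuxserver/plex", tag="latest")

# Remove the existing container if one is running under this name.
for old in client.containers.list(all=True, filters={"name": "plex"}):
    old.stop()
    old.remove()

# Start a fresh container from the new image, re-attaching the external
# (persistent) configuration so state survives the swap.
client.containers.run(
    "linuxserver/plex:latest",
    name="plex",
    detach=True,
    environment={"PUID": "1000", "PGID": "1000"},
    volumes={"/tank/appdata/plex": {"bind": "/config", "mode": "rw"}},
    restart_policy={"Name": "unless-stopped"},
)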

Edit: One way that's been helpful to explain this to less technical folks at my office: Pets vs cattle. Pets (jails, VMs, long-running EC2 instances) are groomed and cared for, if something goes wrong with them you try to fix it, you maintain them. Cattle (Docker, and other types of containers) are expendable. If your cow gets sick, you shoot it in the head and get another one from the barn. You know that you're doing the absolute minimum to keep it alive, but that you could replace it at any time and be just fine.

G-Prime fucked around with this message at 19:30 on Jan 7, 2017


BlankSystemDaemon
Mar 13, 2009



OS-level virtualization and hardware virtualization are very different - and hardware virtualization is MUCH older, dating all the way back to pre-UNIX days. With hardware virtualization, you're lying to the guest OS about the hardware, and the guest OS has to accept whatever lies it's told. This is done by having the host OS hand over a part of the system's DRAM, some cores on the CPU (or the functional equivalent if you're on a single-threaded system), and a certain amount of disk space (though thin provisioning has improved this somewhat), which means there's always a certain performance penalty associated with it, in addition to the large overhead of having an entirely different OS running on the virtualized hardware. Most importantly, the host OS has absolutely NO idea what's being done with the CPU cycles, the DRAM, or the filesystem I/O that the guest OS is using - at best you can hope for heartbeat monitoring and shared folders, but those are concessions to make administration ever-so-slightly easier.
With OS-level virtualization (jails, specifically - though this also applies to Zones on Illumos, to my knowledge), you're essentially just moving the root of the filesystem into its own directory (which is what chroot - originally added in UNIX, though nobody knows why - did and indeed still does; jails build upon chroot), meaning there's only a tiny bit of overhead (and on Illumos Zones, I believe some form of deduplication ensures that only files changed compared to the Global Zone take up disk space), all the while the kernel is shared. Zones and jails have now started incorporating ZFS datasets, which make replication, templating and cloning, snapshots and rollback, and other such very nice features available. All of this means that the host OS can control CPU, memory, disk, and process quotas/restrictions both for itself and the jail/zone, and even if you acquire root inside the jail/zone, you can't forkbomb or otherwise resource-starve the host OS.
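For what it's worth, the chroot primitive that jails grew out of is small enough to show directly. This is a minimal Python sketch only: it needs root, the jail root path is a made-up example, and real jails layer process, user, and network isolation plus resource limits on top of this:

import os

# Hypothetical directory containing a populated userland.
jail_root = "/usr/jails/testjail"

os.chroot(jail_root)   # this process's "/" is now the jail directory
os.chdir("/")          # make sure the working directory is inside the new root

# Anything exec'd from here on only sees files under jail_root.
os.execv("/bin/sh", ["/bin/sh"])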

That said, you're right about Docker being made to give developers a consistent environment across multiple host environments, but my comment about Kleenex was meant to illustrate that Docker is a specific implementation of a general idea. I didn't mean to imply that it's somehow inferior; it's just a different way to arrive at the same result (albeit the reasons are different, so there are differences in implementation).

Additionally, yes - jails are about isolation (and security, because the two concepts are inseparable) - I believe I already stated that, and it's certainly evident from the title of the original paper. It's actually kind of interesting to read that, and then afterwards read the original Zones paper. Jails and Zones have certainly benefited from each other (FreeBSD vnet/vimage is directly inspired by Crossbow, for example).

BlankSystemDaemon fucked around with this message at 20:04 on Jan 7, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

FormatAmerica posted:

what's the advantage of using docker instead of plugins & jails for freenas?

I mean, I sort of understand that they're easier and closer to "one step" for provisioning & configuration, but other than that I'm not really familiar with any other benefits.

That in and of itself is a pretty big benefit. The other issue is that, at least for FreeNAS (and presumably NAS4Free and other similar OSes), anything that's pre-compiled as an "officially supported" jail (as in, the one-click installable-through-the-GUI jails, as opposed to ones you roll yourself) has to be more or less hand-made by the devs. So every time there's an update to the underlying app, someone has to go back through and update the package. This is why there's often a good bit of lag between when an application like Plex gets an update and when that update finally shows up in the jail packages.

Docker allows you to pull containers from a much deeper repository; my understanding is that FreeNAS Docker images are basically just generic docker containers with a couple of FreeNAS-specific metadata bits added on (environment variables, mostly), which should drastically reduce the work that's needed to keep them up to date for FreeNAS, as well as make life a ton easier for anyone who wants to use some docker app that isn't "blessed," since in theory it should work just fine so long as you feed it whatever particular environment variables/settings it needs.

Of course we'll see how it actually turns out in practice, since 10.x is quite a poo poo show right now--I think they only really released it to the public as a rough beta in September or thereabouts.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

Sorry, I didn't mean to imply that hardware and OS virt are the same. I was trying to boil down the use case for what appeared to be more of a layman (FormatAmerica). You're dead right.

The point of my post that matters is that for a plugin system like FreeNAS, Docker makes 100% perfect sense. Persist the config outside the container, do easy upgrades. None of the headaches of the current jail-based system, because you're destroying the old container and popping a new one, rather than running arbitrary upgrade commands in the hope that the user has never jumped in and modified the jail.

G-Prime fucked around with this message at 00:51 on Jan 8, 2017

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
I've seen a bunch of love both here and elsewhere recently for ZFS, and fewer people seem to be going with good ol mdadm / lvm / ext4. What's the big downside of mdadm that I'm missing? Being able to expand arrays 1 disk at a time seems pretty drat nice.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
A lot of it is not so much that mdadm / lvm / ext4 are bad as that ZFS has a bunch of built-in features for data protection/integrity that the others do not. The inability to grow an existing vdev one disk at a time (other than by sequentially replacing each disk with a larger one and then letting the pool auto-expand to its new size once all disks have been replaced) is obviously the biggest drawback of ZFS, but it's something you can plan around. On the other hand, the extra data security ZFS gives you is not something you can "plan into" other file systems, so that pushes people towards ZFS despite its limitations.
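If it helps, the replace-and-grow path mentioned above looks roughly like this. A sketch only: the pool and device names are made up, and each replacement has to finish resilvering before you touch the next disk.

import subprocess
import time

pool = "tank"
# Hypothetical mapping of old disks to their larger replacements.
replacements = {"da1": "da5", "da2": "da6", "da3": "da7", "da4": "da8"}

# Let the pool grow automatically once every member has been upsized.
subprocess.run(["zpool", "set", "autoexpand=on", pool], check=True)

for old, new in replacements.items():
    subprocess.run(["zpool", "replace", pool, old, new], check=True)
    # Crude wait: poll zpool status until the resilver finishes.
    while "resilver in progress" in subprocess.run(
            ["zpool", "status", pool],
            capture_output=True, text=True).stdout:
        time.sleep(60)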

IOwnCalculus
Apr 2, 2003





D. Ebdrup posted:

I was under the impression that you ran FreeBSD/FreeNAS, in which case they're called jails, not Docker containers, and are a form of OS-level virtualization as opposed to hardware virtualization, invented primarily to limit what root can break/do. Docker is to containers what Kleenex is to tissues - and jails date back to before the 2000s. More importantly, Docker containers weren't invented with security in mind; security was an afterthought.

I do run FreeNAS, but I have an Ubuntu box right next to it that runs Crashplan, Plex, Deluge, Sonarr, SABNZBD, and pihole, all as Docker containers.

Droo
Jun 25, 2003

Twerk from Home posted:

I've seen a bunch of love both here and elsewhere recently for ZFS, and fewer people seem to be going with good ol mdadm / lvm / ext4. What's the big downside of mdadm that I'm missing? Being able to expand arrays 1 disk at a time seems pretty drat nice.

RAID-Z3 is just about perfect for a 12-drive array of big disks (imho), and there is no mdadm equivalent of that - md tops out at dual parity with RAID 6. I could see that alone swinging a lot of people over.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

FWIW, if I ever do a major upgrade of my server storage I'll seriously consider switching away from ZFS to a solution that allows single drive upgrades.

For bulk media storage I'm just not terribly concerned about the extra safety ZFS provides but I'm constantly annoyed about adding more storage. I can always run a smaller ZFS array for important data.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
There absolutely are use scenarios where ZFS is potentially overkill, and the downsides outweigh the added security.

The ideal case is where you can either reasonably predict your storage needs for the next few years, or afford to buy enough excess space that you won't likely run out before drives have improved to the point where simply replacing the entire array is a reasonable expense. Or, at minimum, drive sizes have increased to the point where you can throw all your data onto an external or two while you recreate your array with an extra few drives in it.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
That's why I'm interested in a solution like Storage Spaces, where you have a bundle of disks in a pool, which then get allocated to the logical volumes based on 256MB slabs sourced from the disks in a way that satisfies the redundancy requirements. Right now I run a pair of mirrors in ZFS (essentially RAID 10), which holds all my data. With SS I could run my important volumes as three-way mirrors, put video and poo poo in parity volumes, and run the VM images and anything else unimportant (like Steam games) in simple volumes - getting the most out of everything. From Windows 10/Server 2016 onward, you can finally rebalance those allocations, so it becomes worthwhile for piecemeal expansion.

But it involves running Windows Server... :geno:

While I don't per se have a problem with this, I can foresee Microsoft pulling a good one on Windows 10 (some invasive privacy poo poo), causing me to jump to Linux and having to redo everything.

--edit: I could however run these cheap-rear end used Mellanox ConnectX-2 cards on WinServer; they aren't and won't be supported on FreeNAS :(

Combat Pretzel fucked around with this message at 17:45 on Jan 8, 2017

eames
May 9, 2009

I tried a few different NAS operating systems for my new home server and — to my own surprise — like unRAID best so far.
They made a bunch of quirky design decisions that make it useless for any kind of mission critical purpose but perfect for my own use case.

Their RAID mode is basically RAID 4 without striping.
All parity is stored on one or two dedicated parity drives (the largest in the array), while all data is stored on the remaining drives in a JBOD fashion. The main disadvantage of this is that write performance will always be bottlenecked by the parity drives, in my case ~120 MB/s. Their way of alleviating this is a dedicated cache drive (usually an SSD) whose contents get written out to the array at night.
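As a toy illustration of that parity scheme (not unRAID's actual code, just the idea): the parity drive stores the XOR of the same block offset across all data drives, which is why any single data drive can be rebuilt, and why every write also has to touch the parity drive.

from functools import reduce

# One block at the same offset on each of three data drives.
data_blocks = [0b10110010, 0b01101100, 0b11110000]

# What the dedicated parity drive stores for that offset.
parity = reduce(lambda a, b: a ^ b, data_blocks)

# Drive 1 "fails": rebuild its block from parity plus the survivors.
rebuilt = parity ^ data_blocks[0] ^ data_blocks[2]
assert rebuilt == data_blocks[1]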

Disadvantages:
  • proprietary implementation (based on Slackware)
  • utterly, hilariously terrible security practices; never use it for anything facing the internet unless it is in a VM
  • no bitrot protection, compression or encryption (yet)
  • sequential writes that bypass the cache drive are slow (~120 MB/s)
  • if the cache drive dies, all data since the last "flush" is lost. (they recommend mirrored cache drives)
  • price ($60 for the smallest license)
  • homepage/UI design looks questionable :stare:

Advantages:
  • data drives can be added and removed at will as long as they are equal to or smaller than the parity drive
  • content on the data drives can be accessed like a regularly formatted disk, so if 3 out of 6 drives die at the same time, the data on the remaining 3 drives remains readable
  • volume drives can spin up/down independently of each other
  • parity drive only has to spin up when something gets written directly to the array
  • any transfers that involve the cache drive are very fast (>500 MB/s)
  • excellent Docker implementation with many one-click installers, plus KVM virtual machines with PCI/GPU passthrough. I plugged a 1050 Ti into my NAS, made a Win 10 gaming VM just for fun, and it runs games within 5-10% of native performance. Steam In-Home Streaming works too.

In my use case this means that I can have an array of n drives and all of them stay spun down most of the day. If I access Plex to watch a movie, only the single drive that holds the movie spins up. Metadata lives permanently on the cache drive, and folder directories are kept in RAM.
If I upload data to the server or download+extract something on it, everything is handled by the SSD cache and all drives stay spun down until 1 AM, when everything gets written to disk.

My quad-core Haswell NAS with four 8TB Reds, 9 Docker containers, an Ubuntu VM, and a Windows VM had an average power consumption of 22W over the last 24 hours.
That should drop to ~15W with the next version, because then I can pass through the iGPU to Windows and use the optimized Intel iGPU driver to enable power saving states.

I would never recommend this for people with terabytes of irreplaceable data but as home/media server + regular backups for the important personal data it seems like a surprisingly decent concept. I'm probably going to buy a license when the trial period is over.

eames fucked around with this message at 20:31 on Jan 8, 2017

Horn
Jun 18, 2004

Penetration is the key to success
College Slice

Thermopyle posted:

FWIW, if I ever do a major upgrade of my server storage I'll seriously consider switching away from ZFS to a solution that allows single drive upgrades.

For bulk media storage I'm just not terribly concerned about the extra safety ZFS provides but I'm constantly annoyed about adding more storage. I can always run a smaller ZFS array for important data.

I switched from software RAID to unRAID a few months back and I love it. Like you, I'm mostly storing things which aren't all that important, so I'd rather have the flexibility of adding drives whenever than the security of ZFS. Everything irreplaceable is offsite at Amazon, so I could probably move away from RAID entirely, but it's kind of a habit at this point.

some kinda jackal
Feb 25, 2003

 
 
What would you guys recommend for online backup of a NAS, obviously as a ridiculous last resort in case my apartment actually goes up in flames? I've got a 20TB QNAP, and while I won't be putting 20TB up online on day one, I don't really want to have to worry that something will crap out at a 5TB limit on an "unlimited" plan, for example.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
The default cloud storage answer is Crashplan, since it (1) works, (2) is affordable, and (3) is actually unlimited. Depending on how you have things set up with your NAS, Crashplan is either pretty easy or semi-annoying to set up. Backblaze is also worth a look, but depending on how you want to set things up, it may not support your NAS needs. Sadly, I don't think either offers seed drive options anymore, which is gonna hurt if you need to upload several TB on anything less than a FIOS connection.

some kinda jackal
Feb 25, 2003

 
 
I have 1G up at home, so I can let it run for a couple of nights/weeks/whatever; that won't really be a huge problem. I was thinking Crashplan as well, so it's good to see I'm not way off base. Thanks!

BlankSystemDaemon
Mar 13, 2009



I've never ever maxed out my 1/1Gbps fiber to Crashplan.

some kinda jackal
Feb 25, 2003

 
 
I'm not expecting to max it out, but I only have like 2-3TB to upload right now so if I can get that done in the span of a week I'll be okay with it :haw:

EconOutlines
Jul 3, 2004

Recommendations on drives (NAS/Desktop/Tear-Apart Externals) for extra storage?

One of my 3TBs is getting RMAed, and apparently Seagate is ruthless with the process. Regardless, I have 1.13TB remaining for :filesz:, so I'd rather just get a good deal. Gradually upgrading, I guess.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
If you're running an actual NAS, get NAS drives. WD Reds are kinda the de facto standard, but if Seagate or HGST ones were on a decent sale or otherwise cheaper than Reds, I'd have no hesitation picking those up instead.

I don't think anyone really recommends shucking drives as a long-term strategy, as such drives are generally built for lower wear tolerances, and shucking them tends to void whatever crappy warranty they had.

Desktop drives have so many iterations and models that I don't think anyone keeps track. Each company has had particular models with problems, and it seems everyone has their own particular brand which they refuse to buy because reasons. Backblaze's testing suggests HGST ones are the most reliable, but their methodology wasn't exactly perfect and it's hard to extrapolate reliability from one specific model to another.

EconOutlines
Jul 3, 2004

No, I'm not running a legitimate NAS, just DrivePool software. I'd rather just have less space than have to worry about disk errors/failures. Plus, I need transcoding, so setting up a NAS would be an i7/i5 deal.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Or an older Xeon deal. You can get a Xeon E3-1220 v1 for $60 or a v2 for $100. Either would be plenty for transcoding.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
With the Postgres database system, you can put all of a table's indexes on fast storage (a tablespace on an SSD) so they stay super fast even if they aren't in memory.

Is there a way to do something like that with filesystem metadata, so your inodes/etc live on an SSD but the data itself lives on an HDD? The idea being to accelerate IOPS, but still get good capacity, by caching writes/etc and writing big fat blocks to the HDD rather than lots of small IOs.

Is this more or less what the L2 cache does in ZFS?
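For the Postgres side of that analogy, a rough sketch of what it looks like - all names and paths below are made up, and it assumes psycopg2 plus a superuser connection:

import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
conn.autocommit = True   # CREATE TABLESPACE can't run inside a transaction block
cur = conn.cursor()

# A tablespace whose directory lives on the SSD.
cur.execute("CREATE TABLESPACE ssd_ix LOCATION '/mnt/ssd/pg_indexes'")

# Put the index there while the table's heap stays on the slower disks.
cur.execute("CREATE INDEX orders_created_idx ON orders (created_at) TABLESPACE ssd_ix")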

Paul MaudDib fucked around with this message at 04:19 on Jan 10, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I don't want to run my RAID enclosure in RAID mode anymore (CineRaid CR-H458). Is there a good argument for running the built-in "spanned" mode, versus running JBOD and setting up some kind of LVM on top?

I haven't heard good things about the Windows LVM equivalent, so it seems like if I want to run it as NTFS then spanned mode might be good in that respect. But it's primarily plugged into a Linux system day-to-day, so in that respect LVM with a spanned volume makes sense to me there. But if I want to take it places (I like to take it on vacation with me so I have movies/etc), most systems don't have anything that understands Linux partitions, let alone an LVM group. Windows will have no idea what to do with that, right?

I guess I can drag along a Liva to plug into it, but that makes it a little clunkier to travel with. Or like, install an Ubuntu Server VM on my laptop and pass it through I guess?

edit: if I do JBOD and use LVM, it should be easier to replace disks with larger ones if I need to down the road, so I guess all signs are pointing that way for me.
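If you go the JBOD + LVM route, the setup is roughly this (device names and the volume group name are placeholders, and this wipes the disks, so fresh setups only):

import subprocess

disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

# Mark each disk as an LVM physical volume, pool them into one volume
# group, then span a single logical volume across all the free space.
subprocess.run(["pvcreate", *disks], check=True)
subprocess.run(["vgcreate", "media_vg", *disks], check=True)
subprocess.run(["lvcreate", "-l", "100%FREE", "-n", "media_lv", "media_vg"], check=True)
subprocess.run(["mkfs.ext4", "/dev/media_vg/media_lv"], check=True)

Replacing a disk with a bigger one later is then a matter of vgextend with the new disk, pvmove off the old one, and vgreduce to drop it from the group.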

Paul MaudDib fucked around with this message at 05:24 on Jan 10, 2017

Platystemon
Feb 13, 2012

BREADS
JBOD. It’s safer and more flexible.

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

With the Postgres database system, you can put all of a table's indexes on fast storage (a tablespace on an SSD) so they stay super fast even if they aren't in memory.

Is there a way to do something like that with filesystem metadata, so your inodes/etc live on an SSD but the data itself lives on an HDD? The idea being to accelerate IOPS, but still get good capacity, by caching writes/etc and writing big fat blocks to the HDD rather than lots of small IOs.

Is this more or less what the L2 cache does in ZFS?
No, L2ARC is literally a second-level ARC - ARC stands for adaptive replacement cache, and it works on the same principle as any other non-predictive cache, in that it keeps the most recently and most frequently accessed data (per-block, not per-file); L2ARC takes over when the ARC is full. It's only meant to be used once you've filled your machine to capacity with memory (as even NVMe SSDs are multiple orders of magnitude slower than DRAM), because to use L2ARC you need to keep a header in ARC for every block that's in the L2ARC, which takes up memory that could otherwise be used for caching.

OpenZFS will eventually also add a few more device types for, among other things, storing the deduplication table by itself rather than in ARC or L2ARC depending on its size (the DDT is literally just a table that records every allocated block by its checksum, and each entry takes up ~320 bytes) - but I don't know that they're planning on storing inodes on a separate device, and I'm not sure how many more IOPS it would add above what ARC already provides.

BlankSystemDaemon fucked around with this message at 11:24 on Jan 10, 2017

PiCroft
Jun 11, 2010

I'm sorry, did I break all your shit? I didn't know it was yours

Decairn posted:

Plex doesn't use the hardware transcoding on the Synology. Many people use a separate PC for Plex transcoding or live with the odd file that is too big to reliably stream.
As a file server or Sonarr/BT/Usenet client etc., it's got plenty of power for home use.

Ah, I see. Thanks for the heads up. I'm not overly bothered if I have to use a different machine to run the Plex server; the backup space is more important to me at this stage.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Probably going to reinstall my FreeNAS box this weekend to get the FreeNAS 10 beta, ahead of them finishing the upgrade procedure. I was thinking of tossing my old Samsung 840 Pro SSD in there as the boot drive and system dataset. What are the best practices in regards to sizing the L2ARC, since IIRC there are some RAM requirements that scale with its size? --edit: One number I ran into was 1GB of RAM per 50GB of L2ARC.

--edit2: Apparently 70 bytes per filesystem block cached. I guess I can make some assumptions on a potential 4KB worst case.

Combat Pretzel fucked around with this message at 21:37 on Jan 10, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
FreeNAS 10 is...interesting. I dig the direction they're going with the web UI and Docker containers, but be aware that right now performance is all over the place. In particular, anything running in a Docker container seems to be moderately unstable and a bit slow. Seeking around on a video streamed over Plex, for example, was far smoother/snappier running on 9.2 than 10.x, and Transmission would regularly crash out. OS updates also take 30+ minutes a pop, and occasionally break major things (one update randomly broke video out, so once it booted I couldn't access anything via my IPMI terminal anymore; that became a real problem when a later update broke the OS completely and I wanted to roll back but couldn't, so I ended up having to do an entire reinstall). The web UI is also painfully slow for no apparent reason.

Basically it's really beta. It absolutely is bringing some cool things to the OS, but I wouldn't use it for much past loving around with for shits and giggles right now.

IOwnCalculus
Apr 2, 2003






Thank you for saving me a bunch of wasted time in coming to that same conclusion myself.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
The upside is, of course, that you can just install the OS on a USB stick, and therefore easily swap back and forth between 9.2 and 10.x. I'll probably give 10 another go whenever they drop BETA3 because I'm a huge sucker for bleeding edge software that can possibly nuke my entire media collection again (whoops).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I guess I'll wait and maybe play with it in a VM. Maybe going to do the whole song and dance with 9.3 then.

Also, the L2ARC requirement is 170 bytes per block, not 70 bytes as the reddit post claimed. Oracle seems to have changed things to reduce it to 80; hope OpenZFS does something similar. Not too happy about it, since it'll be 1.3GB for just a 64GB L2ARC, assuming the 8KB block size that I have in my iSCSI volumes all the way through.
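The arithmetic behind that 1.3GB figure, for anyone who wants to plug in their own numbers (a quick back-of-the-envelope using the 170-byte header size quoted above):

# RAM consumed by L2ARC headers = (L2ARC size / block size) * header size.
l2arc_bytes  = 64 * 2**30    # 64 GiB L2ARC
block_bytes  = 8 * 2**10     # 8 KiB blocks (iSCSI volume block size)
header_bytes = 170           # per-block header held in ARC

blocks  = l2arc_bytes // block_bytes          # 8,388,608 cached blocks
ram_gib = blocks * header_bytes / 2**30
print(f"~{ram_gib:.2f} GiB of ARC eaten by headers")   # ~1.33 GiB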

Combat Pretzel fucked around with this message at 22:06 on Jan 10, 2017

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Combat Pretzel posted:

I guess I'll wait and maybe play with it in a VM. Maybe going to do the whole song and dance with 9.3 then.

Also, the L2ARC requirement is 170 bytes per block, not 70 bytes as the reddit post claimed. Oracle seems to have changed things to reduce it to 80; hope OpenZFS does something similar. Not too happy about it, since it'll be 1.3GB for just a 64GB L2ARC, assuming the 8KB block size that I have in my iSCSI volumes all the way through.

It's not that bad, especially since you want to leave 25% or so of the SSD unprovisioned to make sure the IOPS stay near the 95th percentile of peak, instead of degrading to steady-state, which is like 30% of peak.

MycroftXXX
May 10, 2006

A Liquor Never Brewed
Quick question from a layman. I need to set up a NAS for my very small office but I don't want any rando that's connected to our wifi to be able to access the files on the nas. If I were to purchase something like this, would I be able to easily put a password on the whole drive? It seems like I would be able to do this fairly simply, but I figure it wouldn't hurt to ask someone who may have set up something like this before.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

MycroftXXX posted:

Quick question from a layman. I need to set up a NAS for my very small office but I don't want any rando that's connected to our wifi to be able to access the files on the nas. If I were to purchase something like this, would I be able to easily put a password on the whole drive? It seems like I would be able to do this fairly simply, but I figure it wouldn't hurt to ask someone who may have set up something like this before.

You have different options here, but I just wanted to say that random people shouldn't be able to connect to the wifi your office is using. If you want random people to be able to use wifi at your office, go into your router settings and set up a guest network.

MagusDraco
Nov 11, 2011

even speedwagon was trolled
Is 1 reallocated sector on a new drive a huge deal? I ran it through badblocks, which does 4 passes on the drive with different patterns, then reads and verifies the data. The reallocated sector popped up sometime during that test. It's an 8TB WD Red and it's still in the return period, so that's not a problem. Just not sure if I need to get it replaced.
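If you want to keep an eye on that counter without digging through full SMART dumps, something like this works. It's a sketch that assumes smartmontools is installed, uses a placeholder device path, and assumes the raw value is the last field on the attribute line (which is how smartctl normally prints it):

import re
import subprocess

def reallocated_sectors(device="/dev/sdb"):
    """Return the raw Reallocated_Sector_Ct value for a drive, or None."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"Reallocated_Sector_Ct.*?(\d+)\s*$", out, re.MULTILINE)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    count = reallocated_sectors()
    print(f"Reallocated sectors: {count}")   # anything above 0 on a new drive: swap it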

Star War Sex Parrot
Oct 2, 2003

havenwaters posted:

Is 1 reallocated sector on a new drive a huge deal? I ran it through badblocks, which does 4 passes on the drive with different patterns, then reads and verifies the data. The reallocated sector popped up sometime during that test. It's an 8TB WD Red and it's still in the return period, so that's not a problem. Just not sure if I need to get it replaced.
Swap it.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

havenwaters posted:

Is 1 reallocated sector on a new drive a huge deal? I ran it through badblocks, which does 4 passes on the drive with different patterns, then reads and verifies the data. The reallocated sector popped up sometime during that test. It's an 8TB WD Red and it's still in the return period, so that's not a problem. Just not sure if I need to get it replaced.

Replace it. Sure, it might possibly maybe be nothing. But there's a decent chance that it's an indication of further issues. Might as well toss it now while it's easy to replace.

MagusDraco
Nov 11, 2011

even speedwagon was trolled
Okay. That makes 2 drives to replace. The other just straight up died an hour into badblocks. It couldn't finish an extended SMART test, and half the time the BIOS wouldn't see it even if I put it on different power and SATA cables, so I knew it was super busted.


Platystemon
Feb 13, 2012

BREADS
IIRC Google's study found that the first bad block was a seriously bad sign.
