withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice
Anyone have ideas for, or tried out, an ARM-based NAS/motherboard combo? I've seen some DIY blogs of people slapping SATA and PCI Express cards into small ARM boards, but I was curious if anyone in the thread has tried it out.

I did find an N100 board with 6 SATA ports that's pretty tempting for about $200 too.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

VostokProgram posted:

Perhaps someone can recommend a case for me? I've done a lot of googling but cannot seem to find anything that meets all these requirements:
1. supports at least micro-ATX motherboards
2. has 8 3.5" hot swap bays
3. can use a normal ATX power supply
4. can keep the 8 hard drives at safe temperature
5. quiet
6. not a heavy rackmount box

The closest thing I have found is the Silverstone CS381, but I see lots of people on the internet saying their drives run hot in that thing. They also make a CS382 and it has fans directly on the drive cage, but they are half blocked by its SAS backplane. I think the latter is new enough that I can't find any info about how hot it runs, maybe half-blocked fans are OK?

Given the cost in the US of any case that comes with hot-swap bays, it might be worth considering the piecemeal approach. Try to find an old case on Marketplace with a lot of 5.25" bays for like :20bux: and then get some hot swap adapters (you may need to swap in some Noctua fans on them for quiet).

evil_bunnY
Apr 2, 2003

PitViper posted:

Raidz expansion is my most-hoped-for feature, mostly because I was somewhat dumb when I built my first pool, and did a raidz1 with 4 disks, then added a second raidz1 of 4 more disks to the pool later. I rather wish I'd done one larger raidz2/3 pool instead, but to redo at this point would entail me buying 8 12-16TB disks and building basically a second NAS in order to transfer all the data to the new pool.
do..do you not have backups?

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I don't have a full backup of my main RAID either. It's around $1000 of disks with the contents being mostly downloaded data. To keep it all backed up, I'd have to buy another $1000 of disks and build a second server to put them in. I can't really justify it, versus just doing my best to protect the single array and backing up the specific items which would be actually difficult or impossible to replace.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

Eletriarnation posted:

I don't have a full backup of my main RAID either. It's around $1000 of disks with the contents being mostly downloaded data. To keep it all backed up, I'd have to buy another $1000 of disks and build a second server to put them in. I can't really justify it, versus just doing my best to protect the single array and backing up the specific items which would be actually difficult or impossible to replace.

Can you invest in a single HDD that can fit your most critical data and keep it offsite?

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yes, that's what I meant by the last part about "backing up the specific items which would be actually difficult or impossible to replace."

Wibla
Feb 16, 2011

withoutclass posted:

Anecdotal but I've been running shucked Easy Store drives for probably 1.5-2 years now without any issues.

Same, both 8TB and 14TB.

Computer viking
May 30, 2011
Now with less breakage.

On several related notes, I had one of those days where everything failed at once.

First, a disk failed in our 9 year old fileserver. It did, of course, go in the most annoying possible way, where it hung when you tried to do IO, so just importing the zpool to see what was going on was super tedious. I ended up doing some xargs/smartctl/grep shenanigans to find the deadest-looking disk and pulled that, which immediately made things more pleasant. For good and bad, I configured this pool during the height of the "raid5 is dead" panic, so it's a raid10 style layout - which did at least make it trivial to get it back to a normal state; just zpool attach the new disk to the remaining half of the mirror. I'll try to remember how you remove unavailable disks later. Nevermind that I have run out of disks and had to pull the (new, blank) bulk storage drive from my workstation as an emergency spare.
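
For the curious, the shenanigans were roughly along these lines - device names and pool name are made up from memory, so treat it as a sketch rather than a transcript:

code:
# survey SMART health on every disk to spot the one that's actually dying
for d in /dev/da?; do
    echo "== $d"
    smartctl -H -A "$d" | grep -Ei 'overall-health|reallocated|pending|uncorrect'
done

# with the dead disk pulled and a replacement in place, attach it to the
# surviving half of that mirror vdev
zpool status tank
zpool attach tank da3 da7    # da3 = surviving mirror member, da7 = new disk
The nice part about the mirror layout is that attach really is the whole job; the resilver kicks off on its own.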

Of course, the event that apparently pushed the disk over the edge was doing a full backup to tape, as opposed to the incrementals I've been doing since last January. It's 100TB of low-churn data, but I'm still not sure how smart that schedule is. Also, I do not really look forward to trying to remember how job management works in bacula; it's been a couple of years.

This file server does two things: It's exported with samba to our department, and with NFS over a dedicated (50 cm long) 10gbit link to a calculation server we use. Since the file server was busy and it's a quiet week, I thought I'd do a version upgrade on the calculation server, too.

FreeBSD upgrades from source are trivial, so that part went fine. However, it did not boot afterwards; the EFI firmware just went straight to trying PXE. Looking into it, the EFI partition was apparently 800 kB, which somehow has worked up to today? Shrinking the swap partition and creating a roomy 64 MB one, then copying over the files from the USB stick's EFI partition worked.
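
For future me, and anyone else who hits the tiny-ESP problem: the fix was more or less the below. Partition indexes, sizes and device names are made up here, so check gpart show before copying anything:

code:
gpart show ada0                       # find the swap partition and its index
swapoff /dev/ada0p3                   # stop using the swap we're about to shrink
gpart resize -i 3 -s 14G ada0         # shrink swap (index 3 here) to free some room
gpart add -t efi -s 64M ada0          # create a roomy new ESP in the freed space
newfs_msdos -F 32 /dev/ada0p4         # assuming the new ESP landed at index 4
mount_msdosfs /dev/ada0p4 /mnt
mkdir -p /mnt/efi/boot
cp /boot/loader.efi /mnt/efi/boot/bootx64.efi   # or copy the memstick's ESP contents
umount /mnt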

Which revealed the next problem: Both the disks in the boot mirror have apparently died to the point where there's a torrent of "retry failed" messages drowning out the console, despite everything seeming fine during the upgrade. I don't think a modest FreeBSD upgrade (13.1 to 13.2, I think) would massively break support for a ten year old Intel SATA controller, but ... idk, I turned it off and left.


And yes, we run a modern and streamlined operation that's definitely not me fixing things with (sometimes literal) duck tape and baling wire while also trying to do a different job.

e: Not mentioned is how the file server is a moody old HPE ProLiant that takes forever to boot and turns all fans to fire alarm style max if you hotplug/hot-pull drives without the cryptographically signed HPE caddies.

Computer viking fucked around with this message at 19:39 on Jan 11, 2024

BlankSystemDaemon
Mar 13, 2009



withoutclass posted:

Anyone have ideas for, or tried out, an ARM-based NAS/motherboard combo? I've seen some DIY blogs of people slapping SATA and PCI Express cards into small ARM boards, but I was curious if anyone in the thread has tried it out.

I did find an N100 board with 6 SATA ports that's pretty tempting for about $200 too.
The idea is as sound as it's ever been, except for the fact that nobody is making any motherboards because there isn't really a spec for anything other than server boards.
Most of this is down to ARM not really specifying enough of a platform, as they stick to the Server Base Boot Requirements and Server Base Manageability Requirements, which only define the boot process and OOB BMC stuff - which is good as far as it goes, but leaves a lot to be desired.

You're definitely not the only one who wants to see it though - low-power ARM boards would be loving awesome.

Computer viking posted:

FreeBSD upgrades from source are trivial, so that part went fine. However, it did not boot afterwards; the EFI firmware just went straight to trying PXE. Looking into it, the EFI partition was apparently 800 kB, which somehow has worked up to today? Shrinking the swap partition and creating a roomy 64 MB one, then copying over the files from the USB stick's EFI partition worked.
Sounds like a real lovely day, friend - sorry you had to go through that :(

I'm amazed you managed to get a copy of FreeBSD that had boot1.efi as the default, instead of loader.efi - was this manually configured, or did it come that way?

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

Sounds like a real lovely day, friend - sorry you had to go through that :(

I'm amazed you managed to get a copy of FreeBSD that had boot1.efi as the default, instead of loader.efi - was this manually configured, or did it come that way?

Given the age of the machine, it was probably installed as 10.0 or 10.1 and continuously upgraded. The boot disks are a gmirror setup, so I suspect I may have done something manual instead of going with whatever the sysinstall defaults were at the time? I really can't remember, it's been a while and it has just quietly worked through upgrades without needing to think about the details before now.
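
If it was manual, it would have been something close to the old handbook recipe - sketching from memory here, device names hypothetical:

code:
# build the mirror from the first disk, make sure the module loads at boot,
# then add the second disk and let it sync
gmirror label -v -b round-robin gm0 /dev/ada0
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
gmirror insert gm0 /dev/ada1
gmirror status    # should show gm0 as COMPLETE once the sync finishes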

FuturePastNow
May 19, 2014


BlankSystemDaemon posted:

Shucking still happens, yeah - but you gotta remember that shucked drives exist because the USB DAS disks that people are after are disks that are discarded for QA reasons when binning enterprise drives.

You may get lucky and get ones that fail the QA binning to become enterprise drives because of a minor flaw, but you can also get ones that'll constantly fail or misbehave in unpredictable ways, making the experience of using them an absolute nightmare and the rootcausing of the issue without something like dtrace even worse.

I've never heard that before. Are there any good articles or videos about companies selling enterprise drives that fail QA in USB enclosures?

BlankSystemDaemon
Mar 13, 2009



I forgot to reply to this last time, so here goes:

withoutclass posted:

Anecdotal but I've been running shucked Easy Store drives for probably 1.5-2 years now without any issues.
With the number of USB enclosures containing enterprise drives that failed QA, buying them regularly and not running into a bad one is basically just gambling - and as with any other form of odds game, you can absolutely be one of the folks who come out ahead.. Right up until you stop being ahead.

Also, how do you know you don't have issues?

Computer viking posted:

Given the age of the machine, it was probably installed as 10.0 or 10.1 and continuously upgraded. The boot disks are a gmirror setup, so I suspect I may have done something manual instead of going with whatever the sysinstall defaults were at the time? I really can't remember, it's been a while and it has just quietly worked through upgrades without needing to think about the details before now.
Yeah, gmirror is definitely not the default for bsdinstall (sysinstall went away a long time ago, but they look very similar and both use dialog, though nowadays it's bsddialog).

As boot1.efi itself says, it's been deprecated since before 13.0-RELEASE, so it was only really a matter of time.

FuturePastNow posted:

I've never heard that before. Are there any good articles or videos about companies selling enterprise drives that fail QA in USB enclosures?
Not sure what a video about it would contain; the people who use USB Bulk-Only Transfer and are happy with it for storage aren't the type of people to care about data availability or reliability.

It's not an open secret or anything, it's just that most people never really question where the drives in the enclosures come from.

BlankSystemDaemon fucked around with this message at 22:00 on Jan 11, 2024

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

BlankSystemDaemon posted:

I forgot to reply to this last time, so here goes:

With the number of USB enclosures containing enterprise drives that failed QA, buying them regularly and not running into a bad one is basically just gambling - and as with any other form of odds game, you can absolutely be one of the folks who come out ahead.. Right up until you stop being ahead.

Also, how do you know you don't have issues?

Counterpoint - putting your data on disks at all is gambling, which is why we have RAID and backups. I knew that shucked drives might have a higher failure rate, but for the price I saved on six 10TBs in my previous array I could have easily bought two cold spares as well. I never had a single failure in 4 1/2 years, so pretty clearly I came out ahead.

Most of us in this thread (with self builds, at least) seem to be running ZFS and if you're doing regular scrubs without seeing errors I think you can be pretty confident.
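
For anyone newer to ZFS, the check I mean is roughly this (pool name made up):

code:
zpool scrub tank         # read and verify every block in the pool
zpool status -v tank     # afterwards, look for "scrub repaired 0B ... with 0 errors"
                         # and all-zero READ/WRITE/CKSUM columns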

Honestly, unless there's some data out there showing that shucked drives fail at a substantially higher rate than normal NAS models then this feels like FUD to me.

Eletriarnation fucked around with this message at 22:16 on Jan 11, 2024

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

BlankSystemDaemon posted:

I forgot to reply to this last time, so here goes:

With the number of USB enclosures containing enterprise drives that failed QA, buying them regularly and not running into a bad one is basically just gambling - and as with any other form of odds game, you can absolutely be one of the folks who come out ahead.. Right up until you stop being ahead.

Also, how do you know you don't have issues?



Fair enough! I don't know for certain, although I'm not getting alerted and my NAS functions as expected. I'm just a home user and I don't store anything critical on the NAS, so should I somehow manage to get myself into a data loss scenario, it will just be one of annoyance rather than catastrophe.

Corb3t
Jun 7, 2003

Reddit's selfhosted sub is full of people using shucked EasyStores for their servers, but by all means, spend more on enterprise drives if you want that peace of mind. I would hope most people follow the 3-2-1 backup method and keep any essential data offsite, as well.

Fancy_Lad
May 15, 2003
Would you like to buy a monkey?

Scruff McGruff posted:

Given the cost in the US of any case that comes with hot-swap bays, it might be worth considering the piecemeal approach. Try to find an old case on Marketplace with a lot of 5.25" bays for like :20bux: and then get some hot swap adapters (you may need to swap in some Noctua fans on them for quiet).

Just be aware that anything with a backplane like this is adding another potential failure point. I had 3 similar Icy Dock 5-bay units that trucked along just fine for many years with just one fan replacement needed. Then all of a sudden, over the course of 5 days about 6 months ago, I had 3 disks drop out of the array, all in one of the units. One drive was killed outright, one is operational with bad sectors now, and the 3rd just dropped out of the array but isn't showing any warnings in SMART and appears to be working correctly after stress testing it for a week (and the last 5 months of service since I reused it).

I'm suspecting something power related and while I'm not 100% positive, I am fairly confident in saying that it was the Icy Dock dying on me that caused it and not just stupidly bad luck.

Eletriarnation posted:

Honestly, unless there's some data out there showing that shucked drives fail at a substantially higher rate than normal NAS models then this feels like FUD to me.

I'd also love to see this data because last time I went poking it sure seemed like it was r/datahoarder's version of an old wives' tale.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

Eletriarnation posted:

Yes, that's what I meant by the last part about "backing up the specific items which would be actually difficult or impossible to replace."

Sure, that counts as a backup. The nice thing is you can expand as you go along. My really critical data is well under 1TB, the majority being family photos. The main thing I need to remember is to export a comprehensive listing of my current media files so they can be replaced if needed. I would dread the hours involved in re-downloading, though.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Wibla posted:

Same, both 8TB and 14TB.

Hey hey, all my 8tb and 14tb disks are easystore too.

Well, were easystore, 2 of the 8tb died.

BlankSystemDaemon
Mar 13, 2009



Eletriarnation posted:

Counterpoint - putting your data on disks at all is gambling, which is why we have RAID and backups. I knew that shucked drives might have a higher failure rate, but for the price I saved on six 10TBs in my previous array I could have easily bought two cold spares as well. I never had a single failure in 4 1/2 years, so pretty clearly I came out ahead.

Most of us in this thread (with self builds, at least) seem to be running ZFS and if you're doing regular scrubs without seeing errors I think you can be pretty confident.

Honestly, unless there's some data out there showing that shucked drives fail at a substantially higher rate than normal NAS models then this feels like FUD to me.
It's not even that they may fail easier, it's the ways they fail.
The headache of dealing with all the bizarre failure modes of drives that fail enterprise QA means I'm not interested in it - because even if they work fine, if they start exhibiting trouble, it might be the sort of trouble that can be hard to rootcause without an extensive amount of time and effort.
Clearly we have different tolerances for pain, and I'm obviously not saying to only rely on shucked drives - but we'll just have to agree to disagree.

Also, there's plenty of people running unraid and all sorts of not-ZFS, so I think that's maybe a bit of a tall assumption, even if I agree it's the only way to know whether the data is good.

Wibla
Feb 16, 2011

BlankSystemDaemon posted:

The headache of dealing with all the bizarre failure modes of drives that fail enterprise QA means I'm not interested in it - because even if they work fine, if they start exhibiting trouble, it might be the sort of trouble that can be hard to rootcause without an extensive amount of time and effort.

Is this something you've actually seen? Because this smells a lot like a strawman :v:

I feel pretty comfortable running my shucked 14TB drives (bought from a Chia farmer, no less) in RAIDZ2, but I also follow the 3-2-1 backup strategy.

BlankSystemDaemon
Mar 13, 2009



Wibla posted:

Is this something you've actually seen? Because this smells a lot like a strawman :v:

I feel pretty comfortable running my shucked 14TB drives (bought from a Chia farmer, no less) in RAIDZ2, but I also follow the 3-2-1 backup strategy.
Did you look at the video of Brendan Gregg shouting at disks in the datacenter?
That's what minor vibrations cause. Now imagine there's a disk that's causing some issue, but you can't examine it because the bug is in the firmware, and even finding it is gonna take many hours of collecting stats which then have to be analyzed.

I've seen things you people wouldn't believe... Storage servers with thousands of disks causing the Halon siren to go off... I watched dtrace IO histograms glitter in the dark on console screens in production. All those moments will be lost in time, like tears in rain... Time to back up.
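
For reference, the histograms in question come out of the io provider - something like this classic one-liner, which quantizes block-IO latency into a power-of-two histogram:

code:
dtrace -n '
io:::start { start[arg0] = timestamp; }
io:::done /start[arg0]/ {
    @["block I/O latency (ns)"] = quantize(timestamp - start[arg0]);
    start[arg0] = 0;
}'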

BlankSystemDaemon fucked around with this message at 00:31 on Jan 12, 2024

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

Yeah, gmirror is definitely not the default for bsdinstall (sysinstall went away a long time ago, but they look very similar and both use dialog, though nowadays it's bsddialog).
As boot1.efi itself says, it's been deprecated since before 13.0-RELEASE, so it was only really a matter of time.

Oh yeah, I forgot they changed over at some point. Specifically for 9.0 in 2011, apparently.

Wibla
Feb 16, 2011

BlankSystemDaemon posted:

Did you look at the video of Brendan Gregg shouting at disks in the datacenter?
That's what minor vibrations cause. Now imagine there's a disk that's causing some issue, but you can't examine it because the bug is in the firmware, and even finding it is gonna take many hours of collecting stats which then have to be analyzed.

I've seen things you people wouldn't believe... Storage servers with thousands of disks causing the Halon siren to go off... I watched dtrace IO histograms glitter in the dark on console screens in production. All those moments will be lost in time, like tears in rain... Time to back up.

I'd like to hear about actual things that you've experienced in this regard, though, not a reference to a video that is at best tenuously related to the matter at hand.

Nice blade runner quote adaptation though :sun:

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

VostokProgram posted:

Perhaps someone can recommend a case for me? I've done a lot of googling but cannot seem to find anything that meets all these requirements:
1. supports at least micro-ATX motherboards
2. has 8 3.5" hot swap bays
3. can use a normal ATX power supply
4. can keep the 8 hard drives at safe temperature
5. quiet
6. not a heavy rackmount box

The closest thing I have found is the Silverstone CS381, but I see lots of people on the internet saying their drives run hot in that thing. They also make a CS382 and it has fans directly on the drive cage, but they are half blocked by its SAS backplane. I think the latter is new enough that I can't find any info about how hot it runs, maybe half-blocked fans are OK?

I was on a quest for this as well, you can see my posts about the CS381 in this thread. I gave up on it though, the drives were running too hot for my tastes. Ended up getting a Node 804 - silent and drive temps are great. Just had to give up hot-swap - do you really need it?

I want to stick my NAS in a rack now, but I'm holding out for the Sliger CX3750. They teased me with a drawing a year ago in an email; still no word on a release date though!

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

BlankSystemDaemon posted:

It's not even that they may fail easier, it's the ways they fail.
The headache of dealing with all the bizarre failure modes of drives that fail enterprise QA means I'm not interested in it - because even if they work fine, if they start exhibiting trouble, it might be the sort of trouble that can be hard to rootcause without an extensive amount of time and effort.
Clearly we have different tolerances for pain, and I'm obviously not saying to only rely on shucked drives - but we'll just have to agree to disagree.

Also, there's plenty of people running unraid and all sorts of not-ZFS, so I think that's maybe a bit of a tall assumption, even if I agree it's the only way to know whether the data is good.

The reason I'm calling this FUD is that you're not pointing at any specific known problem, you're just stating (without citation - do you know this somehow, or is it an assumption?) that these drives failed some unspecified test and therefore they are suspect - even though they're being sold brand new with a three year warranty and they appear to work perfectly. It's not about pain tolerance, because IME there has been zero pain using shucked drives.

Why do I care if failures are hard to root cause? That's Seagate/WD's problem - as a home user, if a drive dies all I'm going to do is replace it and RMA it if it's in warranty. Unless the failure is so elusive that I can't pin it down to a specific drive, it's not going to slow me down much.

Eletriarnation fucked around with this message at 01:43 on Jan 12, 2024

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


That YouTube video is old enough to start driving. It wasn't even scientific by any stretch - it was just a Sun employee loving around with DTrace realizing that if he screamed directly into the disk shelf, the heads were sensitive enough to the vibration to incur some IO latency. Nowhere did it talk about lasting effects or damage.

If your application is so critical that this presents a meaningful risk factor, your storage budget is already well above "standing in line at Best Buy with a pile of EasyStores" or "bought some used SAS drives off a guy on Reddit".

Yaoi Gagarin
Feb 20, 2014

fletcher posted:

I was on a quest for this as well, you can see my posts about the CS381 in this thread. I gave up on it though, the drives were running too hot for my tastes. Ended up getting a Node 804 - silent and drive temps are great. Just had to give up hot-swap - do you really need it?

I want to stick my NAS in a rack now, but I'm holding out for the Sliger CX3750. They teased me with a drawing a year ago in an email; still no word on a release date though!



funny, I have a node 804 right now and I feel like it's such a pain to work with. drives are mounted upside down hanging from the top so you have to plug in cables from the bottom. I only have two 3.5" drives right now so it's not a huge deal, but even so I haven't been able to put them in adjacent slots because then my power supply cable bunches up. And I've got 3 2.5" drives just floating around too.


Maybe on the weekend I'll tear everything out and see if I can find a neater way to route cables. as is I can't see myself cramming 8 drives in it even though it can theoretically handle that

evil_bunnY
Apr 2, 2003

Eletriarnation posted:

I don't have a full backup of my main RAID either.
Ah. I have a tunnel to a friend's house and we replicate to each other. It's not that many extra disks.
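
If anyone wants the general shape of that: it's basically a cron'd ZFS send over the tunnel, something like the below - dataset, snapshot and host names are invented, and it assumes ZFS on both ends:

code:
# snapshot, then send everything since the last snapshot that reached the other side
zfs snapshot -r tank/data@offsite-2024-01-12
zfs send -R -I tank/data@offsite-2024-01-05 tank/data@offsite-2024-01-12 \
    | ssh backup.friends-house.lan zfs receive -u -d backuppool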

BlankSystemDaemon
Mar 13, 2009



Wibla posted:

I'd like to hear about actual things that you've experienced in this regard, though, not a reference to a video that is at best tenuously related to the matter at hand.

Nice blade runner quote adaptation though :sun:
I wish I could remember, but chemo-brain kinda makes that hard..

One issue that came up was the result of an external system integrator switching the actual storage drives up in the same generation - I believe from HGST to WD?
It took time to rootcause that, simply because it didn't occur to anyone that a vendor would do that - and the difference was between two datacenters (one outfitted later than the other, so I assume there was a contract renegotiation between the vendor and supplier).
A colleague noticed it almost by accident, after I'd been analyzing graphs for days, trying to make sense of the pattern.
It's one of the reasons I started including drive model information in the GEOM labels used when adding devices to a zpool, so that it appears in zpool status.
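Concretely, that looks something like this (model/serial strings invented for the example):

code:
# label each raw disk with its model and serial, then build the vdev from the labels
glabel label HUH721212ALE604-8HJAB123 /dev/da12
glabel label HUH721212ALE604-8HJAB456 /dev/da13
zpool create tank mirror label/HUH721212ALE604-8HJAB123 label/HUH721212ALE604-8HJAB456
zpool status tank    # the labels now show up in place of bare daXX device names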

Eletriarnation posted:

The reason I'm calling this FUD is that you're not pointing at any specific known problem, you're just stating (without citation - do you know this somehow, or is it an assumption?) that these drives failed some unspecified test and therefore they are suspect - even though they're being sold brand new with a three year warranty and they appear to work perfectly. It's not about pain tolerance, because IME there has been zero pain using shucked drives.

Why do I care if failures are hard to root cause? That's Seagate/WD's problem - as a home user, if a drive dies all I'm going to do is replace it and RMA it if it's in warranty. Unless the failure is so elusive that I can't pin it down to a specific drive, it's not going to slow me down much.
Where do you think the drives come from? They're sure as gently caress not making a product line to put them in external enclosures.
The big give-away is that they're (usually) white-label, and that they use 3.3V - which is exclusively found in enterprise drives.
All internal consumer products have to have some silly branding and don't use 3.3V, whereas enterprise drives just have a white label and expect you to be able to read spec sheets.

So while it's not something I epistemologically know - I haven't observed them taking failed-QA drives and putting them in external enclosures - it fits all the available facts, and it's also something I've heard from several other people (including Jim Salter, former Ars writer who has (had?) actual industry contacts).

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

BlankSystemDaemon posted:

The big give-away is that they're (usually) white-label, and that they use 3.3V - which is exclusively found in enterprise drives.

Kind of true. IIRC the 3.3V line is used to put the drive in standby (power disable) when voltage is applied, so you need to make sure the 3.3V line doesn't reach such a drive.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I'm not saying that they're coming from a separate product line. I'm saying that you don't know if a given drive failed any tests, and you don't know what tests it failed. You just know that it has a white label. Maybe it's just surplus, or maybe the test failures are something inconsequential for home use.

Again, these drives have 3 year manufacturer warranties (not something you generally put on a drive you expect to fail!) and they seem to work for lots of folks in home servers. Am I recommending you load up a 45-drive shelf, or a whole rack with them? No, because (among other reasons) shucking that many drives and keeping the shells around in case you need to RMA one would be a pain in the rear end. But you shouldn't make vague insinuations about people taking risks playing an "odds game" when you don't have any actual odds indicating that they're taking a greater risk, especially considering again that all disks have a risk of failure and you should always be ready to go get a replacement if one starts acting up.

Eletriarnation fucked around with this message at 16:16 on Jan 12, 2024

Sub Rosa
Jun 9, 2010




Saying those drives fail a test is stating a very questionable assumption as a fact. They don't market external drives because they have a massive amount of failing drives to pawn off on people; they market external drives because doing so is profitable, and it doesn't take using faulty drives to make them profitable.

Yes, binning and market segmentation are a thing, but that is to maximize profit, not because lower-priced SKUs are faulty equipment.

BlankSystemDaemon
Mar 13, 2009



Eletriarnation posted:

I'm not saying that they're coming from a separate product line. I'm saying that you don't know if a given drive failed any tests, and you don't know what tests it failed. You just know that it has a white label. Maybe it's just surplus, or maybe the test failures are something inconsequential for home use.

Again, these drives have 3 year manufacturer warranties (not something you generally put on a drive you expect to fail!) and they seem to work for lots of folks in home servers. Am I recommending you load up a 45-drive shelf, or a whole rack with them? No, because (among other reasons) shucking that many drives and keeping the shells around in case you need to RMA one would be a pain in the rear end. But you shouldn't make vague insinuations about people taking risks playing an "odds game" when you don't have any actual odds indicating that they're taking a greater risk, especially considering again that all disks have a risk of failure and you should always be ready to go get a replacement if one starts acting up.
If they didn't fail the tests, they'd be sold as enterprise drives.

The hyperscalers alone are buying many times what the entire consumer market does, and if they could buy more, they would.

Sub Rosa posted:

Saying those drives fail a test is stating a very questionable assumption as a fact. They don't market external drives because they have a massive amount of failing drives to pawn off on people; they market external drives because doing so is profitable, and it doesn't take using faulty drives to make them profitable.

Yes, binning and market segmentation are a thing, but that is to maximize profit, not because lower-priced SKUs are faulty equipment.
If they could sell them as enterprise drives, they would. They're selling them as external consumer drives because otherwise they'd have to recycle them.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Neither of us know why these drives are being sold as consumer drives instead of enterprise drives. The difference is I'm admitting it, and you're saying "well, I can't think of any reason other than them being inferior so that must be it."

Again, even if they failed tests - which tests, and why should we care considering that empirically they seem to work fine and the manufacturer is willing to stand behind them?

e: Even if WD is able to sell 100% of their capacity as enterprise drives, for more money, they might see value in retaining a presence in the consumer market. You are oversimplifying the situation to reach a predetermined conclusion.

Eletriarnation fucked around with this message at 16:57 on Jan 12, 2024

Wibla
Feb 16, 2011

Someone spreading FUD in the NAS thread? Say it isn't so!

Corb3t
Jun 7, 2003

You’re assuming a lot of things. Maybe Western Digital doesn’t want to spin up a new manufacturing line for different external drives, and it’s actually cheaper to just manufacture and use enterprise drives in external enclosures as well.

Maybe they miscalculated the demand for enterprise drives during COVID, and they have extra enterprise drives to use up.

You don’t know. Stop pretending that you do.

Internet Explorer
Jun 1, 2005





Even if it was true, my home NAS doesn't have enterprise needs. I don't need to pay extra for enterprise quality drives.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
One of the great mysteries of our age is why nobody makes drives with less than 7200rpm speed anymore.

We had 5400-5900 rpm NAS disks for a while, then we had WD lie to our faces and claim that a particularly bad 7200rpm drive was actually 5400rpm, even over SMART, and we haven't seen a slower disk since.

What happened? Isn't it more power efficient to spin slower? Modern hard disks are more speed than I need for a NAS workload, and if you need speed you aren't buying hard disk anyway. Why can't I save some watts?

Edit: and the real takeaway from this is just that if you're picky about getting reasonable disks, just don't buy WD, ever. Between SMR disks that are poorly suited to any kind of RAID and this weird mislabeling, they just clearly are out to sneak worse parts into their product lines whenever they can. That's fine, Toshiba and Seagate exist.

Twerk from Home fucked around with this message at 17:06 on Jan 12, 2024

Kibner
Oct 21, 2008

Acguy Supremacy

Twerk from Home posted:

One of the great mysteries of our age is why nobody makes drives with less than 7200rpm speed anymore.

We had 5400-5900 rpm NAS disks for a while, then we had WD lie to our faces and claim that a particularly bad 7200rpm drive was actually 5400rpm, even over SMART, and we haven't seen a slower disk since.

What happened? Isn't it more power efficient to spin slower? Modern hard disks are more speed than I need for a NAS workload, and if you need speed you aren't buying hard disk anyway. Why can't I save some watts?

I assume that's kinda what this dropdown in TrueNAS Scale is for? It's an option that is only available when I select a specific disk so maybe the options are only relevant to what features the disk supports? I really don't know.

IOwnCalculus
Apr 2, 2003





Twerk from Home posted:

One of the great mysteries of our age is why nobody makes drives with less than 7200rpm speed anymore.

RPM isn't the only factor in power draw; platter count seems to matter as much if not more. Check out the datasheet for the WD Red Plus: https://documents.westerndigital.co...ed-plus-hdd.pdf

There are two 8TB parts, WD80EFZZ and WD80EFPX. Both are 5640RPM, but the EFPX has a higher internal transfer rate and a lower product weight, which sounds like the EFPX has at least one fewer platter than the EFZZ. Average power draw for the EFPX is 5.2W vs 6.2W for the EFZZ.

At any rate, the largest difference in wattage in this whole product line is only 4.4W between the 10TB and the 2TB, and unless you need less than 2TB of space, the number of extra 2TB drives you need means your extra spindles are drawing way more power than the wattage saved per spindle. If we stick with comparing similar capacities, 10TB vs 8TB is costing you between 2-3 watts per spindle, which for even the most degenerate among us amounts to 100W of extra power (and for those of us at 30+ spindles, 100W isn't the real problem here :v: ). If you go from 8TB to 12TB, your difference is either 0.1 or 1.1W per drive.

quote:

Edit: and the real takeaway from this is just that if you're picky about getting reasonable disks, just don't buy WD, ever. Between SMR disks that are poorly suited to any kind of RAID and this weird mislabeling, they just clearly are out to sneak worse parts into their product lines whenever they can. That's fine, Toshiba and Seagate exist.

I don't disagree that WD submarining CMR is a poo poo move and the whole hard drive industry sucks, but Toshiba's RMA process is well behind WD and Seagate - last I looked into it, no advance RMA process available, and more annoying to work through in general.
