BlankSystemDaemon
Mar 13, 2009



I can't remember if I've posted this before, but I'm just gonna do it again if so:
The READ column is the number of failed READ commands issued by ZFS to the kernel's disk driver subsystem.
The WRITE column is the same, but for WRITE commands.
The CKSUM column is the number of times a READ command returned something other than what ZFS knows should be there, based on the checksum.
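For reference, a sketch of where those columns show up in `zpool status` (pool name, device names, and counts here are all made up for illustration):

```shell
# Illustrative `zpool status`-style output; READ/WRITE/CKSUM are the
# three columns described above. Values are hypothetical.
sample=$(cat <<'EOF'
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       2     0     0
            ada1    ONLINE       0     0     1
EOF
)
printf '%s\n' "$sample"
```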

BlankSystemDaemon fucked around with this message at 00:42 on May 8, 2024


Zorak of Michigan
Jun 10, 2006


PitViper posted:

Read errors, nothing in the cksum column. Surprisingly, swapping cables on the two disks in question seems to have "resolved" the issue, in that I'm 60% of the way through replacing one of those two disks with not a single error reported. Still replacing them regardless.

And I'll probably just restore the affected video files from "backup", and a good portion of the other errors are in things like subtitle files that would get restored at the same time. Might just cost me a month or two of paying Comcast for unlimited data, or letting the queue work through itself over a couple months.

Honestly, I'd do another scrub once all the resilvering is done and see what you can see.

SEKCobra
Feb 28, 2011

Hi
:saddowns: Don't look at my site :saddowns:
Got myself another Synology NAS, gonna use it for btrfs snapshots and move the old one offsite with a wireguard tunnel and use it as a restic target. Anyone else done this before?

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


I am not a pro computer person and have a weird thing that's probably got a completely obvious answer I am just missing, help appreciated.


I've got to download a bunch of large (1-10GB) *.gz files amounting to around 300-500GB. They're only accessible through a web browser interface, which lets me download the entire folder or the files individually. I just need the entire folder.

However, the website itself seems to be throttling the downloads to about 1-10Mb/s, meaning it's going to take several hours or more. Ultimately I want these files on my home NAS, which is a headless Unraid server.

The ideal solution would be to just fire up FTP from the NAS, but I can't given the way it's accessible (only through the website).

tl;dr - Does anyone know of a way to download data from a webpage interface directly from Unraid?

deong
Jun 13, 2001

I'll see you in heck!

That Works posted:

tl;dr - Does anyone know of a way to download data from a webpage interface directly from Unraid?

Kinda hard to tell from your description.
There are FTP web interfaces; if that's the case, you'd be able to use FileZilla to connect.

Can you connect to the web page in multiple tabs and download? If so, does it double the bandwidth? It's possible the website isn't hosted on a fast upload connection.

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


deong posted:

Kinda hard to tell from your description.
There are ftp web interfaces, if that's the case you'd be able to use filezilla to connect.

Can you connect to the web page in multiple tabs and download? If so, does it double the bandwidth? Its possible the website isn't hosted on a fast upload connection.

Nothing that I can connect with via FTP.

Multiple tabs and download I'll try but I think that's still gonna be a lot of tedious work if I can't just get it directly to the NAS. It's a good backup solution if nothing else though, thank you.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.
I should think that, assuming you have the URLs for the files, you could run a wget or curl script from the Unraid console, which would let you specify the output directory directly on the NAS.
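A minimal sketch of that, assuming the file URLs have been collected into a `urls.txt` (the destination path is a placeholder; on Unraid the array usually lives under `/mnt/user`):

```shell
# Sketch: collect the file URLs from the browser into urls.txt (one per
# line), then pull them straight onto the array. --continue resumes
# partial downloads if the connection drops.
DEST=/tmp/downloads          # e.g. /mnt/user/downloads on Unraid
mkdir -p "$DEST"
if [ -f urls.txt ]; then
    wget --continue --input-file=urls.txt --directory-prefix="$DEST"
fi
```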

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


Scruff McGruff posted:

I should think that, assuming you have the URLs for the files, you could run a wget or curl script from the Unraid console which would allow you to specify the Output directory directly on the NAS.

Of course...

This is the simple thing I was not thinking. I gotta read up on my wget syntax but I am sure I have done this in the past.

e: will wget let you authenticate etc for an https / pw protected site?

Literally have messed with it just a few times ever.

FAT32 SHAMER
Aug 16, 2012



I’m planning on moving my docker containers off of my Synology and onto a dedicated server. I’m assuming I can just mount shared folders as needed from the server and update the docker compose file to the new path, right? Or are there shenanigans of some kind involved?

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
You could try using the browser dev tools' Network tab to get a curl command that I think might include the required cookies.

Open dev tools > Network and download a file. Find the file in the Network list and right-click it. Somewhere in the context menu will be the option to copy the URL as a curl command (Windows or bash/POSIX). Depending on exactly how the source website has been written, you can probably generalise the command into a script for all the files.
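As a sketch of what generalising it might look like — the cookie name/value and URLs are placeholders for whatever dev tools actually gives you:

```shell
# Sketch: a dev-tools "copy as cURL" boils down to a URL plus a Cookie
# header; loop it over the filenames. Swap echo for the bare command
# once the real values are in.
BASE="https://example.com/files"
COOKIE="session=abc123"           # copied from the dev-tools request
for f in part01.gz part02.gz; do
    cmd="curl -L -H 'Cookie: $COOKIE' -o $f $BASE/$f"
    echo "$cmd"
done
```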

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

That Works posted:

Of course...

This is the simple thing I was not thinking. I gotta read up on my wget syntax but I am sure I have done this in the past.

e: will wget let you authenticate etc for an https / pw protected site?

Literally have messed with it just a few times ever.

I can't speak for every website but I know you can use --user to set a username in the header and --ask-password to get the command to prompt you to enter a password so that you're not storing it in plaintext.

unknown
Nov 16, 2002
Ain't got no stinking title yet!


That Works posted:

Of course...

This is the simple thing I was not thinking. I gotta read up on my wget syntax but I am sure I have done this in the past.

e: will wget let you authenticate etc for an https / pw protected site?

Literally have messed with it just a few times ever.

The username/password command line is for sites using the old-school HTTP authentication method. If the site has its own web login script (i.e. the majority of sites), it generally just sets a cookie (or two) like SA does (e.g. bbsession), which you'll need to copy from a logged-in session into a text file and load with the --load-cookies=file option. And hope that it's IP agnostic.
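Sketched out (the URL is a placeholder; the cookie file needs to be in the Netscape format wget expects, e.g. exported via a "cookies.txt" browser extension):

```shell
# Sketch: export session cookies from the logged-in browser into
# cookies.txt (Netscape format), then point wget at it.
CMD="wget --load-cookies=cookies.txt https://example.com/files/archive.gz"
echo "$CMD"     # run it directly once cookies.txt is in place
```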

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


Scruff McGruff posted:

I can't speak for every website but I know you can use --user to set a username in the header and --ask-password to get the command to prompt you to enter a password so that you're not storing it in plaintext.

Stack overflow led me to this also, gonna try it first and if unsuccessful try the post above.

Thanks all.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Harik posted:

My old NAS is getting fairly long in the tooth, being cobbled together from a recycled netgear readyNAS motherboard (built circa 2011, picked it up in 2018) and a pair of old xeon x3450s.

It worked out pretty well, but the ancient Xeons' 8GB RAM limit is preventing me from offloading much onto it aside from serving files, so my desktop ends up running all my in-house containers. Not a great setup.

Looking at something like an older Epyc system (https://www.ebay.com/itm/175307460477 / https://www.supermicro.com/en/products/motherboard/H11SSL-i) But I'm curious if anyone else has run across other recycled gear that's a good fit for a NAS + VM host.

Also, has anyone used PCIe U.2 adapters, such as https://www.amazon.com/StarTech-com-U-2-PCIe-Adapter-PEX4SFF8639/dp/B072JK2XLC ? I've had good luck with PCIe-NVMe adapters so I'm hoping it's a similar thing where it just brings out the signal lines and lets the drive do whatever.

took nearly a year because my dog got sick and wiped out my toy fund (he's fine now, good pupper)



I don't remember if anyone answered the question about PCIe -> U.2 adapters? I want to throw in a used enterprise U.2 for my torrent landing directory (nocow, not mirrored because i'm literally in the process of downloading linux isos and can just restart if the drive dies) and dunno, a mirrored pair of small optanes for service databases and other fast, write-heavy stuff. Maybe zfs metadata?

I've got 2 weeks before the last of the main hardware arrives and I can finally do this upgrade.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Those adapters exist; StarTech is a fairly reliable option in that space.

Kind of expensive for what they are (pretty much just a PCB with traces on it), and there are dodgy Chinese options too. But if the StarTech says Gen 4 capable, you can be pretty confident it'll work at that rate.

hogofwar
Jun 25, 2011

'We've strayed into a zone with a high magical index,' he said. 'Don't ask me how. Once upon a time a really powerful magic field must have been generated here, and we're feeling the after-effects.'
'Precisely,' said a passing bush.

FAT32 SHAMER posted:

I’m planning on moving my docker containers off of my Synology and onto a dedicated server. I’m assuming I can just mount shared folders as needed from the server and update the docker compose file to the new path, right? Or are there shenanigans of some kind involved?

Some containers don't like their databases/specific files on a network drive, if you are thinking of doing that
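For what it's worth, one way to split it is keeping the container config local and mounting only the media over NFS as a named volume — a sketch with made-up addresses and paths:

```shell
# Sketch: write a compose fragment declaring an NFS-backed named volume;
# container config stays on the server, media stays on the NAS.
# The NAS address and export path below are hypothetical.
cat > /tmp/compose-nfs.yml <<'EOF'
volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.50,ro,nfsvers=4
      device: ":/volume1/media"
EOF
```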

Computer viking
May 30, 2011
Now with less breakage.

Harik posted:

took nearly a year because my dog got sick and wiped out my toy fund (he's fine now, good pupper)



I don't remember if anyone answered the question about PCIe -> U.2 adapters? I want to throw in a used enterprise U.2 for my torrent landing directory (nocow, not mirrored because i'm literally in the process of downloading linux isos and can just restart if the drive dies) and dunno, a mirrored pair of small optanes for service databases and other fast, write-heavy stuff. Maybe zfs metadata?

I've got 2 weeks before the last of the main hardware arrives and I can finally do this upgrade.

I've used Startech U.3 to PCIe adapters at work, and they seem to be fine; I guess U.2 would be very similar.

fridge corn
Apr 2, 2003

NO MERCY, ONLY PAIN :black101:
Hello. I have a question. My dad has a NAS server set up for his music collection and is having difficulty playing music from it. Previously he has been using Sonos, but he has run into problems with Sonos having a hard track limit (something like 64,000 songs, which is not nearly enough for his entire collection) and also their app is currently hosed from a recent update. He is wondering if there is a better solution than Sonos for playing music directly off a NAS server. Any insight would be greatly appreciated thanks!!

hogofwar
Jun 25, 2011

'We've strayed into a zone with a high magical index,' he said. 'Don't ask me how. Once upon a time a really powerful magic field must have been generated here, and we're feeling the after-effects.'
'Precisely,' said a passing bush.

fridge corn posted:

Hello. I have a question. My dad has a NAS server setup for his music collection and is having difficulty playing music from it. Previously he has been using Sonos, but he has run into problems with Sonos having a hard track limit (something like 64,000 songs, which is not nearly enough for his entire collection) and also their app is currently hosed from a recent update. He is wondering if there is a better solution to playing music directly off a NAS server than Sonos? Any insight would be greatly appreciated thanks!!

What is he streaming the music to?

If there is a client compatible with what he is using, I've heard Navidrome mentioned frequently.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Computer viking posted:

I've used Startech U.3 to PCIe adapters at work, and they seem to be fine; I guess U.2 would be very similar.
I wasn't even aware there was u.3 and lol it exists because the sas lines weren't shared with pcie on u.2. why do we have to keep dragging sata/sas into every interface? :iiam:

On another note entirely, I'm having trouble understanding the arc_summary

code:
ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    89.7 %   56.4 GiB
        Target size (adaptive):                        89.6 %   56.3 GiB
        Min size (hard limit):                          6.2 %    3.9 GiB
        Max size (high water):                           16:1   62.8 GiB
        Most Frequently Used (MFU) cache size:         37.9 %   19.9 GiB
        Most Recently Used (MRU) cache size:           62.1 %   32.6 GiB
        Metadata cache size (hard limit):              75.0 %   47.1 GiB
        Metadata cache size (current):                 11.2 %    5.3 GiB
        Dnode cache size (hard limit):                 10.0 %    4.7 GiB
        Dnode cache size (current):                     0.1 %    5.2 MiB

ARC hash breakdown:
        Elements max:                                              15.3M
        Elements current:                              82.8 %      12.7M
        Collisions:                                               516.2M
        Chain max:                                                    10
        Chains:                                                     2.9M

ARC misc:
        Deleted:                                                  277.2M
        Mutex misses:                                             101.1k
        Eviction skips:                                           103.8k
        Eviction skips due to L2 writes:                               0
        L2 cached evictions:                                     0 Bytes
        L2 eligible evictions:                                   2.6 TiB
        L2 eligible MFU evictions:                     21.6 %  568.5 GiB
        L2 eligible MRU evictions:                     78.4 %    2.0 TiB
        L2 ineligible evictions:                               138.6 GiB

ARC total accesses (hits + misses):                                 1.9G
        Cache hit ratio:                               87.2 %       1.7G
        Cache miss ratio:                              12.8 %     243.7M
        Actual hit ratio (MFU + MRU hits):             86.7 %       1.6G
        Data demand efficiency:                        85.0 %     969.5M
        Data prefetch efficiency:                      14.1 %     112.9M

Cache hits by cache type:
        Most frequently used (MFU):                    68.5 %       1.1G
        Most recently used (MRU):                      31.0 %     513.2M
        Most frequently used (MFU) ghost:               1.0 %      17.1M
        Most recently used (MRU) ghost:                 1.1 %      18.6M

Cache hits by data type:
        Demand data:                                   49.7 %     823.9M
        Demand prefetch data:                           1.0 %      15.9M
        Demand metadata:                               49.1 %     813.5M
        Demand prefetch metadata:                       0.3 %       4.4M

Cache misses by data type:
        Demand data:                                   59.8 %     145.7M
        Demand prefetch data:                          39.8 %      97.0M
        Demand metadata:                                0.3 %     634.0k
        Demand prefetch metadata:                       0.1 %     332.6k

DMU prefetch efficiency:                                          216.0M
        Hit ratio:                                     33.2 %      71.7M
        Miss ratio:                                    66.8 %     144.3M
I guess some of these units are counts and others are sizes but that's super unclear. Are those 1.9 billion reads, or only 1.9 billion bytes that were cache-eligible? I'm not sure if that's doing great or just doing great on an extremely limited subset of my IO.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
It's KiB, MiB, GiB etc. for data volumes, and K, M, G etc. for counts.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
ok, that's reasonable. is that all reads? I question a fairly busy server with ~10 active VMs only doing 2 billion reads in a month.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
ARC total accesses? I presume so.

As for how much or little it is, you have to consider that these VMs run their own disk caches.

IOwnCalculus
Apr 2, 2003





fridge corn posted:

Hello. I have a question. My dad has a NAS server setup for his music collection and is having difficulty playing music from it. Previously he has been using Sonos, but he has run into problems with Sonos having a hard track limit (something like 64,000 songs, which is not nearly enough for his entire collection) and also their app is currently hosed from a recent update. He is wondering if there is a better solution to playing music directly off a NAS server than Sonos? Any insight would be greatly appreciated thanks!!

Plex with Plexamp but that might be on the overkill side.

fridge corn
Apr 2, 2003

NO MERCY, ONLY PAIN :black101:

hogofwar posted:

What is he streaming the music to?

If there is a client compatible with what he is using, I've heard Navidrome mentioned frequently.

At the moment he is streaming to Sonos speakers with the Sonos app, which is where he's running into the aforementioned issues, but he is not averse to buying new hardware/devices/speakers etc.

I'll have a look at navidrome thanks

fridge corn
Apr 2, 2003

NO MERCY, ONLY PAIN :black101:

IOwnCalculus posted:

Plex with Plexamp but that might be on the overkill side.

Overkill how? I'm not sure my father understands the meaning of overkill when it comes to his music collection :newlol:

Hughlander
May 11, 2005

That Works posted:

I am not a pro computer person and have a weird thing that's probably got a completely obvious answer I am just missing, help appreciated.


I've got to download a bunch of large (1-10gb) *.gz files amounting to around 300-500gb. They are only accessible through a web browser interface though which allows me to download the entire folder or the files individually. I just need the entire folder.

However, the website itself seems to be throttling the downloads to about 1-10Mb/s meaning its going to take several hours or more. Ultimately I want these files on my home NAS which is a headless Unraid server.

The ideal solution would be to just fire up FTP from the NAS but I can't given the way its accessible (only through the website).

tl;dr - Does anyone know of a way to download data from a webpage interface directly from Unraid?

https://www.httrack.com cli version

FAT32 SHAMER
Aug 16, 2012



hogofwar posted:

Some containers don't like their databases/specific files on a network drive, if you are thinking of doing that

Yeah, anything related to the container itself would be on the server, but like for plex I’d like to keep the data on the NAS

BlankSystemDaemon
Mar 13, 2009



Harik posted:

took nearly a year because my dog got sick and wiped out my toy fund (he's fine now, good pupper)



I don't remember if anyone answered the question about PCIe -> U.2 adapters? I want to throw in a used enterprise U.2 for my torrent landing directory (nocow, not mirrored because i'm literally in the process of downloading linux isos and can just restart if the drive dies) and dunno, a mirrored pair of small optanes for service databases and other fast, write-heavy stuff. Maybe zfs metadata?

I've got 2 weeks before the last of the main hardware arrives and I can finally do this upgrade.
For PCIe, M.2 (with the right keying), U.2, and U.3 are all compatible - and unless you need bifurcation, can all be electrically coupled to work with the right adapter.

Also, cute pupper. Please pet him from me :kimchi:

Harik posted:

I wasn't even aware there was u.3 and lol it exists because the sas lines weren't shared with pcie on u.2. why do we have to keep dragging sata/sas into every interface? :iiam:

On another note entirely, I'm having trouble understanding the arc_summary

I guess some of these units are counts and others are sizes but that's super unclear. Are those 1.9 billion reads, or only 1.9 billion bytes that were cache-eligible? I'm not sure if that's doing great or just doing great on an extremely limited subset of my IO.
U.3 is, at least, the SATA+SAS+PCIe interface we've been wanting for a long time.
So of course now's the time for the hyperscalers to move to E1.L or E3.S, or even using NVMe-over-PCIe for spinning rust, because it simplifies the design of the rack servers.
:negative:

For ARC, the only thing that really matters is the hit/miss ratio.
If you're below 90% and haven't maxed your memory, download more RAM. If you've maxed your memory, you can look into L2ARC.
Just remember that L2ARC isn't MFU+MRU like ARC (it's a simple LRU-evict cache), and that every LBA on your L2ARC device will take up 70 bytes of memory that could otherwise be used by the ARC (meaning you can OOM your system if you add one that's too big).

Combat Pretzel posted:

ARC total accesses? I presume so.

Regarding as to how much or little it is, you have to consider that these VMs run their own disk caches.
ARC is better than the virtualized guest OS' caching, though.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

BlankSystemDaemon posted:

For PCIe, M.2 (with the right keying), U.2, and U.3 are all compatible - and unless you need bifurcation, can all be electrically coupled to work with the right adapter.

Also, cute pupper. Please pet him from me :kimchi:

U.3 is, at least, the SATA+SAS+PCIe interface we've been wanting for a long time.
So of course now's the time for the hyperscalers to move to E1.L or E3.S, or even using NVMe-over-PCIe for spinning rust, because it simplifies the design of the rack servers.
:negative:

Pupper pet.

Yes, I get that they can all be mashed together on the same wires, but why tho? These are new drive designs for these new exotic connectors so just make a PCIe interface to spinning rust and call it a day. It's ridiculous to make these hyper-complex interfaces especially when drives are already incorporating flash caches and can benefit from the simplified bulk transfer of data to begin with! They make a profit selling $15 NVMe drives so it's not like the interface is stupidly expensive to implement. You don't even need the latest gen5 PCIe stuff for drives that can't transfer that fast anyway.

in short I hope all these "legacy interests demand their special snowflake chip implementing a protocol from 1979 still be commercially viable" decisions blow up in their faces and the nvme-everything faction wins out. It'd be good for the industry overall to stop subsidizing adaptec.

e: only brand new u.3 drives can be used with u.3 hosts, SAS and SATA and U.2 drives are all physically or electrically incompatible and you need all new designs and we need all these new designs so... adaptec can still sell SCSI chips. in 2024.

BlankSystemDaemon posted:

For ARC, the only thing that really matters is the hit/miss ratio.
If you're below 90% and haven't maxed your memory, download more RAM. If you've maxed your memory, you can look into L2ARC.
Just remember that L2ARC isn't MFU+MRU like ARC (it's a simple LRU-evict cache), and that every LBA on your L2ARC device will take up 70 bytes of memory that could otherwise be used by the ARC (meaning you can OOM your system if you add one that's too big).
the reddit /r/zfs sentiment is that the bot needs to just reply 'No, you don't need L2ARC' to any post that mentions it and they're probably right. The minuscule ghost rate I'm seeing tells me it wouldn't help there at all. I'm pulling nearly 90% hit rate off ARC alone and this whole machine is 18tb of U.2 flash so it's not like there's seek penalties or significant bandwidth limits. On my homelab running spinning rust I'd love to see metadata/tiny files on NVMe but I don't think you can do that without it being a failure point. if they were just *mirrored* to NVMe it'd be great, but I thought that losing your special vdev nuked the whole array. Definitely more research required before I do it. I got burned badly by bcache.

BlankSystemDaemon posted:

ARC is better than the virtualized guest OS' caching, though.
this. I'm looking into the balloon driver and a script monitoring the VMs for free memory and taking it away when more than X is used for cache. No point in double-caching things when I could hand that RAM back to the host for ARC.

Harik fucked around with this message at 13:16 on May 10, 2024

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
tugm4470 was out of the h11ssl-i after I ordered and offered me an upgrade for free to the -c, I asked him to throw in the SAS breakout cables since I need all 16 ports and he agreed.

Let's see how this goes. I may actually have the SAS cables already though, I think they're the same breakout cables as my previous board.

Just need to be ready to flash the controller to IT mode when it arrives, I guess.

E: drat it's here already, he upgraded me to priority shipping for free as well, was originally going to be here in 2 weeks. Beat most of the rest of the parts, I don't have a CPU cooler, PSU or NVMe for it yet.

Harik fucked around with this message at 01:41 on May 14, 2024

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

BlankSystemDaemon posted:

and that every LBA on your L2ARC device will take up 70 bytes of memory that could otherwise be used by the ARC
FFS, you keep claiming this. It's 70 bytes per ZFS data block.

code:
L2ARC size (adaptive):                                         411.1 GiB
        Compressed:                                    93.2 %  383.4 GiB
        Header size:                                    0.1 %  607.6 MiB
I'd say that's a decent trade-off.
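A quick back-of-envelope at 70 bytes per cached block bears that out (the two record sizes below are just example values):

```shell
# ARC header cost per GiB of L2ARC, assuming 70 bytes per cached block.
# Smaller records mean more blocks per GiB, hence more header overhead.
for bs in 16384 131072; do
    blocks=$(( 1024 * 1024 * 1024 / bs ))
    overhead=$(( blocks * 70 ))
    echo "recordsize=$bs: $(( overhead / 1024 )) KiB of ARC per GiB of L2ARC"
done
```

So even at a 16KB volblocksize it's only a few MiB of ARC per GiB of L2ARC, which lines up with the ~600MiB header size for ~400GiB above.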

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
The reason for U.3 is that you could have drive slots you can plug NVMe, SAS, or SATA drives into, since the controller's high-speed signals can switch between them (usually referred to as a "tri-mode PHY").

I forget why u.2 wasn’t like this from the start, annoyingly.

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

FFS, you keep claiming this. It's 70 bytes per ZFS data block.

code:
L2ARC size (adaptive):                                         411.1 GiB
        Compressed:                                    93.2 %  383.4 GiB
        Header size:                                    0.1 %  607.6 MiB
I'd say that's a decent trade-off.
Huh, so it is.
The problem is, records in ZFS are variable size - and there's no real way to get the distribution across an entire pool.

You should still max out your memory before using L2ARC, though.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Unless you go bonkers with L2ARC, or you're severely memory limited, the trade-off may be worth it.

In my case, it's limited to pool metadata and ZVOLs with either 16KB or 64KB volblocksize. I'm giving away 0.61GB of the 52GB of ARC to keep 400GB of data warm on a Gen3 NVMe SSD. Works fine for running games on Steam (judging by the fast loads after clearing the ARC via a reboot).

If you're working with the default ZFS record size of 128KB (or bigger), you might get better ratios of headers vs data. Compression reduces it only so far (which I'm using on these ZVOLs, too).

Combat Pretzel fucked around with this message at 23:42 on May 10, 2024

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I'm hoping that Microsoft (and whoever else "spies") will release data on whether this weekend's extreme coronal mass ejections led to an increased number of system crashes.

kliras
Mar 27, 2021
my 6tb wd red is on its last legs, so i need to replace it

- are there any noise issues with wd vs seagate worth worrying about? i got a seagate as a backup drive, and i had to script it to manually write to a file or the read head would make a clicking sound when it returned
- is anything beyond wd red worth getting nowadays? don't see a lot of blues, and black and gold just sound like "gamer" upsell
- anything interesting coming up, or should i just buy a new 12 or 18tb drive?

typical use cases are just various media and storage that don't get accessed that often

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
WD drives under 8TB use SMR instead of CMR, making them basically useless for any kind of RAID or ZFS

phosdex
Dec 16, 2005

I think they make both and you gotta check carefully which one you're ordering.


Nulldevice
Jun 17, 2006
Toilet Rascal
WD Red Plus and higher are CMR. Standard Red 6TB and below are SMR.
