Chumbawumba4ever97
Dec 31, 2000

by Fluffdaddy

Heners_UK posted:

I've seen a few stories now of people swapping old, smaller drives, for large ones. Call me cheap (lots of people do), but if you have these drives in a parity/raid protected array (and you have backups of truly important stuff that are not raid/parity, which you should, because raid/parity is not backup) then why not simply use the drives until they die? The data is merely a rebuild away at worst.

I'm guessing it's usually due to a lack of physical space


IOwnCalculus
Apr 2, 2003





Lack of physical bays to put them in, and (likely justified) fear of using heavily aged drives in an array where losing multiple drives would result in major data loss.

I don't have a realistic limit on the number of drives now, but I still don't run all the drives I have, because a lot of them are extremely old 3TB units and I don't need the space. I'd rather keep extra spares on hand.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Heners_UK posted:

I've seen a few stories now of people swapping old, smaller drives, for large ones. Call me cheap (lots of people do), but if you have these drives in a parity/raid protected array (and you have backups of truly important stuff that are not raid/parity, which you should, because raid/parity is not backup) then why not simply use the drives until they die? The data is merely a rebuild away at worst.

Like they say, many people don't have the physical space to add more drives.

I've got 24 drives in a case that only actually has 12 bays...I couldn't jam another array in there.

So when I need more storage, I have to increase the size of one of my existing zpools by replacing each of its drives with larger-capacity ones.

(And yes, this is a huge pain in the rear end and a big downside of ZFS for prosumer-level usage)
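For anyone staring down the same swap, here's a minimal sketch of the one-drive-at-a-time dance; "tank" and the disk names are placeholders, not Thermopyle's actual pool:

```
# grow a zpool by swapping every disk in a vdev, one resilver at a time
zpool set autoexpand=on tank
zpool replace tank ata-WDC_OLD_SERIAL ata-WDC_NEW_SERIAL
zpool status tank    # wait for the resilver to finish before the next swap
# repeat for each remaining disk; the extra capacity shows up after the last one
```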

mrking
May 27, 2006

There's No Limit To What We Can't Accomplish



I use unraid. I've done two parity drive upgrades and many array drive upgrades, and each time I've let the array rebuild one drive at a time. So far no problems, and I've replaced 6 old drives with 8 shucked 8TB and 10TB models.
The rebuilds take about a day while the server is running, and I just let it run like normal.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Did unraid ever fix the issue with scheduled parity checks not happening? I don't get why they aren't running; the crontab looks correct. I just click the check-now button every week or so.
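For what it's worth, here's a hedged way to poke at this from the command line; the paths and the mdcmd invocation are from memory, so treat them as assumptions rather than gospel:

```
# look for the dynamix parity-check schedule in the usual spots
grep -ri "parity" /etc/cron.d /boot/config/plugins/dynamix 2>/dev/null
# kick off a manual, non-correcting check if the schedule never fires
/usr/local/sbin/mdcmd check NOCORRECT
```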

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry

Heners_UK posted:

I've seen a few stories now of people swapping old, smaller drives, for large ones. Call me cheap (lots of people do), but if you have these drives in a parity/raid protected array (and you have backups of truly important stuff that are not raid/parity, which you should, because raid/parity is not backup) then why not simply use the drives until they die? The data is merely a rebuild away at worst.

I have to admit that I see a ton of people requesting data recovery because of this mindset. Once drives start giving out SMART errors, I'd start looking to replace them. I've seen more than one bad drive kill an entire parity array by spitting out garbage because the person didn't replace it.
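A quick triage sketch for anyone wondering whether their drives have hit that point, using smartmontools; /dev/sdX is a placeholder:

```
smartctl -H /dev/sdX    # overall pass/fail verdict
smartctl -A /dev/sdX | grep -Ei 'realloc|pending|uncorrect'    # the counters that matter
```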

Too Poetic
Nov 28, 2008

I want to get a NAS to store 4K videos. If I map the drives on my PC, it won't try to use the NAS for the encoding, right?

KOTEX GOD OF BLOOD
Jul 7, 2012

What do you mean by encoding?

Too Poetic
Nov 28, 2008

KOTEX GOD OF BLOOD posted:

What do you mean by encoding?
I meant transcoding

susan b buffering
Nov 14, 2016

Too Poetic posted:

I want to get a NAS to store 4K videos. If I map the drives on my PC, it won't try to use the NAS for the encoding, right?

Correct. If you map the drives directly, then you are getting the files without any transcoding.
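As a sketch of what "mapping the drives" looks like on a Linux client (server name, share, and credentials are placeholders): the NAS just serves bytes, and the client does any decoding.

```
sudo mount -t cifs //nas/media /mnt/media -o username=me,vers=3.0
```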

Hexyflexy
Sep 2, 2011

asymptotically approaching one
QTS 4.4, which has been in beta for like 5 months, is finally out for QNAP boxes. Let's hope this doesn't explode; I think it's only been available for 20 minutes.

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
Uhhhh my awful app posted to the wrong thread oops.

Wifi Toilet
Oct 1, 2004

Toilet Rascal
Crosspost since this is probably more relevant here than the upgrading thread:

Amazon deal of the day: WD Red Internal 8TB drive for $180.99

BurgerQuest
Mar 17, 2009

by Jeffrey of YOSPOS
Is anyone using unraid VMs with GPU passthrough? I'm thinking of adding an emulation VM to my NAS, either Windows 10 or Linux (RetroArch frontend?), and I have an old Nvidia GTX 970 I could add and use.

dexefiend
Apr 25, 2003

THE GOGGLES DO NOTHING!
There are multiple YouTube videos that cover this.

The only technical hurdle is separating out the IOMMU groups to hold that video card out of Unraid, so it's available for the container/VM.

Edit: I just remembered one thing a guy did to make it easier on himself: use a USB hub. He assigned the USB hub's IOMMU group to the VM as well, which meant anything plugged into that hub automatically went to the VM without further fuckery.

dexefiend fucked around with this message at 13:28 on May 28, 2019
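For reference, the usual way to check that isolation before committing is to walk sysfs and confirm the GPU (and that hub's USB controller) sit in groups of their own. A minimal sketch using standard sysfs paths:

```
#!/bin/sh
# print every IOMMU group and the devices inside it
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo "  $(lspci -nns "${d##*/}")"
  done
done
```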

Enos Cabell
Nov 3, 2004


If anyone was curious, I decided to just pull one of the old drives on my unraid server and swap in a new one without copying anything or doing any prep on it. I wanted to see the behavior in a failed-drive scenario. It was as simple as powering down the server, swapping the drive, powering back up, and selecting the new drive in a drop-down list. Missing data is available in an "emulated" state while the array rebuilds the drive. About halfway done now, after 12 hours.

Also, it's mostly been covered, but I'm not keeping the old drives running for a few reasons, primarily that I'm at my drive limit in Unraid and don't feel like shelling out for the unlimited-drive license just yet.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

dexefiend posted:

There are multiple YouTube videos that cover this.

The only technical hurdle is separating out the IOMMU groups to hold that video card out of Unraid, so it's available for the container/VM.

Edit: I just remembered one thing a guy did to make it easier on himself: use a USB hub. He assigned the USB hub's IOMMU group to the VM as well, which meant anything plugged into that hub automatically went to the VM without further fuckery.

The problem with USB passthrough is that the max you'll get is USB 2.0 speeds, not USB 3.0.

I run my backups off my FreeNAS VM to a 4TB USB 3.0 drive.

Less Fat Luke
May 23, 2003

Exciting Lemon

CommieGIR posted:

The problem with USB passthrough is that the max you'll get is USB 2.0 speeds, not USB 3.0.

I run my backups off my FreeNAS VM to a 4TB USB 3.0 drive.
Passing through the USB controller itself as a PCI device to the VM gives me 3.0 speeds in the Windows guest. Virtualized USB passthrough would be way slower, but if the controller is isolatable, just do that.
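If you want to find a candidate controller for that, something like this sketch works (assuming an IOMMU-enabled Linux host; nothing here is VM-software-specific):

```
# list each USB controller's PCI address and IOMMU group; a controller
# alone in its group can be handed to the guest via vfio-pci
for dev in $(lspci -Dnn | awk '/USB controller/ {print $1}'); do
  grp=$(basename "$(readlink "/sys/bus/pci/devices/$dev/iommu_group")")
  echo "$dev -> IOMMU group $grp"
done
```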

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
I know that shingled drives should never be used in arrays, and are garbage at workloads that have a mix of reads and writes. What about for a single sustained write? What sort of write throughput would I see on a shingled hard drive with a write-once workload?

Schadenboner
Aug 15, 2011

by Shine
Which of the Reds and Reds Pro are helium-filled? I know that the current Red 8 and 10 are but I’m not seeing anything on the WD site?

BurgerQuest
Mar 17, 2009

by Jeffrey of YOSPOS
Thanks, I've seen the YouTube videos and have actually set this up manually with QEMU and KVM on Ubuntu before. It worked and gaming performance was great, but it was also a bit of a pain in the dick to set up and maintain.

So I was really wondering whether anyone here is actually using it full time, and whether the Unraid experience makes it 'just work'.

Schadenboner
Aug 15, 2011

by Shine
Is the “1 GB per TB” rule for FreeNAS in addition to the 8 gig minimum or is it really more like “8 GB for the first TB, one GB per TB additional”?

:ohdear:

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

Schadenboner posted:

Is the “1 GB per TB” rule for FreeNAS in addition to the 8 gig minimum or is it really more like “8 GB for the first TB, one GB per TB additional”?

:ohdear:

I think that rule is if you’re using deduplication, which you should not be.

IOwnCalculus
Apr 2, 2003





ZFS is always happier with more RAM. Any money you would spend on making the drive array faster - higher RPM disks, SSDs for ZIL / ARC - would be way better spent on RAM until you actually max the system out.

Also consider that rule is just for ZFS. If you want to run something else RAM heavy on the box, add that too.

And never ever ever ever enable dedupe.
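Before spending that money, it's worth a look at how the existing ARC is actually doing; arc_summary ships with OpenZFS, and the proc path below is Linux-specific, so adjust for FreeNAS:

```
arc_summary | head -n 30    # size, target, and hit rates at a glance
awk '$1=="hits" || $1=="misses" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
```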

Schadenboner
Aug 15, 2011

by Shine

IOwnCalculus posted:

ZFS is always happier with more RAM. Any money you would spend on making the drive array faster - higher RPM disks, SSDs for ZIL / ARC - would be way better spent on RAM until you actually max the system out.

Also consider that rule is just for ZFS. If you want to run something else RAM heavy on the box, add that too.

And never ever ever ever enable dedupe.

The prospective system is one of those little HP MicroServers; I think they max out at 32GB?

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry
My Ubuntu server running ZFS with 20TB of usable storage has 16GB of RAM. Never had any issues with RAM, running Sonarr, Radarr, Emby, UniFi Controller, Transmission, and a few other services.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
The rule of thumb is for write-heavy patterns with many concurrent users, and is a vestige of an era when an array might have like 4 TB total or something. Your plex server will be happy with 8 GB or whatever. If you find it's a problem, start adding RAM.

But yes ZFS likes RAM so if it isn't fast enough, that's your first stop.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
Throwing another opinion into the mix: running the usual array of jails + plex, and a VM for miscellaneous stuff, on a 108TB usable pool (2xZ2 at 8x8TB and 8x10TB), my box is quite happy with 32GB of RAM and has ~18GB of it just hanging out acting as cache.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Lowen SoDium posted:

My Ubuntu server running ZFS with 20TB of usable storage has 16GB of RAM. Never had any issues with RAM, running Sonarr, Radarr, Emby, UniFi Controller, Transmission, and a few other services.

How much data is stored on it? The idea is that you need X amount of memory to hold however many blocks you have stored (storing the hash and metadata for each block), so with larger block sizes you need less RAM.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Enos Cabell posted:

Any Unraid users swapped out to a larger drive in the array before? Not sure which is less disruptive, removing an old drive and letting the array rebuild data on the new drive, or copying data off the old drive first before putting in the new drive.

I've swapped drives using their recommended way of letting it rebuild, a success on both occasions.
I only use my unraid boxes as backups, though, so I wasn't *too* worried

Atomizer
Jun 24, 2007



Twerk from Home posted:

I know that shingled drives should never be used in arrays, and are garbage at workloads that have a mix of reads and writes. What about for a single sustained write? What sort of write throughput would I see on a shingled hard drive with a write-once workload?

SMR drives aren't good for OS use...but then again you should be using an SSD instead of any HDD for that purpose. They're really only a liability for heavy rewriting, where the drive has to write your new data while reading and moving the data you're altering. Even then, you're probably not going to notice a difference between HDDs with different recording techniques.

From what I remember, some if not all of Seagate's SMR drives (Seagate being AFAIK the most common user of SMR) have a 20 GB PMR section at the interior of the drive, which the drive uses for shuffling data. If you overflow that, you'd likely notice some performance degradation, not unlike overflowing an SSD's pseudo-SLC cache (especially on a new QLC drive).

Note that the Seagate 2.5" 2 TB drives, both the Barracuda and the FireCuda SSHD/hybrid, are SMR based on my research. The latter is, as you can read from product reviews, very common as an upgrade for the PS4, with many happy customers (and others complaining that the drive fails at some point, but that's another story). That should tell you about its usability for gaming applications (and I have a few in gaming laptops with no notable problems).

For your hypothetical workload, a single sustained write will behave exactly like it would on any other HDD; you won't notice it's anything other than an ordinary drive. If anything, because SMR means higher densities, read and write speeds will be higher than the same drive using PMR, but modern high-capacity HDDs will get you speeds roughly in the 100-200 MB/s range, depending on the specific drive, capacity (i.e. density), and other factors like disk position (faster along the outer circumference than the inner).

I use a Seagate 3.5" external HDD as my media drive for Plex, and it's perfectly satisfactory for that purpose. I need nothing from it but capacity, and its speeds are more than enough for my use. I wouldn't recommend against an SMR drive on principle, but there isn't really a point to it at lower capacities (mine is a 6 TB, which was more meaningful when I first got it, but you can easily find PMR drives at higher capacities).
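If you want a number for your own write-once workload rather than a guess, a quick fio run will show it; the target path and size are placeholders, and the thing to watch for on a drive-managed SMR disk is throughput falling off a cliff once any persistent cache fills:

```
fio --name=seqwrite --filename=/mnt/smr/test.bin --rw=write \
    --bs=1M --size=50G --direct=1 --ioengine=libaio
```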

Schadenboner posted:

Which of the Reds and Reds Pro are helium-filled? I know that the current Red 8 and 10 are but I’m not seeing anything on the WD site?

If they don't specifically say on the spec sheet, a hint might be the number of platters; helium enables the platters to be placed closer together, and I think 7-platter drives are all going to be helium (a high capacity would also point that way). There's also a SMART value for helium level, if you're running a drive and want to take a peek at those stats.

BlankSystemDaemon
Mar 13, 2009



bobfather posted:

I think that rule is if you’re using deduplication, which you should not be.
The old rule of thumb for deduplication was 5GB of memory per 1TB of disk space, but a better way to calculate it is to take the number of blocks (records) you're expecting to store on your disks, multiply that by the size of an entry in the deduplication table, which is between 320 and 480 bytes, and you have the real size of the table.
ZoL has reduced the size of dedup table entries to 25% of the original, and yet very few people who run ZoL use dedup either - to make it worth it, you'd need a company to implement both a new vdev type that stores the dedup table on two or more NVMe SSDs, and Ahrens' ideas for making dedup 1000x faster, described in these slides or this video:
https://www.youtube.com/watch?v=PYxFDBgxFS8
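A back-of-envelope version of that calculation for an existing pool; "tank" is a placeholder, and 320 bytes is the low end of the per-entry range:

```
# count allocated blocks, then size the dedup table at ~320 bytes each
blocks=$(zdb -b tank | awk '/bp count/ {print $NF}')
echo "$((blocks * 320 / 1024 / 1024)) MiB of DDT, minimum"
```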

Atomizer posted:

SMR drives aren't good for OS use
One good use of SMR that this thread might appreciate more than most is treating your disk array like a WORM storage setup, where you only ever write to it once and never delete anything.
I know people who, when they run out of storage, simply buy another SAS expander JBOD chassis and begin filling it up 11 drives at a time, with each vdev as a RAIDZ3.
Typically, the SMR drives people in this thread see are ones that hide their SMR status from the OS; if they didn't (i.e. used host-aware SMR firmware) and the OS had the code for it, any filesystem could optimize its writes for getting data stored on SMR.
Another upshot is that you get to take heavy advantage of streaming I/O, which is the one area where modern spinning rust shines in terms of bandwidth (though not compared to SSDs, of course).
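On a host-aware or host-managed drive, you can actually see those zones from the OS; a drive-managed disk that hides its shingles will just error out here (blkzone is from util-linux, /dev/sdX is a placeholder):

```
blkzone report /dev/sdX | head
```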

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Dropbox uses servers full of 100 SMR drives, but they're basically doing WORM, and by using SSDs to buffer data and controlling the whole drat stack, they manage to push 40GB/s worth of writes :science:

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Bob Morales posted:

Dropbox uses servers full of 100 SMR drives, but they're basically doing WORM, and by using SSDs to buffer data and controlling the whole drat stack, they manage to push 40GB/s worth of writes :science:

Good to know. They've gotta be running some type of replication-based solution similar to an in-house Ceph, right?

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Twerk from Home posted:

Good to know. They've gotta be running some type of replication-based solution similar to an in-house Ceph, right?

Yea, if you google “magic pocket” they have blog posts about various parts of it

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry

Bob Morales posted:

How much data is stored on it? The idea is that you need X amount of memory to hold however many blocks you have stored (storing the hash and metadata for each block), so with larger block sizes you need less RAM.

I think there's about 12TB in use.

eames
May 9, 2009

Atomizer posted:

SMR drives aren't good for OS use...but then again you should be using an SSD instead of any HDD for that purpose. They're really only a liability for heavy rewriting, where the drive has to write your new data while reading and moving the data you're altering. Even then, you're probably not going to notice a difference between HDDs with different recording techniques.

From what I remember, some if not all of Seagate's SMR drives (Seagate being AFAIK the most common user of SMR) have a 20 GB PMR section at the interior of the drive, which the drive uses for shuffling data. If you overflow that, you'd likely notice some performance degradation, not unlike overflowing an SSD's pseudo-SLC cache (especially on a new QLC drive).

Note that the Seagate 2.5" 2 TB drives, both the Barracuda and the FireCuda SSHD/hybrid, are SMR based on my research. The latter is, as you can read from product reviews, very common as an upgrade for the PS4, with many happy customers (and others complaining that the drive fails at some point, but that's another story). That should tell you about its usability for gaming applications (and I have a few in gaming laptops with no notable problems).

For your hypothetical workload, a single sustained write will behave exactly like it would on any other HDD; you won't notice it's anything other than an ordinary drive. If anything, because SMR means higher densities, read and write speeds will be higher than the same drive using PMR, but modern high-capacity HDDs will get you speeds roughly in the 100-200 MB/s range, depending on the specific drive, capacity (i.e. density), and other factors like disk position (faster along the outer circumference than the inner).


I was tricked into buying an external drive with one of these 2.5" Seagate SMR drives. :v:
It uses a multi-tier caching system with DRAM, NAND, PMR sectors on the fast outer edge of the platters, and SMR sectors on the remaining space, all managed completely transparently to the host system.
This isn't really advertised; it seems like they're even trying to hide the fact that these drives are SMR. I only found out when I noticed that repeated random access to medium-sized files yielded closer to SSD-like performance than I expected from a regular PMR hard drive.

It works surprisingly well and I can only assume that it keeps the right files in the right places. The drive feels faster than a regular PMR drive most of the time. My only concern is reliability and the potential additional points of failure. Sometimes there's disk activity when the host OS isn't doing anything at all. Presumably that's when it shuffles data around between NAND/PMR/SMR.

more on this here: https://www.seagate.com/files/www-c...-paper-2017.pdf

Enos Cabell
Nov 3, 2004


HalloKitty posted:

I've swapped drives using their recommended way of letting it rebuild, a success on both occasions.
I only use my unraid boxes as backups, though, so I wasn't *too* worried

First drive swapped using this method went without a hitch; 70% into drive number two right now, at about 24 hours per 8TB drive. It's nice to see how this works without the pressure of an actual dead drive.

Atomizer
Jun 24, 2007



eames posted:

I was tricked into buying an external drive with one of these 2.5" Seagate SMR drives. :v:
It uses a multi-tier caching system with DRAM, NAND, PMR sectors on the fast outer edge of the platters, and SMR sectors on the remaining space, all managed completely transparently to the host system.
This isn't really advertised; it seems like they're even trying to hide the fact that these drives are SMR. I only found out when I noticed that repeated random access to medium-sized files yielded closer to SSD-like performance than I expected from a regular PMR hard drive.

It works surprisingly well and I can only assume that it keeps the right files in the right places. The drive feels faster than a regular PMR drive most of the time. My only concern is reliability and the potential additional points of failure. Sometimes there's disk activity when the host OS isn't doing anything at all. Presumably that's when it shuffles data around between NAND/PMR/SMR.

more on this here: https://www.seagate.com/files/www-c...-paper-2017.pdf

What external setup had the Seagate inside? I mean, I'm sure they exist, I just haven't seen any enclosures that came with those SMR drives (the portable external drives of most interest to me are the 4/5 TB ones from WD or Seagate with the 15 mm-height 2.5" drives inside, because they're high capacity for the size while only requiring a single USB connection).

And yeah, that's the caching solution I was referring to; they use multiple layers, including that PMR section, to manage it, and it all seems to work well in practice. The SSHDs with 8 GB of NAND flash are nice for the performance boost you noticed, although the thing that annoyed me most is that the FireCuda I mentioned has a brother in the Barracuda (in this case the 2 TB 2.5" drives), which as far as I could tell is the same drive just without the NAND portion, so it's essentially gimped.

Sometimes drives do maintenance on their own like you noticed; that's perfectly normal. My HGST He8 does this, which I mention because it's particularly obvious given how loud that drive is compared to a consumer drive.

The main thing I try to clear up is the misconception about SMR drives: they're really just normal HDDs for the vast majority of average consumer use cases. The only real disappointment is that the technology isn't particularly justified unless it's being used to get higher densities and/or lower costs. That 6 TB drive of mine cost ~$125 a few years back when the capacity typically went for ~$150, but not long after, 6 TB external WDs (blue drive inside) dropped to $100, so the rationale for that SMR drive is gone, unless its price had been, say, $80 or less. At the high end, 12+ TB drives are considerably more expensive than <12 TB drives, so if SMR turned a 10 TB into a 12 TB at a reasonable cost, that would be a perfect use of the technology.


eames
May 9, 2009

Atomizer posted:

What external setup had the Seagate inside? I mean, I'm sure they exist, I just haven't seen any enclosures that came with those SMR drives (the portable external drives of most interest to me are the 4/5 TB ones from WD or Seagate with the 15 mm-height 2.5" drives inside, because they're high capacity for the size while only requiring a single USB connection).

The drive I bought was the relatively new LaCie USB-C mobile drive.
Mine is the thicker 4TB version; I strongly suspect it has a ST4000LM024 inside, but all the firmware is rebranded to LaCie. There is some controversy around this, because the official datasheet lists it as a PMR drive, but it behaves nothing like one. Seagate support got cagey when I asked for the type/model of the drive inside and only told me not to worry because SMR is great!
I was in a pinch, needed a large USB-powered drive, and this was all the local Apple Store had in stock, so it is what it is.

I’m extra careful with keeping versioned backups of that drive but so far it works fine and gives me SSD-like performance when it hits the cache, which happens quite frequently. :shrug:
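For anyone else trying to confirm what's inside a rebranded enclosure like that: the internal model often leaks through the USB bridge's ATA pass-through, no shucking required. A sketch, with /dev/sdX as a placeholder:

```
smartctl -i -d sat /dev/sdX    # prints the model/serial of the drive behind the bridge
```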
