|
Heners_UK posted:I've seen a few stories now of people swapping old, smaller drives, for large ones. Call me cheap (lots of people do), but if you have these drives in a parity/raid protected array (and you have backups of truly important stuff that are not raid/parity, which you should, because raid/parity is not backup) then why not simply use the drives until they die? The data is merely a rebuild away at worst. I'm guessing it's usually due to a lack of physical space.
|
# ? May 27, 2019 20:09 |
|
|
# ? May 30, 2024 06:58 |
|
Lack of physical bays to put them in, and (likely justified) fear of using heavily aged drives in an array where losing multiples would result in major data loss. I don't have a realistic limit on the number of drives now, but I still don't run all the drives I have, because a lot of them are extremely old 3TB models and I don't need the space. I'd rather keep extra spares on hand.
|
# ? May 27, 2019 20:39 |
|
Heners_UK posted:I've seen a few stories now of people swapping old, smaller drives, for large ones. Call me cheap (lots of people do), but if you have these drives in a parity/raid protected array (and you have backups of truly important stuff that are not raid/parity, which you should, because raid/parity is not backup) then why not simply use the drives until they die? The data is merely a rebuild away at worst. Like they say...many people don't have the physical space to add more drives. I've got 24 drives in a case that only actually has 12 bays...I couldn't jam another array in there. So, when I need more storage I have to increase the size of my existing zpools by replacing each drive in one of them with larger-capacity drives. (And yes, this is a huge pain in the rear end and a big downside of ZFS for prosumer-level usage)
|
# ? May 27, 2019 20:43 |
I use unraid. I've done two parity drive upgrades and many array drive upgrades, and each time I have let the array rebuild one drive at a time. So far no problems, and I've replaced 6 old drives with 8 shucked 8TB and 10TB models. The rebuilds take about a day while the server is running, and I just let it run like normal.
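For anyone wondering where "about a day" comes from, it falls straight out of drive size divided by sustained throughput, since a rebuild has to touch every sector. A rough sketch (the 100 MB/s average is an assumed ballpark, not a measured figure):

```python
# Rough rebuild-time estimate: a parity rebuild reads/writes every
# sector, so it is bounded by the drive's sustained sequential throughput.
def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Estimated hours to rebuild a drive of capacity_tb terabytes
    at an average sustained throughput of throughput_mb_s."""
    total_mb = capacity_tb * 1_000_000  # TB -> MB (decimal, as drives are sold)
    return total_mb / throughput_mb_s / 3600

# An 8 TB drive averaging ~100 MB/s across the platter:
print(f"{rebuild_hours(8, 100):.1f} h")  # 22.2 h -- roughly a day
```

Real rebuilds run slower when the array is in use, which matches the ~24 hours per 8TB drive reported later in the thread.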
|
|
# ? May 27, 2019 21:54 |
|
Did unraid ever fix the issue with scheduled parity checks not happening? I don't get why they aren't running; the crontab looks correct. I just click the check now button every week or so.
|
# ? May 27, 2019 21:56 |
|
Heners_UK posted:I've seen a few stories now of people swapping old, smaller drives, for large ones. Call me cheap (lots of people do), but if you have these drives in a parity/raid protected array (and you have backups of truly important stuff that are not raid/parity, which you should, because raid/parity is not backup) then why not simply use the drives until they die? The data is merely a rebuild away at worst. I have to admit that I see a ton of people requesting data recovery because of this mindset. Once drives start giving SMART errors, I would start looking to replace them. I've seen more than one bad drive kill an entire parity table by spitting out garbage because the person didn't replace it.
|
# ? May 27, 2019 22:22 |
|
I want to get a NAS to store 4K videos. If I map the drives on my PC, it won't try to use the NAS for the encoding, right?
|
# ? May 27, 2019 23:04 |
|
What do you mean by encoding?
|
# ? May 27, 2019 23:12 |
|
KOTEX GOD OF BLOOD posted:What do you mean by encoding?
|
# ? May 27, 2019 23:13 |
|
Too Poetic posted:I want to get a NAS to store 4K videos. If I map the drives on my PC it wont try and use the NAS for the encoding right? Correct. If you map the drives directly then you are getting the files without any transcoding.
|
# ? May 27, 2019 23:58 |
|
QTS 4.4, which has been in beta for like 5 months, is finally out for QNAP boxes. Let's hope this doesn't explode; I think it's only been available 20 minutes.
|
# ? May 28, 2019 03:30 |
|
Uhhhh my awful app posted to the wrong thread oops.
|
# ? May 28, 2019 03:50 |
|
Crosspost since this is probably more relevant here than the upgrading thread: Amazon deal of the day: WD Red Internal 8TB drive for $180.99
|
# ? May 28, 2019 11:11 |
|
Is anyone using Unraid VMs with GPU passthrough? I'm thinking of adding an emulation VM to my NAS, either Windows 10 or Linux (RetroArch frontend?), and have an old Nvidia GTX 970 I could add and use.
|
# ? May 28, 2019 13:01 |
|
There are multiple YouTube videos that cover this. The only tricky part is separating out the IOMMU group to keep that video card away from Unraid, so it's available for the container/VM. Edit: I just remembered one thing a guy did to make it easier on himself was to use a USB hub. He assigned the USB hub's IOMMU group to the VM as well, which meant anything plugged into that hub automatically went to the VM without further fuckery. dexefiend fucked around with this message at 13:28 on May 28, 2019 |
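The IOMMU-group constraint, for the curious: every device in the card's group has to be handed over to the VM together (a GPU usually drags its HDMI audio function along). A toy sketch of the grouping logic; the device names and group numbers are made up for illustration, and on a real box the data comes from /sys/kernel/iommu_groups/*/devices/:

```python
# Group PCI devices by IOMMU group: everything in the GPU's group
# must be detached from the host and given to the VM as a unit.
from collections import defaultdict

def by_group(pairs):
    """pairs: iterable of (iommu_group, pci_device) tuples."""
    groups = defaultdict(list)
    for group, dev in pairs:
        groups[group].append(dev)
    return dict(groups)

# Hypothetical listing -- real addresses/names will differ.
sample = [
    (1, "0000:01:00.0 VGA: NVIDIA GTX 970"),  # the GPU itself
    (1, "0000:01:00.1 Audio: NVIDIA HDMI"),   # its HDMI audio function
    (2, "0000:03:00.0 USB controller"),       # the USB-hub trick: pass this whole group
]
groups = by_group(sample)
print(groups[1])  # both functions in group 1 must go to the VM together
```

The USB-hub trick mentioned above is the same idea applied to group 2: hand the whole USB controller's group to the VM, and anything plugged into it follows automatically.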
# ? May 28, 2019 13:25 |
|
If anyone was curious, I decided to just pull one of the old drives on my unraid server and swap in a new one without copying anything or doing any prep on it. I wanted to see the behavior in a failed-drive scenario. It was as simple as powering down the server, swapping the drive, powering back up, and selecting the new drive in a drop-down list. Missing data is available in an "emulated" state while the array rebuilds the drive. About halfway done now after 12 hours. Also, it's mostly been covered, but I'm not keeping the old drives running for a few reasons, primarily that I'm at my drive limit in Unraid and don't feel like shelling out for the unlimited drive license just yet.
|
# ? May 28, 2019 13:42 |
|
dexefiend posted:There are multiple YouTube videos that do this. The problem with USB passthrough is that the max you will get is USB 2.0 speeds, not USB 3.0. I run my backups off my FreeNAS VM to a 4TB USB 3.0 drive.
|
# ? May 28, 2019 15:55 |
|
CommieGIR posted:The problem with USB passthrough is the max you will get is USB 2.0 speeds, not USB 3.0.
|
# ? May 28, 2019 18:49 |
|
I know that shingled drives should never be used in arrays, and are garbage at workloads that have a mix of reads and writes. What about for a single sustained write? What sort of write throughput would I see on a shingled hard drive with a write-once workload?
|
# ? May 28, 2019 21:27 |
|
Which of the Reds and Reds Pro are helium-filled? I know that the current Red 8 and 10 are but I’m not seeing anything on the WD site?
|
# ? May 28, 2019 22:16 |
|
Thanks, I've seen the YouTube videos and have actually set this up manually with QEMU and KVM in Ubuntu before. It worked and gaming performance was great, but it was also a bit of a pain in the dick to set up and maintain. So I was really wondering if anyone here is actually using it full time and if the Unraid experience makes it 'just work'?
|
# ? May 28, 2019 22:45 |
|
Is the “1 GB per TB” rule for FreeNAS in addition to the 8 gig minimum or is it really more like “8 GB for the first TB, one GB per TB additional”?
|
# ? May 28, 2019 23:03 |
|
Schadenboner posted:Is the “1 GB per TB” rule for FreeNAS in addition to the 8 gig minimum or is it really more like “8 GB for the first TB, one GB per TB additional”? I think that rule is if you’re using deduplication, which you should not be.
|
# ? May 28, 2019 23:17 |
|
ZFS is always happier with more RAM. Any money you would spend on making the drive array faster - higher RPM disks, SSDs for ZIL / ARC - would be way better spent on RAM until you actually max the system out. Also consider that rule is just for ZFS. If you want to run something else RAM heavy on the box, add that too. And never ever ever ever enable dedupe.
|
# ? May 28, 2019 23:51 |
|
IOwnCalculus posted:ZFS is always happier with more RAM. Any money you would spend on making the drive array faster - higher RPM disks, SSDs for ZIL / ARC - would be way better spent on RAM until you actually max the system out. The prospective system is one of those little HP Microservers, I think they max out at 32GB?
|
# ? May 29, 2019 00:00 |
|
My Ubuntu server using ZFS with 20TB of usable storage has 16GB of RAM. Never had any issues with RAM, running Sonarr, Radarr, Emby, a UniFi controller, Transmission, and a few other services.
|
# ? May 29, 2019 00:23 |
|
The rule of thumb is for write-heavy patterns with many concurrent users, and is a vestige of an era when an array might have like 4 TB total or something. Your plex server will be happy with 8 GB or whatever. If you find it's a problem, start adding RAM. But yes ZFS likes RAM so if it isn't fast enough, that's your first stop.
|
# ? May 29, 2019 00:45 |
|
Throwing another opinion into the mix: running the usual array of jails + Plex, and a VM for miscellaneous stuff, with a 108TB usable pool (2xZ2 at 8x8TB and 8x10TB), my box is quite happy with 32GB of RAM, and has ~18GB of it just hanging out acting as cache.
|
# ? May 29, 2019 02:27 |
|
Lowen SoDium posted:My Ubuntu server using zfs with 20TB of usable storage has 16GB of ram. Never had an issues with ram, running sonar, radar, emby, unifi controller, transmission, and a few other services. How much data is stored on it? The idea is you need X amount of memory to hold however many blocks you have stored (storing the hash and metadata for each block), so with larger block sizes you need less RAM.
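To put rough numbers on that block-count argument: RAM for the dedup table scales with (stored data ÷ block size), one entry per unique block. The ~320 bytes per entry used below is a commonly cited ballpark for ZFS DDT entries in core, not an exact figure:

```python
# Back-of-envelope dedup-table sizing: one table entry per unique
# block, so required RAM scales with stored data / block size.
def ddt_ram_gib(data_tib: float, recordsize_kib: int,
                bytes_per_entry: int = 320) -> float:
    """Rough GiB of RAM to hold the dedup table for data_tib TiB of
    data at the given recordsize. bytes_per_entry is a ballpark."""
    blocks = data_tib * 2**40 / (recordsize_kib * 2**10)
    return blocks * bytes_per_entry / 2**30

# 12 TiB of data (roughly the pool discussed above):
print(f"{ddt_ram_gib(12, 128):.1f} GiB at 128K records")  # 30.0 GiB
print(f"{ddt_ram_gib(12, 8):.1f} GiB at 8K records")      # 480.0 GiB
```

Which is exactly why larger block sizes need less RAM, and why dedup on a small-recordsize dataset is ruinous.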
|
# ? May 29, 2019 02:58 |
|
Enos Cabell posted:Any Unraid users swapped out to a larger drive in the array before? Not sure which is less disruptive, removing an old drive and letting the array rebuild data on the new drive, or copying data off the old drive first before putting in the new drive. I've swapped drives using their recommended way of letting it rebuild, a success on both occasions. I only use my unraid boxes as backups, though, so I wasn't *too* worried
|
# ? May 29, 2019 05:07 |
|
Twerk from Home posted:I know that shingled drives should never be used in arrays, and are garbage at workloads that have a mix of reads and writes. What about for a single sustained write? What sort of write throughput would I see on a shingled hard drive with a write-once workload? SMR drives aren't good for OS use...but then again you should be using an SSD instead of any HDD for that purpose. They're really only a liability for heavy rewriting, where the drive has to write your new data while reading and moving the data you're altering. Even then, you're probably not going to notice a difference between HDDs with different recording techniques. From what I remember, some if not all of the Seagate (who AFAIK is the most common user of SMR) SMR drives have a 20 GB PMR section at the interior of the drive, which they use for shuffling data. If you overflow that, you'd likely notice some performance degradation, not unlike overflowing an SSD's pseudo-SLC cache (especially on a new QLC drive.) Note that the Seagate 2.5" 2 TB drives, both the Barracuda and the Firecuda SSHD/hybrid drive, are SMR based on my research. The latter is, as you can read from product reviews, very common as an upgrade for the PS4, with many happy customers (and others complaining that the drive fails at some point, but that's another story.) That should tell you about its usability for gaming applications (and I have a few in gaming laptops with no notable problems.)

For your hypothetical workload, a single sustained write will work exactly like on any other HDD; you won't notice it's anything other than an ordinary HDD. If anything, because SMR means higher densities, read and write speeds will be increased over the same drive using PMR, but modern, high-capacity HDDs will get you speeds roughly in the 100-200 MB/s range, depending on the specific drive, capacity (i.e. density,) and other factors like disk position (with faster rates along the outer circumference compared to the inner.) I use a Seagate 3.5" external HDD as my media drive for Plex, and it's perfectly satisfactory for that purpose. I need it for nothing other than capacity, and its speeds are more than enough for my use. I wouldn't recommend against an SMR drive on principle, but there isn't really a point to it at lower capacities (mine is a 6 TB, which was more meaningful when I first got it, but you can easily find PMR drives at higher capacities.)

Schadenboner posted:Which of the Reds and Reds Pro are helium-filled? I know that the current Red 8 and 10 are but I'm not seeing anything on the WD site? If they don't specifically say on the spec sheet, a hint might be the number of platters; helium enables the platters to be placed closer together, and I think 7-platter drives are all going to be helium (and a high capacity would also suggest that.) There's also a SMART value for helium level, if you're running a drive and want to take a peek at those stats.
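The rewrite penalty people worry about can be shown with a toy model: treat each shingled band as append-only, so a write behind a band's append point forces the rest of the band to be rewritten, while a pure sequential stream costs nothing extra. Band size and block numbers here are arbitrary illustrative values, not real drive geometry:

```python
# Toy model of SMR write amplification: overlapping (shingled) tracks
# mean a band can only be appended to; modifying a block mid-band
# triggers a read-modify-write of everything after it in the band.
def cost(writes, band=256):
    """writes: logical block numbers, in the order written.
    Returns physical blocks written under the append-only-band model."""
    tip = {}    # band index -> next appendable offset
    total = 0
    for lba in writes:
        b, off = divmod(lba, band)
        if off == tip.get(b, 0):      # sequential append: 1 physical write
            tip[b] = off + 1
            total += 1
        else:                         # rewrite: redo from off to band end
            total += band - off
            tip[b] = band
    return total

print(cost(list(range(512))))   # 512: a pure stream has no amplification
print(cost([5, 300, 7]))        # 712 physical writes for 3 logical ones
```

Which is the whole story in miniature: write-once/streaming workloads behave like any HDD, scattered rewrites blow up, and the real drives hide this behind the PMR/NAND caches described above.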
|
# ? May 29, 2019 07:42 |
bobfather posted:I think that rule is if you’re using deduplication, which you should not be. ZoL has reduced the size of the dedup table entries down to 25% of its original size, and yet very very few people who implement ZoL use dedup either - to make it worth it, you need to find a company that'll implement both a new vdev type to store the dedup table on two or more NVMe SSDs, as well as Ahrens' ideas for making dedup 1000x faster, which were described in these slides and this video: https://www.youtube.com/watch?v=PYxFDBgxFS8 Atomizer posted:SMR drives aren't good for OS use I know people who, when they run out of storage, simply buy another SAS expander JBOD chassis and begin filling it up 11 drives at a time with each vdev as a RAIDz3. Typically, most SMR drives which people in this thread tend to see are drives that hide their SMR status from the OS; if they didn't (i.e. used host-aware SMR firmware) and the OS has the code for it, any filesystem can optimize its writes for getting data stored on SMR. Another upshot of this is that you get to take heavy advantage of streaming I/O, which is the one area where modern spinning rust shines in terms of bandwidth (though not compared to SSDs, of course).
|
|
# ? May 29, 2019 08:01 |
|
Dropbox uses servers full of 100 SMR drives, but they're basically doing WORM, and by using SSDs to buffer data and controlling the whole drat stack, they manage to push 40GB/s worth of writes
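That buffering trick is sketchable: absorb small incoming writes in a fast staging area, and flush to the SMR tier only as large sequential appends. This is a hypothetical minimal version of the pattern, not Dropbox's actual code; the flush threshold is arbitrary:

```python
# Write staging: small random writes land in a fast buffer (the SSD),
# and reach the SMR tier only as large, sequential appends -- the one
# write pattern shingled drives are good at.
class StagedWriter:
    def __init__(self, flush_threshold=4):
        self.buffer = []           # stands in for the SSD staging area
        self.smr_log = []          # each entry = one big sequential append
        self.flush_threshold = flush_threshold

    def write(self, chunk: bytes):
        self.buffer.append(chunk)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            # One contiguous append instead of many scattered writes.
            self.smr_log.append(b"".join(self.buffer))
            self.buffer.clear()

w = StagedWriter()
for _ in range(10):
    w.write(b"x")
w.flush()
print(len(w.smr_log))  # 3 sequential appends instead of 10 small writes
```

Controlling the whole stack means they can guarantee the SMR drives only ever see this append pattern, which is how a WORM workload gets away with cheap shingled disks.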
|
# ? May 29, 2019 14:20 |
|
Bob Morales posted:Dropbox uses servers full of 100 SMR drives but they basically are doing WORM and by using SSD’s to buffer data and controlling the whole drat stack, they manage to push 40GB/s worth of writes Good to know. They've gotta be running some type of replication-based solution similar to an in-house Ceph, right?
|
# ? May 29, 2019 14:40 |
|
Twerk from Home posted:Good to know. They've gotta be some type of replication-based solution that's similar to an inhouse Ceph, right? Yea, if you google “magic pocket” they have blog posts about various parts of it
|
# ? May 29, 2019 14:44 |
|
Bob Morales posted:How much data is stored on it? The idea is you need x amount of memory to hold however many blocks you have stored (storing the hash and metadata for each block), so with larger block sizes you need lesss ram I think there is about 12T in use.
|
# ? May 29, 2019 15:38 |
|
Atomizer posted:SMR drives aren't good for OS use...but then again you should be using an SSD instead of any HDD for that purpose. They're really only a liability for heavy rewriting, where the drive has to try to write your new data while reading and moving the data you're altering. Even then, you're probably not going to notice a difference between HDDs with different recording techniques. I was tricked into buying an external drive with one of these 2.5" Seagate SMR drives. It uses a multi-tier caching system with DRAM, NAND, PMR sectors on the fast edge of the platters, and SMR sectors on the remaining space, all managed completely transparently to the host system. This isn't really advertised; it seems like they're even trying to hide the fact that these drives are SMR. I only found out when I noticed that repeated random access to medium-sized files yielded closer to SSD-like performance than what I'd expect from a regular PMR hard drive. It works surprisingly well, and I can only assume that it keeps the right files in the right places. The drive feels faster than a regular PMR drive most of the time. My only concern is reliability and the potential additional points of failure. Sometimes there's disk activity when the host OS isn't doing anything at all; presumably that's when it shuffles data around between NAND/PMR/SMR. More on this here: https://www.seagate.com/files/www-c...-paper-2017.pdf
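A toy version of that transparent tiering, just to show the mechanism: blocks that get read repeatedly are promoted into a small fast tier (standing in for the NAND/PMR layers), and everything else stays on slow SMR. The thresholds and capacities are invented for illustration and bear no relation to the real firmware:

```python
# Toy tiered store: repeatedly-read blocks get copied up to a small
# fast tier, which is why hot medium-sized files start behaving
# SSD-like while cold data stays at SMR speeds.
from collections import Counter

class TieredStore:
    def __init__(self, fast_capacity=2, promote_after=3):
        self.fast = set()              # blocks resident in the fast tier
        self.hits = Counter()          # per-block read counts
        self.fast_capacity = fast_capacity
        self.promote_after = promote_after

    def read(self, block):
        """Serve a read and return which tier satisfied it."""
        self.hits[block] += 1
        if block in self.fast:
            return "fast"
        if (self.hits[block] >= self.promote_after
                and len(self.fast) < self.fast_capacity):
            self.fast.add(block)       # hot block: copy up to the fast tier
        return "slow"

s = TieredStore()
tiers = [s.read("A") for _ in range(5)]
print(tiers)  # ['slow', 'slow', 'slow', 'fast', 'fast']
```

The background shuffling noticed above corresponds to the promotion/demotion traffic this sketch hides inside `read()`.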
|
# ? May 29, 2019 17:18 |
|
HalloKitty posted:I've swapped drives using their recommended way of letting it rebuild, a success on both occasions. First drive swapped using this method went without a hitch, 70% into drive number two right now at about 24 hours per 8tb drive. Nice seeing how it works without the pressure of an actual dead drive.
|
# ? May 29, 2019 21:48 |
|
eames posted:I was tricked into buying an external drive with one of these 2.5" Seagate SMR drives. What external setup had the Seagate inside? I mean I'm sure they exist, I just haven't seen any enclosures that came with those SMR drives (the portable external drives that are of most interest to me are the 4/5 TB ones from WD or Seagate with the 15 mm or whatever height 2.5" drives inside, because they're high capacity for the size while only requiring a single USB connection.)

And yeah, that's the caching solution I was referring to; they use multiple layers including that PMR section to manage it, and it all seems to work well in practice. The SSHDs with 8 GB of NAND flash are nice for the performance boost that you noticed, although the thing that annoyed me most was that the Firecuda I mentioned has a brother in the Barracuda (in this case I'm referring to the 2 TB 2.5" drives,) which as far as I could tell is the same drive just without that NAND portion, so it's essentially gimped. Sometimes drives do maintenance stuff on their own like you noticed; that's perfectly normal. My HGST He8 does this, which I mention because it's particularly obvious due to how loud the drive is compared to a consumer drive.

The main thing I try to clear up is the misconception about SMR drives; they're really just normal HDDs for the vast majority of average consumer use-cases. The only real disappointment is that the technology isn't particularly justified unless it's being used to get higher densities and/or lower costs. I bought that 6 TB version for ~$125 a few years back, when that capacity typically went for ~$150, but not too long after, 6 TB external WDs (blue drive inside) dropped down to $100, so the rationale for that SMR drive is obviated, unless by comparison the price was, say, $80 or less to justify its existence. At the high end, 12+ TB drives are considerably more expensive than <12 TB drives, so if SMR turned a 10 TB into a 12 TB at a reasonable cost, that would be a perfect use of the technology.
|
# ? May 30, 2019 07:49 |
|
|
Atomizer posted:What external setup had the Seagate inside? I mean I'm sure they exist, I just haven't seen any enclosures that came with those SMR drives (the portable external drives that are of most interest to me are the 4/5 TB ones from WD or Seagate with the 15 mm or whatever height 2.5" drives inside, because they're high capacity for the size while only requiring a single USB connection.) The drive I bought was the relatively new LaCie USB-C mobile drive. Mine is the thicker 4TB version; I strongly suspect it has an ST4000LM024 inside, but all the firmware is rebranded to LaCie. There is some controversy around this because the official datasheet lists it as a PMR drive, but it behaves nothing like one. Seagate support got cagey when I asked for the type/model of the drive inside and only told me not to worry because SMR is great! I was in a pinch, needed a large USB-powered drive, and this was all the local Apple Store had in stock, so it is what it is. I'm extra careful with keeping versioned backups of that drive, but so far it works fine and gives me SSD-like performance when it hits the cache, which happens quite frequently.
|
# ? May 30, 2019 09:15 |