|
IOwnCalculus posted:Of course while I say that, BSD's point about raidz3 has me very tempted to do 11-drive raidz3s on the restructure I'm doing on my server right now, because after I'm all done I'll have so many "extra" drives I won't need to expand for a very long time. gently caress it.
|
# ? Sep 16, 2023 01:13 |
|
|
|
So, re temp chat a few pages ago, how much should I consider heat on an aging drive? One of the drives in my truenas setup is a Seagate IronWolf 10tb (so not a pro drive, a consumer grade drive) with ~46k "Lifetime" according to the SMART results in TrueNAS; I'm guessing this is power-on hours. I remember reading that it's somewhere around 6 years, or 52k power-on hours, that you should start thinking about replacing a drive. However, this drive has basically averaged 50c for pretty much 10 months out of the year its whole lifespan, with peaks of up to 54c and very occasional dips down to 46c (ambients in the room ~30c, except at night when I run the AC at 25c ish). For the 2 months of the year when we actually get cool weather here (18c~25c) I imagine the mean drops but maintains that same ~20c delta. Should I be concerned that these temps will meaningfully shorten the drive's lifespan? I run short SMART tests twice a week, a long SMART test once a week, and a scrub every 2 weeks, and haven't had any warnings or anything yet. Thanks! E: My use case is basically just using it as a write once read many file server. I don't run any intensive tasks (that I know of). Not sure if this matters or not. Running in a 4 drive Z1 array (though am planning to tear it all down and go 6 drive Z2 sometime in the next year, and planning to retire this drive then). Shrimp or Shrimps fucked around with this message at 02:03 on Sep 16, 2023 |
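If it's any help, the same numbers are easy to pull from a shell too. A quick sketch assuming smartmontools is installed; /dev/sda is a placeholder, substitute your actual device (or its /dev/disk/by-id path):

```shell
# Show the SMART attributes that matter here: power-on hours and temperature.
smartctl -A /dev/sda | grep -Ei 'power_on_hours|temperature'

# Kick off the same self-tests TrueNAS schedules, manually:
smartctl -t short /dev/sda   # short self-test (a couple of minutes)
smartctl -t long /dev/sda    # extended self-test (hours on a 10tb drive)
smartctl -l selftest /dev/sda  # view the self-test log afterwards
```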
# ? Sep 16, 2023 01:56 |
|
TrueNAS / zfs is neat. Copying all my crap over to my new pool and one disk took a poo poo in a pretty impressive way. Started getting notifications and as I was trying to familiarize myself with the UI and what to do, it just went offline entirely and the system could no longer see it. Started going upstairs where it lives and before I even got to the stairs I could hear it ticking quite loudly like the head? was caught on something. Well it was obvious which one was busted so I just popped it out and replaced it with one of my spares I wasn't sure what to do with yet, and it started resilvering all the while it's still copying poo poo from my old server without missing a beat or needing to power cycle anything. A++ Also someone wanted a review of my experience with the TrueNAS Mini R; granted it's pretty quick since unboxing so I'm still figuring stuff out but:
Really satisfied so far. Mr. Crow fucked around with this message at 02:32 on Sep 16, 2023 |
# ? Sep 16, 2023 02:15 |
|
Oh, speaking of identifying drives, how do I do that when first setting up a pool so that when one drive dies, I know exactly which one to physically remove from the server?
|
# ? Sep 16, 2023 03:06 |
|
If you're doing zfs on Linux, make sure your zpool create command references the drives by entries in /dev/disk/by-id/ and not /dev/sd*. Both because there are human-readable labels in /dev/disk/by-id/ that include the drive's make, model, and serial, and because /dev/sd* assignments are not guaranteed to be consistent across reboots. Edit: and keep a note somewhere easily accessible that says what drive is in what bay, by serial number. IOwnCalculus fucked around with this message at 03:12 on Sep 16, 2023 |
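A minimal sketch of what that looks like. The pool name and device names below are made up; list your own with `ls -l /dev/disk/by-id/`:

```shell
# by-id entries encode make/model/serial and are stable across reboots.
ls -l /dev/disk/by-id/ | grep -v part   # filter out the partition entries

# Hypothetical example: a raidz2 pool built from four drives by id.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA20AAAA \
  /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA20BBBB \
  /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA20CCCC \
  /dev/disk/by-id/ata-ST10000VN0004-1ZD101_ZA20DDDD
```

Afterwards `zpool status` shows those same by-id names, serials included, which is exactly what you want when a drive dies.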
# ? Sep 16, 2023 03:08 |
|
I'll have to get a label printer or something, record the serial numbers on each label, and attach them to the front of each hot-swap bay. Good to know that I can see the serial numbers in the OS!
|
# ? Sep 16, 2023 04:27 |
|
I wouldn't bother physically labeling the bays because you'll have to redo it every time you swap a drive. On my box I just keep a Google Sheets spreadsheet with a 3Rx4C table that matches the layout of the drivebays in the server itself and a 6Rx4C table for the DS4246, and each cell has make/model/serial in it.
|
# ? Sep 16, 2023 06:02 |
The real trick is to use SAS enclosures with lights that can be manipulated with sesutil, and set up your pool such that the device names in zpool status reflect the physical path. EDIT: Can't recall who was talking about NVMe over Fabric, but John Baldwin is presenting about it being developed for FreeBSD at EuroBSDCon 2023. BlankSystemDaemon fucked around with this message at 11:26 on Sep 16, 2023 |
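For anyone curious, on FreeBSD that looks something like this (da5 is a placeholder device name):

```shell
# List enclosure slots and which device sits in each one.
sesutil map

# Blink the locate LED on the slot holding da5 so you can find it in the rack.
sesutil locate da5 on
# ...swap the drive, then turn the light back off.
sesutil locate da5 off
```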
|
# ? Sep 16, 2023 09:58 |
|
YerDa Zabam posted:Couple of new Def Con videos that I thought you lot might enjoy. really bad recording for a decent talk going over what everyone should already know about backblaze's methodology. if you read the report nothing in this should be new to you, just a few mentions of ssds holding strong for longevity but being too expensive for them to test in the scale they want and complaining about smart being inconsistent across manufacturers the rest of the talks at defcon are dire...
|
# ? Sep 16, 2023 12:19 |
|
IOwnCalculus posted:If you're doing zfs on Linux, make sure your zpool create command references the drives by entries in /dev/disk/by-id/ and not /dev/sd*. Both because there are human-readable labels in /dev/disk/by-id/ that include the drive's make, model, and serial, and because /dev/sd* assignments are not guaranteed to be consistent across reboots. In my experience, ZFS is quite good about finding its own disks even if the path you added them by isn't available - I think every disk gets a serial number in the header, and that's enough to put a pool back together?
|
# ? Sep 16, 2023 12:37 |
|
Yeah, there was a massive upload yesterday from them (Def Con) and all the ones I tried were impossible to listen to. Mics clipping and distorting, speakers being way too loud (Cory Doctorow in particular) or being so quiet that even the subtitles fail at points.
|
# ? Sep 16, 2023 12:37 |
|
Stupid question time. My NAS/Plex box is currently on windows 10. It has 2 drives in a windows storage space and the OS is running off an SSD (I repurposed my old system when I upgraded). I'm looking to move it into a new case with more space for drives, and I figure that's a good time to also transition off of windows. I wasn't thinking ahead when I went with the storage space. So my question is "how hard is it going to be to transition the data off of the storage space and into unraid or truenas?" If that's even possible? I'm hoping there's a way to convert that I'm just not aware of, but if it takes just buying more drives, then hey more space.
|
# ? Sep 16, 2023 14:52 |
|
Computer viking posted:In my experience, ZFS is quite good about finding its own disks even if the path you added them by isn't available - I think every disk gets a serial number in the header, and that's enough to put a pool back together? I have had ZFS poo poo bricks due to this before and I suspect it's because I run multiple vdevs. It sees all the disks and brings the pool up but starts failing checksums. Reimporting using /dev/disk/by-id fixed it. OpenZFS themselves also recommend against /dev/sdX for all but the smallest setups, and even the default Ubuntu fstab no longer relies on /dev/sdX for your root partition. How problematic this is depends on your particular drive controller, but in my personal experience, controllers that always detect the disks in the same order are much rarer than ones that just get them in whatever order. The slightly weird SAS controller built into my DL380 G9 was one of the first I've come across in a long time that always put every drive in the exact same order.
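If a pool did get imported with /dev/sdX names, the reimport fix is quick. A sketch with a placeholder pool name 'tank':

```shell
# Export the pool (flushes everything and marks it clean)...
zpool export tank

# ...then import it again, telling ZFS to search the by-id directory.
zpool import -d /dev/disk/by-id tank

# The vdev members should now show up under their stable by-id names.
zpool status tank
```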
|
# ? Sep 16, 2023 15:41 |
|
Tangentially NAS related, the fan in my TS140's PSU has occasionally started buzzing and clicking, so I think its bearing is starting to go. I took it apart and cleaned/lubricated it as best I could, but it continues to start up after a while. Giving it a smack causes it to stop for a while, which isn't an ideal solution for a NAS full of old hard drives so I should just replace the fan. As long as the voltage/current/RPM/CFM numbers match, will it be fine to grab something off the shelf and swap the 3 pin connector with the 2-pin one the PSU uses, and leave the RPM sensor not connected to anything? As usual I'm going to try hitting up the local electronics recycling depot before ordering a new part online (a Yate Loon D80BH-12 running at 0.18A).
|
# ? Sep 16, 2023 18:17 |
|
Rap Game Goku posted:Stupid question time. To transfer data in this case you will need somewhere to stage it while your NAS is converted. Either get a large enough external drive or upload it to some sort of storage service and download it again, which would probably be the cheapest option but also the slowest.
|
# ? Sep 16, 2023 18:29 |
|
Combat Pretzel posted:Also, when you're using compression, you want variable record sizes, because here they're used extensively. By the way, do not sleep on the default ZFS compression, you might think "these are video files, they're already compressed" but for some reason the file system level compression saved me something like 5%-8% off my total usage for free. It's definitely effective, and it's essentially free (just a little CPU overhead). I recommend ZSTD.
|
# ? Sep 16, 2023 18:51 |
Talorat posted:By the way, do not sleep on the default ZFS compression, you might think "these are video files, they're already compressed" but for some reason the file system level compression saved me something like 5%-8% off my total usage for free. It's definitely effective, and it's essentially free (just a little CPU overhead). I recommend ZSTD. Still, it's nice to have enabled by default, because the default levels achieve good enough compression ratios while maintaining decent speeds, and it can genuinely speed up operations because the disks have to work on less actual data.
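For reference, switching a dataset over is a one-liner ('tank/media' is a placeholder dataset name; note that compression only applies to data written after the change, existing blocks stay as they are):

```shell
# Enable zstd on a dataset (lz4 is the common default alternative).
zfs set compression=zstd tank/media

# compressratio reports the savings actually achieved on stored data.
zfs get compression,compressratio tank/media
```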
|
|
# ? Sep 16, 2023 19:06 |
|
Coxswain Balls posted:Tangentially NAS related, the fan in my TS140's PSU has occasionally started buzzing and clicking, so I think its bearing is starting to go. I took it apart and cleaned/lubricated it as best I could, but it continues to start up after a while. Giving it a smack causes it to stop for a while, which isn't an ideal solution for a NAS full of old hard drives so I should just replace the fan. As long as the voltage/current/RPM/CFM numbers match, will it be fine to grab something off the shelf and swap the 3 pin connector with the 2-pin one the PSU uses, and leave the RPM sensor not connected to anything? As usual I'm going to try hitting up the local electronics recycling depot before ordering a new part online (a Yate Loon D80BH-12 running at 0.18A). Yeah, the RPM sensor isn't interactive or anything; it's just there for the host to read on PC hardware, so you can use a 3-pin fan as a 2-pin fan. You could even get a 4-pin one if you had one spare. Usual caveats about being really careful around the big caps inside a PSU.
|
# ? Sep 16, 2023 19:35 |
|
Rap Game Goku posted:Stupid question time. As per the above you're going to need to move it all somewhere then move it back. Keep in mind with Unraid your parity disk needs to be at least as large as your largest individual disk. So if you had 2x 8tbs and bought an external 16tb for transfer, intending to add the 16tb to your NAS afterwards, you would really need to use that 16tb as the parity drive. You can run unraid without parity and add this drive last. Ideally if you're also looking to bump your capacity up you would grab 2x new disks of a larger size, so you'd have 16tb parity, then 8+8+16 storage array as an example. It might not look that great now but then you're free to add any disks up to the size of your parity drive moving forward.
|
# ? Sep 16, 2023 23:08 |
|
Aware posted:As per the above you're going to need to move it all somewhere then move it back. Just to add some emphasis, don't add your parity drives until all your data is copied over. Massive data transfers with parity disks in place are purgatory, even with reconstructive write enabled.
|
# ? Sep 17, 2023 12:43 |
Charles Leclerc posted:Just to add some emphasis, don't add your parity drives until all your data is copied over. Massive data transfers with parity disks in place is purgatory, even with reconstructive write enabled.
|
|
# ? Sep 17, 2023 13:39 |
|
When you have parity drives it will calculate parity when data is initially written to the array like you would expect. Unraid parity disks can be added later on, and it will scrub through your array drives and generate the parity whenever as well. A popular recommendation for initial ingest into a new unraid server is to leave the parity disabled and bypass any cache so you're writing straight to array disks without any parity overhead. Then once the initial ingest is done enable parity for the array and cache for your shares for normal day to day use.
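For the curious, single parity of this kind is conceptually just a byte-wise XOR across the data drives, which is also why a lost drive can be rebuilt from the survivors plus the parity. A toy sketch (illustrative only, not Unraid's actual implementation):

```python
from functools import reduce

def xor_parity(blocks):
    """Single-parity block: byte-wise XOR of all the data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three "drives" worth of data (equal block sizes, as on a real array).
drives = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = xor_parity(drives)

# Lose drive 1: XOR the surviving drives with parity to rebuild it.
survivors = [d for i, d in enumerate(drives) if i != 1]
rebuilt = xor_parity(survivors + [parity])
assert rebuilt == drives[1]
```

This is also why a parity check can tell you *that* something is inconsistent but not *which* drive the bad byte lives on; that takes checksums, which is BSD's point below.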
|
# ? Sep 17, 2023 13:58 |
|
BlankSystemDaemon posted:RAID3 through 6 all have to have parity calculated when data is initially written, otherwise there is no parity - so are they doing some sort of block pointer rewrite nonsense to post-write parity-compute in the background? Yep, but unraid isn't RAID
|
# ? Sep 17, 2023 14:13 |
Charles Leclerc posted:Yep, but unraid isn't RAID
|
|
# ? Sep 17, 2023 14:23 |
|
BlankSystemDaemon posted:New slogan being "UnRAID: A quicker way of losing data"? You're definitely at risk during the initial data transfer, but building two-disk parity on my 60TB array only takes around 10 hours. For the sake of saving literally days on the initial ingest it feels worthwhile imo.
|
# ? Sep 17, 2023 14:27 |
|
How long it takes to build parity is really determined by the size of the individual parity drive and whether it's dual parity or not; the total size of the array isn't really a factor. My 20TB parity takes nearly 2 full days to build or check.
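Back-of-the-envelope, that tracks: building parity is one full sequential pass over the parity drive, with the data drives read in parallel, so the wall clock scales with drive size and not array width. A rough sketch (the 150 MB/s sustained rate is an assumption; real drives slow down toward the inner tracks, which is how a 20TB build stretches well past this estimate):

```python
def parity_build_hours(drive_tb, mb_per_s):
    """Rough estimate: one full sequential pass over the parity drive.
    Array width doesn't matter; the data drives are read in parallel."""
    return drive_tb * 1e12 / (mb_per_s * 1e6) / 3600

print(parity_build_hours(20, 150))  # ~37 hours at a steady 150 MB/s
```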
|
# ? Sep 17, 2023 14:33 |
|
BlankSystemDaemon posted:New slogan being "UnRAID: A quicker way of losing data"? Nope. Mine's been running for 9 years now and my only data loss was from user error.
|
# ? Sep 17, 2023 14:33 |
Charles Leclerc posted:You're definitely at risk during the initial data transfer, but building two-disk parity on my 60TB array only takes around 10 hours. For the sake of saving literally days on the initial ingest it feels worthwhile imo. Matt Zerella posted:Nope. Mines been running for 9 years now and my only data loss was from user error. It's still subject to silent data loss because there's no in-line checksumming that's verified using periodic patrol scrubs or on every read of the data.
|
|
# ? Sep 17, 2023 15:07 |
|
BlankSystemDaemon posted:
Yes everything's fine and I don't care about any of this for home use. I run a monthly scrub so I don't know where you got that information from that it doesn't have that feature.
|
# ? Sep 17, 2023 15:10 |
|
Unraid parity has overhead because it is calculating parity by reading from your array drives. Unraid's parity can only protect you from drive failure. There are plugins available to identify and track bit-rot or other file corruption, but if you care about that you probably should just use something besides Unraid. Unraid's parity check will tell you you had some issue as it will detect errors with your parity, but you won't have a way to identify where it occurred.
|
# ? Sep 17, 2023 15:20 |
|
BlankSystemDaemon posted:Why should the ingest take any additional time? XOR and Galois calculations using finite field theory are both something computers are exceptionally quick at. idk, but it does for me; especially with lots of small files it can be slow as molasses. I definitely feel like going down the TrueNAS/ZFS road is a more robust solution but I don't want to invest the time into it when UnRAID is perfectly cromulent for what I want from my home server. Everything I'd be upset about losing I have hot and cold backups of in different physical locations and the rest... well, I'm on a gigabit line and have multiple Usenet indexers so that's maybe a week at most reacquiring Linux distros.
|
# ? Sep 17, 2023 15:33 |
|
hogofwar posted:To transfer data in this case I you will need a medium while your Nas is converted. Either get a large enough external drive or upload it to some sort of storage service and download it again, which would probably be the cheapest option but also the slowest. Aware posted:Ideally if you're also looking to bump your capacity up you would grab 2x new disks of a larger size, so you'd have 16tb parity, then 8+8+16 storage array as an example. That's what I figured. I got a pretty good deal on 16tbs last black friday. Hopefully that'll pop up again.
|
# ? Sep 17, 2023 19:34 |
|
And BSD is definitely correct imo, Unraid isn't well suited to protecting your data so critical things should have other backup strategies. For most of us this is fine because all my unraid does is store easily replaceable Linux ISOs which I'm typically cycling deletion on anyway to free up space for new things. The rest of unraid is just a nice balance of making docker easier with a nice GUI and minimal janitoring.
|
# ? Sep 17, 2023 21:51 |
|
Fozzy The Bear posted:With cheap mini PCs such as the Beelink S12, the only way to turn it into a NAS is by adding USB storage, correct? I have the N100 model of this and there's a slot for a 2.5" SSD in it. So you could do both an NVMe M.2 and a 2.5" SATA drive.
|
# ? Sep 17, 2023 22:39 |
What is the threshold for "less janitorial duties"? On FreeBSD using ZFS and zfsd, all I ever have to do is pop out a disk when it's failing, and pop another one in that's bigger. No command-line utilities or anything else.
|
|
# ? Sep 17, 2023 23:00 |
|
Honestly for me I just like the docker management/app store approach. I've run my poo poo on Debian/ZFS before and you're right, it's not a pain at all for storage. But on the docker/VM side I just prefer Unraid's interface over portainer/cockpit/other random ones I've tried. I run docker/podman elsewhere and for work, so it's not a lack of familiarity; I just like how unraid has set it up for home. At home I'm as much of a Luddite as I can be. It's purely personal preference at this point, I really can't defend Unraid as any kind of reliable storage solution.
|
# ? Sep 17, 2023 23:21 |
|
Aware posted:Honestly for me I just like the docker management/app store approach. I've run my poo poo on Debian/ZFS before and you're right it's not a pain at all for storage. But on the docker/VM side I just prefer Unraid's interface over portainer/cockpit/other random ones I've tried. I run docker/podman elsewhere and for work and so it's not a lack of familiarity, I just like how unraid has set it up for home. At home I'm as much of a Luddite as I can be. Same. I have a cheap TrueNAS box that just does storage and do server stuff on my Unraid box.
|
# ? Sep 18, 2023 00:47 |
|
|
Doesn't Unraid do ZFS now?
|
# ? Sep 18, 2023 03:51 |
|
|
|
IOwnCalculus posted:I have had ZFS poo poo bricks due to this before and I suspect it's because I run multiple vdevs. It sees all the disks and brings the pool up but starts failing checksums. in the long ago days of parallel SCSI drive racks, I once built a pair of systems hooked up to 4 external drive expanders via 2 SCSI cables each, using an LSI/Adaptec controller IIRC. Some genius moved things around and swapped one set of cables, and the Adaptec had zero metadata on the drives so it just happily discovered drives by SCSI ID based on what it had in its EEPROM and started serving entirely random data, instantly destroying both of the redundant arrays. After explaining to the client that their stupidity was the cause and we had told them in writing not to move the servers without paying someone (me) to make sure it didn't get hosed up, I reflashed the controllers to JBOD and used software raid. I've never touched hardware raid since. They spent the next two months pissing and moaning and rebuilding their dataset. Turns out "we're not wasting money on backing up because it's transient data!" only works if the data is, in fact, transient. e: I have a super hard time believing ZFS can accidentally import a drive from another pool. It absolutely shits bricks if you try to use a pool on another system without going through a whole "I'm really done with this pool, please mark it clean and don't try to use it at startup anymore" routine before moving the drives. Harik fucked around with this message at 06:21 on Sep 18, 2023 |
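That "really done with this pool" routine amounts to an export on the old box and an import on the new one ('tank' is a placeholder pool name):

```shell
# On the old system: flush everything, mark the pool clean, release the drives.
zpool export tank

# On the new system: scan for importable pools, then import by name.
zpool import          # lists pools found on attached disks
zpool import tank     # add -f only if the pool was never cleanly exported
```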
# ? Sep 18, 2023 06:17 |