IOwnCalculus
Apr 2, 2003





IOwnCalculus posted:

Of course while I say that, BSD's point about raidz3 has me very tempted to do 11-drive raidz3s on the restructure I'm doing on my server right now, because after I'm all done I'll have so many "extra" drives I won't need to expand for a very long time.

gently caress it.

code:
$ zpool status tank
  pool: tank
 state: ONLINE
config:

        NAME                                                   STATE     READ WRITE CKSUM
        tank                                                   ONLINE       0     0     0
          raidz3-0                                             ONLINE       0     0     0
            scsi-SATA_HGST_HUH721010AL_                        ONLINE       0     0     0
            scsi-SATA_HGST_HUH721010AL_                        ONLINE       0     0     0
            scsi-SATA_HGST_HUH721010AL_                        ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096_                       ONLINE       0     0     0
            scsi-SHGST_H7210A520SUN010T_                       ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096_                       ONLINE       0     0     0
            scsi-SWDC_WUS721010AL4200_                         ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096_                       ONLINE       0     0     0
            scsi-SHGST_H7210A520SUN010T_                       ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096_                       ONLINE       0     0     0
            scsi-SHGST_H7210A520SUN010T_                       ONLINE       0     0     0
          raidz3-1                                             ONLINE       0     0     0
            scsi-SHGST_H7210A520SUN010T_                       ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0226_                       ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096_                       ONLINE       0     0     0
            scsi-SHGST_HUH721010AL42C0_                        ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096_                       ONLINE       0     0     0
            scsi-SHGST_HUH721010AL4200_                        ONLINE       0     0     0
            scsi-SNETAPP_X377_STATE10TA07_                     ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0226_                       ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096_                       ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096_                       ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096_                       ONLINE       0     0     0
        spares
          scsi-SSEAGATE_ST10000NM0096_                         AVAIL
And I have 12 more drives to add to this once I clear off that temporary pool.


Shrimp or Shrimps
Feb 14, 2012


So, re the temp chat a few pages ago, how much should I consider heat on an aging drive? One of the drives in my TrueNAS setup is a Seagate IronWolf 10TB (so not a Pro drive, a consumer-grade drive) with ~46k "Lifetime" according to the SMART results in TrueNAS; I'm guessing this is power-on hours. I remember reading that somewhere around 6 years, or ~52k power-on hours, is when you should start thinking about replacing a drive.

However, this drive has basically averaged 50C for pretty much 10 months of the year over its whole lifespan, with peaks of up to 54C and very occasional dips down to 46C (ambient in the room is ~30C, except at night when I run the AC at ~25C). For the 2 months of the year when we actually get cool weather here (18-25C), I imagine the mean drops but maintains that same ~20C delta.

Should I be concerned that these temps will meaningfully shorten the drive's lifespan? I run short smart tests twice a week, a long smart test once a week, and a scrub every 2 weeks, and haven't had any warnings or anything yet.

Thanks!

E: My use case is basically just using it as a write once read many file server. I don't run any intensive tasks (that I know of). Not sure if this matters or not. Running in a 4 drive Z1 array (though am planning to tear it all down and go 6 drive Z2 sometime in the next year, and planning to retire this drive then).
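
For reference, I think this is roughly the command-line equivalent of what the TrueNAS UI is showing (the device name is a placeholder; the exact attribute names vary by vendor):

code:
# /dev/sda is a placeholder - use whatever node the disk actually has
$ smartctl -A /dev/sda | grep -iE 'Power_On_Hours|Temperature'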

Shrimp or Shrimps fucked around with this message at 02:03 on Sep 16, 2023

Mr. Crow
May 22, 2008

Snap City mayor for life
TrueNAS / ZFS is neat. Copying all my crap over to my new pool and one disk took a poo poo in a pretty impressive way. Started getting notifications, and as I was trying to familiarize myself with the UI and what to do, it just went offline entirely and the system could no longer see it. Started going upstairs where it lives, and before I even got to the stairs I could hear it ticking quite loudly, like the head(?) was caught on something. Well, it was obvious which one was busted, so I just popped it out and replaced it with one of my spares I wasn't sure what to do with yet, and it started resilvering, all while it was still copying poo poo from my old server, without missing a beat or needing to power cycle anything.

A++


Also, someone wanted a review of my experience with the TrueNAS Mini R. Granted, it's pretty soon since unboxing so I'm still figuring stuff out, but:


  • packaging was superb, probably standard with server components but I've never unboxed a new server before
  • server itself is pretty cool and quiet, definitely by server standards. It's not dead silent like my desktop, but it's like running a good room fan; the 40-50 dB they claim is accurate
  • the loudest thing is the PSU; it kind of whines a little bit. I'm not sure if this is just coil whine or if the fans are maybe off tilt a little, but it kind of fades in and out. TBD if it's gonna be annoying long term for anyone in there. It's not super loud or anything, but it is just kind of noticeable
  • the hardware is nice, everything is easily accessible and the trays slide in and out really easily (in a good way).
  • you do have to screw the drives into the trays; not sure if that's standard, and it's kind of annoying the first time but not a big deal after that
  • software-wise it's all well-known and standard stuff; obviously I'm impressed after having to immediately replace a drive that poo poo itself
  • has some gimmicky but cool features in the UI, like showing the drives superimposed on the enclosure, and you can tell it to identify a drive through the UI, which causes the physical drive's light to blink in an obvious way

Really satisfied so far.

Mr. Crow fucked around with this message at 02:32 on Sep 16, 2023

Kibner
Oct 21, 2008

Acguy Supremacy
Oh, speaking of identifying drives, how do I do that when first setting up a pool so that when one drive dies, I know exactly which one to physically remove from the server?

IOwnCalculus
Apr 2, 2003





If you're doing zfs on Linux, make sure your zpool create command references the drives by entries in /dev/disk/by-id/ and not /dev/sd*. Both because there are human-readable labels in /dev/disk/by-id/ that include the drive's make, model, and serial, and because /dev/sd* assignments are not guaranteed to be consistent across reboots.
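
Roughly like this, with placeholder IDs and a placeholder vdev layout (grab the real names from ls -l /dev/disk/by-id/):

code:
# illustrative only - substitute your own disk IDs and raidz level
$ zpool create tank raidz2 \
    /dev/disk/by-id/scsi-SOME_MAKE_MODEL_SERIAL1 \
    /dev/disk/by-id/scsi-SOME_MAKE_MODEL_SERIAL2 \
    /dev/disk/by-id/scsi-SOME_MAKE_MODEL_SERIAL3 \
    /dev/disk/by-id/scsi-SOME_MAKE_MODEL_SERIAL4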

Edit: and keep a note somewhere easily accessible that says what drive is in what bay, by serial number.

IOwnCalculus fucked around with this message at 03:12 on Sep 16, 2023

Kibner
Oct 21, 2008

Acguy Supremacy
I'll have to get a label printer or something, record the serial numbers on each label, and attach them to the front of each hot-swap bay. Good to know that I can see the serial numbers in the OS!

IOwnCalculus
Apr 2, 2003





I wouldn't bother physically labeling the bays because you'll have to redo it every time you swap a drive.

On my box I just keep a Google Sheets spreadsheet with a 3Rx4C table that matches the layout of the drive bays in the server itself and a 6Rx4C table for the DS4246, and each cell has make/model/serial in it.
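
If you want something quick to paste into that spreadsheet, something like this (on Linux) dumps model and serial for every whole disk:

code:
$ lsblk -d -o NAME,MODEL,SERIAL,SIZE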

BlankSystemDaemon
Mar 13, 2009



The real trick is to use SAS enclosures with lights that can be manipulated with sesutil, and set up your pool such that the device names in zpool status reflect the physical path.
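
Something along these lines, for example (FreeBSD's sesutil; da12 is just a placeholder device):

code:
# list enclosure slots and their device nodes, then blink the locate LED on one disk
$ sesutil map
$ sesutil locate da12 on
$ sesutil locate da12 off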

EDIT: Can't recall who was talking about NVMe over Fabrics, but John Baldwin is presenting on its development for FreeBSD at EuroBSDCon 2023.

BlankSystemDaemon fucked around with this message at 11:26 on Sep 16, 2023

Wiggly Wayne DDS
Sep 11, 2010



YerDa Zabam posted:

Couple of new Def Con videos that I thought you lot might enjoy.
The hard drive stats one in particular I enjoyed. (It's Backblaze btw)

https://www.youtube.com/watch?v=pY7S5CUqPxI

alright video covering the western digital mycloud, and synology cloud services but lol at them acting surprised at the certificate transparency log existing. western digital only really cared if you have a valid token, not one for your specific device. given how low-hanging that fruit is no surprise it got locked down 2 weeks before pwn2own. really all their research was pretty low in complexity, which is telling for the state of nas security as a whole. can't help but notice their synology attack needed local network access or at least the mac/serial/model so they didn't have as much wide access as they implied
really bad recording for a decent talk going over what everyone should already know about backblaze's methodology. if you read the report nothing in this should be new to you, just a few mentions of ssds holding strong for longevity but being too expensive for them to test in the scale they want and complaining about smart being inconsistent across manufacturers

the rest of the talks at defcon are dire...

Computer viking
May 30, 2011
Now with less breakage.

IOwnCalculus posted:

If you're doing zfs on Linux, make sure your zpool create command references the drives by entries in /dev/disk/by-id/ and not /dev/sd*. Both because there are human-readable labels in /dev/disk/by-id/ that include the drive's make, model, and serial, and because /dev/sd* assignments are not guaranteed to be consistent across reboots.

Edit: and keep a note somewhere easily accessible that says what drive is in what bay, by serial number.

In my experience, ZFS is quite good about finding its own disks even if the path you added them by isn't available - I think every disk gets a serial number in the header, and that's enough to put a pool back together?

YerDa Zabam
Aug 13, 2016



Yeah, there was a massive upload yesterday from them (Def Con) and all the ones I tried were impossible to listen to. Mics clipping and distorting, speakers being way too loud (Cory Doctorow in particular), or being so quiet that even the subtitles fail at points.

Rap Game Goku
Apr 2, 2008

Word to your moms, I came to drop spirit bombs


Stupid question time.

My NAS/Plex box is currently on windows 10. It has 2 drives in a windows storage space and the OS is running off an SSD (I repurposed my old system when I upgraded). I'm looking to move it into a new case with more space for drives, and I figure that's a good time to also transition off of windows.

I wasn't thinking ahead when I went with the storage space. So my question is "how hard is it going to be to transition the data off of the storage space and into unraid or truenas?" If that's even possible? I'm hoping there's a way to convert that I'm just not aware of, but if it takes just buying more drives, then hey more space.

IOwnCalculus
Apr 2, 2003





Computer viking posted:

In my experience, ZFS is quite good about finding its own disks even if the path you added them by isn't available - I think every disk gets a serial number in the header, and that's enough to put a pool back together?

I have had ZFS poo poo bricks due to this before and I suspect it's because I run multiple vdevs. It sees all the disks and brings the pool up but starts failing checksums.

Reimporting using /dev/disk/by-id fixed it. The OpenZFS project itself also recommends against /dev/sdX for all but the smallest setups, and even the default Ubuntu fstab no longer relies on /dev/sdX for your root partition. How problematic this is depends on your particular drive controller, but in my personal experience, controllers that always detect the disks in the same order are much rarer than ones that just pick them up in whatever order. The slightly weird SAS controller built into my DL380 G9 was one of the first I've come across in a long time that always put every drive in the exact same order.
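
For reference, the fix was basically just this (pool name obviously being whatever yours is):

code:
$ zpool export tank
$ zpool import -d /dev/disk/by-id tank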

Coxswain Balls
Jun 4, 2001

Tangentially NAS related, the fan in my TS140's PSU has occasionally started buzzing and clicking, so I think its bearing is starting to go. I took it apart and cleaned/lubricated it as best I could, but the noise keeps coming back after a while. Giving it a smack makes it stop for a bit, which isn't an ideal solution for a NAS full of old hard drives, so I should just replace the fan. As long as the voltage/current/RPM/CFM numbers match, will it be fine to grab something off the shelf, swap the 3-pin connector for the 2-pin one the PSU uses, and leave the RPM sensor not connected to anything? As usual I'm going to try hitting up the local electronics recycling depot before ordering a new part online (a Yate Loon D80BH-12 running at 0.18A).

hogofwar
Jun 25, 2011

'We've strayed into a zone with a high magical index,' he said. 'Don't ask me how. Once upon a time a really powerful magic field must have been generated here, and we're feeling the after-effects.'
'Precisely,' said a passing bush.

Rap Game Goku posted:

Stupid question time.

My NAS/Plex box is currently on windows 10. It has 2 drives in a windows storage space and the OS is running off an SSD (I repurposed my old system when I upgraded). I'm looking to move it into a new case with more space for drives, and I figure that's a good time to also transition off of windows.

I wasn't thinking ahead when I went with the storage space. So my question is "how hard is it going to be to transition the data off of the storage space and into unraid or truenas?" If that's even possible? I'm hoping there's a way to convert that I'm just not aware of, but if it takes just buying more drives, then hey more space.

To transfer the data in this case, you will need an intermediate medium while your NAS is converted. Either get a large enough external drive, or upload it to some sort of storage service and download it again; that would probably be the cheapest option but also the slowest.

Talorat
Sep 18, 2007

Hahaha! Aw come on, I can't tell you everything right away! That would make for a boring story, don't you think?

Combat Pretzel posted:

Also, when you're using compression, you want variable record sizes, because here they're used extensively.

By the way, do not sleep on the default ZFS compression, you might think "these are video files, they're already compressed" but for some reason the file system level compression saved me something like 5%-8% off my total usage for free. It's definitely effective, and it's essentially free (just a little CPU overhead). I recommend ZSTD.
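
If you want to see what it's actually buying you on your own pool, something like this works (dataset name is a placeholder; zstd needs a reasonably recent OpenZFS):

code:
# only affects data written after the property is set
$ zfs set compression=zstd tank/media
$ zfs get compression,compressratio tank/media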

BlankSystemDaemon
Mar 13, 2009



Talorat posted:

By the way, do not sleep on the default ZFS compression, you might think "these are video files, they're already compressed" but for some reason the file system level compression saved me something like 5%-8% off my total usage for free. It's definitely effective, and it's essentially free (just a little CPU overhead). I recommend ZSTD.
Neither lz4 nor zstd will compress already-compressed data like video and music, for what it's worth.

Still, it's nice to have enabled by default, because the default levels achieve good enough compression ratios at decent speeds that it genuinely speeds up operations: the disks have less actual data to work on.

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Coxswain Balls posted:

Tangentially NAS related, the fan in my TS140's PSU has occasionally started buzzing and clicking, so I think its bearing is starting to go. I took it apart and cleaned/lubricated it as best I could, but the noise keeps coming back after a while. Giving it a smack makes it stop for a bit, which isn't an ideal solution for a NAS full of old hard drives, so I should just replace the fan. As long as the voltage/current/RPM/CFM numbers match, will it be fine to grab something off the shelf, swap the 3-pin connector for the 2-pin one the PSU uses, and leave the RPM sensor not connected to anything? As usual I'm going to try hitting up the local electronics recycling depot before ordering a new part online (a Yate Loon D80BH-12 running at 0.18A).

Yeah, the RPM sensor isn't interactive or anything; it's just for the host to read on PC hardware, so you can use a 3-pin fan as a 2-pin fan. You could even use a 4-pin one if you had one spare. Usual caveats about being really careful around the big caps inside a PSU.

Aware
Nov 18, 2003

Rap Game Goku posted:

Stupid question time.

My NAS/Plex box is currently on windows 10. It has 2 drives in a windows storage space and the OS is running off an SSD (I repurposed my old system when I upgraded). I'm looking to move it into a new case with more space for drives, and I figure that's a good time to also transition off of windows.

I wasn't thinking ahead when I went with the storage space. So my question is "how hard is it going to be to transition the data off of the storage space and into unraid or truenas?" If that's even possible? I'm hoping there's a way to convert that I'm just not aware of, but if it takes just buying more drives, then hey more space.

As per the above you're going to need to move it all somewhere then move it back.

Keep in mind with Unraid your parity disk needs to be as large as your largest individual disk. So if you had 2x 8tbs and bought an external 16tb for transfer, intending to add the 16tb to your NAS afterwards, you would really need to use that 16tb as the parity drive. You can run unraid without parity and add this drive last. Ideally if you're also looking to bump your capacity up you would grab 2x new disks of a larger size, so you'd have 16tb parity, then 8+8+16 storage array as an example.

It might not look that great now but then you're free to add any disks up to the size of your parity drive moving forward.

Theophany
Jul 22, 2014

SUCCHIAMI IL MIO CAZZO DA DIETRO, RANA RAGAZZO



2022 FIA Formula 1 WDC

Aware posted:

As per the above you're going to need to move it all somewhere then move it back.

Keep in mind with Unraid your parity disk needs to be as large as your largest individual disk. So if you had 2x 8tbs and bought an external 16tb for transfer, intending to add the 16tb to your NAS afterwards, you would really need to use that 16tb as the parity drive. You can run unraid without parity and add this drive last. Ideally if you're also looking to bump your capacity up you would grab 2x new disks of a larger size, so you'd have 16tb parity, then 8+8+16 storage array as an example.

It might not look that great now but then you're free to add any disks up to the size of your parity drive moving forward.

Just to add some emphasis, don't add your parity drives until all your data is copied over. Massive data transfers with parity disks in place are purgatory, even with reconstructive write enabled.

BlankSystemDaemon
Mar 13, 2009



Charles Leclerc posted:

Just to add some emphasis, don't add your parity drives until all your data is copied over. Massive data transfers with parity disks in place are purgatory, even with reconstructive write enabled.
RAID3 through 6 all have to have parity calculated when data is initially written, otherwise there is no parity - so are they doing some sort of block pointer rewrite nonsense to post-write parity-compute in the background?

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
When you have parity drives, it will calculate parity when data is initially written to the array, like you would expect. Unraid parity disks can also be added later on, and it will scrub through your array drives and generate the parity at that point.

A popular recommendation for the initial ingest into a new Unraid server is to leave parity disabled and bypass any cache, so you're writing straight to the array disks without any parity overhead. Then, once the initial ingest is done, enable parity for the array and cache for your shares for normal day-to-day use.

Theophany
Jul 22, 2014

SUCCHIAMI IL MIO CAZZO DA DIETRO, RANA RAGAZZO



2022 FIA Formula 1 WDC

BlankSystemDaemon posted:

RAID3 through 6 all have to have parity calculated when data is initially written, otherwise there is no parity - so are they doing some sort of block pointer rewrite nonsense to post-write parity-compute in the background?

Yep, but unraid isn't RAID :v:

BlankSystemDaemon
Mar 13, 2009



Charles Leclerc posted:

Yep, but unraid isn't RAID :v:
New slogan being "UnRAID: A quicker way of losing data"?

Theophany
Jul 22, 2014

SUCCHIAMI IL MIO CAZZO DA DIETRO, RANA RAGAZZO



2022 FIA Formula 1 WDC

BlankSystemDaemon posted:

New slogan being "UnRAID: A quicker way of losing data"?

You're definitely at risk during the initial data transfer, but building two-disk parity on my 60TB array only takes around 10 hours. For the sake of saving literally days on the initial ingest it feels worthwhile imo.

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
How long it takes to build parity is really determined by the size of the individual drive and whether it's dual parity or not; the total size of the array really isn't a factor. My 20TB parity takes nearly 2 full days to build or check.
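
Back-of-envelope, assuming ~150MB/s average sustained throughput: the parity disk has to be written end to end, so the build time tracks the parity disk's size rather than the array's total capacity.

code:
# ~20TB written at ~150MB/s average, converted to hours (roughly 37)
$ echo '20 * 1000 * 1000 / 150 / 3600' | bc -l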

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

BlankSystemDaemon posted:

New slogan being "UnRAID: A quicker way of losing data"?

Nope. Mine's been running for 9 years now and my only data loss was from user error.

BlankSystemDaemon
Mar 13, 2009



Charles Leclerc posted:

You're definitely at risk during the initial data transfer, but building two-disk parity on my 60TB array only takes around 10 hours. For the sake of saving literally days on the initial ingest it feels worthwhile imo.
Why should the ingest take any additional time? XOR and Galois field arithmetic (finite field theory) are both things computers are exceptionally quick at.

Matt Zerella posted:

Nope. Mine's been running for 9 years now and my only data loss was from user error.
The only data loss you've noticed, you mean?
It's still subject to silent data loss because there's no in-line checksumming that's verified using periodic patrol scrubs or on every read of the data.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

BlankSystemDaemon posted:


The only data loss you've noticed, you mean?
It's still subject to silent data loss because there's no in-line checksumming that's verified using periodic patrol scrubs or on every read of the data.

Yes, everything's fine and I don't care about any of this for home use. I run a monthly scrub, so I don't know where you got the idea that it doesn't have that feature.

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
Unraid parity has overhead because it is calculating parity by reading from your array drives.
Unraid's parity can only protect you from drive failure.
There are plugins available to identify and track bit-rot or other file corruption, but if you care about that you probably should just use something besides Unraid.
Unraid's parity check will tell you you had some issue as it will detect errors with your parity, but you won't have a way to identify where it occurred.

Theophany
Jul 22, 2014

SUCCHIAMI IL MIO CAZZO DA DIETRO, RANA RAGAZZO



2022 FIA Formula 1 WDC

BlankSystemDaemon posted:

Why should the ingest take any additional time? XOR and Galois field arithmetic (finite field theory) are both things computers are exceptionally quick at.

idk, but it does for me, especially with lots of small files; it can be slow as molasses. :shrug: I definitely feel like going down the TrueNAS/ZFS road is a more robust solution, but I don't want to invest the time into it when UnRAID is perfectly cromulent for what I want from my home server. Everything I'd be upset about losing I have hot and cold backups of in different physical locations, and the rest... well, I'm on a gigabit line and have multiple Usenet indexers, so that's maybe a week at most reacquiring Linux distros.

Rap Game Goku
Apr 2, 2008

Word to your moms, I came to drop spirit bombs


hogofwar posted:

To transfer the data in this case, you will need an intermediate medium while your NAS is converted. Either get a large enough external drive, or upload it to some sort of storage service and download it again; that would probably be the cheapest option but also the slowest.

Aware posted:

Ideally if you're also looking to bump your capacity up you would grab 2x new disks of a larger size, so you'd have 16tb parity, then 8+8+16 storage array as an example.

It might not look that great now but then you're free to add any disks up to the size of your parity drive moving forward.


That's what I figured. I got a pretty good deal on 16tbs last black friday. Hopefully that'll pop up again.

Aware
Nov 18, 2003
And BSD is definitely correct imo, Unraid isn't well suited to protecting your data, so critical things should have other backup strategies. For most of us this is fine; all my Unraid does is store easily replaceable Linux ISOs that I'm typically cycling through and deleting anyway to free up space for new things. The rest of Unraid is just a nice balance of making docker easier with a nice GUI and minimal janitoring.

bsaber
Jul 27, 2007

Fozzy The Bear posted:

With cheap mini PCs such as the Beelink S12, the only way to turn it into a NAS is by adding USB storage, correct?
https://www.amazon.com/Beelink-Desktop-Computer-Ethernet-Family-NAS/dp/B0BWDGVCV7/

I have the N100 model of this and there's a slot for a 2.5" SSD in it. So you could do both an NVMe M.2 drive and a 2.5" SATA drive.

BlankSystemDaemon
Mar 13, 2009



What is the threshold for "less janitorial duties"?

On FreeBSD using ZFS and zfsd, all I ever have to do is pop out a disk when it's failing, and pop another one in that's bigger.
No command-line utilities or anything else.
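
For anyone curious, the setup on FreeBSD is roughly this (pool name is a placeholder):

code:
# enable the fault-management daemon and let the pool auto-replace into the same slot / a spare
$ sysrc zfsd_enable=YES
$ service zfsd start
$ zpool set autoreplace=on tank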

Aware
Nov 18, 2003
Honestly for me I just like the docker management/app store approach. I've run my poo poo on Debian/ZFS before and you're right, it's not a pain at all for storage. But on the docker/VM side I just prefer Unraid's interface over portainer/cockpit/other random ones I've tried. I run docker/podman elsewhere and for work, so it's not a lack of familiarity; I just like how Unraid has set it up for home. At home I'm as much of a Luddite as I can be.

It's purely personal preference at this point; I really can't defend Unraid as any kind of reliable storage solution.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

Aware posted:

Honestly for me I just like the docker management/app store approach. I've run my poo poo on Debian/ZFS before and you're right, it's not a pain at all for storage. But on the docker/VM side I just prefer Unraid's interface over portainer/cockpit/other random ones I've tried. I run docker/podman elsewhere and for work, so it's not a lack of familiarity; I just like how Unraid has set it up for home. At home I'm as much of a Luddite as I can be.

It's purely personal preference at this point; I really can't defend Unraid as any kind of reliable storage solution.

Same. I have a cheap TrueNAS box that just does storage, and I do server stuff on my Unraid box.

IOwnCalculus
Apr 2, 2003





code:
  pool: tank
 state: ONLINE
config:

        NAME                                                   STATE     READ WRITE CKSUM
        tank                                                   ONLINE       0     0     0
          raidz3-0                                             ONLINE       0     0     0
            scsi-SATA_HGST_HUH721010AL                         ONLINE       0     0     0
            scsi-SATA_HGST_HUH721010AL                         ONLINE       0     0     0
            scsi-SATA_HGST_HUH721010AL                         ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096                        ONLINE       0     0     0
            scsi-SHGST_H7210A520SUN010T                        ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096                        ONLINE       0     0     0
            scsi-SWDC_WUS721010AL4200                          ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096                        ONLINE       0     0     0
            scsi-SHGST_H7210A520SUN010T                        ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096                        ONLINE       0     0     0
            scsi-SHGST_H7210A520SUN010T                        ONLINE       0     0     0
          raidz3-1                                             ONLINE       0     0     0
            scsi-SHGST_H7210A520SUN010T                        ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0226                        ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096                        ONLINE       0     0     0
            scsi-SHGST_HUH721010AL42C0                         ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096                        ONLINE       0     0     0
            scsi-SHGST_HUH721010AL4200                         ONLINE       0     0     0
            scsi-SNETAPP_X377_STATE10TA07                      ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0226                        ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096                        ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096                        ONLINE       0     0     0
            scsi-SSEAGATE_ST10000NM0096                        ONLINE       0     0     0
          raidz3-2                                             ONLINE       0     0     0
            scsi-SATA_HGST_HUH721010AL                         ONLINE       0     0     0
            scsi-SATA_WDC_WD100EMAZ-00                         ONLINE       0     0     0
            scsi-SATA_WDC_WD101EMAZ-11                         ONLINE       0     0     0
            scsi-SATA_WDC_WD101EMAZ-11                         ONLINE       0     0     0
            scsi-SATA_WDC_WD100EMAZ-00                         ONLINE       0     0     0
            scsi-SATA_WDC_WD101EMAZ-11                         ONLINE       0     0     0
            scsi-SATA_HGST_HUH721010AL                         ONLINE       0     0     0
            scsi-SATA_HGST_HUH721010AL                         ONLINE       0     0     0
            scsi-SATA_HGST_HUH721010AL                         ONLINE       0     0     0
            scsi-SATA_WDC_WD100EMAZ-00                         ONLINE       0     0     0
            scsi-SATA_WDC_WD101EMAZ-11                         ONLINE       0     0     0
        spares
          scsi-SSEAGATE_ST10000NM0096                          AVAIL
          scsi-SATA_HGST_HUH721010AL                           AVAIL
          scsi-SHGST_H7210A520SUN010T                          AVAIL
This might not be "until the heat death of the universe" reliability but it's certainly closer to that than a bunch of Linux ISOs deserve.

Rap Game Goku
Apr 2, 2008

Word to your moms, I came to drop spirit bombs


Doesn't Unraid do ZFS now?


Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

IOwnCalculus posted:

I have had ZFS poo poo bricks due to this before and I suspect it's because I run multiple vdevs. It sees all the disks and brings the pool up but starts failing checksums.
hey it's story time prompted by a particularly nasty flashback this post brought on!

in the long ago days of parallel SCSI drive racks, I once built a pair of systems hooked up to 4 external drive expanders via 2 SCSI cables each, using an LSI/Adaptec controller IIRC.

Some genius moved things around and swapped one set of cables, and the Adaptec had zero metadata on the drives, so it just happily discovered drives by SCSI ID based on what it had in its EEPROM, started serving entirely random data, and instantly destroyed both of the redundant arrays.

After explaining to the client that their stupidity was the cause and we had told them in writing not to move the servers without paying someone (me) to make sure it didn't get hosed up, I reflashed the controllers to JBOD and used software raid. I've never touched hardware raid since. They spent the next two months pissing and moaning and rebuilding their dataset. Turns out "we're not wasting money on backing up because it's transient data!" only works if the data is, in fact, transient.

e: I have a super hard time believing ZFS can accidentally import a drive from another pool. It absolutely shits bricks if you try to use a pool on another system without going through a whole "I'm really done with this pool, please mark it clean and don't try to use it at startup anymore" routine before moving the drives.
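
i.e. the routine is basically just this (pool name being whatever yours is):

code:
# on the old box, before pulling the drives
$ zpool export tank
# on the new box (add -f only if the pool never got cleanly exported)
$ zpool import tank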

Harik fucked around with this message at 06:21 on Sep 18, 2023

  • Reply