Klyith
Aug 3, 2007

GBS Pledge Week

Shrimp or Shrimps posted:

Assuming I'm stuck with the drives, my options are to either 1) continue mirroring for 26tb, 2) do raidZ1, where the 16tb drives are treated as 10tb, for 30tb; or 3) do raidZ2, where the 16tb drives are treated as 10tb, for 20tb.

Z1 gives me more space, but only 1 drive can fail at a time.
Z2 gives me less space, but any 2 drives can fail at once.
Mirroring gives me in-between Z1 and Z2 space, but if 2 drives fail, it has to be from separate pairs in order to recover my data.

It's also possible to partition the drives and tell ZFS to use a partition rather than the whole drive. The remainder of those 16TB drives doesn't have to be wasted. You could theoretically set that as a second pool. So if you decided to z2 the array and only have 20TB of space & max redundancy, you could jbod the remainder into 12TB for the :filez: collection or whatever.

Whether TrueNas can easily set that up without hackery is another question.
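
Roughly what that looks like from the shell, as a sketch rather than gospel (device names are placeholders, and you'd want to double-check sizes and alignment against your actual disks):
code:
# carve each 16TB disk into a ~10TB partition plus the remainder
gpart create -s gpt ada0
gpart add -t freebsd-zfs -a 4k -s 19532873728 -l z2-ada0 ada0   # 10000831348736 bytes in 512-byte sectors, to match the 10TB disks
gpart add -t freebsd-zfs -a 4k -l extra-ada0 ada0               # no -s: takes whatever is left (~6TB)
# repeat for the second 16TB disk (ada1), then:

# main pool: the two whole 10TB disks plus the two 10TB partitions
zpool create tank raidz2 ada2 ada3 gpt/z2-ada0 gpt/z2-ada1

# second pool striped across the two leftovers, no redundancy, ~12TB of :filez: space
zpool create scratch gpt/extra-ada0 gpt/extra-ada1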


...

IMO for a home NAS solution the idea of a double-redundant array seems very overkill. That seems designed around the idea of 100% uptime and systems that keep working until a tech comes by the next day to replace drives. If I were confident that ZFS & the OS were going to detect a failure and shut down to preserve the remainder, single redundancy feels like a totally reasonable level of safety.

If I had 4 big drives, rather than go for 2x redundancy I'd use 3 for the array and one for offline / offsite backup. I'd rate the chances of a house fire or other disaster higher than 2 drives dying at the same moment.


Shrimp or Shrimps
Feb 14, 2012


Thanks. I have wondered if I'm overthinking this whole thing in terms of redundancy. Taking stock of the kind of files I'm going to be storing, only a (space-wise) minority are highly can't-lose-this-poo poo files, and most of those are just photos of loved ones and the cat and I have them on cloud and other areas, too. Documents and poo poo I keep on local PCs, and USB drives.

I hadn't previously considered it, but having 2 JBOD and 2 mirrored could be something of a solution to this and maximize space while having still some level of redundancy. The files that can be lost can just go on the JBOD.

Clearly should have thought this whole thing through more before diving headfirst into home nas redundancy big tb space gently caress yeaaaaaaaaaaaaaaa

edit:

Klyith posted:

It's also possible to partition the drives and tell ZFS to use a partition rather than the whole drive. The remainder of those 16TB drives doesn't have to be wasted. You could theoretically set that as a second pool. So if you decided to z2 the array and only have 20TB of space & max redundancy, you could jbod the remainder into 12TB for the :filez: collection or whatever.

Whether TrueNas can easily set that up without hackery is another question.


Thanks for this suggestion. So I found this helpful guide which should be able to do the above: https://tentacles666.wordpress.com/2014/06/02/freenas-creating-zfs-zpools-using-partitions-on-mixed-sized-disks/

In this case, I could do 4x10tb (2 being partitions of the 16tb drives) either in Z1 or Z2 (Z2 for redundancy, Z1 for expandability down the line when adding a single drive becomes feasible), and I have 2 x 6tb for a JBOD. That actually sounds really good as the 12tb can be used to keep game files that I'm not currently playing. I have a 200gb monthly data cap on my home internet before I'm throttled, and with game sizes these days, it'd be nice not to have to use it redownloading a game I want to play again, but it's also not a thing I care that much about having redundancy for.

Should I go down the Z1 route, would I be able to add a single 16tb later and do the same thing, throw a 10tb partition into expanding the Z1 and then the remaining 6tb gets added to the JBOD? Or really, any size drive.

An interesting approach for sure!

Shrimp or Shrimps fucked around with this message at 08:55 on Feb 16, 2022

Kivi
Aug 1, 2006
I care

IOwnCalculus posted:

Biggest reason to me to run ZFS over mdraid: unless they've somehow fixed this since I last used it, mdraid will poo poo your entire array on a single URE if you're down to N drives out of N+x redundancy.

ZFS will note the URE, tell you the specific files it couldn't recover, and bring up as much of the array as it can.
That sounds genuinely useful and a good improvement; I guess it's grounds for a switch.

Now, with 12-18 TB drives and target useable size of 50-60 TB what sort of (optimal) configuration and disk counts am I looking at? I can have up to 10 SATA disks.

BlankSystemDaemon
Mar 13, 2009



Kivi posted:

That sounds genuinely useful and a good improvement; I guess it's grounds for a switch.

Now, with 12-18 TB drives and target useable size of 50-60 TB what sort of (optimal) configuration and disk counts am I looking at? I can have up to 10 SATA disks.
With 9x 12TB drives in raidz3 and one hot-spare you're looking at ~65TB of formatted disk space and on the order of 33,207 years of mean time between data loss, for a very pessimistic MTBF of 35000 hours per disk and a URE rate of 1 in 10^14 bytes.
With the same drive configuration, i.e. 9 active drives and a hot-spare, using 18TB drives, you're looking at 101TB of formatted disk space and on the order of 20000 years of mean time between data loss, assuming the same pessimistic failure rates.
These numbers are from this statistical modeling tool, if you want to play around with it.
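
The capacity figure, at least, is easy to sanity-check by hand; back-of-the-envelope, ignoring ZFS metadata and slop overhead:
code:
9-wide raidz3         -> 3 parity + 6 data disks (the hot-spare just sits idle)
6 data disks x 12 TB  =  72 TB raw
72 x 10^12 B / 2^40 B ~= 65.5 TiB, i.e. the "~65TB formatted" above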

It's also worth mentioning that if you set the autoexpand property and keep replacing drives until all of them are 18TB, your array will automatically grow to 101TB, and that OpenZFS 3.0 will add raidz expansion so that if you move the hardware to a different chassis that can fit more drives, you can expand that way as well (although obviously this will decrease the mean time between data loss).
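
For reference, the drive-swap dance itself is only a couple of commands. A sketch with made-up pool and device names; the last step is only needed if autoexpand wasn't already on when the final disk got replaced:
code:
zpool set autoexpand=on tank
zpool replace tank da3 da9    # swap a 12TB member for an 18TB one, wait for the resilver
zpool list -v tank            # once every member is 18TB the extra capacity shows up
zpool online -e tank da9      # force expansion of a member if autoexpand was off at the time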

BlankSystemDaemon fucked around with this message at 11:55 on Feb 16, 2022

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer
I just got a nice deal on this Lian-Li case which will be perfect for my new NAS build: https://pcpartpicker.com/product/9vPfrH/lian-li-case-pcq28b.

It is mini-ITX so hoping the thread can suggest some low cost/low power CPU/Mobo combos for the build. I am planning to include 4-6 HDDs. Will be running TrueNAS, likely Core.

CopperHound
Feb 14, 2012

CopperHound posted:

For now, if there isn't anything that looks like a definite hardware failure, I'll swap bays and see if it follows the drive before buying another sas to 4x sata cable.
It has followed my drive after a few days, so probably not cable related. This is my only seagate drive, so I'm wondering if there is something related to its synchronize cache(10) behavior beyond physical failure. I'm running a long smart check to see if any of the numbers have changed much, but in the meantime here is my log again.
code:
Feb 15 10:25:51 nas 	(da0:mps0:0:0:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00 length 0 SMID 1587 Command timeout on target 0(0x000c) 60000 set, 60.4960533 elapsed
Feb 15 10:25:51 nas mps0: Sending abort to target 0 for SMID 1587
Feb 15 10:25:51 nas 	(da0:mps0:0:0:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00 length 0 SMID 1587 Aborting command 0xfffffe00e9507488
Feb 15 10:25:51 nas 	(da0:mps0:0:0:0): READ(16). CDB: 88 00 00 00 00 02 08 60 ff 48 00 00 00 30 00 00 length 24576 SMID 1393 Command timeout on target 0(0x000c) 60000 set, 60.5795321 elapsed
Feb 15 10:25:51 nas mps0: Controller reported scsi ioc terminated tgt 0 SMID 1568 loginfo 31130000
Feb 15 10:25:51 nas mps0: Controller reported scsi ioc terminated tgt 0 SMID 233 loginfo 31130000
Feb 15 10:25:51 nas mps0: Controller reported scsi ioc terminated tgt 0 SMID 1393 loginfo 31130000
Feb 15 10:25:51 nas (da0:mps0:0:0:0): WRITE(16). CDB: 8a 00 00 00 00 02 68 35 57 18 00 00 00 08 00 00 
Feb 15 10:25:51 nas mps0: (da0:mps0:0:0:0): CAM status: CCB request completed with an error
Feb 15 10:25:51 nas Finished abort recovery for target 0
Feb 15 10:25:51 nas (da0:mps0:0:0:0): Error 5, Retries exhausted
Feb 15 10:25:51 nas mps0: Unfreezing devq for target ID 0
Feb 15 10:25:51 nas (da0:mps0:0:0:0): READ(16). CDB: 88 00 00 00 00 02 08 60 ff 48 00 00 00 30 00 00 
Feb 15 10:25:51 nas (da0:mps0:0:0:0): CAM status: CCB request completed with an error
Feb 15 10:25:51 nas (da0:mps0:0:0:0): Error 5, Retries exhausted
Feb 15 10:25:51 nas (da0:mps0:0:0:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00 
Feb 15 10:25:51 nas (da0:mps0:0:0:0): CAM status: Command timeout
Feb 15 10:25:51 nas (da0:mps0:0:0:0): Retrying command, 0 more tries remain
Feb 15 10:25:51 nas (da0:mps0:0:0:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00 
Feb 15 10:25:51 nas (da0:mps0:0:0:0): CAM status: SCSI Status Error
Feb 15 10:25:51 nas (da0:mps0:0:0:0): SCSI status: Check Condition
Feb 15 10:25:51 nas (da0:mps0:0:0:0): SCSI sense: UNIT ATTENTION asc:29,0 (Power on, reset, or bus device reset occurred)
Feb 15 10:25:51 nas (da0:mps0:0:0:0): Error 6, Retries exhausted
Feb 15 10:25:51 nas (da0:mps0:0:0:0): Invalidating pack
Some searching implies there might be a compatibility issue with LSI controllers. I can try connecting the drive directly to my motherboard if there is nothing new with the smart data.
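
For the record, the SMART commands in question (smartmontools, with da0 being the Seagate in my box):
code:
smartctl -t long /dev/da0     # kick off the long self-test; takes many hours on a big drive
smartctl -a /dev/da0          # full attribute dump once it's finished
smartctl -l error /dev/da0    # the drive's own error log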

CopperHound fucked around with this message at 18:58 on Feb 16, 2022

Klyith
Aug 3, 2007

GBS Pledge Week

Smashing Link posted:

It is mini-ITX so hoping the thread can suggest some low cost/low power CPU/Mobo combos for the build. I am planning to include 4-6 HDDs. Will be running TrueNAS, likely Core.

6 SATA ports on ITX boards is pretty rare. A few exist, but not a lot. (Forgive the newegg link, just using them to illustrate the small number of choices. Also nearly all of those are on newegg 3rd party, which I would avoid buying anything from.)

So if you want more than 4 drives, you should probably plan on getting a PCIe controller card as well.

As for CPU/mobo combo, if you're buying new the Athlon 200GE still seems like a pretty good deal. Low power, plenty good enough for a NAS machine. The issue I see with intel right now is the cheapest chips are -F parts with no GPU.

BlankSystemDaemon
Mar 13, 2009



CopperHound posted:

Some searching implies there might be a compatibility issue with LSI controllers. I can try connecting the drive directly to my motherboard if there is nothing new with the smart data.
Where are you getting this from?

UNIT ATTENTION Additional Sense Code 29,0 means that power was lost or reset; is that concurrent with you disconnecting the drive?

Also, it'd help a lot if you don't just include the first 20 or so lines - please do:
dmesg | tail -n200 | nc termbin.com 9999
Then paste the link here; it uses the termbin service which doesn't provide any way to index individual pastes, and they're deleted after a month.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

Klyith posted:

6 SATA ports on ITX boards is pretty rare. A few exist, but not a lot. (Forgive the newegg link, just using them to illustrate the small number of choices. Also nearly all of those are on newegg 3rd party, which I would avoid buying anything from.)

So if you want more than 4 drives, you should probably plan on getting a PCIe controller card as well.

As for CPU/mobo combo, if you're buying new the Athlon 200GE still seems like a pretty good deal. Low power, plenty good enough for a NAS machine. The issue I see with intel right now is the cheapest chips are -F parts with no GPU.

Thanks for that advice. Definitely planning on an LSI card and hoping to not have to use a slot for a GPU, so the 200GE seems perfect. This is not going to be running anything except possibly Plex so I don't think I will need a lot of cores/threads.

SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

Smashing Link posted:

Thanks for that advice. Definitely planning on an LSI card and hoping to not have to use a slot for a GPU, so the 200GE seems perfect. This is not going to be running anything except possibly Plex so I don't think I will need a lot of cores/threads.

You may want to run something intel with quicksync for a plex serving box- unless you're not planning to do much transcoding from it.

Not sure if AMD's APUs are good for transcodes, but I'm sure someone here has tried it.

Less Fat Luke
May 23, 2003

Exciting Lemon

CopperHound posted:

Some searching implies there might be a compatibility issue with LSI controllers. I can try connecting the drive directly to my motherboard if there is nothing new with the smart data.
I've seen similar intermittent warnings from LSI cards (with internal and external ports) when they're overheating so make sure the airflow over them is sufficient; usually they're designed to be in a server chassis with much more airflow than a regular PC case that's idling can provide.

Klyith
Aug 3, 2007

GBS Pledge Week

SolusLunes posted:

Not sure if AMD's APUs are good for transcodes, but I'm sure someone here has tried it.

AMD APUs on the 200GE and Ryzen have Video Core Next which is AMD's equivalent video de/encoder. It's not as capable as QuickSync since it only encodes H264 & H265, but that's like 99% of what you'd want to use.

But looks like Plex doesn't support it, including on AMD GPUs. Jellyfin does though. :lmao: at companies that commercialize an open source project to make money but still fall behind
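
If anyone wants to confirm the VCN encoder actually works on their hardware before fighting with Plex about it, a quick ffmpeg VAAPI encode along these lines should do it. (Linux, render node path may differ per system; this decodes in software and only does the encode on the GPU.)
code:
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv \
       -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4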

SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

Klyith posted:

AMD APUs on the 200GE and Ryzen have Video Core Next which is AMD's equivalent video de/encoder. It's not as capable as QuickSync since it only encodes H264 & H265, but that's like 99% of what you'd want to use.

But looks like Plex doesn't support it, including on AMD GPUs. Jellyfin does though. :lmao: at companies that commercialize an open source project to make money but still fall behind

lol, just lol

I'm firmly in the plex ecosystem for myself for now but you best believe I have a backup plan for when plex implodes or just gets left so embarrassingly far in the dust

CopperHound
Feb 14, 2012

BlankSystemDaemon posted:

Where are you getting this from?
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224496

That is not concurrent with me disconnecting the drive, but you did make me double check that it wasn't a drive that I did the 3v power tape trick on.

BlankSystemDaemon posted:

Also, it'd help a lot if you don't just include the first 20 or so lines - please do:
dmesg | tail -n200 | nc termbin.com 9999
Then paste the link here; it uses the termbin service which doesn't provide any way to index individual pastes, and they're deleted after a month.
I'm not exactly sure what else I left out that would be useful, but here you go: https://termbin.com/zpua

Less Fat Luke posted:

I've seen similar intermittent warnings from LSI cards (with internal and external ports) when they're overheating so make sure the airflow over them is sufficient; usually they're designed to be in a server chassis with much more airflow than a regular PC case that's idling can provide.
I don't have fantastic airflow over my lsi card, so maybe?

CopperHound fucked around with this message at 23:43 on Feb 16, 2022

BlankSystemDaemon
Mar 13, 2009



CopperHound posted:

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=224496

That is not concurrent with me disconnecting the drive, but you did make me double check that it wasn't a drive that I did the 3v power tape trick on.

I'm not exactly sure what else I left out that would be useful, but here you go: https://termbin.com/zpua

I don't have fantastic airflow over my lsi card, so maybe?
I think mav@ in comment #51 might be onto something with it being an NCQ issue.

In my old server, I had some HD204UI disks from what was then Samsung (before their hard drive division got bought by Seagate) where the initial firmware released on the disks would write corrupt data to the disk if NCQ was being used while an ATA IDENTIFY was issued, and the only way to fix it was by flashing the firmware (to something using the exact same version string, frustratingly).

If you use the on-board controller for the disk, you can try and see if there's an option for turning NCQ off, and whether that affects anything, or if you're running TrueNAS 12.0-U7, try mpsutil show all followed by mpsutil -u UNIT set disabled where you replace UNIT with the unit (although you can omit it if you've only got one controller, as it defaults to unit 0).
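
Another angle, and this is my suggestion rather than anything from the bug report: camcontrol can drop the tag count to 1 for a single device, which effectively takes NCQ out of the picture without touching any firmware settings. Device name per your system (da0 behind the LSI, or adaX once it's on the on-board ports):
code:
camcontrol identify da0 | grep -i queue   # confirm the drive advertises NCQ and its queue depth
camcontrol tags da0 -v                    # show the current and allowed tag counts
camcontrol tags da0 -N 1                  # allow only one outstanding command, i.e. no queueing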

EDIT: And comment #52 appears to have done proper cross-testing, indicating that it's a specific combination of enclosure(s)+controller(s).
Hardware is just great.

BlankSystemDaemon fucked around with this message at 00:35 on Feb 17, 2022

BlackMK4
Aug 23, 2006

wat.
Megamarm

Klyith posted:

AMD APUs on the 200GE and Ryzen have Video Core Next which is AMD's equivalent video de/encoder. It's not as capable as QuickSync since it only encodes H264 & H265, but that's like 99% of what you'd want to use.

But looks like Plex doesn't support it, including on AMD GPUs. Jellyfin does though. :lmao: at companies that commercialize an open source project to make money but still fall behind

I remember coming across a post a while ago about this. AMD APU transcoding isn't officially supported but it works out of the box on Windows hosts and it'll work on linux hosts if you copy some libraries from lib to the plex folder manually. Very much a YMMV thing.

e: https://forums.plex.tv/t/got-hw-transcoding-to-work-with-libva-vaapi-on-ryzen-apu-ryzen-7-4700u/676546
and https://forums.plex.tv/t/feature-request-add-support-for-amds-video-core-next-encoding/226861/141

BlackMK4 fucked around with this message at 01:42 on Feb 17, 2022

Shrimp or Shrimps
Feb 14, 2012


I'm attempting to go down the partition route to get a 4x10tb raidz1 + 2x6tb jbod across 2x10tb drives and 2x16tb drives, but have run into an issue.

If I use diskinfo -v on the 10tb drives, I get the same media size byte count on both of them: 10000831348736. So I then partition the 2 16tb drives (gpart create -s gpt --> gpart add -t freebsd-zfs -s) using this number, and create a zpool using Z1. Only problem is that on the dashboard, Truenas doesn't recognize the 10tb drives. They are shown as unknown drives.

So the only way to get Truenas to recognize the 10tb drives after the zpool has been created, exported, and then imported, has been to also partition the 10tb drives (so they become ada2p1 and ada3p1).

Does anybody know why? Is there any problem to going down this route?

As an additional issue, I'm having trouble creating the partition to the max size of the drive. If I try to partition the 10tb drive with 10000831348736, I get an autofill: no space left on the drive error.

Should I not be using the total media size as listed by diskinfo -v to try and create a single partition on the 10tb drive?

Thanks!
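
(Best guess after more reading, in case anyone can confirm: GPT reserves a handful of sectors at both the start and end of the disk for its own tables, so a partition the size of the whole raw device can never fit. Letting gpart pick the size itself seems like the safer move, something like:)
code:
gpart create -s gpt ada2
gpart add -t freebsd-zfs -a 4k ada2   # no -s, so gpart grabs all the remaining free space
gpart show ada2                       # sanity-check what actually got created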


Edit: Had a big think about this, clearly in over my head and with no idea what I'm doing futzing about in the shell, and that's going to end in misery I'm sure. I'm going for a Z1 across 4 drives, with the aim of expanding the 10tbs to 16s when the need arises, and finding something to do with the 10tbs.

That's it, what a saga. Thanks for all the input and help, everybody, much appreciated!

Shrimp or Shrimps fucked around with this message at 14:43 on Feb 17, 2022

Cantide
Jun 13, 2001
Pillbug

Shrimp or Shrimps posted:

I'm attempting to go down the partition route to get a 4x10tb raidz1 + 2x6tb jbod across 2x10tb drives and 2x16tb drives, but have run into an issue.

If I use diskinfo -v on the 10tb drives, I get the same media size byte count on both of them: 10000831348736. So I then partition the 2 16tb drives (gpart create -s gpt --> gpart add -t freebsd-zfs -s) using this number, and create a zpool using Z1. Only problem is that on the dashboard, Truenas doesn't recognize the 10tb drives. They are shown as unknown drives.

So the only way to get Truenas to recognize the 10tb drives after the zpool has been created, exported, and then imported, has been to also partition the 10tb drives (so they become ada2p1 and ada3p1).

Does anybody know why? Is there any problem to going down this route?

As an additional issue, I'm having trouble creating the partition to the max size of the drive. If I try to partition the 10tb drive with 10000831348736, I get an autofill: no space left on the drive error.

Should I not be using the total media size as listed by diskinfo -v to try and create a single partition on the 10tb drive?

Thanks!


Edit: Had a big think about this, clearly in over my head and with no idea what I'm doing futzing about in the shell, and that's going to end in misery I'm sure. I'm going for a Z1 across 4 drives, with the aim of expanding the 10tbs to 16s when the need arises, and finding something to do with the 10tbs.

That's it, what a saga. Thanks for all the input and help, everybody, much appreciated!

If it's just a data grave with mostly large files where you don't expect much change, you can do what I did: go the snapraid route for parity and just create a pool of your data with mergerfs. Not much thinking involved. I currently have a 100TB pool with 2 parity disks, the pool is scrubbed automatically, and most importantly I can expand it with whatever size disks I want without much hassle.
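
For anyone curious, in practice it's basically one config file plus a mount. This is a trimmed-down sketch from memory with made-up paths, so check the snapraid manual and the mergerfs docs for the exact options:
code:
# /etc/snapraid.conf
parity   /srv/parity1/snapraid.parity
2-parity /srv/parity2/snapraid.2-parity
content  /var/snapraid/snapraid.content
content  /srv/disk1/snapraid.content
data d1  /srv/disk1/
data d2  /srv/disk2/
exclude *.tmp

# /etc/fstab line pooling the data disks with mergerfs
/srv/disk*  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0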

BlankSystemDaemon
Mar 13, 2009



Cantide posted:

If it's just a data grave with mostly large files where you don't expect much change, you can do what I did: go the snapraid route for parity and just create a pool of your data with mergerfs. Not much thinking involved. I currently have a 100TB pool with 2 parity disks, the pool is scrubbed automatically, and most importantly I can expand it with whatever size disks I want without much hassle.
This reads like "delete your data before your filesystem can", as nothing about snapraid or mergerfs is built for fault tolerance or maintainability.

BlankSystemDaemon fucked around with this message at 11:48 on Feb 18, 2022

Cantide
Jun 13, 2001
Pillbug

BlankSystemDaemon posted:

This reads like "delete your data before your filesystem can", as nothing about snapraid or mergerfs is built for fault tolerance or maintainability.

And this reads like "I'm clinically paranoid"

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Eh, if you go out of your way to build a dedicated storage device, you'd think you'd want it pretty fault tolerant. And those weird-rear end hacks (at least I consider them that) only get half way there.

BlankSystemDaemon
Mar 13, 2009



Cantide posted:

And this reads like "I'm clinically paranoid"
Nope.

Klyith
Aug 3, 2007

GBS Pledge Week

What does this prove?

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Holy poo poo what are they doing :shepicide:

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

What does this prove?

See:

Keito posted:

Holy poo poo what are they doing :shepicide:
For their sake, I hope that's machine-generated because it's utterly unmaintainable and unreadable.

And if it is machine generated to obfuscate it, what are they trying to hide?

BlankSystemDaemon fucked around with this message at 14:30 on Feb 19, 2022

Klyith
Aug 3, 2007

GBS Pledge Week

BlankSystemDaemon posted:

See:

For their sake, I hope that's machine-generated because it's utterly unmaintainable and unreadable.

And if it is machine generated to obfuscate it, what are they trying to hide?

A lookup table? Oh my god!

Look, zfs has a 1.4mb one of those:
https://github.com/openzfs/zfs/blob/master/include/sys/u8_textprep_data.h

Why do you look at that and think it's obfuscated code?

Chumbawumba4ever97
Dec 31, 2000

by Fluffdaddy
A few kind goons here recommended Stablebit Drivepool which I love. Someone even let me know that you can mount drives with no drive letters to a folder on your C: which was a huge help in knowing where my stuff is.

The only weird thing is I noticed in Device Manager (Win10) two of the "folder drives" have exclamation points, which is kind of worrying to me. Here is what I mean:




The only thing I noticed is the ones with the exclamation points have different icons as you can see on the right. Does that have anything to do with it? The two with the exclamation points have hard drive icons and the ones that do not have exclamation points have folder icons? That's the only "clue" I have.

Is this going to cause data loss somehow? Why are only these two drives showing up like this?

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

A lookup table? Oh my god!

Look, zfs has a 1.4mb one of those:
https://github.com/openzfs/zfs/blob/master/include/sys/u8_textprep_data.h

Why do you look at that and think it's obfuscated code?
That's a header file, and it's used exactly once, relating to UTF-8, whereas the lookup tables I linked earlier are part of the code to do Galois Fields used for fault tolerance.
I'd say there's a slight distinction between the two.

EDIT: I'm reading the paper that introduces Cauchy matrices, and while independent corroboration seems a bit sparse, not everything is fully implemented, and I can find no evidence that it's integrated the best feedback it's had, the last does at least tell me that it's an interesting paper for data storage, should ZFS ever need to go above three parity disks.

BlankSystemDaemon fucked around with this message at 20:11 on Feb 19, 2022

Klyith
Aug 3, 2007

GBS Pledge Week

BlankSystemDaemon posted:

That's a header file,

So what? There's no real difference, declaring a bunch of consts does the exact same thing whether you put .h or a .c on the filename. Completely arbitrary.

BlankSystemDaemon posted:

I'd say there's a slight distinction between the two.

Yeah, the distinction is that a lookup table that solves a main function of your code is super loving useful. That's the whole point of a lookup table: to reduce an operation to O(1) time. A table for UTF characters, meanwhile, is a convenience, or maybe useful so you don't get crashed by Zalgo or whatever.

You haven't in the least shown that a lookup table is a bad thing, or unmaintainable, or "obfuscated". And looking 1 step further would have shown you that yes, the lookup table is machine generated by mktables.c.


BlankSystemDaemon posted:

EDIT: I'm reading the paper that introduces Cauchy matrices, and while independent corroboration seems a bit sparse, not everything is fully implemented, and I can find no evidence that it's integrated the best feedback it's had, the last does at least tell me that it's an interesting paper for data storage, should ZFS ever need to go above three parity disks.

So.... the guy who wrote snapraid is getting kernel patches into significant storage projects. Dude just admit that you were being dumb and went way far out on a limb with this:

BlankSystemDaemon posted:

This reads like "delete your data before your filesystem can", as nothing about snapraid or mergerfs is built for fault tolerance or maintainability.

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

So what? There's no real difference, declaring a bunch of consts does the exact same thing whether you put .h or a .c on the filename. Completely arbitrary.

Yeah, the distinction is that a lookup table that solves a main function of your code is super loving useful. That's the whole point of a lookup table: to reduce an operation to O(1) time. A table for UTF characters, meanwhile, is a convenience, or maybe useful so you don't get crashed by Zalgo or whatever.

You haven't in the least shown that a lookup table is a bad thing, or unmaintainable, or "obfuscated". And looking 1 step further would have shown you that yes, the lookup table is machine generated by mktables.c.

So.... the guy who wrote snapraid is getting kernel patches into significant storage projects. Dude just admit that you were being dumb and went way far out on a limb with this:
The difference is where it's used; one is for UTF-8, the other is in the fault tolerance code.

How is it a contentious point to say that tables aren't maintainable? Can you tell me exactly what that code does, without looking at the comments, and can you spot an error in it just by analyzing the code by hand?
Having lookup tables doesn't automatically reduce everything to O(1) - Galois Field matrices are finite field theory; it's not exactly something that's O(1) no matter what you do.

Did you read the part about where it's a thread about how it was never added to the Linux kernel?

Klyith
Aug 3, 2007

GBS Pledge Week

BlankSystemDaemon posted:

How is it a contentious point to say that tables aren't maintainable? Can you tell me exactly what that code does, without looking at the comments, and can you spot an error in it just by analyzing the code by hand?
The comments are in the mktables that generates it. And a bunch of consts don't do anything by themselves. Like, you could run that same code to generate the exact same tables in memory at runtime... but what would be the benefit? Assuming that the contents of the Cauchy matrix is something that will never change, there's no reason not to have it static.

BlankSystemDaemon posted:

Having lookup tables doesn't automatically reduce everything to O(1) - Galois Field matrices are finite field theory; it's not exactly something that's O(1) no matter what you do.
But it reduces whatever operation you would have done in place of the lookup. If you're saying that the math being replaced here is the trivial part and a useless optimization, now that's a specific criticism I can get behind. Otherwise I'm not really seeing this as a giant travesty.

BlankSystemDaemon posted:

Did you read the part about where it's a thread about how it was never added to the Linux kernel?
poo poo I misread the 2nd reply to the independent test link as "can send upstream" not "cannot send upstream". Sorry.




On a more productive note:

Chumbawumba4ever97 posted:

The only weird thing is I noticed in Device Manager (Win10) two of the "folder drives" have exclamation points, which is kind of worrying to me. Here is what I mean:



Is this going to cause data loss somehow? Why are only these two drives showing up like this?
Right click in dev manager and look at properties for the drives with exclamation points; what does it say in the status box?

Also this is the first result on google and seems relevant.

Yaoi Gagarin
Feb 20, 2014

It's pretty normal to code a lookup table or a blob like that, I don't think it's a smoking gun or anything. You just generate the .c file with an external script. Also I'd rather have that in a .c file than a .h file, so you aren't relying on the linker to clean up multiple definitions.

Cantide
Jun 13, 2001
Pillbug
Good thing I decided not to look at this thread for a while.
Anyway, the long and short of it for now is that I just decided to trust first-hand user reports and the developers of snapraid, and it hasn't disappointed (yet). I get daily mail via openmediavault (YES, OMG NO BSD) for the diffs, syncs and scrubs, and sometimes have to do a manual sync when my arbitrarily set file deletion limits prevent an automatic sync.
Sure, this opens me up to some kind of potential catastrophe, but if it fails completely I will (at first) probably "only" lose the data from one disk, as the underlying pool is just a bunch of ext4 in a big pool.

However, I have been using snapraid for undeletes of a large number of files multiple times and it has been working as advertised. I have not gotten around to looking up a test case for bitrot to check if scrubbing actually does something, but here we get back to me being a trusting fellow and liking to live dangerously. Anyway, my really important data is backed up via backblaze b2 (duplicati - yes, the configuration of duplicati inside my password-protected keepass database gets synced to multiple machines via syncthing). And no, there is no further on-site backup, just parity.
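
Day to day it boils down to a handful of commands, roughly like this (the deletion threshold is the OMV plugin script's doing, not snapraid itself):
code:
snapraid diff                 # list what changed since the last sync
snapraid sync                 # bring the parity up to date with the current data
snapraid scrub -p 8 -o 10     # verify 8% of the array, only blocks not checked in the last 10 days
snapraid fix -f path/to/file  # restore a deleted or damaged file from parity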

All in all I'm happy: the system is flexible, can be expanded or god forbid contracted at will, and if I fall on my face I guess I will do stuff differently next time...

Leng
May 13, 2006

One song / Glory
One song before I go / Glory
One song to leave behind


No other road
No other way
No day but today
I did a dumb dumb thing with my FreeNAS set up (an old 10.x.x.x router died; got a new 192.x.x.x router which has its DHCP server enabled, so I went and reserved the IP addresses for the FreeNAS and Plex jail MAC addresses, then pressed a stupid setting to switch to DHCP from my previously static IPs).

This promptly broke a lot of things. I did not back up my config because I did not expect things to break quite so spectacularly and now my FreeNAS box can't access the internet, which means my Plex server is...dead. Thankfully, my SMB share is still working so I can still get to the files if I'm at home, but this is less than ideal. The obvious solution was to put everything back the way it was and I'm 85% certain I have things properly configured now but it is still not working. Whatever dumb thing I've messed up isn't easily googleable.

Hopefully someone knowledgeable on the TrueNAS forums will be able to help :( but I'm not that optimistic.

TL;DR as a complete noob at all this I should have just taken my money and paid for DropBox premium or something, instead of wanting to have control over my own files.

Vent over, carry on. When I'm feeling less frustrated, I'll probably take another crack at fixing this. Maybe I'll try exporting my pools so I can nuke the whole install and start clean with TrueNAS Core.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit
Sounds like something in the IP space is messed up. Can you switch it all back to how it was and put your new router on a 10.x.x.x subnet again?
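
In the meantime, a few read-only checks from the FreeNAS shell would show where it's still pointing at the old network (none of these change any config):
code:
ifconfig                  # does the NIC actually have a 192.x address now?
netstat -rn | head        # is the default route still the dead 10.x gateway?
cat /etc/resolv.conf      # stale 10.x DNS servers here would break name lookups
ping -c 3 8.8.8.8         # raw connectivity check that leaves DNS out of the picture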

Crunchy Black
Oct 24, 2017

by Athanatos
TrueNAS Scale is out.

I have no particular affiliation to BSD, though it's been fun learning a bit about how it ticks over the years, and I'm due for a rebuild just to fix some weirdness that's crept into my config. I'm probably going to wait on Bluefin, though, because I want to make it a more hyperconverged solution and free up some rackspace. Anyone else have any thoughts?

https://www.servethehome.com/truenas-scale-released-and-resetting-the-nas-paradigm/

BlankSystemDaemon
Mar 13, 2009



Storage is just great:
https://twitter.com/xenadu02/status/1495693475584557056

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Speculation on the US vendor? Seagate?

Klyith
Aug 3, 2007

GBS Pledge Week

priznat posted:

Speculation on the US vendor? Seagate?

Seagate isn't a very big player in SSDs, and they don't make their own controllers so it might be hard to blame them in particular. I think the most obvious candidates would be Micron/Crucial or Intel (even though Intel SSDs aren't really Intel anymore).


Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Says the 970 EVO Plus never lost data. Guess I'm faring well with Samsung. So far, anyway.

  • Reply