Kung-Fu Jesus
Dec 13, 2003

Does that make its name slightly inappropriate then?


BlankSystemDaemon
Mar 13, 2009




Kung-Fu Jesus posted:

Does that make its name slightly inappropriate then?
Not with this:

UnRAID 6.12 release notes posted:

Additionally, you may format any data device in the unRAID array with a single-device ZFS file system.
Also, what the gently caress does that even mean.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



IMO, if I were building a new NAS I'd use TrueNAS Scale.

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

Also, what the gently caress does that even mean.

If their definition of "data drive" means it's still covered by their parity stuff, it's a bit like creating a single-vdev zpool out of a zvol. (Can you do that?)

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice
The only thing I can think of is if it will treat a pool as a single disk but that doesn't seem correct either.

The NPC
Nov 21, 2010


My parents are on rural internet and are running out of storage space on their systems. They currently have 1 NUC-like desktop and 1 laptop. Both have external hard drives which are old and filling up. There are no backups. The NUC has mostly documents and photos. Mostly used for email. The laptop is for ripping an extensive record collection. It is also used for media consumption and travels off the network frequently. Total data is less than 4TB. I would like to:
1. Provide a way for them to back up their files.
2. Move as many local files to the server as possible.

For the NUC something like a samba share would be fine.

For the laptop, I'm not sure if having 90% of the photos and music inaccessible off-network is acceptable. Maybe use the external drive as a source of truth, and do a nightly + on-demand sync to a file share? Priorities-wise, backups would be first; eliminating the need for the external drive would be a bonus.
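Something like this is what I'm imagining for the sync part - just a sketch, with made-up mount points for the external drive and the NAS share:

code:
#!/bin/sh
# nightly one-way sync: the external drive stays the source of truth,
# the NAS share gets a mirror of it (paths are placeholders)
SRC=/mnt/external/
DST=/mnt/nas/laptop/

# -a preserves times/permissions, --delete mirrors deletions to the share
rsync -a --delete "$SRC" "$DST"

# run nightly from cron, e.g.: 0 3 * * * /usr/local/bin/nightly-sync.sh
# and run it by hand for the on-demand case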

Hardware-wise, I just upgraded my main PC so I have extra parts, and was able to grab some WD Red Pluses on sale. I figure I would run Ubuntu Server + ZFS for the host and virtualize anything else needed. This is similar to what I'm using for my home server. I'm also not opposed to grabbing a Synology or something. I just don't have any experience with them.

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

BlankSystemDaemon posted:

Also, what the gently caress does that even mean.

I assume it means they'll create a zpool with a single disk vdev and a single filesystem for use in the jankraid.
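If that guess is right, it's about two commands per data device - a sketch with a made-up device name:

code:
# single-device pool, no redundancy at the ZFS level - parity would be
# handled by unRAID's own array logic around it
zpool create data1 /dev/sdX

# one filesystem on it for the array to expose
zfs create data1/share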

BlankSystemDaemon
Mar 13, 2009




Computer viking posted:

If their definition of "data drive" means it's still covered by their parity stuff, it's a bit like creating a single-vdev zpool out of a zvol. (Can you do that?)
I just tested it by creating a file-backed GEOM gate using truncate(1) and ggatel(8), then created a pool named tank on top of that GEOM gate, and added a volume to that pool.

Conceptually, it can't work because a volume has to be the child of the pool's default dataset (the one that gets created when creating the pool), and that can't be deleted using zfs-destroy(8) - and indeed it doesn't work.
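For reference, the test looked roughly like this (size and unit number made up):

code:
# file-backed GEOM gate to stand in for a disk
truncate -s 10G /tmp/backing.img
ggatel create -u 0 /tmp/backing.img

# pool on top of the gate device, then a volume inside that pool
zpool create tank /dev/ggate0
zfs create -V 1G tank/vol

# the volume's parent is the pool's root dataset, which zfs-destroy(8)
# refuses to remove - only zpool-destroy(8) can get rid of it
zfs destroy tank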

withoutclass posted:

The only thing I can think of is if it will treat a pool as a single disk but that doesn't seem correct either.
Yes, that's fundamentally a misunderstanding of pooled storage in general, and ZFS in particular.

Keito posted:

I assume it means they'll create a zpool with a single disk vdev and a single filesystem for use in the jankraid.
So they're re-inventing the same system Synology has with BTRFS, from scratch?
That thing was invented because BTRFS has absolutely bonkers ideas about what to do in case part of an array is broken (which will cause the array to be unbootable, and can lead to permanent dataloss if handled incorrectly), and is a bit of a mess.

There are very few ways of loving up a ZFS implementation, and they had to go and invent a brand new one?
I will never understand how people trust UnRAID with their data.

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

I just tested it by creating a file-backed GEOM gate using truncate(1) and ggatel(8), then created a pool named tank on top of that GEOM gate, and added a volume to that pool.

Conceptually, it can't work because a volume has to be the child of the pool's default dataset (the one that gets created when creating the pool), and that can't be deleted using zfs-destroy(8) - and indeed it doesn't work.

I meant something like this madness, which I just tested:
code:
root@machine:/# zpool status
  pool: zpool
 state: ONLINE
  scan: scrub repaired 0B in 01:14:57 with 0 errors on Sun Mar 12 01:38:58 2023
config:

	NAME        STATE     READ WRITE CKSUM
	zpool       ONLINE       0     0     0
	  sda       ONLINE       0     0     0
	  sdb       ONLINE       0     0     0

errors: No known data errors
root@machine:/# zfs list
NAME    USED  AVAIL     REFER  MOUNTPOINT
zpool  1.18T  4.13T     1.18T  /zpool
root@machine:/# zfs create zpool/vol -V 16G
root@machine:/# zpool create testpool /dev/zvol/zpool/vol 
root@machine:/# zpool status
  pool: testpool
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	testpool    ONLINE       0     0     0
	  vol       ONLINE       0     0     0

errors: No known data errors

  pool: zpool
 state: ONLINE
  scan: scrub repaired 0B in 01:14:57 with 0 errors on Sun Mar 12 01:38:58 2023
config:

	NAME        STATE     READ WRITE CKSUM
	zpool       ONLINE       0     0     0
	  sda       ONLINE       0     0     0
	  sdb       ONLINE       0     0     0

errors: No known data errors
root@machine:/# zfs list
NAME        USED  AVAIL     REFER  MOUNTPOINT
testpool    744K  15.0G      192K  /testpool
zpool      1.20T  4.11T     1.18T  /zpool
zpool/vol  16.5G  4.13T      768K  -

That's a zpool created from a single device, but that device is itself redundant (because it's a zvol on a mirror). In theory that should be "as good as" using a mirror directly, since any block checksum failures will be corrected by the underlying mirror before they make it to the testpool - but much like making a single-device zpool on top of a hardware mirror, it still feels wrong.

Computer viking fucked around with this message at 13:42 on Mar 21, 2023

BlankSystemDaemon
Mar 13, 2009




Computer viking posted:

I meant something like this madness, which I just tested:
code:
root@machine:/# zpool status
  pool: zpool
 state: ONLINE
  scan: scrub repaired 0B in 01:14:57 with 0 errors on Sun Mar 12 01:38:58 2023
config:

	NAME        STATE     READ WRITE CKSUM
	zpool       ONLINE       0     0     0
	  sda       ONLINE       0     0     0
	  sdb       ONLINE       0     0     0

errors: No known data errors
root@machine:/# zfs list
NAME    USED  AVAIL     REFER  MOUNTPOINT
zpool  1.18T  4.13T     1.18T  /zpool
root@machine:/# zfs create zpool/vol -V 16G
root@machine:/# zpool create testpool /dev/zvol/zpool/vol 
root@machine:/# zpool status
  pool: testpool
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	testpool    ONLINE       0     0     0
	  vol       ONLINE       0     0     0

errors: No known data errors

  pool: zpool
 state: ONLINE
  scan: scrub repaired 0B in 01:14:57 with 0 errors on Sun Mar 12 01:38:58 2023
config:

	NAME        STATE     READ WRITE CKSUM
	zpool       ONLINE       0     0     0
	  sda       ONLINE       0     0     0
	  sdb       ONLINE       0     0     0

errors: No known data errors
root@machine:/# zfs list
NAME        USED  AVAIL     REFER  MOUNTPOINT
testpool    744K  15.0G      192K  /testpool
zpool      1.20T  4.11T     1.18T  /zpool
zpool/vol  16.5G  4.13T      768K  -

That's a zpool created from a single device, but that device is itself redundant (because it's a zvol on a mirror). In theory that should be "as good as" using a mirror directly, since any block checksum failures will be corrected by the underlying mirror before they make it to the testpool - but much like making a single-device zpool on top of a hardware mirror, it still feels wrong.
With a zpool nested on top of a zpool, it's using twice the cputime for absolutely no benefit whatsoever - even if you use -o and -O on zpool-create(8) to disable checksumming, primarycache, and all other properties of that nature, it's still consuming more cputime.

If UnRAID is putting their RAID implementation on top of a zvol created on top of a pair of mirrored disks, they're incurring exactly the same cputime-for-no-benefit.

Also, your zpool isn't mirrored, it's striped because you forgot the mirror keyword - but their wording is pretty unambiguously about using ZFS on top of a single device.

Not that that's inherently a bad thing - I do it on my primary laptop (a T480 running FreeBSD 14-CURRENT), because it can't fit two NVMe SSDs without losing access to the LTE-A modem that I use for roadwarrioring instead of relying on hotspots and a VPN.
The difference is that I have snapshots taken every minute, and they're zfs-send|receive'd to my server every 5 minutes, then converted to bookmarks so that they no longer take up any space but still preserve incremental backup streams.
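The moving parts of that loop are roughly this (pool, dataset, and host names made up):

code:
# periodic job: take a snapshot
zfs snapshot zroot/home@2023-03-21_1340

# incremental send from the previous bookmark to the new snapshot
zfs send -i zroot/home#2023-03-21_1335 zroot/home@2023-03-21_1340 | \
    ssh backuphost zfs receive -u backuppool/t480/home

# turn the snapshot into a bookmark: it stops holding blocks locally,
# but can still be the base of the next incremental stream
zfs bookmark zroot/home@2023-03-21_1340 zroot/home#2023-03-21_1340
zfs destroy zroot/home@2023-03-21_1340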

Computer viking
May 30, 2011
Now with less breakage.

Oh yeah, I did intentionally stripe that - I'd forgotten which pool I used as a test here. (It's a temporary dump for one of the stages of something I'm doing to some sequencing data, and I can easily enough recreate it if one of the drives fails - and the extra speed is directly useful.)

The one benefit I could see to putting a zpool on top of an already redundant layer is if you want some of the other ZFS benefits - snapshots, transparent compression, ACL support, send/receive for backups, that sort of thing. The CPU overhead is real, but not huge, so I guess it could sometimes be worth it as an alternative to XFS or whatever they typically use?

e: Much the same goes for single-drive pools, as you say.
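To be concrete, getting those niceties on a single device is just a couple of properties at creation time - a sketch with a made-up device name:

code:
# single-device pool, but with the ZFS conveniences turned on
zpool create -O compression=lz4 -O atime=off scratch /dev/sdX
zfs create scratch/work

# cheap point-in-time copies and rollback work fine without redundancy
zfs snapshot scratch/work@before-reorg
zfs rollback scratch/work@before-reorg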

Computer viking fucked around with this message at 14:10 on Mar 21, 2023

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
The real use for unraid is the cache pools with ZFS. Not having to use BTRFS for a mirror is a net gain.

Personally I'll be leaving my array alone and not using ZFS until I can build a proper array with mix and match drives (is that even a thing?).

Corb3t
Jun 7, 2003

BlankSystemDaemon posted:

I will never understand how people trust UnRAID with their data.

Why shouldn't I? If everybody is using a 3-2-1 backup strategy properly, they should have all the confidence in the world that their important data is safe in Unraid. I've recovered, replaced, and rebuilt multiple drives in my Unraid array without issue, and replaced the parity drive as well, all with Plex still humming along and serving media to my friends and family. The ability to mix and match any sized drives as I go is a huge benefit of Unraid's parity array system and one of the main reasons I went with it.

I played around with TrueNAS and found that Unraid was much more user friendly and hands off, which also suits my needs more. The huge community of nerds developing apps, plugins, etc. is great, too.

Matt Zerella posted:

The real use for unraid is the cache pools with ZFS. Not having to use BTRFS for a mirror is a net gain.

Personally I'll be leaving my array alone and not using ZFS until I can build a proper array with mix and match drives (is that even a thing?).

Mixing and matching isn't currently possible, but they're supposedly working on it. I'm looking forward to upgrading my NAS's motherboard so I can take advantage of 2-3 ZFS'd cache drives.

Corb3t fucked around with this message at 14:35 on Mar 21, 2023

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
TrueNAS using Kubernetes to host containers seems stupid to me. I'd love to use ZFS but I like the flexibility of UnRAID.

I don't need enterprise-class protection for my Linux ISOs and home tinkering. unRAID for me Just Works so I don't have to do my day job at home. Anything important for me is stored in the cloud or in paid services.

I really think people are reading way too much into ZFS on UnRAID; it's in its infancy and the way you guys are describing it doesn't seem like the intended usage. From what I've seen it's meant for the cache drives. Array stuff will come later.

BlankSystemDaemon
Mar 13, 2009




Computer viking posted:

Oh yeah, I did intentionally stripe that - I'd forgotten which pool I used as a test here. (It's a temporary dump for one of the stages of something I'm doing to some sequencing data, and I can easily enough recreate it if one of the drives fail - and the extra speed is directly useful.)

The one benefit I could see to putting a zpool on top of an already redundant layer is if you want some of the other ZFS benefits - snapshots, transparent compression, ACL support, send/receive for backups, that sort of thing. The CPU overhead is real, but not huge, so I guess it could sometimes be worth it as an alternative to XFS or whatever they typically use?

e: Much the same goes for single-drive pools, as you say.
If ZFS doesn't have direct access to the disks, it can't ensure that the ATA/SAS FLUSH events are handled properly, and this breaks both the transactional properties of ZFS and its data resiliency.

Matt Zerella posted:

The real use for unraid is the cache pools with ZFS. Not having to use BTRFS for a mirror is a net gain.

Personally I'll be leaving my array alone and not using ZFS until I can build a proper array with mix and match drives (is that even a thing?).
I don't know what unRAID cache pools are, sorry.

The only RAID implementation that's been able to do "proper" mix-and-match while making use of all of the disks without leaving space unused is Drobo, and with the horror stories I've heard about that, I'm not sure it's a recommendation as much as a cautionary tale.
Nobody really knows how they accomplish this since it's proprietary, but one way to do it would be to split disks up into small chunks and set up many small arrays that span the entire set of disks in different ways.

There's nothing stopping you from using ZFS with a mixed set of drive sizes, except that the smallest drive controls the usable size of every drive in the vdev - which is only a real problem if you never plan to touch the pool ever again.
I'm using this feature in my on-site off-line backup server, which has two raidz3 vdevs with 15 drives, where the smallest drive is 2TB and the largest is 8TB. Whenever I can afford to replace a drive with a new one (while keeping at least one drive as a spare), I replace one of the small drives by pulling it out and plugging a new one in; ZFS detects that a drive has been replaced and automatically starts the resilver process - and once it's finished, the pool automatically grows bigger, without me having to do anything.
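If anyone wants to copy that, the pull-and-swap behaviour hinges on a couple of pool properties; a sketch with made-up pool and device names, with the by-hand equivalent included:

code:
# let the pool adopt a new disk found in the same slot, and grow the vdev
# once every member has been upsized
zpool set autoreplace=on backuppool
zpool set autoexpand=on backuppool

# the manual version, if you don't trust the automation
zpool replace backuppool da3 da17
zpool status backuppool    # watch the resilver run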

Corb3t posted:

Why shouldn't I? If everybody is using a 3-2-1 backup strategy properly, they should have all the confidence in the world that their important data is safe in Unraid. I've recovered, replaced, and rebuilt multiple drives in my Unraid array without issue, and replaced the parity drive as well, all with Plex still humming along and serving media to my friends and family. The ability to mix and match any sized drives as I go is a huge benefit of Unraid's parity array system and one of the main reasons I went with it.
The 3-2-1 strategy doesn't protect you against writing corrupt data to your other backups, because there's no way to tell if a file was intentionally modified by the user or modified because of silent corruption on-disk.
The only way to guard against silent data corruption is by having checksums for both data and metadata arranged in a hash-tree like ZFS.
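The checksums are also the part you can exercise on a schedule, which is the whole point of a scrub:

code:
# read every allocated block and verify data + metadata checksums
zpool scrub tank

# mismatches show up as CKSUM errors, plus a list of affected files
# if a block couldn't be repaired from redundancy
zpool status -v tank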

power crystals
Jun 6, 2007

Who wants a belly rub??

Nitrousoxide posted:

IMO, if I were building a new NAS I'd use TrueNAS Scale.

I love my Scale setup, except for the multiple times they've made breaking changes and not made it obvious how to get out of the now-broken state my install was in. The first and biggest was when truecharts decided to suddenly deprecate "PVC (Simple)" storage which used to be the default, so being the default it's what I had used for everything. The regular "PVC" that replaced it had a quota setting, which okay sure that's reasonable, but the "Simple" mode's lack of a quota made it register as infinite, and the UI wouldn't let you transition from simple to not-simple because you couldn't set a "smaller" quota. That was infuriating and I wound up having to delete and recreate my apps, and this time I used hostpath storage and I don't care if it breaks rolling back at least now I can fix it myself. The second was when truecharts stopped working entirely and the fix was apparently to upgrade to Bluefin, which sure okay I've been putting that off but I wasn't having any actual problems, but then after upgrade it tells me that I'm not allowed to have apps write directly to datasets shared via SMB. Again, I get why (because unix permissions combined with SMB are a clusterfuck and most things don't interact properly with ACLs) but it took me quite a while to dig up the "shut up, I know what I'm doing and I don't care" button because the UI didn't even explain the intended solution (change the apps to use NFS shares rather than direct mounts) let alone the unsupported one. Having applications that access the files that I also access is not an edge case!

The actual storage management is great, and I've already had it notify me about drive issues much more promptly than other solutions I've tried in the past and made replacing failed ones trivial which is all great, but the change management there feels very much like they only care about new installs and existing users who aren't on the corporate support system can go gently caress themselves.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Matt Zerella posted:

TrueNAS using Kubernetes to host containers seems stupid to me. I'd love to use ZFS but I like the flexibility of UnRAID.

I don't need enterprise-class protection for my Linux ISOs and home tinkering. unRAID for me Just Works so I don't have to do my day job at home. Anything important for me is stored in the cloud or in paid services.

I really think people are reading way too much into ZFS on UnRAID; it's in its infancy and the way you guys are describing it doesn't seem like the intended usage. From what I've seen it's meant for the cache drives. Array stuff will come later.

You can use docker compose in TrueNAS now. They don't require the use of kubernetes anymore.

https://www.truenas.com/community/threads/truecharts-integrates-docker-compose-with-truenas-scale.99848/

power crystals posted:

I love my Scale setup, except for the multiple times they've made breaking changes and not made it obvious how to get out of the now-broken state my install was in. The first and biggest was when truecharts decided to suddenly deprecate "PVC (Simple)" storage which used to be the default, so being the default it's what I had used for everything. The regular "PVC" that replaced it had a quota setting, which okay sure that's reasonable, but the "Simple" mode's lack of a quota made it register as infinite, and the UI wouldn't let you transition from simple to not-simple because you couldn't set a "smaller" quota. That was infuriating and I wound up having to delete and recreate my apps, and this time I used hostpath storage and I don't care if it breaks rolling back at least now I can fix it myself. The second was when truecharts stopped working entirely and the fix was apparently to upgrade to Bluefin, which sure okay I've been putting that off but I wasn't having any actual problems, but then after upgrade it tells me that I'm not allowed to have apps write directly to datasets shared via SMB. Again, I get why (because unix permissions combined with SMB are a clusterfuck and most things don't interact properly with ACLs) but it took me quite a while to dig up the "shut up, I know what I'm doing and I don't care" button because the UI didn't even explain the intended solution (change the apps to use NFS shares rather than direct mounts) let alone the unsupported one. Having applications that access the files that I also access is not an edge case!

The actual storage management is great, and I've already had it notify me about drive issues much more promptly than other solutions I've tried in the past and made replacing failed ones trivial which is all great, but the change management there feels very much like they only care about new installs and existing users who aren't on the corporate support system can go gently caress themselves.

That sounds unfortunate. I've generally kept my server and NAS on separate devices, so it's not been an issue for me. I use OpenMediaVault as my docker platform and a Synology NAS as my NAS (it does nothing other than act as a NAS). Though like I said, if I were building new now I'd use TrueNAS rather than Synology.

I've also spun up Proxmox on an old computer and will probably migrate my OMV install over to that some day as a VM rather than the bare metal install it is now. Either that or I'll move to another container OS like CoreOS, or even try K3S as the orchestrator. This is a bit beyond the scope of the NAS thread though, and more in the homelab or self-hosting thread's purview.

Nitrousoxide fucked around with this message at 15:42 on Mar 21, 2023

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

BlankSystemDaemon posted:


I don't know what unRAID cache pools are, sorry.


Basically it's a fast storage pool (usually SSDs) that's transparent. If my usenet client downloads a file it goes to the cache. Later on at 3 AM, if the file isn't in use, it moves the file to my array (which is slower). The idea is you use it as a fast immediate-write tier so you don't have to spin up disks or deal with the slower FUSE fs that unraid uses to turn multiple disks into a single filesystem (I think SnapRAID is the same).

Currently you can only mirror cache drives with BTRFS. The big win here is that with the next version we can use ZFS instead to do the mirror.
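Conceptually the mover is just "sweep cold files from the fast pool to the slow one" - this toy sketch (not unRAID's actual mover, paths made up) is the general idea:

code:
#!/bin/sh
# toy illustration only: move files untouched for a day from cache to array
CACHE=/mnt/cache/downloads
ARRAY=/mnt/array/downloads

find "$CACHE" -type f -mmin +1440 | while read -r f; do
    fuser -s "$f" && continue          # skip files that are still open
    rel=${f#"$CACHE"/}
    mkdir -p "$ARRAY/$(dirname "$rel")"
    mv "$f" "$ARRAY/$rel"
done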

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

Nitrousoxide posted:

You can use docker compose in TrueNAS now. They don't require the use of kubernetes anymore.

https://www.truenas.com/community/threads/truecharts-integrates-docker-compose-with-truenas-scale.99848/

Is that just something for parsing Compose file YAML and using it to orchestrate k8s? All the links in that thread are dead so I can't really find out much about it from there.

https://truecharts.org/news/docker-compose/ <- this turned up after some web searches

So it's running Docker in a container, and then you attach a shell and run the compose CLI tool there? I guess that would work but it seems a bit messy.


I don't understand why using something that's not Docker apparently is a non-starter for so many home users. The docker CLI is OK but not that great. Compose YAML is pretty poo poo. k8s YAML is even uglier, but it's not exactly hard to get a grasp of.

There just seems to be this huge aversion to learning something new, like how'd you get started with Linux containers in the first place if you hate everything you don't know? Weird.

Hughlander
May 11, 2005

Nitrousoxide posted:

IMO, if I were building a new NAS I'd use TrueNAS Scale.

I switched my main box from TrueNAS inside ESXi with hardware passthrough to Proxmox, and plan to switch again to TrueNAS Scale when the k8s features mature a bit more.

Matt Zerella posted:

TrueNAS using Kubernetes to host containers seems stupid to me. I'd love to use ZFS but I like the flexibility of UnRAID.

I don't need enterprise-class protection for my Linux ISOs and home tinkering. unRAID for me Just Works so I don't have to do my day job at home. Anything important for me is stored in the cloud or in paid services.

I really think people are reading way too much into ZFS on UnRAID; it's in its infancy and the way you guys are describing it doesn't seem like the intended usage. From what I've seen it's meant for the cache drives. Array stuff will come later.

A lot of it is going to be the helm charts being a superior way to set things up vs docker compose files.
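For example, the day-to-day difference looks roughly like this (repo and chart names are just placeholders, not what SCALE or TrueCharts actually ship):

code:
# compose: you own and maintain the YAML yourself
docker compose -f plex/docker-compose.yml up -d

# helm: the chart carries the deployment logic, you only supply overrides
helm repo add examplerepo https://charts.example.org
helm install plex examplerepo/plex -f my-values.yaml
helm upgrade plex examplerepo/plex -f my-values.yaml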

Hughlander fucked around with this message at 16:04 on Mar 21, 2023

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Keito posted:

Is that just something for parsing Compose file YAML and using it to orchestrate k8s? All the links in that thread are dead so I can't really find out much about it from there.

https://truecharts.org/news/docker-compose/ <- this turned up after some web searches

So it's running Docker in a container, and then you attach a shell and run the compose CLI tool there? I guess that would work but it seems a bit messy.


I don't understand why using something that's not Docker apparently is a non-starter for so many home users. The docker CLI is OK but not that great. Compose YAML is pretty poo poo. k8s YAML is even uglier, but it's not exactly hard to get a grasp of.

There just seems to be this huge aversion to learning something new, like how'd you get started with Linux containers in the first place if you hate everything you don't know? Weird.

It's docker-in-docker. You could use the CLI tools in the container if you want, or you could install Portainer or some other orchestration tool if you prefer.
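Outside of TrueNAS, the bare-bones version of that pattern is the official docker:dind image - presumably the TrueCharts app wraps something along these lines:

code:
# inner docker daemon inside a privileged container
docker run -d --name dind --privileged docker:dind

# then use the CLI inside it like a normal docker host
docker exec -it dind docker run --rm hello-world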

I don't find compose.yaml to be hard to parse, but I've been using it for a year or two now for Docker and Podman. Kubernetes pods are inscrutable to me currently, but I'm trying to learn them.

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

If ZFS doesn't have direct access to the disks, it can't ensure that the ATA/SAS FLUSH events are handled properly, and this breaks both the transactional properties of ZFS and its data resiliency.

And yet I'd still prefer it to ext4. :shrug:

(That is - you're not wrong, but it's still a nice file system even without those guarantees.)

BlankSystemDaemon
Mar 13, 2009




Matt Zerella posted:

Basically it's a fast storage pool (usually SSDs) that's transparent. If my usenet client downloads a file it goes to the cache. Later on at 3 AM, if the file isn't in use, it moves the file to my array (which is slower). The idea is you use it as a fast immediate-write tier so you don't have to spin up disks or deal with the slower FUSE fs that unraid uses to turn multiple disks into a single filesystem (I think SnapRAID is the same).

Currently you can only mirror cache drives with BTRFS. The big win here is that with the next version we can use ZFS instead to do the mirror.
Ah, so it's a scratch directory. Gotcha.

Computer viking
May 30, 2011
Now with less breakage.

Or an ingestion buffer, I guess?

BlankSystemDaemon
Mar 13, 2009




Computer viking posted:

Or an ingestion buffer, I guess?
Same thing, different name.

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

Same thing, different name.

I'd associate scratch disks with temporary storage that you'd explicitly copy to and from, while this sounds more like a transparent tiered storage thing?

BlankSystemDaemon
Mar 13, 2009




Computer viking posted:

I'd associate scratch disks with temporary storage that you'd explicitly copy to and from, while this sounds more like a transparent tiered storage thing?
Everywhere I've worked, a scratch directory/disk is where data goes before it goes to its permanent storage - irrespective of whether it's transparent, or not. :shrug:

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
The reason pool mirrors are important is that on unraid you'll usually keep your app data (docker persistent directories) and the docker image/directory pinned to the cache. Now yes, I know a mirror isn't a backup, but it gives you durability on your scratch drive. BTRFS mirrors actually work pretty well, but I'd rather be using ZFS as it's much more stable/mature.

That Works
Jul 22, 2006

Every revolution evaporates and leaves behind only the slime of a new bureaucracy


iirc the only reason the unraid cache isn't just a scratch drive is that some apps that do very frequent read-write stuff also get installed on the cache drive instead of the main array.

Could just all be semantics though, I've never thought too deeply on it.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

That Works posted:

iirc the only reason the unraid cache isn't just a scratch drive is that some apps that do very frequent read-write stuff also get installed on the cache drive instead of the main array.

Could just all be semantics though, I've never thought too deeply on it.

Correct, for a share you can define a caching strategy.

Yes: Data is written to cache and moved when the mover runs
Prefer: Data lives on the cache drive, if cache is full it overflows to the array
Only: Data is only written to cache, if cache is full, no more data (this is stupid and idk why it exists)
No: Don't use cache

BlankSystemDaemon
Mar 13, 2009




Matt Zerella posted:

BTRFS mirrors actually work pretty well
Yeah, right up until you find out that it refuses to mount if part of a mirror is missing, unless you use an almost undocumented mount option (which you can't do if you're booting from a pair of mirrored drives without modifying your initramfs from a running system before rebooting - or from a rescue disk, if you forget to do that like most people who run into this seem to).
And heaven help you if you forget to manually balance the mirror once the disk has been replaced, because if the other disk then fails (or you forget to disable the almost-undocumented mount option), your mirror has suffered permanent dataloss - and your only option is to restore from backup.
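For the curious, the option in question is degraded, and the rebuild really is all on you - roughly this, with made-up device names and devid, and exact steps depending on the failure:

code:
# mirror with one member gone: a plain mount fails, you need -o degraded
mount -o degraded /dev/sdb1 /mnt/data

# after swapping in a new disk, the rebuild is manual:
btrfs replace start 2 /dev/sdd1 /mnt/data    # 2 = devid of the missing disk

# and any chunks written while degraded may be 'single', so convert them back
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt/data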

There's so much manual nonsense involved in BTRFS that's not present in any other RAID implementation.
Even hardware RAID from the bad times in the 1990s knew to automatically start resilvering once a drive had been replaced, and didn't prevent you from booting the array if a single drive from a mirror was missing.

Yaoi Gagarin
Feb 20, 2014

The word is write-back cache

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



BlankSystemDaemon posted:

Yeah, right up until you find out that it refuses to mount if part of a mirror is missing, unless you use an almost undocumented mount option (which you can't do if you're booting from a pair of mirrored drives without modifying your initramfs from a running system before rebooting - or from a rescue disk, if you forget to do that like most people who run into this seem to).
And heaven help you if you forget to manually balance the mirror once the disk has been replaced, because if the other disk then fails (or you forget to disable the almost-undocumented mount option), your mirror has suffered permanent dataloss - and your only option is to restore from backup.

There's so much manual nonsense involved in BTRFS that's not present in any other RAID implementation.
Even hardware RAID from the bad times in the 1990s knew to automatically start resilvering once a drive had been replaced, and didn't prevent you from booting the array if a single drive from a mirror was missing.

My Synology, using BTRFS, automatically resilvered the array after I swapped out a failing drive.

BlankSystemDaemon
Mar 13, 2009




VostokProgram posted:

The word is write-back cache
Nope, this is a write-back cache.

Nitrousoxide posted:

My Synology, using BTRFS, automatically resilvered the array after I swapped out a failing drive.
That's because Synology uses BTRFS on top of Linux's mdadm RAID, plus some proprietary code to do the actual mdadm RAID administration whenever a disk needs to be removed, some proprietary code that deals with checksum errors et cetera, and some proprietary code to work around all of the issues with block devices in Linux that normally prevent Linux from being able to correlate I/O with errors.
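Stripped of the proprietary bits, the layering is conceptually something like this (just a sketch of the idea, not Synology's actual tooling):

code:
# mdadm provides the mirror and the automatic rebuild when a disk is swapped
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# btrfs sits on top as a single-device filesystem, providing the
# checksumming, snapshots, and so on
mkfs.btrfs /dev/md2
mount /dev/md2 /volume1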

CopperHound
Feb 14, 2012

Nitrousoxide posted:

My Synology, using BTRFS, automatically resilvered the array after I swapped out a failing drive.
Isn't that mdadm raid on top of btrfs formatted drives?

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

BlankSystemDaemon posted:

I will never understand how people trust UnRAID with their data.

I still think the biggest appeal of Unraid is for people just getting into the NAS/Homelab game. The ability to add miscellaneous drives as you get them makes it great for those that just want to find a use for old parts/drives they have lying around and don't have the budget to buy a set of matching drives all at once. Creating an "array" and a "share" was far more intuitive to me than things like "zpool, vdev, and RaidZ1 or RaidZ2". The idea of going "poo poo, I'm low on drive space, need to just add one more drive to the array" makes a lot more sense to someone that was previously just buying another external HDD and attaching it to a USB hub.

The UI makes getting into containerized apps very easy, as does the "app store" community apps plugin. Even spinning up VMs was more intuitive for me when I was getting started than it was with Proxmox and TrueNAS (at the time, these have come a long way since then). It's all stuff that can be easily done right from the vanilla OS and the Unraid forums are extremely active with a bunch of other people who are all running a very similar setup.

If you're someone who needs to store stuff that's irreplaceable or business critical then sure, either buy a prebuilt system, commit to TrueNAS, or if you can't do that just put it in a Google Drive/iCloud. If you're looking to build a system to store stuff with low up-front cost and would appreciate being able to rebuild a failed drive instead of re-sourcing those files and also maybe start self-hosting apps or messing around with VMs then Unraid is a good choice.

And also, it's pretty clear that the single-disk ZFS thing was a "you could even do this if you wanted to for some reason" rather than a "here's a recommended setup" kind of thing.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Scruff McGruff posted:

The UI makes getting into containerized apps very easy, as does the "app store" community apps plugin. Even spinning up VMs was more intuitive for me when I was getting started than it was with Proxmox and TrueNAS (at the time, these have come a long way since then). It's all stuff that can be easily done right from the vanilla OS and the Unraid forums are extremely active with a bunch of other people who are all running a very similar setup.

Not to mention friendly and not elitist. The FreeNAS forums are a cesspool of assholes.

BlankSystemDaemon
Mar 13, 2009




Scruff McGruff posted:

I still think the biggest appeal of Unraid is for people just getting into the NAS/Homelab game. The ability to add miscellaneous drives as you get them makes it great for those that just want to find a use for old parts/drives they have lying around and don't have the budget to buy a set of matching drives all at once. Creating an "array" and a "share" was far more intuitive to me than things like "zpool, vdev, and RaidZ1 or RaidZ2". The idea of going "poo poo, I'm low on drive space, need to just add one more drive to the array" makes a lot more sense to someone that was previously just buying another external HDD and attaching it to a USB hub.

The UI makes getting into containerized apps very easy, as does the "app store" community apps plugin. Even spinning up VMs was more intuitive for me when I was getting started than it was with Proxmox and TrueNAS (at the time, these have come a long way since then). It's all stuff that can be easily done right from the vanilla OS and the Unraid forums are extremely active with a bunch of other people who are all running a very similar setup.

If you're someone who needs to store stuff that's irreplaceable or business critical then sure, either buy a prebuilt system, commit to TrueNAS, or if you can't do that just put it in a Google Drive/iCloud. If you're looking to build a system to store stuff with low up-front cost and would appreciate being able to rebuild a failed drive instead of re-sourcing those files and also maybe start self-hosting apps or messing around with VMs then Unraid is a good choice.

And also, it's pretty clear that the single-disk ZFS thing was a "you could even do this if you wanted to for some reason" rather than a "here's a recommended setup" kind of thing.
Is it the case that people are expecting to be able to throw an appliance onto a system, and then just click a few buttons, without having to read any documentation?

Because that's absolute anathema to me.

Matt Zerella posted:

Not to mention friendly and not elitist. The FreeNAS forums are a cesspool of assholes.
Yeah, that's true.

I try to be friendly and non-elitist, but I'm not sure how well I pull it off.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
My unraid NAS is for both media (replaceable) and backing up steam installs, although lately my gaming pc has almost too much ssd space so it's not as necessary. Also my download folder is mounted to the nas so my pc doesn't get cluttered with crap I download.

Important documents etc go straight to onedrive, I don’t even bother putting that on the NAS. Unraid is more of a media server box using various containers.

I was also thinking of moving pi-hole onto a container as my pi packed it in, which shows it is silly putting load-bearing network infrastructure on a single rpi. I knew that already of course, just was too lazy to bother before :haw:


BlankSystemDaemon
Mar 13, 2009




priznat posted:

Important documents etc go straight to onedrive, I don’t even bother putting that on the NAS. Unraid is more of a media server box using various containers.
Just remember that Microsoft (and all the other butt-companies including Amazon and Alphabet) have lost customer data, and don't have any guarantees about being able to recover anything.
