|
Does that make its name slightly inappropriate then?
|
# ? Mar 21, 2023 00:38 |
|
Kung-Fu Jesus posted: Does that make its name slightly inappropriate then?

UnRAID 6.12 release notes posted: Additionally, you may format any data device in the unRAID array with a single-device ZFS file system.
|
|
# ? Mar 21, 2023 00:39 |
IMO, if I were building a new NAS I'd use TrueNAS Scale.
|
|
# ? Mar 21, 2023 00:51 |
|
BlankSystemDaemon posted: Also, what the gently caress does that even mean.

If their definition of "data drive" means it's still covered by their parity stuff, it's a bit like creating a single-vdev zpool out of a zvol. (Can you do that?)
|
# ? Mar 21, 2023 01:06 |
|
The only thing I can think of is if it will treat a pool as a single disk but that doesn't seem correct either.
|
# ? Mar 21, 2023 01:45 |
|
My parents are on rural internet and are running out of storage space on their systems. They currently have one NUC-like desktop and one laptop. Both have external hard drives which are old and filling up. There are no backups. The NUC holds mostly documents and photos and is mostly used for email. The laptop is for ripping an extensive record collection; it's also used for media consumption and travels off the network frequently. Total data is less than 4TB.

I would like to:
1. Provide a way for them to back up their files.
2. Move as many local files to the server as possible.

For the NUC, something like a Samba share would be fine. For the laptop, I'm not sure if having 90% of the photos and music inaccessible off-network is acceptable. Maybe use the external drive as a source of truth, with a nightly + on-demand sync to a file share? Priority-wise, backups come first; eliminating the need for the external drive would be a bonus.

Hardware-wise, I just upgraded my main PC so I have extra parts, and was able to grab some WD Red Pluses on sale. I figure I would run Ubuntu Server + ZFS for the host and virtualize anything else needed. This is similar to what I'm using for my home server. I'm also not opposed to grabbing a Synology or something; I just don't have any experience with them.
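Roughly what I have in mind for the nightly sync - paths are placeholders and rsync is just my assumed tool of choice:

```shell
#!/bin/sh
# One-way sync sketch: the laptop's external drive stays the source of
# truth; a directory on the server (a mounted share) holds the mirror.
# The defaults below create throwaway demo dirs so the sketch runs as-is;
# in real use, point SRC at the drive and DEST at the share.
SRC="${SRC:-$(mktemp -d)}"            # e.g. /mnt/external/music
DEST="${DEST:-$(mktemp -d)/mirror}"   # e.g. /srv/share/music-backup

echo "demo track" > "$SRC/song.flac"  # stand-in for the real collection

mkdir -p "$DEST"
# -a preserves times/permissions; --delete makes DEST an exact mirror,
# so files removed from the drive also disappear from the server copy.
rsync -a --delete "$SRC/" "$DEST/"
```

Run it nightly from cron (something like `0 3 * * * sync-music.sh`) and by hand for the on-demand case; drop `--delete` if the server copy should also keep locally-deleted files.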
|
# ? Mar 21, 2023 02:41 |
|
BlankSystemDaemon posted: Also, what the gently caress does that even mean.

I assume it means they'll create a zpool with a single disk vdev and a single filesystem for use in the jankraid.
|
# ? Mar 21, 2023 11:13 |
Computer viking posted: If their definition of "data drive" means it's still covered by their parity stuff, it's a bit like creating a single-vdev zpool out of a zvol. (Can you do that?)

Conceptually, it can't work because a volume has to be the child of the pool's default dataset (the one that gets created when creating the pool), and that can't be deleted using zfs-destroy(8) - and indeed it doesn't work.

withoutclass posted: The only thing I can think of is if it will treat a pool as a single disk but that doesn't seem correct either.

Keito posted: I assume it means they'll create a zpool with a single disk vdev and a single filesystem for use in the jankraid.

That thing was invented because BTRFS has absolutely bonkers ideas about what to do in case part of an array is broken (which will cause the array to be unbootable, and can lead to permanent data loss if handled incorrectly), and is a bit of a mess. There are very few ways of loving up a ZFS implementation, and they had to go and invent a brand new one? I will never understand how people trust UnRAID with their data.
|
|
# ? Mar 21, 2023 13:32 |
|
BlankSystemDaemon posted: I just tested it by creating a file-backed GEOM gate using truncate(1) and ggatel(8), then created a pool named tank on top of that GEOM gate, and added a volume to that pool.

I meant something like this madness, which I just tested:
code:
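(The code block itself got eaten, but going by the follow-up discussion it was presumably along these lines - device and pool names invented, and note the outer pool ends up striped since there's no mirror keyword:)

```shell
zpool create outer da0 da1                    # two-disk pool, striped
zfs create -V 50G outer/innervol              # a zvol carved out of it
zpool create inner /dev/zvol/outer/innervol   # a second pool on the zvol
```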
Computer viking fucked around with this message at 13:42 on Mar 21, 2023 |
# ? Mar 21, 2023 13:39 |
Computer viking posted: I meant something like this madness, which I just tested:

If UnRAID is putting their RAID implementation on top of a zvol created on top of a pair of mirrored disks, they're incurring exactly the same cputime-for-no-benefit. Also, your zpool isn't mirrored, it's striped, because you forgot the mirror keyword - but their wording is pretty unambiguously about using ZFS on top of a single device.

Not that that's inherently a bad thing - I do it on my primary laptop (a T480 running FreeBSD 14-CURRENT), because it can't fit two NVMe SSDs without losing access to the LTE-A modem that I use for roadwarrioring instead of relying on hotspots and a VPN. The difference is that I have snapshots taken every minute, and they're zfs-send|receive'd to my server every 5 minutes, then converted to bookmarks so that they no longer take up any space but still preserve incremental backup streams.
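Roughly, each cycle looks like this - dataset and host names are made up for the sketch:

```shell
#!/bin/sh
# Take a snapshot, send it incrementally against the last bookmark, then
# demote the snapshot to a bookmark so it stops pinning space locally.
DS=zroot/home
PREV="${DS}#last-sent"
NOW=$(date +%Y%m%d-%H%M%S)

zfs snapshot "${DS}@${NOW}"
if zfs send -i "${PREV}" "${DS}@${NOW}" | ssh backuphost zfs receive -u tank/laptop; then
	zfs destroy "${PREV}"                  # retire the old bookmark
	zfs bookmark "${DS}@${NOW}" "${PREV}"  # new bookmark under the same name
	zfs destroy "${DS}@${NOW}"             # the snapshot itself is now redundant
fi
```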
|
|
# ? Mar 21, 2023 13:58 |
|
Oh yeah, I did intentionally stripe that - I'd forgotten which pool I used as a test here. (It's a temporary dump for one of the stages of something I'm doing to some sequencing data, and I can easily enough recreate it if one of the drives fails - and the extra speed is directly useful.)

The one benefit I could see to putting a zpool on top of an already redundant layer is if you want some of the other ZFS benefits - snapshots, transparent compression, ACL support, send/receive for backups, that sort of thing. The CPU overhead is real, but not huge, so I guess it could sometimes be worth it as an alternative to XFS or whatever they typically use?

e: Much the same goes for single-drive pools, as you say.

Computer viking fucked around with this message at 14:10 on Mar 21, 2023 |
# ? Mar 21, 2023 14:07 |
|
The real use for unraid is the cache pools with ZFS. Not having to use BTRFS for a mirror is a net gain. Personally I'll be leaving my array alone and not using ZFS until I can build a proper array with mix and match drives (is that even a thing?).
|
# ? Mar 21, 2023 14:25 |
|
BlankSystemDaemon posted: I will never understand how people trust UnRAID with their data.

Why shouldn't I? If everybody is using a 3-2-1 backup strategy properly, they should have all the confidence in the world that their important data is safe in Unraid. I've recovered, replaced, and rebuilt multiple drives in my Unraid array without issue, as well as replaced the parity without fail, all with Plex still humming along and serving media to my friends and family. The ability to mix and match any sized drives as I go is a huge benefit of Unraid's parity array system and one of the main reasons I went with it. I played around with TrueNAS and found that Unraid was much more user friendly and hands off, which also suits my needs more. The huge community of nerds developing apps, plugins, etc. is great, too.

Matt Zerella posted: The real use for unraid is the cache pools with ZFS. Not having to use BTRFS for a mirror is a net gain.

Mixing and matching isn't currently possible, but they're supposedly working on it. I'm looking forward to upgrading my NAS's motherboard so I can take advantage of 2-3 ZFS'd cache drives.

Corb3t fucked around with this message at 14:35 on Mar 21, 2023 |
# ? Mar 21, 2023 14:32 |
|
TrueNAS using Kubernetes to host containers seems stupid to me. I'd love to use ZFS, but I like the flexibility of UnRAID. I don't need enterprise-class protection for my Linux ISOs and home tinkering. unRAID for me Just Works, so I don't have to do my day job at home. Anything important for me is stored in the cloud or in paid services.

I really think people are reading way too much into ZFS on UnRAID; it's in its infancy, and the way you guys are describing it doesn't seem like the intended usage. From what I've seen it's meant for the cache drives. Array stuff will come later.
|
# ? Mar 21, 2023 14:35 |
Computer viking posted: Oh yeah, I did intentionally stripe that - I'd forgotten which pool I used as a test here. (It's a temporary dump for one of the stages of something I'm doing to some sequencing data, and I can easily enough recreate it if one of the drives fail - and the extra speed is directly useful.)

Matt Zerella posted: The real use for unraid is the cache pools with ZFS. Not having to use BTRFS for a mirror is a net gain.

The only RAID that's been able to do "proper" mix-and-match - making use of all of the disks without leaving space unused - is Drobo, and with the horror stories I've heard about that, I'm not sure it's a recommendation as much as a cautionary tale. Nobody really knows how they accomplish it since it's proprietary, but one way would be to split the disks up into small chunks and set up many small arrays that span the entire set of disks in different ways.

There's nothing stopping you from using ZFS with a mixed set of drive sizes, except that the smallest drive controls the size of each of the array members - which is only a real problem if you never plan to touch the array again. I'm using this in my on-site off-line backup server, which has two raidz3 vdevs of 15 drives, where the smallest drive is 2TB and the largest is 8TB; whenever I can afford to replace a drive with a new one (while keeping at least one drive as a spare), I replace one of the small drives by pulling it out and plugging a new one in. ZFS detects that the drive has been replaced, which automatically starts the resilver process - and once it's finished, the pool automatically grows bigger, without me having to do anything.

Corb3t posted: Why shouldn't I? If everybody is using 3-2-1 backup strategy properly, they should have all the confidence in the world that their important data is safe in Unraid. I've recovered, replaced, and rebuilt multiple drives in my Unraid array without issue, as well replaced the parity without fail, all with Plex still humming along and serving media to my friends and family. The ability to mix and match any sized drives as I go is a huge benefit of Unraid's parity array system and one of the main reasons I went with it.

The only way to guard against silent data corruption is by having checksums for both data and metadata arranged in a hash-tree, like ZFS does.
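In miniature, that hash-tree idea looks like this - sha256sum standing in for ZFS's checksumming, two "blocks" standing in for a dataset:

```shell
#!/bin/sh
# Toy Merkle tree: each block gets a leaf checksum, and the leaves get
# hashed together into a root. Flip one bit anywhere and the root changes,
# so corruption is detectable from a single trusted root hash.
block_a="some file data"
block_b="some more data"

leaf_a=$(printf '%s' "$block_a" | sha256sum | cut -d' ' -f1)
leaf_b=$(printf '%s' "$block_b" | sha256sum | cut -d' ' -f1)
root=$(printf '%s%s' "$leaf_a" "$leaf_b" | sha256sum | cut -d' ' -f1)

# Silent corruption: one character of block_b changes on disk.
bad_leaf=$(printf '%s' "some m0re data" | sha256sum | cut -d' ' -f1)
bad_root=$(printf '%s%s' "$leaf_a" "$bad_leaf" | sha256sum | cut -d' ' -f1)

[ "$root" != "$bad_root" ] && echo "corruption detected"
```

(ZFS does this with fletcher4/sha256 over every block, with the checksum stored in the parent block pointer rather than next to the data - this is just the shape of the idea.)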
|
|
# ? Mar 21, 2023 14:54 |
|
Nitrousoxide posted: IMO, if I were building a new NAS I'd use TrueNAS Scale.

I love my Scale setup, except for the multiple times they've made breaking changes and not made it obvious how to get out of the now-broken state my install was in.

The first and biggest was when truecharts decided to suddenly deprecate "PVC (Simple)" storage, which used to be the default, so being the default it's what I had used for everything. The regular "PVC" that replaced it had a quota setting, which okay sure that's reasonable, but the "Simple" mode's lack of a quota made it register as infinite, and the UI wouldn't let you transition from simple to not-simple because you couldn't set a "smaller" quota. That was infuriating and I wound up having to delete and recreate my apps; this time I used hostpath storage, and I don't care if it breaks rolling back - at least now I can fix it myself.

The second was when truecharts stopped working entirely and the fix was apparently to upgrade to Bluefin, which sure, okay, I'd been putting that off but I wasn't having any actual problems - but then after the upgrade it tells me that I'm not allowed to have apps write directly to datasets shared via SMB. Again, I get why (because unix permissions combined with SMB are a clusterfuck and most things don't interact properly with ACLs), but it took me quite a while to dig up the "shut up, I know what I'm doing and I don't care" button, because the UI didn't even explain the intended solution (change the apps to use NFS shares rather than direct mounts), let alone the unsupported one. Having applications that access the files that I also access is not an edge case!
The actual storage management is great, and I've already had it notify me about drive issues much more promptly than other solutions I've tried in the past and made replacing failed ones trivial which is all great, but the change management there feels very much like they only care about new installs and existing users who aren't on the corporate support system can go gently caress themselves.
|
# ? Mar 21, 2023 15:11 |
Matt Zerella posted: TrueNAS using Kubernetes to host containers seems stupid to me. I'd love to use ZFS but I like the flexibility of UnRAID.

You can use docker compose in TrueNAS now. They don't require the use of kubernetes anymore. https://www.truenas.com/community/threads/truecharts-integrates-docker-compose-with-truenas-scale.99848/

power crystals posted: I love my Scale setup, except for the multiple times they've made breaking changes and not made it obvious how to get out of the now-broken state my install was in. [...]

That sounds unfortunate. I've generally kept my server and NAS on separate devices, so it's not been an issue for me. I use OpenMediaVault as my docker platform and a Synology NAS as my NAS (it does nothing other than act as a NAS). Though like I said, if I were building new now I'd use TrueNAS rather than Synology.

I've also spun up Proxmox on an old computer and will probably migrate my OMV install over to that some day as a VM rather than the bare-metal install it is now. Either that or I'll move to another container OS like CoreOS, or even try K3s as the orchestrator. This is a bit beyond the scope of the NAS thread though, and more in the homelab or self-hosting thread's purview.

Nitrousoxide fucked around with this message at 15:42 on Mar 21, 2023 |
|
# ? Mar 21, 2023 15:35 |
|
BlankSystemDaemon posted:
Basically it's a fast storage pool (usually SSDs) that's transparent. If my usenet client downloads a file, it goes to the cache. Later on, at 3 AM, if the file isn't in use, it gets moved to my array (which is slower). The idea is you use it for fast immediate writes, so you don't have to spin up disks or deal with the slower FUSE filesystem that unraid uses to turn multiple disks into a single filesystem (I think SnapRAID is the same). Currently you can only mirror cache drives with BTRFS. The big win here is that with the next version we can use ZFS instead to do a mirror.
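A toy version of what the mover does at 3 AM - paths are stand-ins for unRAID's real cache/array mounts, and the real mover is a lot smarter about files that are in use:

```shell
#!/bin/sh
# Anything on the fast cache that hasn't been touched recently gets
# relocated to the slow array; fresh/in-progress files stay on the cache.
# Throwaway demo dirs and files so the sketch runs as-is.
CACHE="${CACHE:-$(mktemp -d)}"   # stand-in for /mnt/cache
ARRAY="${ARRAY:-$(mktemp -d)}"   # stand-in for the spinning array

touch -d '2 hours ago' "$CACHE/finished-download.iso"  # old, idle file
touch "$CACHE/still-downloading.part"                  # fresh file

# Move anything idle for more than 60 minutes off the cache.
find "$CACHE" -type f -mmin +60 -exec mv {} "$ARRAY/" \;
```

The real mover also respects per-share cache policies and skips open files; this only shows the age-based relocation at the core of it.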
|
# ? Mar 21, 2023 15:58 |
|
Nitrousoxide posted: You can use docker compose in TrueNAS now. They don't require the use of kubernetes anymore.

Is that just something for parsing Compose file YAML and using it to orchestrate k8s? All the links in that thread are dead, so I can't really find out much about it from there. https://truecharts.org/news/docker-compose/ <- this turned up after some web searches

So it's running Docker in a container, and then you attach a shell and run the compose CLI tool there? I guess that would work, but it seems a bit messy. I don't understand why using something that's not Docker apparently is a non-starter for so many home users. The docker CLI is OK but not that great. Compose YAML is pretty poo poo. k8s YAML is even uglier, but it's not exactly hard to get a grasp of. There just seems to be this huge aversion to learning something new - like, how'd you get started with Linux containers in the first place if you hate everything you don't know? Weird.
|
# ? Mar 21, 2023 16:00 |
|
Nitrousoxide posted: IMO, if I were building a new NAS I'd use TrueNAS Scale.

I switched my main box from TrueNAS inside ESXi with hardware passthrough to Proxmox, and plan to switch again to TrueNAS Scale when the k8s features mature a bit more.

Matt Zerella posted: TrueNAS using Kubernetes to host containers seems stupid to me. I'd love to use ZFS but I like the flexibility of UnRAID.

A lot of it is going to be the helm charts being a superior way to set up vs docker compose files.

Hughlander fucked around with this message at 16:04 on Mar 21, 2023 |
# ? Mar 21, 2023 16:01 |
Keito posted: Is that just something for parsing Compose file YAML and using it to orchestrate k8s? All the links in that thread are dead so I can't really find out much about it from there.

It's docker-in-docker. You could use the CLI tools in the container if you want, or you could install Portainer or some other orchestration tool if you prefer. I don't find compose.yaml to be hard to parse, but I've been using it for a year or two now for Docker and Podman. Kubernetes pods are inscrutable to me currently, but I'm trying to learn them.
|
|
# ? Mar 21, 2023 16:07 |
|
BlankSystemDaemon posted: If ZFS doesn't have direct access to the disks, it can't ensure that the ATA/SAS FLUSH events are handled properly, and this breaks both the transactional properties of ZFS as well as its data resiliency.

And yet I'd still prefer it to ext4. (That is - you're not wrong, but it's still a nice file system even without those guarantees.)
|
# ? Mar 21, 2023 16:13 |
Matt Zerella posted:Basically it's a fast storage pool (usually SSDs) that are transparent. If my usenet client downloads a file it goes to the cache. Later on at 3 AM if the file isn't in use, it moves the file to my array (which is slower). the idea is you use them as fast immediate write so you don't have to spin up disks/deal with the slower fuse fs that unraid uses to turn multiple disks into a single filesystem (I think SnapRAID is the same).
|
|
# ? Mar 21, 2023 16:33 |
|
Or an ingestion buffer, I guess?
|
# ? Mar 21, 2023 16:51 |
Computer viking posted: Or an ingestion buffer, I guess?

Same thing, different name.
|
|
# ? Mar 21, 2023 16:52 |
|
BlankSystemDaemon posted: Same thing, different name.

I'd associate scratch disks with temporary storage that you'd explicitly copy to and from, while this sounds more like a transparent tiered storage thing?
|
# ? Mar 21, 2023 16:58 |
Computer viking posted:I'd associate scratch disks with temporary storage that you'd explicitly copy to and from, while this sounds more like a transparent tiered storage thing?
|
|
# ? Mar 21, 2023 17:14 |
|
The reason pool mirrors are important is that on unraid you'll usually keep your app data (docker persistent directories) and the docker image/directory pinned to the cache. Now yes, I know a mirror isn't backup, but it gives you durability on your scratch drive. BTRFS mirrors actually work pretty well, but I'd rather be using ZFS as it's much more stable/mature.
|
# ? Mar 21, 2023 17:17 |
iirc the only reason the unraid cache isn't just a scratch drive is that some apps that do very frequent read-write stuff also get installed on the cache drive instead of the main array. Could just all be semantics though, I've never thought too deeply on it.
|
|
# ? Mar 21, 2023 17:21 |
|
That Works posted: iirc the only reason the unraid cache isn't only just a scratch drive is that some apps that do very frequent read-write stuff also get installed on the cache drive instead of the main array.

Correct - for a share you can define a caching strategy:

Yes: Data is written to cache and moved when the mover runs
Prefer: Data lives on the cache drive; if cache is full it overflows to the array
Only: Data is only written to cache; if cache is full, no more data (this is stupid and idk why it exists)
No: Don't use cache
|
# ? Mar 21, 2023 17:26 |
Matt Zerella posted: BTRFS mirror actually work pretty well

And heaven help you if you forget to manually balance the mirror once a disk has been replaced - because if the other disk then fails (or you forget to disable the almost-undocumented mount option), your mirror has suffered permanent data loss, and your only option is to restore from backup. There's so much manual nonsense involved in BTRFS that's not present in any other RAID implementation. Even hardware RAID from the bad times in the 1990s knew to automatically start resilvering once a drive had been replaced, and didn't prevent you from booting the array if a single drive from a mirror was missing.
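For reference, the manual dance being described looks roughly like this - mount point and device names invented, and this is a sketch of the steps, not a recovery guide:

```shell
# The 'almost undocumented' option: mount a mirror with a device missing.
mount -o degraded /dev/sdb /mnt/pool

# Replace the dead device (devid 1 here) with a new disk.
btrfs replace start -B 1 /dev/sdc /mnt/pool

# The step people forget: rebalance so chunks written while degraded
# (as single/dup) are converted back to raid1, i.e. mirrored again.
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt/pool
```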
|
|
# ? Mar 21, 2023 17:46 |
|
The word is write-back cache
|
# ? Mar 21, 2023 17:50 |
BlankSystemDaemon posted: Yeah, right up until you find out about how it refuses to mount itself if part of a mirror is missing without using an almost undocumented mount option (which you can't do if you're booting from a pair of mirrored drives, without modifying your initramfs from a running system before rebooting, or a rescue disk if you forget to do that like most people who run into this seem to).

My Synology, using BTRFS, automatically resilvered the array after I swapped out a failing drive.
|
|
# ? Mar 21, 2023 17:56 |
VostokProgram posted: The word is write-back cache

Nitrousoxide posted: My Synology, using BTRFS, automatically resilvered the array after I swapped out a failing drive.
|
|
# ? Mar 21, 2023 18:04 |
|
Nitrousoxide posted:My Synology, using BTRFS, automatically resilvered the array after I swapped out a failing drive.
|
# ? Mar 21, 2023 18:08 |
|
BlankSystemDaemon posted: I will never understand how people trust UnRAID with their data.

I still think the biggest appeal of Unraid is for people just getting into the NAS/homelab game. The ability to add miscellaneous drives as you get them makes it great for those that just want to find a use for old parts/drives they have laying around and don't have the budget to buy a set of matching drives all at once. Creating an "array" and a "share" were far more intuitive to me than things like "zpool, vdev, and RaidZ1 or RaidZ2". The idea of going "poo poo, I'm low on drive space, need to just add one more drive to the array" makes a lot more sense to someone that was previously just buying another external HDD and attaching it to a USB hub.

The UI makes getting into containerized apps very easy, as does the "app store" community apps plugin. Even spinning up VMs was more intuitive for me when I was getting started than it was with Proxmox and TrueNAS (at the time; these have come a long way since then). It's all stuff that can be easily done right from the vanilla OS, and the Unraid forums are extremely active with a bunch of other people who are all running a very similar setup.

If you're someone who needs to store stuff that's irreplaceable or business-critical then sure: either buy a prebuilt system, commit to TrueNAS, or if you can't do that just put it in a Google Drive/iCloud. If you're looking to build a system to store stuff with low up-front cost, would appreciate being able to rebuild a failed drive instead of re-sourcing those files, and also maybe want to start self-hosting apps or messing around with VMs, then Unraid is a good choice.

And also, it's pretty clear that the single-disk ZFS thing was a "you could even do this if you wanted to for some reason" rather than a "here's a recommended setup" kind of thing.
|
# ? Mar 21, 2023 18:48 |
|
Scruff McGruff posted: The UI makes getting into containerized apps very easy, as does the "app store" community apps plugin. Even spinning up VMs was more intuitive for me when I was getting started than it was with Proxmox and TrueNAS (at the time, these have come a long way since then). It's all stuff that can be easily done right from the vanilla OS and the Unraid forums are extremely active with a bunch of other people who are all running a very similar setup.

Not to mention friendly and not elitist. The FreeNAS forums are a cesspool of assholes.
|
# ? Mar 21, 2023 19:11 |
Scruff McGruff posted: I still think the biggest appeal of Unraid is for people just getting into the NAS/Homelab game. The ability to add miscellaneous drives as you get them makes it great for those that just want to find a use for old parts/drives they have laying around and don't have the budget to buy a set of matching drives all at once. [...]

Because that's an absolute anathema to me.

Matt Zerella posted: Not to mention friendly and not elitist. The FreeNAS forums are a cesspool of assholes.

I try to be friendly and non-elitist, but I'm not sure how well I pull it off.
|
|
# ? Mar 21, 2023 19:13 |
|
My unraid NAS is for both media (replaceable) and backing up Steam installs, although lately my gaming PC has almost too much SSD space so it's not as necessary. Also my download folder is mounted on the NAS, so my PC doesn't get cluttered with crap I download. Important documents etc. go straight to OneDrive; I don't even bother putting that on the NAS. Unraid is more of a media server box using various containers.

I was also thinking of moving Pi-hole onto a container, as my Pi packed it in - which shows it's silly putting load-bearing network infrastructure on a single RPi. I knew that already, of course; I was just too lazy to bother before.
|
# ? Mar 21, 2023 19:17 |
|
priznat posted:Important documents etc go straight to onedrive, I don’t even bother putting that on the NAS. Unraid is more of a media server box using various containers.
|
|
# ? Mar 21, 2023 19:20 |