|
With Emby (and I assume jellyfin, but don't have it running atm to check), you can toggle a checkbox per user to disable media playback. They can still download files if that's enabled for them.
|
# ¿ Jun 18, 2023 21:00 |
|
|
# ¿ May 16, 2024 10:54 |
|
Corb3t posted:Unraid 6.12.0 with ZFS support has been released. Is there any reason I'd want to re-format an xfs or btrfs formatted NVMe SSD to zfs? SpaceInvader One put out a video on how to do this, and mentioned some things coming in upcoming videos:
- Making top level folders in a share their own dataset, so each of your containers in appdata can be snapshotted independently for easy rollback.
- Auto snapshotting and auto-replication from dataset to dataset
|
# ¿ Jun 18, 2023 21:10 |
|
CA Backup/Restore is supposed to be one of the more reliable ways to back up your appdata because it shuts down your docker containers before making its backup. Did you put the files back in place manually or use the restore function of the plugin? I've had permissions issues manually copying files from its backup to fix a single borked application, but for a full cache drive replacement you should just need to hit restore and have everything copied back correctly. Spaceinvader One is putting out video tutorials on ZFS in Unraid 6.12, and one of the things you can do with it is set up each folder in your appdata as an individual ZFS dataset, then use ZFS replication to back those up to a ZFS-formatted drive in the array. But his last video on actually setting up the replication isn't out yet.
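The per-dataset replication he's describing boils down to snapshot plus send/receive underneath. A hand-rolled sketch, assuming a cache pool named `cache` with per-container datasets under `cache/appdata` and a ZFS-formatted array drive holding a pool named `backup` (all names made up):

```shell
# One-time full copy of a container's dataset to the array drive
zfs snapshot cache/appdata/plex@2024-05-16
zfs send cache/appdata/plex@2024-05-16 | zfs recv backup/appdata/plex

# Later runs only send the delta between the two snapshots
zfs snapshot cache/appdata/plex@2024-05-17
zfs send -i @2024-05-16 cache/appdata/plex@2024-05-17 | zfs recv backup/appdata/plex
```

His videos wrap this in a script so it runs on a schedule, but those commands are what's doing the work.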
|
# ¿ Jul 19, 2023 13:48 |
|
When you have parity drives, parity is calculated as data is written to the array, like you'd expect. Unraid parity disks can also be added later on; it will read through your array drives and generate parity after the fact. A popular recommendation for the initial ingest into a new Unraid server is to leave parity disabled and bypass any cache, so you're writing straight to array disks without any parity overhead. Then once the initial ingest is done, enable parity for the array and cache for your shares for normal day-to-day use.
|
# ¿ Sep 17, 2023 13:58 |
|
How long it takes to build parity is determined by the size of the individual drive and whether you're running dual parity; the total size of the array really isn't a factor. My 20TB parity takes nearly 2 full days to build or check.
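That timescale checks out against raw throughput: a parity build is one full sequential pass over the drive, so (assuming ~125 MB/s average across the platter, a made-up but typical figure for big spinning drives) the back-of-the-envelope math is:

```shell
# Parity build = one full sequential pass over the (largest) drive.
# 20 TB = 20,000,000 MB; assume ~125 MB/s average sustained speed.
DRIVE_MB=20000000
SPEED_MB_S=125
SECONDS_NEEDED=$((DRIVE_MB / SPEED_MB_S))
HOURS=$((SECONDS_NEEDED / 3600))
echo "~${HOURS} hours"   # a bit under two days of wall-clock time
```

Which is why adding more data drives of the same size barely changes the build time: they're all read in parallel.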
|
# ¿ Sep 17, 2023 14:33 |
|
Unraid parity has overhead because it calculates parity by reading from your array drives. Unraid's parity can only protect you from drive failure. There are plugins available to identify and track bit-rot and other file corruption, but if you care about that you should probably just use something besides Unraid. Unraid's parity check will tell you that you had some issue, since it detects mismatches against your parity, but it won't give you a way to identify where the corruption occurred.
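Those integrity plugins are basically maintaining a checksum manifest for you. A minimal hand-rolled sketch of the same idea, demoed on a temp file (md5sum standing in for whatever hash the plugin actually uses):

```shell
# Record known-good hashes, then re-verify later to catch silent corruption.
DEMO_DIR=$(mktemp -d)
cd "$DEMO_DIR"
echo "some media file contents" > file.bin
md5sum file.bin > manifest.md5   # record the known-good hash
md5sum -c manifest.md5           # reports OK now, FAILED if the file ever rots
```

Unlike parity, this tells you exactly which file went bad, though it can't repair anything on its own.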
|
# ¿ Sep 17, 2023 15:20 |
|
bone emulator posted:I guess this is the place to ask this? While certainly not a fan of the community, I have been a fan of serverbuilds' actual builds, and they just came out with a new "NAS Killer 6.0" build that looks pretty solid. https://forums.serverbuilds.net/t/guide-nas-killer-6-0-ddr4-is-finally-cheap/13956 It's ~$375 before storage. Since it's local only, you could also consider a jankier and cheaper option: a QuickSync-capable laptop and some attached external drives. Here's a somewhat random example, also stolen from serverbuilds' tech deals. https://www.ebay.com/sch/i.html?_fr...1-53200-19255-0
|
# ¿ Oct 8, 2023 02:15 |
|
If you're not attached to Windows, a do-it-yourself solution is to combine mergerfs and SnapRAID. Or pay for Unraid.
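For flavor, the moving parts are one fstab line for the mergerfs pool and a snapraid.conf pointing at the member disks. A sketch with made-up mount points and a pool of three data disks:

```shell
# /etc/fstab -- pool the data disks into a single mount with mergerfs
/mnt/disk* /mnt/storage fuse.mergerfs cache.files=off,category.create=mfs,moveonenospc=true 0 0

# /etc/snapraid.conf -- one parity disk protecting the pool members
parity /mnt/parity1/snapraid.parity
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
```

Then a scheduled `snapraid sync` plays the role Unraid's realtime parity does, just batched, so anything written since the last sync isn't protected yet.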
|
# ¿ Jan 29, 2024 19:46 |
|
If it's just for the event of a disaster, you can put it in an S3 bucket with a lifecycle rule that transitions files to Glacier Deep Archive after 1 day. That comes out to ~$1/TB/month, if I'm remembering the pricing right.
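Rough math on that, treating the ~$0.00099/GB-month Deep Archive rate as an assumption (check current pricing):

```shell
# S3 Glacier Deep Archive: ~$0.00099 per GB-month -> cost per decimal TB (1000 GB)
RATE_PER_GB="0.00099"
COST_PER_TB=$(awk -v r="$RATE_PER_GB" 'BEGIN { printf "%.2f", r * 1000 }')
echo "\$${COST_PER_TB}/TB/month"
```

The catch is on the way back out: restores from Deep Archive take hours and retrieval/egress isn't free, so it really is disaster-only storage.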
|
# ¿ Jan 31, 2024 02:01 |
|
If you're using an HBA card it should be pretty easy. You can pass the card through and TrueNAS will see the drives directly, so you could import your bare-metal TrueNAS array into a new virtualized TrueNAS pretty simply. For my recent Unraid build I ran bare metal for a while to make sure it was all stable, then switched to Proxmox this way. Since I was using an NVMe drive for Unraid's cache/appdata and NVMe drives are just PCIe devices, I was able to pass that through directly as well. I could switch between booting Unraid on bare metal and as a Proxmox VM with zero configuration changes, just by choosing the boot device during startup. If you have SATA drives connected directly to the motherboard I think this gets more annoying; passing those through with Proxmox will, I believe, make them appear as different drives.
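On the Proxmox side the passthrough is one setting per device. A sketch with a made-up VM ID and PCI addresses (find the real addresses with `lspci`):

```shell
# Attach the HBA and an NVMe drive to VM 100 as raw PCI devices
qm set 100 --hostpci0 0000:01:00.0          # HBA: the guest sees the disks directly
qm set 100 --hostpci1 0000:02:00.0,pcie=1   # NVMe is just another PCIe device
```

IOMMU has to be enabled in BIOS and on the kernel command line for this to work at all.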
|
# ¿ Mar 23, 2024 17:14 |
|
Oysters Autobio posted:Was considering an HBA card but now if this sort of setup makes it easy then I probably will get one then. I was told to get LSI cards and avoid Adaptec ones, which was advice I followed but don't really know the reasons behind.

You'll want one that's capable of being flashed to IT mode, but you'll probably find the seller advertising that it's pre-flashed so you won't need to do it yourself.

Different cards use different numbers of PCIe lanes. I think you can use an 8x card in a 4x slot, but I'm unsure if that only affects total bandwidth (the combined max speed of all drives limited by the number of lanes and the PCIe version), or if it causes errors or weirdness when you approach that limit.

You connect hard drives to the HBA with breakout cables that let you hook up 4 drives to each port on the HBA. The connector type differs between HBAs, so make sure you have the right one.

The naming convention of LSI HBAs is pretty simple. An LSI 92## card should be a PCIe gen 2 version, and a 93## will be gen 3. Gen 2 is still probably fine for spinning drives and those cards are extremely cheap, but gen 3 ones have come down in price recently. After the 9### model number they have another number and a letter: the number is how many drives it supports, and the letter says whether the connections are internal or external. So a 9300-8i is a PCIe gen 3 card with 2 internal connectors that you can connect 8 drives to. Don't assume PCIe lane requirements from the number of hard drive connections; look up the specific model details.

You can use more drives than an HBA card supports with a SAS expander card. These are PCIe cards that just need power and a connection to the HBA, and act as the hard drive equivalent of a network switch.

EDIT: As a final note, some HBAs were designed for servers with the expectation that there would be airflow over them and can overheat, so a common recommendation is to zip tie a tiny fan onto the heatsink. Some don't need this, but I don't know anywhere you could check.
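Since the naming scheme is mechanical, it can be decoded mechanically. A toy sketch, assuming model strings always follow the 9xxx-NNx pattern:

```shell
# Decode an LSI HBA model string, e.g. 9300-8i:
#   2nd digit        -> PCIe generation
#   after the dash   -> drive count + i(nternal)/e(xternal) connectors
MODEL="9300-8i"
GEN=$(printf '%s' "$MODEL" | cut -c2)   # "3" -> PCIe gen 3
PORTS="${MODEL#*-}"                     # "8i"
DRIVES="${PORTS%?}"                     # "8" -> eight drives
CONN="${PORTS#"$DRIVES"}"               # "i" -> internal connectors
echo "PCIe gen $GEN, $DRIVES drives, $CONN connectors"
```

Same logic reads a 9305-16e as gen 3, 16 drives, external.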
|
# ¿ Mar 23, 2024 21:45 |
|
|
1: Unraid is pretty quick to disable drives.
2: Whether you can recover data from a failed drive without parity depends entirely on how badly the drive has failed. Each disk in an Unraid array has its own independent filesystem, so you can mount it separately on your server or another device and try to access the files on it.
3: For the same reason, a drive failure won't affect any of the other disks.
4: There's nothing native in Unraid to duplicate data across disks, but you could pretty easily set this up with a scheduled script that copies a directory from one share to another, with those two shares set to use different sets of physical disks.
5: I think you should run parity for convenience. Even if you have everything backed up, restoring is a hassle, and drives failing is an inevitability. A potentially bigger issue with losing replaceable data is figuring out what you need to replace: the 'Arr suite will help you figure out which movies/TV shows were lost, but for media that isn't tracked by something like that it could be a hassle. Unraid has some ways to manage share/directory structure to try to keep related data on the same disks, but it is a bit tedious to set that up.

With Unraid you can add a parity drive later; it doesn't need to be set up from the start.
|
# ¿ Apr 19, 2024 06:05 |