|
mekyabetsu posted:With ZFS, do mirrored vdevs need to be the same size? Let's say I have three mirrored vdevs setup like so: Correct, you can expand the pool by adding new mirror vdevs. One thing to watch out for: your disk configuration has one vdev with disks significantly smaller than the other two, which will likely result in an uneven distribution of data favoring the vdevs with larger drives. https://jrs-s.net/2018/04/11/how-data-gets-imbalanced-on-zfs/
|
# ? Apr 16, 2024 16:35 |
|
|
As I understand it: you can add mirrored vdevs ad hoc and it'll work. In a datacentre environment, wildly different sizes/performance/free space can cause performance issues, but for single-user home use it should be fine.
|
# ? Apr 16, 2024 17:12 |
|
Arishtat posted:Correct you can expand the pool via adding new mirror vdevs. I read about this, but I don't understand why it's a problem. I mean, obviously more data is going to be written to the larger drives because they're... bigger.
|
# ? Apr 16, 2024 17:25 |
|
mekyabetsu posted:With ZFS, do mirrored vdevs need to be the same size? Let's say I have three mirrored vdevs setup like so: You're mostly right - you will get 20TB, and you can add any vdevs you want to an existing pool. The debatable part is "not a problem" - ZFS tries to keep all vdevs roughly equally full, so the new mirror will get near enough 100% of the write load until it catches up to the rest. Whether this is a problem depends on your use.
|
# ? Apr 16, 2024 17:46 |
|
mekyabetsu posted:I read about this, but I don't understand why it's a problem. I mean, obviously more data is going to be written to the larger drives because they're... bigger. Basically, if you care a lot about throughput and IOPS you want all writes spread as evenly as possible among the vdevs. For example, if you need to write 10 GB and you have 4 vdevs, the fastest thing is to have each vdev take 2.5 GB so they all finish at the same time. However, ZFS wants to balance the % used, so bigger vdevs get more writes, as do new empty vdevs. That means after adding a new one, that entire 10 GB write will go just to the new vdev. This is slower because we aren't doing anything in parallel. For home use this is not a problem.
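A toy model of the behavior described above (a sketch only; the real ZFS allocator works per-metaslab and weighs more factors than raw free space, so treat the numbers as illustrative):

```python
# Toy model of free-space-weighted allocation, loosely mimicking how
# ZFS biases writes toward emptier vdevs. Not the real allocator.

def allocate(write_gb, vdevs):
    """Split a write across vdevs in proportion to each vdev's free space.

    vdevs: list of dicts with 'size' and 'used' in GB.
    Returns GB written to each vdev.
    """
    free = [v["size"] - v["used"] for v in vdevs]
    total_free = sum(free)
    return [write_gb * f / total_free for f in free]

# Three old mirrors at 80% full, plus one brand-new empty mirror.
vdevs = [
    {"size": 1000, "used": 800},
    {"size": 1000, "used": 800},
    {"size": 1000, "used": 800},
    {"size": 1000, "used": 0},
]

split = allocate(10, vdevs)
print(split)  # the new empty vdev soaks up most of the 10 GB write
```

With those (made-up) numbers the empty mirror takes 6.25 GB of the 10 GB write while the three full mirrors split the rest, which is exactly the loss of parallelism described above.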
|
# ? Apr 16, 2024 18:05 |
|
As an aside, an individual vdev can be made up of differently-sized drives (i.e. an 8TB and a 10TB), with ZFS treating the vdev as if it's the size of the smallest drive. Typically this is only done when you're expanding in place - replace one drive with a larger drive, resilver, replace the other drive with a larger drive, resilver again, expand to fill the new usable space. I will third/fourth/whatever that mixed-size vdevs are not a problem for all but the most extreme "home" use cases. However, I'll throw out another concern. I'm assuming the 2TB drives are old, because 2TB. Is an extra 2TB of storage worth the increased risk of losing the entire array if both of the 2TB drives die before you can finish replacing one of them? I would put them in a separate pool and use it for local backups of irreplaceable data instead of making it part of your main pool.
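For anyone wanting to sanity-check the capacity math on mixed drives, here's a quick sketch (the drive sizes are hypothetical; the rule is just that a mirror's usable size is its smallest member, and mirror vdevs stripe together):

```python
# Usable capacity of a pool of mirror vdevs: each mirror contributes
# the size of its *smallest* member; the pool stripes mirrors together.

def mirror_vdev_capacity(drives_tb):
    """A mirror's usable size is limited by its smallest drive."""
    return min(drives_tb)

def pool_capacity(vdevs):
    """Mirror vdevs stripe together, so usable space is the sum."""
    return sum(mirror_vdev_capacity(v) for v in vdevs)

# e.g. an 8TB+10TB mirror wastes 2TB until the 8TB is swapped out:
pool = [[8, 10], [8, 8], [2, 2]]
print(pool_capacity(pool))  # 8 + 8 + 2 = 18 TB usable
```

Swap the 8TB in the first mirror for another 10TB and `pool_capacity` jumps by 2TB, which is the expand-in-place trick described above.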
|
# ? Apr 16, 2024 19:36 |
|
What's the recommended approach for reverse proxying into UnRAID to get access to various containers? I've seen folks recommend Cloudflare's tunnel system but I don't feel like switching all my DNS stuff over from AWS. Nginx with port forwarding seems to be the alternative.
Tiny Timbs fucked around with this message at 21:57 on Apr 16, 2024 |
# ? Apr 16, 2024 21:53 |
|
Tailscale.
|
# ? Apr 17, 2024 01:07 |
|
IOwnCalculus posted:I'm assuming the 2TB drives are old, because 2TB. Is an extra 2TB storage worth the increased risk of losing the entire array if both of the 2TB drives die before you can finish replacing one of them? I would put them in a separate pool and use it for local backups of irreplaceable data instead of making it part of your main pool. Yeah, I just used that as an example. I do have some smaller drives, but I'll likely just sell those and buy some larger 10+ TB drives to expand when needed. Thanks to all for your help and answers!
|
# ? Apr 17, 2024 02:29 |
|
When I do reverse proxying, I don’t need to install the SSL certificate in every drat container? Traefik worthwhile or too much?
|
# ? Apr 17, 2024 06:19 |
|
No and yes worthwhile.
|
# ? Apr 17, 2024 06:49 |
|
Tiny Timbs posted:What's the recommended approach for reverse proxying into UnRAID to get access to various containers? I've seen folks recommend Cloudflare's tunnel system but I don't feel like switching all my DNS stuff over from AWS. Nginx with port forwarding seems to be the alternative. Unraid has WireGuard built in under Settings > VPN Tools. If you just need personal access to the server from offsite, I would definitely recommend going that route or using Tailscale. If you need it publicly accessible then I can recommend Nginx Proxy Manager (NPM); it gives you an easy-to-use GUI for configuring your proxy routes and can create certs with Let's Encrypt, or you can install certs from an external provider.
|
# ? Apr 17, 2024 14:16 |
|
Been thinking about getting a backup NAS for home use. Mostly I just want backups of the important documents on my main computer and my DJing laptop before it keels over and dies and I have to rebuild all my playlists and metrics by hand. I don't really care about Plex or setting it up as a media server, but I wouldn't necessarily be against having some space dedicated for an iSCSI drive to throw Steam games on. I'm looking at a turnkey system rather than dealing with the hassle of building my own. Has anyone used the TrueNAS Mini X? I have a lot of direct experience in the Synology space, but the Mini X came up on my radar and I like the platform for the price - I just have no experience with the software stacks on either the Core or Scale side. Do you feel it's worth the extra $400 to move from something like a DS1522+ to the TrueNAS Mini X? Also, how concerned should I be about noise on either of those models? I live in a one-bedroom where I can't necessarily get away from the sound of something if it's roaring like a jet engine constantly.
|
# ? Apr 17, 2024 17:26 |
|
Scruff McGruff posted:Unraid has Wireguard built-in under Settings > VPN Tools. If you just need personal access to the server from offsite, would definitely recommend going that route or using Tailscale. I ended up going with NPM and I really like it aside from some setup issues. The URL checking service for LetsEncrypt kept giving me weird and inconsistent errors and would only let me get a cert for my A record and not the CNAMEs. Using the Route53 API method worked flawlessly. Now I have to figure out how split DNS works so I can direct local network traffic without going through the web, and I’m thinking about setting up Authelia for 2FA.
|
# ? Apr 17, 2024 17:54 |
|
Seeing the BRT fixes that keep going into OpenZFS, they sure got blindsided by their own project complexity. The 2.2.4 release of OpenZFS is gonna get some speculative prefetcher improvements I'm putting my hopes in, to improve streaming game data from a spinning rust mirror on a cold cache.
|
# ? Apr 17, 2024 22:37 |
|
I’m choosing an OS for my file server which will be running ZFS along with Plex and some other relatively lightweight home lab stuff. I assume Ubuntu plays nicely with ZFS and will be suitable for my needs? I was looking at Manjaro as well, but for a home server, I think I’d prefer something a little more stable (and familiar to me) like an Ubuntu LTS release.
|
# ? Apr 18, 2024 18:27 |
|
Yeah, ZFS on Ubuntu is definitely supported and very easy to install: https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/index.html
|
# ? Apr 18, 2024 18:34 |
|
DeathSandwich posted:
I don’t have a TrueNAS Mini X but I run TrueNAS Scale. I like it - the older diehard users have grumps but I suspect it’s because the initial release was bumpy. Scale works fine - I have a pretty typical setup with media serving and storing data. I built my own NAS, but any purpose-built device will be pretty quiet. Largely unnoticeable unless you’re reading/writing a shitload of data. As to whether to spend the extra $400, probably not? The big thing the Mini X has is ECC RAM which, depending on who you ask, is important. I decided it was important because I didn’t want cosmic particles flipping bits on my raw photography backups. You may not care about this though.
|
# ? Apr 18, 2024 20:20 |
|
Do we have an unraid thread or do we do all that poo poo here? If so, I guess please point me there. For preface, this is my first time diving into unraid/similar systems outside of Drivepool on Windows. I interface with plenty of CLI tools, have done some messing around with Docker on HAOS on a Pi, and usually do pretty well with research and reading documentation, so I'm not super concerned about figuring it out. I've been running my PMS with ~60TB of space on my gaming desktop in Windows via Drivepool. Mixed HDDs from 1TB-20TB, something like 8x drives, and some are on the older side. My pool is nearly full but I've been de-duping and clearing space because I know at minimum I need enough space to empty a large drive to start the slow move over. I've finally upgraded my CPU/board/RAM and plan on using this as a push to finally move to unraid. Lack of hard links, general pain-in-the-rear end-ness of most of the tools I wanna run on Windows, and access to Docker containers for Home Assistant, Pi-hole style stuff, and tons of other containers are why I'm moving over. Not to mention stability. My main question is around the parity drive system in unraid, and what happens on a drive failure. I'm not really wanting to purchase an entire 20TB drive just for parity, especially as the majority of the data is for my Plex system and is ultimately replaceable, and because unraid is supposed to save me TBs of space via hard linking. For ease I'll try to list each question out separately.
As a just in case, here’s the hardware for this and my transition plan
My plan is to do a bench build of the unraid server while I move the data over 1 drive at a time either via unassigned drive plugin method or via network from the windows build. Please do let me know if I’m being stupid or there are better ways to do this. Due to unraid seemingly running fairly unique configs for most people, I’m having some slight trouble researching exact answers and it’s a little tougher for me without the box in front of me. I wanna minimize downtime so I want all my pieces in place before I start if possible.
|
# ? Apr 18, 2024 23:32 |
|
Don’t do it. Save yourself the future headache and janitoring and just shuck some cheap WD Easystores and allocate one toward a parity drive so you won’t hate yourself if and when a drive dies. Unraid arrays don’t traditionally stripe data, so if you lose one drive, you only lose that drive’s data, and you may get some errors before it goes bad - but you’re only saving maybe $200 on a server you’re going to utilize for the next decade. Just get the extra 14TB-18TB parity drive the next time they go on sale for $200 at Best Buy. Corb3t fucked around with this message at 00:34 on Apr 19, 2024 |
# ? Apr 18, 2024 23:56 |
|
Pilfered Pallbearers posted:Do we have an unraid thread or do we do all that poo poo here? If so, I guess please point me there. I moved to unraid about 4 months ago by a similar method. Had everything in a Windows storage space, bought a 20TB drive, copied everything to it. Then moved to unraid and ran the old drives without parity until things got copied over, then the 20TB became parity. I was obviously only able to do this because my total data was less than the one big drive, but what you're wanting to do is similar. As to your specific questions:
1. Honestly, I don't know how badly a drive has to fail to be inaccessible.
2. Without parity, it would again depend on the disk. With parity the data can be rebuilt.
3. Unraid isn't RAID, so data is contained entirely on its own disk. It doesn't get striped.
4. This is what parity is for. Otherwise there is probably a plugin that would let you mirror data across them.
5. What's your main worry? If stuff is replaceable and the downtime caused by replacing it is acceptable, go ahead.
|
# ? Apr 19, 2024 01:45 |
|
1: Unraid is pretty quick to disable drives.
2: Whether you can recover data from a failed drive without parity depends entirely on how badly the drive has failed. Each disk in the array on unraid has its own independent filesystem, so you can mount it separately on your server or another device and try to access the files from it.
3: For the same reason, drive failure won't affect any of the other disks.
4: Nothing native in unraid to duplicate data across disks, but you could pretty easily set this up with a scheduled script to copy a directory from one share to another, and set those two shares to use different sets of physical disks.
5: I think you should run parity for convenience. Even if you have everything backed up, restoring is a hassle, and drives failing is an inevitability. A potentially bigger issue with losing replaceable data is figuring out what you need to replace. The 'Arr suite will help you figure out which movies/TV shows were lost, but for media that isn't tracked by something like that it could be a hassle. Unraid has some ways to manage share/directory structure to try to keep related data on the same disks, but it is a bit tedious to set up. With unraid you can add a parity drive later; it doesn't need to be set up from the start.
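For the curious, single-parity reconstruction is just XOR, which is the same basic principle Unraid's single parity drive uses. A minimal sketch with toy 8-byte "disks" (no real disk I/O):

```python
# Single-parity reconstruction: the parity byte at each position is the
# XOR of the byte at that position on every data disk. XOR-ing parity
# with the surviving disks recovers the lost one.

from functools import reduce

def build_parity(disks):
    """Compute parity as the bytewise XOR across all data disks."""
    return bytes(reduce(lambda a, b: a ^ b, block) for block in zip(*disks))

def rebuild(surviving_disks, parity):
    """XOR the parity with the surviving disks to recover the lost disk."""
    return build_parity(surviving_disks + [parity])

disk1 = b"movies.."
disk2 = b"tvshows."
disk3 = b"backups."
parity = build_parity([disk1, disk2, disk3])

# Disk 2 dies; rebuild its contents from the rest plus parity:
recovered = rebuild([disk1, disk3], parity)
assert recovered == disk2
```

This is also why a single parity drive only protects against one simultaneous failure: lose two disks and the XOR no longer has enough information to solve for either.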
|
# ? Apr 19, 2024 06:05 |
|
Is there a preference or best practice for what type of disk partitions to use for ZFS? I decided to just delete all the partitions on the disks I'm using and let zpool decide for me, and I got this:code:
|
# ? Apr 20, 2024 00:52 |
ZFS works best with whole disks without any partitioning, if you don’t need to plug the disks into a system likely to try and “initialize” a disk that appears empty, or don’t need things like boot records or swap partitions.
|
|
# ? Apr 20, 2024 01:52 |
|
I think the partition type is literally just a label so the OS knows what it's working with, and doesn't affect anything about the actual layout or functionality of the partition - asking which is best is like asking which file extension is best for a particular type of file. As long as the OS recognizes what it's working with, you should be good.
|
# ? Apr 20, 2024 02:16 |
|
mekyabetsu posted:Is there a preference or best practice for what type of disk partitions to use for ZFS? I decided to just delete all the partitions on the disks I'm using and let zpool decide for me, and I got this: The best practice is to give ZFS the entire disk, and that's what mine looks like as well. I assume it's something to do with Linux not understanding how ZFS works.
|
# ? Apr 20, 2024 03:05 |
|
BlankSystemDaemon posted:ZFS works best with whole disks without any partitioning hifi posted:The best practice is to give zfs the entire disk and that is what mine looks like as well. I assume it's something to do with linux not understanding how zfs works The 8M partitions were created automatically for what I assume is a very good reason. Eletriarnation posted:I think the partition type is literally just a label so the OS knows what it's working with, and doesn't affect anything about the actual layout or functionality of the partition - asking which is best is like asking which file extension is best for a particular type of file. As long as the OS recognizes what it's working with, you should be good. This makes sense to me. Thank you!
|
# ? Apr 20, 2024 04:16 |
mekyabetsu posted:Yup, this is what I did when I created the pool. I ran “zpool create” with 2 drives that were unpartitioned, and that was the result. If that works for ZFS, it’s fine with me. I just wasn’t sure why it chose those particular partition types. I know ZFS was originally a Sun Solaris thing, so it’s probably related to that. You need to do this, because Linux is the one Unix-like that doesn't understand that it shouldn't reassign drives between reboots (the reason it does this has to do with its floppy disk handling) - so there's a small risk that you'll trigger a resilver; typically this isn't a problem, but it does degrade the array, meaning that a URE could cause data loss. On my fileserver, the 24/7 online pool is a raidz2 of 3x6TB+1x8TB internal disks totalling ~20TB, and the offline onsite backup pool is just shy of 200TB in total, made up of 15x2TB raidz3 vdevs each in their own SAS2 enclosure. The internal drives are what the system boots to and they're where all the 24/7 storage lives, so they have partitioning for the EFI System Partition, a swap partition on the 8TB, and the rest is used for root-on-ZFS. The external drives are all completely unpartitioned, because this lets me simply run sesutil locate to turn on a LED to make it easy to identify the disk that needs replacing, and then I just go pull the disk and insert a new one - this is the advantage of unpartitioned disks, because ZFS automatically starts replacing the disk on its own, and if all devices in a vdev have been replaced with something bigger, the vdev grows automatically too (this is accomplished using the autoreplace and autoexpand properties documented in zpoolprops(7)). EDIT: I still need to figure out if it's possible to automatically turn on the fault LED in FreeBSD.
Trouble is, every failure of spinning rust I've had has been the kind of error that's hard to know about without ZFS (and about half have been impossible to figure out by using S.M.A.R.T alone), so I'm not sure I'd even have benefited. BlankSystemDaemon fucked around with this message at 13:20 on Apr 20, 2024 |
|
# ? Apr 20, 2024 13:10 |
|
BlankSystemDaemon posted:I was phone-posting from bed when responding, so I didn't notice it then - but there's something you do want to take care of: Switch to using /dev/disk/by-id/ for your devices, instead of plain /dev/ devices. code:
|
# ? Apr 20, 2024 14:03 |
|
For ZFS best practices, TrueNAS should know what they are doing. They turn on a ton of options, presumably for compatibility and performance; if you run "zpool history" you can see all the commands. Here is the zpool command TrueNAS SCALE ran for my 6-drive raidz2. code:
Perplx fucked around with this message at 14:18 on Apr 20, 2024 |
# ? Apr 20, 2024 14:16 |
|
Last I remember, TrueNAS creates 2GB swap partitions on all disks you put in a pool. I created my pools on the command line to do whole unpartitioned disks like in ye olde OpenSolaris days.
|
# ? Apr 20, 2024 16:12 |
|
mekyabetsu posted:Ah, okay. I saw the /dev/disk/by-id stuff mentioned, but I didn't understand why it was important. If each drive on my server has multiple IDs, does it matter which one I use? For example, here are the files in my server's /dev/disk/by-id/ directory that all symlink to /dev/sda: Any of them should be equivalent because none of those IDs can possibly ever mean a different disk. "sda" could be anything, but ata...VLKMST1Y will always be that disk no matter if it gets picked up as sda or sdx. I used the IDs starting with 'scsi-S' for all of mine because that got all of my SAS and SATA drives all in the same format, and it includes the full drive model / serial number in the drive name so it's that much easier to know which drive has hosed off. Yes, you can do this without recreating the pool from scratch. Export the pool, then re-import it with "zpool import -d /dev/disk/by-id/ [poolname]". If you care which symlink format you want zpool to use, delete all of the ones you don't want from /dev/disk/by-id after you export but before you reimport. They're just symlinks that get recreated every time the system boots, specifically for you to use with poo poo like this.
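To make it concrete that those by-id names are just symlinks pointing at the same node, here's a sketch that simulates the layout in a temp directory instead of touching the real /dev (the device node and alias names are made up):

```python
# /dev/disk/by-id entries are plain symlinks to the kernel's sd* nodes,
# recreated at boot. Simulated here in a temp directory, not real /dev.

import os
import tempfile

with tempfile.TemporaryDirectory() as dev:
    # Stand-in for /dev/sda, whose letter can change between boots.
    node = os.path.join(dev, "sda")
    open(node, "w").close()

    # Several stable aliases pointing at the same node, like by-id does.
    by_id = os.path.join(dev, "by-id")
    os.mkdir(by_id)
    for alias in ("ata-FAKE_DISK_SERIAL123", "wwn-0xdeadbeef00000000"):
        os.symlink(node, os.path.join(by_id, alias))

    # Every alias resolves to the same underlying device node.
    targets = {os.path.realpath(os.path.join(by_id, a))
               for a in os.listdir(by_id)}
    same = targets == {os.path.realpath(node)}
    print(same)  # True
```

Whichever alias you hand to `zpool import -d`, it resolves to the same disk, which is why deleting the unwanted symlink formats before re-importing is harmless.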
|
# ? Apr 20, 2024 17:27 |
|
Ok, I think it's time to take the plunge on this NAS project. Whipped up a quick parts list on PC Part Picker, aiming to use it mainly for storage, potentially as a future PLEX server. Took some advice given from earlier in the thread and added an Intel A380 to the list for the AV1 encoding. Anyone see any glaring issues that'd be cause for concern with this setup? CPU: AMD Ryzen 5 7600X 4.7 GHz 6-Core Processor ($208.50 @ Amazon) CPU Cooler: Noctua NH-L9A-AM5 CHROMAX.BLACK 33.84 CFM CPU Cooler ($54.95 @ Amazon) Motherboard: MSI MPG B650I EDGE WIFI Mini ITX AM5 Motherboard ($260.00 @ MSI) Memory: G.Skill Flare X5 32 GB (2 x 16 GB) DDR5-6000 CL32 Memory ($96.90 @ Amazon) Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy) Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy) Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy) Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy) Video Card: ASRock Low Profile Arc A380 6 GB Video Card ($113.99 @ Newegg) Case: Jonsbo N3 Mini ITX Desktop Case Power Supply: Corsair SF600 600 W 80+ Platinum Certified Fully Modular SFX Power Supply ($225.00 @ Amazon) Total: $1879.30 Prices include shipping, taxes, and discounts when available Generated by PCPartPicker 2024-04-20 16:50 EDT-0400 I've got some spare M.2 NVMe SSDs I can throw in there too for an OS and maybe use as a cache. Additionally, follow-up question: what's the go-to choice of OS for running a NAS?
|
# ? Apr 20, 2024 21:53 |
|
MadFriarAvelyn posted:Ok, I think it's time to take the plunge on this NAS project. Whipped up a quick parts list on PC Part Picker, aiming to use it mainly for storage, potentially as a future PLEX server. Took some advice given from earlier in the thread and added an Intel A380 to the list for the AV1 encoding. Anyone see any glaring issues that'd be cause for concern with this setup? In the absence of any other criteria, I recommend TrueNAS SCALE/Core. SCALE is the future from what they’re signaling, so if it’s a fresh build it’s worth considering just starting there. With the beefy GPU, you could go substantially cheaper on the CPU IMO. Question to consider: how important is data integrity to you? If that’s super important, consider whether you want ECC memory. That’ll help ensure, along with ZFS, that your bits stay correctly flipped. I care about ECC memory because I back up my raw photo library and some important financial docs via my NAS (as well as offsite). This may not be a concern for you, especially if you’re just downloading Linux ISOs. If you want ECC memory, your easiest route is usually through either Intel’s Atom “enterprise” SKUs or a Xeon. You can do AMD Opteron, but when I was building in 2022, the options were pricier than the Xeons I was looking at.
|
# ? Apr 20, 2024 22:25 |
|
rufius posted:Question to consider: how important is data integrity to you? If that’s super important, consider whether you want ECC memory. That’ll help ensure, along with ZFS, that your bits stay correctly flipped. I was actually considering ECC memory but PC Part Picker didn't list any while I was picking parts. Doing a quick Google search, I guess I need a specific tier of AMD motherboard to get support for it? I'm not set on AMD for the processor for this one, I just want one that won't flounder if I try something more complex with this in the future. If Intel has something that won't be a power-hungry gremlin I'm ok with choosing something over there too if it makes getting access to ECC memory easier. MadFriarAvelyn fucked around with this message at 23:59 on Apr 20, 2024 |
# ? Apr 20, 2024 23:56 |
|
I am not sure if it's commonly supported on consumer AM5 boards, but you can easily get ECC on AM4 with most ASRock boards or some ASUS/Gigabyte models. I don't know if MSI has any support for it. You just have to check the manufacturer specs page and see if ECC support is listed; all the ones I've seen that support it say "ECC & non-ECC" or something like that. Some of them even have it listed on the Newegg page. Here are a few example models with ECC: https://www.newegg.com/asus-prime-b550-plus-ac-hes/p/N82E16813119665 https://www.newegg.com/asrock-b550m-pro4/p/N82E16813157939 https://www.newegg.com/gigabyte-b550m-ds3h-ac/p/N82E16813145250 Of course, you also have to acquire ECC UDIMMs which are not incredibly common. I am using a pair of this 16GB Kingston model in an X570 Taichi, which has been problem-free: https://www.provantage.com/kingston-technology-ksm32es8-16hc~7KINM2JY.htm?source=googleps e: I don't think Intel has many advantages in this space. They have thankfully opened up ECC support on consumer CPUs starting with 12th gen, but unfortunately I believe you still need a server or workstation (W680 chipset) motherboard and those generally cost substantially more. Eletriarnation fucked around with this message at 14:41 on Apr 21, 2024 |
# ? Apr 21, 2024 00:41 |
|
Ya - when I was building it was peak pandemic and hard to find those AM4 boards that supported ECC mem. It was easier to find the server board and Xeon which is how I ended up there. Pricing was fine but go with AMD if you find the right support.
|
# ? Apr 21, 2024 02:15 |
|
If you live in the US or can otherwise get something from a US address Micron sells unbuffered ECC on their website. https://www.crucial.com/catalog/memory/server?module-type(-)ECC%20UDIMM(--)module-type(-)VLP%20ECC%20UDIMM
|
# ? Apr 21, 2024 07:34 |
|
Ok, parts ordered, with a few substitutions. Ended up going with a Fractal Node 304 for the case because the Jonsbo case I originally wanted wasn't available, which led to a PSU and GPU swap for ones that'd better fit the new choice of case. Opted against ECC memory because apparently support is a crapshoot on AM5 motherboards and I didn't want to be locked into the dying AM4 platform otherwise. Wish me luck, goons.
|
# ? Apr 21, 2024 23:12 |
|
|
I'm not sure the CPU of a NAS is something you'd ever upgrade unless you were doing some madcap "i'm delivering content to 100 users on the LAN" setup.
|
# ? Apr 22, 2024 10:57 |