Arishtat
Jan 2, 2011

mekyabetsu posted:

With ZFS, do mirrored vdevs need to be the same size? Let's say I have three mirrored vdevs set up like so:

vdev1: 2x 8 TB drives
vdev2: 2x 10 TB drives
vdev3: 2x 2 TB drives

I would end up with a single 20 TB pool. Right?

Also, it's not a problem to add a new pair of drives as a mirrored vdev after the pool has been created and is in use, correct? I understand that you aren't really meant to add drives to expand the size of a pool in a RAIDZ setup, but if I'm just using mirrored pairs of drives, adding a new vdev is a simple and expected use case, right?

Sorry for the newbie questions. I'm slowly going through ZFS documentation, but there's a lot of it and I'm dumb. :(

Correct, you can expand the pool by adding new mirror vdevs.

One thing to watch out for: your disk configuration has one vdev with disks that are significantly smaller than the other two, which will likely result in an uneven distribution of data favoring the vdevs with larger drives.

https://jrs-s.net/2018/04/11/how-data-gets-imbalanced-on-zfs/
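
For reference, adding another mirror later is a one-liner. Roughly this, where the pool name and device paths are placeholders for your own /dev/disk/by-id names:

code:
# Hypothetical pool/device names - substitute your own
zpool add tank mirror /dev/disk/by-id/ata-NEW_DISK_A /dev/disk/by-id/ata-NEW_DISK_B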

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
As I understand it: you can add mirrored vdevs ad hoc and it'll work. In a datacentre environment, wildly different sizes/performance/free space can cause performance issues, but for single-user home use it should be fine.

mekyabetsu
Dec 17, 2018

Arishtat posted:

Correct, you can expand the pool by adding new mirror vdevs.

One thing to watch out for: your disk configuration has one vdev with disks that are significantly smaller than the other two, which will likely result in an uneven distribution of data favoring the vdevs with larger drives.

https://jrs-s.net/2018/04/11/how-data-gets-imbalanced-on-zfs/

I read about this, but I don't understand why it's a problem. I mean, obviously more data is going to be written to the larger drives because they're... bigger.

Computer viking
May 30, 2011
Now with less breakage.

mekyabetsu posted:

With ZFS, do mirrored vdevs need to be the same size? Let's say I have three mirrored vdevs set up like so:

vdev1: 2x 8 TB drives
vdev2: 2x 10 TB drives
vdev3: 2x 2 TB drives

I would end up with a single 20 TB pool. Right?

Also, it's not a problem to add a new pair of drives as a mirrored vdev after the pool has been created and is in use, correct? I understand that you aren't really meant to add drives to expand the size of a pool in a RAIDZ setup, but if I'm just using mirrored pairs of drives, adding a new vdev is a simple and expected use case, right?

Sorry for the newbie questions. I'm slowly going through ZFS documentation, but there's a lot of it and I'm dumb. :(

You're mostly right - you will get 20TB, and you can add any vdevs you want to an existing pool. The debatable part is "not a problem": ZFS tries to keep all vdevs roughly equally full, so the new mirror will get close to 100% of the write load until it catches up with the rest. Whether or not this is a problem depends on your use.

Yaoi Gagarin
Feb 20, 2014

mekyabetsu posted:

I read about this, but I don't understand why it's a problem. I mean, obviously more data is going to be written to the larger drives because they're... bigger.

Basically, if you care a lot about throughput and IOPS, you want all writes to be spread as evenly as possible among the vdevs. For example, if you need to write 10 GB and you have 4 vdevs, the fastest thing is to have each vdev take 2.5 GB so they all finish at the same time. However, ZFS wants to balance the percentage used, so bigger vdevs get more writes, as do new empty vdevs. That means after adding a new one, that entire 10 GB write will go just to the new vdev. This is slower because we aren't doing anything in parallel.

For home use this is not a problem.
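
If you ever want to watch it happen, zpool list -v breaks size, allocation, and capacity out per vdev, so you can see which mirrors are soaking up the writes (pool name is an example):

code:
# Per-vdev SIZE / ALLOC / FREE / CAP
zpool list -v tank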

IOwnCalculus
Apr 2, 2003





As an aside, an individual vdev can be made up of differently-sized drives (e.g. an 8TB and a 10TB), with ZFS treating the vdev as if it's the size of the smallest drive. Typically this is only done when you're expanding in place: replace one drive with a larger drive, resilver, replace the other drive with a larger drive, resilver again, then expand to fill the new usable space.
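
If you do go the replace-in-place route, the sequence is roughly this - pool and device names are made up, and with autoexpand on the extra space shows up automatically once both disks have been swapped:

code:
zpool set autoexpand=on tank
zpool replace tank ata-OLD_8TB_DISK ata-NEW_10TB_DISK   # wait for the resilver, then repeat for the second disk
zpool online -e tank ata-NEW_10TB_DISK                  # only needed if autoexpand was off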

I will third/fourth/whatever that mixed-size vdevs are not a problem for all but the most extreme "home" use cases. However, I'll throw out another concern. I'm assuming the 2TB drives are old, because 2TB. Is an extra 2TB of storage worth the increased risk of losing the entire array if both of the 2TB drives die before you can finish replacing one of them? I would put them in a separate pool and use it for local backups of irreplaceable data instead of making it part of your main pool.

Tiny Timbs
Sep 6, 2008

What's the recommended approach for reverse proxying into UnRAID to get access to various containers? I've seen folks recommend Cloudflare's tunnel system but I don't feel like switching all my DNS stuff over from AWS. Nginx with port forwarding seems to be the alternative.

Tiny Timbs fucked around with this message at 21:57 on Apr 16, 2024

simble
May 11, 2004

Tailscale.

mekyabetsu
Dec 17, 2018

IOwnCalculus posted:

I'm assuming the 2TB drives are old, because 2TB. Is an extra 2TB of storage worth the increased risk of losing the entire array if both of the 2TB drives die before you can finish replacing one of them? I would put them in a separate pool and use it for local backups of irreplaceable data instead of making it part of your main pool.

Yeah, I just used that as an example. I do have some smaller drives, but I'll likely just sell those and buy some larger 10+ TB drives to expand when needed.

Thanks to all for your help and answers!

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
If I do reverse proxying, I don’t need to install the SSL certificate in every drat container, right? Is Traefik worthwhile or too much?

Aware
Nov 18, 2003
No, and yes, it's worthwhile.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

Tiny Timbs posted:

What's the recommended approach for reverse proxying into UnRAID to get access to various containers? I've seen folks recommend Cloudflare's tunnel system but I don't feel like switching all my DNS stuff over from AWS. Nginx with port forwarding seems to be the alternative.

Unraid has WireGuard built in under Settings > VPN Tools. If you just need personal access to the server from offsite, I would definitely recommend going that route or using Tailscale.

If you need it publicly accessible, then I can recommend Nginx Proxy Manager (NPM); it gives you an easy-to-use GUI for configuring your proxy routes, and it can create certs with LetsEncrypt or you can install certs from an external provider.
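
If you'd rather run NPM by hand instead of from a template, the stock container boils down to roughly this - the appdata paths are just examples, and port 81 is the admin UI:

code:
docker run -d --name npm \
  -p 80:80 -p 443:443 -p 81:81 \
  -v /mnt/user/appdata/npm/data:/data \
  -v /mnt/user/appdata/npm/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest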

DeathSandwich
Apr 24, 2008

I fucking hate puzzles.
Been thinking about getting a backup NAS for home use. Mostly I just want backups of the important documents on my main computer and my DJing laptop before it keels over and dies and I have to rebuild all my playlists and metrics by hand. I don't really care about Plex or setting it up as a media server, but I wouldn't necessarily be against having some space dedicated to an iSCSI drive to throw Steam games on. I'm looking at a turnkey system rather than dealing with the hassle of building my own.

Has anyone used the TrueNAS Mini X? I have a lot of direct experience in the Synology space, but the Mini X came up on my radar and I like the platform for the price - I just have no experience with the software stacks on either the Core or Scale side.

Do you feel it's worth the extra $400 to move from something like a DS1522+ to the TrueNAS Mini X?

Also how concerned should I be for noise on either of those models? I live in a one bedroom where I can't necessarily get away from the sound of something if it's roaring like a jet engine constantly.

Tiny Timbs
Sep 6, 2008

Scruff McGruff posted:

Unraid has WireGuard built in under Settings > VPN Tools. If you just need personal access to the server from offsite, I would definitely recommend going that route or using Tailscale.

If you need it publicly accessible, then I can recommend Nginx Proxy Manager (NPM); it gives you an easy-to-use GUI for configuring your proxy routes, and it can create certs with LetsEncrypt or you can install certs from an external provider.

I ended up going with NPM and I really like it aside from some setup issues. The URL checking service for LetsEncrypt kept giving me weird and inconsistent errors and would only let me get a cert for my A record and not the CNAMEs. Using the Route53 API method worked flawlessly.

Now I have to figure out how split DNS works so I can direct local network traffic without going through the web, and I’m thinking about setting up Authelia for 2FA.
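
From what I've read so far, if the local DNS ends up being Pi-hole or anything else dnsmasq-based, split DNS might be as simple as one override line pointing the public hostname at the proxy's LAN address (hostname and IP below are made up):

code:
# /etc/dnsmasq.d/02-split-dns.conf  (hypothetical hostname/IP)
address=/media.example.com/192.168.1.50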

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Seeing the BRT fixes that keep going into OpenZFS, they sure got blindsided by their own project complexity. :haw:

The 2.2.4 release of OpenZFS is gonna get some speculative prefetcher improvements that I'm pinning my hopes on to improve streaming game data from a spinning rust mirror with a cold cache.

mekyabetsu
Dec 17, 2018

I’m choosing an OS for my file server which will be running ZFS along with Plex and some other relatively lightweight home lab stuff. I assume Ubuntu plays nicely with ZFS and will be suitable for my needs? I was looking at Manjaro as well, but for a home server, I think I’d prefer something a little more stable (and familiar to me) like an Ubuntu LTS release.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah, ZFS on Ubuntu is definitely supported and very easy to install: https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/index.html
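
On a stock Ubuntu LTS install it's basically just the one package, plus a quick sanity check:

code:
sudo apt install zfsutils-linux
zpool version    # confirms the userland tools and loaded kernel module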

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

DeathSandwich posted:


Has anyone used the TrueNAS Mini X? I have a lot of direct experience in the Synology space, but the Mini X came up on my radar and I like the platform for the price - I just have no experience with the software stacks on either the Core or Scale side.

Do you feel it's worth the extra $400 to move from something like a DS1522+ to the TrueNAS Mini X?

Also how concerned should I be for noise on either of those models? I live in a one bedroom where I can't necessarily get away from the sound of something if it's roaring like a jet engine constantly.

I don’t have a TrueNAS Mini X, but I run TrueNAS Scale. I like it - the older diehard users have their grumbles, but I suspect it’s because the initial release was bumpy.

Scale works fine - I have a pretty typical setup with media serving and storing data.

I built my own NAS, but any purpose-built device will be pretty quiet. Largely unnoticeable unless you’re reading/writing a shitload of data.

As to whether to spend the extra $400, probably not? The big thing the Mini X has is ECC RAM which, depending on who you ask, is important.

I decided it was important because I didn’t want cosmic particles flipping bits on my raw photography backups. You may not care about this though.

Pilfered Pallbearers
Aug 2, 2007

Do we have an unraid thread or do we do all that poo poo here? If so, I guess please point me there.

For preface, this is my first time diving into unraid/similar systems outside of drivepool on windows. I interface with plenty of CLI tools, done some messing around with docker on HAOS on Pi, and usually do pretty well with research and reading documentation so I’m not super concerned about figuring it out.

I’ve been running my PMS with ~ 60TB of space on my gaming desktop in windows via drivepool. Mixed HDDs from 1TB-20TB, something like 8x drives, and some are on the older side. My pool is nearly full but I’ve been de-duping and clearing space because I know at minimum I need enough space to empty a large drive to start the slow move over.

I’ve finally upgraded my CPU/board/RAM and plan on using this as a push to finally move to Unraid. Lack of hard links, the general pain-in-the-rear end-ness of most of the tools I wanna run on Windows, and access to Docker containers for Home Assistant, Pi-hole-style stuff, and tons of other containers are why I’m moving over. Not to mention stability.


My main question is around the parity drive system in Unraid, and what happens on a drive failure. I’m not really wanting to purchase an entire 20TB drive just for parity, especially as the majority of the data is for my Plex system and is ultimately replaceable, and because Unraid is supposed to save me TBs of space via hard linking. For ease I’ll try to list each question out separately.


  • When running with no parity, what constitutes a drive failure that causes a drive to be inaccessible? Is it total failure, certain SMART warnings, bad sectors, mount failure, etc.?

  • Is all data on a drive that falls out of the array lost, or is there a way to recover it?

  • Does a drive failure affect the data on the other disks in the array?

  • Is it possible to identify specific data that should be duplicated across multiple physical disks to ensure it’s not lost if a drive fails? I plan to figure out more specific backup tools in the future, probably backblaze or similar, but want to keep my downtime during the transition to a minimum.

  • Factoring in that most of this data is replaceable and my above questions, am I being an idiot by not running parity? If I’m gonna buy another 20TB drive, I’d rather it go toward expanding the array so I can support more users rather than toward parity.



As a just in case, here’s the hardware for this and my transition plan


  • i5-14500
  • AsRock Z790 PG Riptide
  • 32Gb DDR5 6400 / 32
  • ~8x mixed size, brand, age 3.5” HDDs, from 1TB-20TB
  • 3x 500GB SATA 2.5” Samsung EVO SSDs (app data and etc?)
  • 1x 500GB Samsung NVME (likely a boot drive for windows games with anti-cheat that I can’t run in VM, pci-e 3.0)
  • 1x 1TB Crucial NVME (PCI-e 5.0) (cache? I dunno yet)
  • Dell H310 raid card if needed
  • Nvidia 3080FE (going to be for gaming passthrough only, won’t be passed to Plex)
  • cooling/case/PSU and etc is well worked out

My plan is to do a bench build of the unraid server while I move the data over 1 drive at a time either via unassigned drive plugin method or via network from the windows build.


Please do let me know if I’m being stupid or there are better ways to do this. Due to unraid seemingly running fairly unique configs for most people, I’m having some slight trouble researching exact answers and it’s a little tougher for me without the box in front of me. I wanna minimize downtime so I want all my pieces in place before I start if possible.

Corb3t
Jun 7, 2003

Don’t do it. Save yourself the future headache and janitoring and just shuck some cheap WD easystores and allocate one toward a parity drive so you won’t hate yourself if and when a drive dies.

Unraid arrays don’t traditionally stripe data, so if you lose one drive, you only lose that drive’s data, and you may get some errors before it goes bad. But you’re saving maybe $200 on a server you’re going to use for the next decade - just get the extra 14TB-18TB parity drive the next time they go on sale for $200 at Best Buy.

Corb3t fucked around with this message at 00:34 on Apr 19, 2024

Rap Game Goku
Apr 2, 2008

Word to your moms, I came to drop spirit bombs


Pilfered Pallbearers posted:

Do we have an unraid thread or do we do all that poo poo here? If so, I guess please point me there.

For preface, this is my first time diving into unraid/similar systems outside of drivepool on windows. I interface with plenty of CLI tools, done some messing around with docker on HAOS on Pi, and usually do pretty well with research and reading documentation so I’m not super concerned about figuring it out.

I’ve been running my PMS with ~ 60TB of space on my gaming desktop in windows via drivepool. Mixed HDDs from 1TB-20TB, something like 8x drives, and some are on the older side. My pool is nearly full but I’ve been de-duping and clearing space because I know at minimum I need enough space to empty a large drive to start the slow move over.

I’ve finally upgraded my CPU/board/RAM and plan on using this as a push to finally move to Unraid. Lack of hard links, the general pain-in-the-rear end-ness of most of the tools I wanna run on Windows, and access to Docker containers for Home Assistant, Pi-hole-style stuff, and tons of other containers are why I’m moving over. Not to mention stability.


My main question is around the parity drive system in Unraid, and what happens on a drive failure. I’m not really wanting to purchase an entire 20TB drive just for parity, especially as the majority of the data is for my Plex system and is ultimately replaceable, and because Unraid is supposed to save me TBs of space via hard linking. For ease I’ll try to list each question out separately.


  • When running with no parity, what constitutes a drive failure that causes a drive to be inaccessible? Is it total failure, certain SMART warnings, bad sectors, mount failure, etc.?

  • Is all data on a drive that falls out of the array lost, or is there a way to recover it?

  • Does a drive failure affect the data on the other disks in the array?

  • Is it possible to identify specific data that should be duplicated across multiple physical disks to ensure it’s not lost if a drive fails? I plan to figure out more specific backup tools in the future, probably backblaze or similar, but want to keep my downtime during the transition to a minimum.

  • Factoring in that most of this data is replaceable and my above questions, am I being an idiot by not running parity? If I’m gonna buy another 20TB drive, I’d rather it go toward expanding the array so I can support more users rather than toward parity.



As a just in case, here’s the hardware for this and my transition plan


  • i5-14500
  • AsRock Z790 PG Riptide
  • 32Gb DDR5 6400 / 32
  • ~8x mixed size, brand, age 3.5” HDDs, from 1TB-20TB
  • 3x 500GB SATA 2.5” Samsung EVO SSDs (app data and etc?)
  • 1x 500GB Samsung NVME (likely a boot drive for windows games with anti-cheat that I can’t run in VM, pci-e 3.0)
  • 1x 1TB Crucial NVME (PCI-e 5.0) (cache? I dunno yet)
  • Dell H310 raid card if needed
  • Nvidia 3080FE (going to be for gaming passthrough only, won’t be passed to Plex)
  • cooling/case/PSU and etc is well worked out

My plan is to do a bench build of the unraid server while I move the data over 1 drive at a time either via unassigned drive plugin method or via network from the windows build.


Please do let me know if I’m being stupid or there are better ways to do this. Due to unraid seemingly running fairly unique configs for most people, I’m having some slight trouble researching exact answers and it’s a little tougher for me without the box in front of me. I wanna minimize downtime so I want all my pieces in place before I start if possible.

I moved to Unraid about 4 months ago by a similar method. Had everything in a Windows Storage Space, bought a 20TB drive, and copied everything to it. Then I moved to Unraid and ran the old drives without parity until things got copied over, and then the 20TB became parity. I was obviously only able to do this because my total data was less than the one big drive, but what you're wanting to do is similar.

As to your specific questions:

1. Honestly, I don't know how badly a drive has to fail to become inaccessible.
2. Without parity, it would again depend on the disk. With parity the data can be rebuilt.
3. Unraid isn't RAID, so data is contained entirely on its own disk. It doesn't get striped.
4. This is what parity is for. Otherwise there is probably a plugin that would let you mirror data across them.
5. What's your main worry? If stuff is replaceable and the downtime caused by replacing it is acceptable, go ahead.

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
1: Unraid is pretty quick to disable drives.
2: Whether you can recover data from a failed drive without parity depends entirely on how badly the drive has failed. Each disk in an Unraid array has its own independent filesystem, so you can mount it separately on your server or another device and try to access the files from it.
3: For the same reasons, a drive failure won't affect any of the other disks.
4: There's nothing native in Unraid to duplicate data across disks, but you could pretty easily set this up with a scheduled script that copies a directory from one share to another, and set those two shares to use different sets of physical disks (rough sketch below).
5: I think you should run parity for convenience. Even if you have everything backed up, it's a hassle, and drives failing is an inevitability. A potentially bigger issue with losing replaceable data is figuring out what you need to replace. The 'Arr suite will help you figure out which movies/TV shows were lost, but for media that isn't tracked by something like that it could be a hassle. Unraid has some ways to manage share/directory structure to try to keep data on the same disks, but it is a bit tedious to set that up. With Unraid you can add a parity drive later; it doesn't need to be set up from the start.
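
Something along these lines on a schedule (e.g. via the User Scripts plugin) would cover the duplication idea - the share names here are made up, and you'd set the two shares to use different included disks:

code:
#!/bin/bash
# Copy the "keepers" share onto a share that lives on a different set of disks
rsync -a --delete /mnt/user/keepers/ /mnt/user/keepers-copy/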

mekyabetsu
Dec 17, 2018

Is there a preference or best practice for what type of disk partitions to use for ZFS? I decided to just delete all the partitions on the disks I'm using and let zpool decide for me, and I got this:

code:
Device           Start         End     Sectors  Size Type
/dev/sdb1         2048 15628036095 15628034048  7.3T Solaris /usr & Apple ZFS
/dev/sdb9  15628036096 15628052479       16384    8M Solaris reserved 1

Device           Start         End     Sectors  Size Type
/dev/sda1         2048 15628036095 15628034048  7.3T Solaris /usr & Apple ZFS
/dev/sda9  15628036096 15628052479       16384    8M Solaris reserved 1
which isn't what I expected, but I assume zpool knows best? Any reason why zpool defaults to "Solaris /usr & Apple ZFS" partitions?

BlankSystemDaemon
Mar 13, 2009



ZFS works best with whole disks without any partitioning, as long as you don’t need to plug the disks into a system that's likely to try and “initialize” a disk that appears empty, and don’t need things like boot records or swap partitions.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I think the partition type is literally just a label so the OS knows what it's working with, and doesn't affect anything about the actual layout or functionality of the partition - asking which is best is like asking which file extension is best for a particular type of file. As long as the OS recognizes what it's working with, you should be good.
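
If you want to convince yourself, the filesystem signature is what the tools actually key off; blkid should report zfs_member for those partitions regardless of what the GPT type label says:

code:
sudo blkid /dev/sda1    # expect TYPE="zfs_member"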

hifi
Jul 25, 2012

mekyabetsu posted:

Is there a preference or best practice for what type of disk partitions to use for ZFS? I decided to just delete all the partitions on the disks I'm using and let zpool decide for me, and I got this:

code:
Device           Start         End     Sectors  Size Type
/dev/sdb1         2048 15628036095 15628034048  7.3T Solaris /usr & Apple ZFS
/dev/sdb9  15628036096 15628052479       16384    8M Solaris reserved 1

Device           Start         End     Sectors  Size Type
/dev/sda1         2048 15628036095 15628034048  7.3T Solaris /usr & Apple ZFS
/dev/sda9  15628036096 15628052479       16384    8M Solaris reserved 1
which isn't what I expected, but I assume zpool knows best? Any reason why zpool defaults to "Solaris /usr & Apple ZFS" partitions?

The best practice is to give ZFS the entire disk, and that is what mine looks like as well. I assume it's something to do with Linux not understanding how ZFS works.

mekyabetsu
Dec 17, 2018

BlankSystemDaemon posted:

ZFS works best with whole disks without any partitioning

hifi posted:

The best practice is to give ZFS the entire disk, and that is what mine looks like as well. I assume it's something to do with Linux not understanding how ZFS works.
Yup, this is what I did when I created the pool. I ran “zpool create” with 2 drives that were unpartitioned, and that was the result. If that works for ZFS, it’s fine with me. I just wasn’t sure why it chose those particular partition types. I know ZFS was originally a Sun Solaris thing, so it’s probably related to that.

The 8M partitions were created automatically for what I assume is a very good reason.

Eletriarnation posted:

I think the partition type is literally just a label so the OS knows what it's working with, and doesn't affect anything about the actual layout or functionality of the partition - asking which is best is like asking which file extension is best for a particular type of file. As long as the OS recognizes what it's working with, you should be good.

This makes sense to me. Thank you! :)

BlankSystemDaemon
Mar 13, 2009



mekyabetsu posted:

Yup, this is what I did when I created the pool. I ran “zpool create” with 2 drives that were unpartitioned, and that was the result. If that works for ZFS, it’s fine with me. I just wasn’t sure why it chose those particular partition types. I know ZFS was originally a Sun Solaris thing, so it’s probably related to that.

The 8M partitions were created automatically for what I assume is a very good reason.

This makes sense to me. Thank you! :)
I was phone-posting from bed when responding, so I didn't notice it then - but there's something you do want to take care of: Switch to using /dev/disk/by-id/ for your devices, instead of plain /dev/ devices.
You need to do this because Linux is the one Unix-like that doesn't understand that it shouldn't reassign drives between reboots (the reason why it does this has to do with its floppy disk handling) - so there's a small risk that you'll trigger a resilver; typically this isn't a problem, but it does degrade the array, meaning that a URE could cause data loss.

On my fileserver, the 24/7 online pool is a raidz2 of 3x6TB+1x8TB internal disks totalling ~20TB, and the offline onsite backup pool is just shy of 200TB in total, made up of 15x2TB raidz3 vdevs, each in their own SAS2 enclosure.
The internal drives are what the system boots to and where all the 24/7 storage lives, so they have partitioning for the EFI System Partition and a swap partition on the 8TB, with the rest used for root-on-ZFS.
The external drives are all completely unpartitioned, because this lets me simply run sesutil locate to turn on an LED that makes it easy to identify the disk that needs replacing, and then I just go pull the disk and insert a new one. This is the advantage of unpartitioned disks: ZFS automatically starts replacing the disk on its own, and if all devices in a vdev have been replaced with something bigger, the vdev grows automatically too (this is accomplished using the autoreplace and autoexpand properties documented in zpoolprops(7)).
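
For reference, that's just two pool properties plus sesutil; something like this, with the pool and disk names here being made up:

code:
zpool set autoreplace=on backuppool
zpool set autoexpand=on backuppool
sesutil locate da20 on    # FreeBSD: light up the locate LED on that bay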

EDIT: I still need to figure out if it's possible to automatically turn on the fault LED in FreeBSD.
Trouble is, every failure of spinning rust I've had has been the kind of error that's hard to know about without ZFS (and about half have been impossible to figure out by using S.M.A.R.T alone), so I'm not sure I'd even have benefited.

BlankSystemDaemon fucked around with this message at 13:20 on Apr 20, 2024

mekyabetsu
Dec 17, 2018

BlankSystemDaemon posted:

I was phone-posting from bed when responding, so I didn't notice it then - but there's something you do want to take care of: Switch to using /dev/disk/by-id/ for your devices, instead of plain /dev/ devices.
You need to do this because Linux is the one Unix-like that doesn't understand that it shouldn't reassign drives between reboots (the reason why it does this has to do with its floppy disk handling) - so there's a small risk that you'll trigger a resilver; typically this isn't a problem, but it does degrade the array, meaning that a URE could cause data loss.
Ah, okay. I saw the /dev/disk/by-id stuff mentioned, but I didn't understand why it was important. If each drive on my server has multiple IDs, does it matter which one I use? For example, here are the files in my server's /dev/disk/by-id/ directory that all symlink to /dev/sda:

code:
lrwxrwxrwx 1 root root  9 Apr 20 03:35 ata-WDC_WD80EFZX-68UW8N0_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-0ATA_WDC_WD80EFZX-68U_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-1ATA_WDC_WD80EFZX-68UW8N0_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-35000cca260f342d0 -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-SATA_WDC_WD80EFZX-68U_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 wwn-0x5000cca260f342d0 -> ../../sda
Also, will I be able to do this without recreating the pool? Because I just got done copying a ton of stuff to it. :(

Perplx
Jun 26, 2004


Best viewed on Orgasma Plasma
Lipstick Apathy
For ZFS best practices, TrueNAS should know what they're doing. They turn on a ton of options, presumably for compatibility and performance; if you run "zpool history" you can see all the commands.

Here is the zpool command TrueNAS SCALE ran for my 6-drive raidz2.

code:
zpool create \
-o feature@lz4_compress=enabled \
-o altroot=/mnt \
-o cachefile=/data/zfs/zpool.cache \
-o failmode=continue \
-o autoexpand=on \
-o ashift=12 \
-o feature@async_destroy=enabled \
-o feature@empty_bpobj=enabled \
-o feature@multi_vdev_crash_dump=enabled \
-o feature@spacemap_histogram=enabled \
-o feature@enabled_txg=enabled \
-o feature@hole_birth=enabled \
-o feature@extensible_dataset=enabled \
-o feature@embedded_data=enabled \
-o feature@bookmarks=enabled \
-o feature@filesystem_limits=enabled \
-o feature@large_blocks=enabled \
-o feature@large_dnode=enabled \
-o feature@sha512=enabled \
-o feature@skein=enabled \
-o feature@edonr=enabled \
-o feature@userobj_accounting=enabled \
-o feature@encryption=enabled \
-o feature@project_quota=enabled \
-o feature@device_removal=enabled \
-o feature@obsolete_counts=enabled \
-o feature@zpool_checkpoint=enabled \
-o feature@spacemap_v2=enabled \
-o feature@allocation_classes=enabled \
-o feature@resilver_defer=enabled \
-o feature@bookmark_v2=enabled \
-o feature@redaction_bookmarks=enabled \
-o feature@redacted_datasets=enabled \
-o feature@bookmark_written=enabled \
-o feature@log_spacemap=enabled \
-o feature@livelist=enabled \
-o feature@device_rebuild=enabled \
-o feature@zstd_compress=enabled \
-o feature@draid=enabled \
-O atime=off \
-O compression=lz4 \
-O aclinherit=passthrough \
-O mountpoint=/tank \
-O acltype=posix \
-O aclmode=discard \
tank raidz2 \
/dev/disk/by-partuuid/1c5839c3-5268-4177-aef3-09c87fac0923 \
/dev/disk/by-partuuid/a269c6ee-b07c-4b5d-8a3a-3af2622f6f4c \
/dev/disk/by-partuuid/ae20bf11-7cf9-4cbe-bab5-fc2549b76f63 \
/dev/disk/by-partuuid/a8eb0101-300b-4fac-9cce-392ed7f0bb75 \
/dev/disk/by-partuuid/c8a12cf0-ed62-4459-a363-b22cafdd5d7b \
/dev/disk/by-partuuid/c43bd175-58b1-4910-a7a6-b8779f9fe1e1

zfs create \
-o aclinherit=discard \
-o acltype=posix \
-o casesensitivity=sensitive \
-o copies=1 \
-o org.truenas:managedby=192.168.9.61 \
-o xattr=sa tank/media

Perplx fucked around with this message at 14:18 on Apr 20, 2024

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Last I remember, TrueNAS creates 2GB swap partitions on all disks you put in a pool.

I created my pools on the command line to do whole unpartitioned disks like in ye olde OpenSolaris days.
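
i.e. nothing fancier than handing zpool the bare devices and letting it do its thing - something like this, with made-up disk names:

code:
zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B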

IOwnCalculus
Apr 2, 2003





mekyabetsu posted:

Ah, okay. I saw the /dev/disk/by-id stuff mentioned, but I didn't understand why it was important. If each drive on my server has multiple IDs, does it matter which one I use? For example, here are the files in my server's /dev/disk/by-id/ directory that all symlink to /dev/sda:

code:
lrwxrwxrwx 1 root root  9 Apr 20 03:35 ata-WDC_WD80EFZX-68UW8N0_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-0ATA_WDC_WD80EFZX-68U_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-1ATA_WDC_WD80EFZX-68UW8N0_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-35000cca260f342d0 -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-SATA_WDC_WD80EFZX-68U_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 wwn-0x5000cca260f342d0 -> ../../sda
Also, will I be able to do this without recreating the pool? Because I just got done copying a ton of stuff to it. :(

Any of them should be equivalent because none of those IDs can possibly ever mean a different disk. "sda" could be anything, but ata...VLKMST1Y will always be that disk no matter if it gets picked up as sda or sdx. I used the IDs starting with 'scsi-S' for all of mine because that got all of my SAS and SATA drives all in the same format, and it includes the full drive model / serial number in the drive name so it's that much easier to know which drive has hosed off.

Yes, you can do this without recreating the pool from scratch. Export the pool, then re-import it with "zpool import -d /dev/disk/by-id/ [poolname]". If you care which symlink format you want zpool to use, delete all of the ones you don't want from /dev/disk/by-id after you export but before you reimport. They're just symlinks that get recreated every time the system boots, specifically for you to use with poo poo like this.
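
Concretely, assuming the pool is called tank, it's just:

code:
zpool export tank
zpool import -d /dev/disk/by-id/ tank
zpool status tank    # should now show the ata-.../wwn-... names instead of sdX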

MadFriarAvelyn
Sep 25, 2007

Ok, I think it's time to take the plunge on this NAS project. Whipped up a quick parts list on PC Part Picker, aiming to use it mainly for storage, potentially as a future PLEX server. Took some advice given from earlier in the thread and added an Intel A380 to the list for the AV1 encoding. Anyone see any glaring issues that'd be cause for concern with this setup?

CPU: AMD Ryzen 5 7600X 4.7 GHz 6-Core Processor ($208.50 @ Amazon)
CPU Cooler: Noctua NH-L9A-AM5 CHROMAX.BLACK 33.84 CFM CPU Cooler ($54.95 @ Amazon)
Motherboard: MSI MPG B650I EDGE WIFI Mini ITX AM5 Motherboard ($260.00 @ MSI)
Memory: G.Skill Flare X5 32 GB (2 x 16 GB) DDR5-6000 CL32 Memory ($96.90 @ Amazon)
Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy)
Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy)
Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy)
Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy)
Video Card: ASRock Low Profile Arc A380 6 GB Video Card ($113.99 @ Newegg)
Case: Jonsbo N3 Mini ITX Desktop Case
Power Supply: Corsair SF600 600 W 80+ Platinum Certified Fully Modular SFX Power Supply ($225.00 @ Amazon)
Total: $1879.30
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2024-04-20 16:50 EDT-0400

I've got some spare M.2 NVMe SSDs I can throw in there too for an OS and maybe use as a cache.

Additionally, follow-up question: what's the go-to choice of OS for running a NAS?

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.

MadFriarAvelyn posted:

Ok, I think it's time to take the plunge on this NAS project. Whipped up a quick parts list on PC Part Picker, aiming to use it mainly for storage, potentially as a future PLEX server. Took some advice given from earlier in the thread and added an Intel A380 to the list for the AV1 encoding. Anyone see any glaring issues that'd be cause for concern with this setup?

CPU: AMD Ryzen 5 7600X 4.7 GHz 6-Core Processor ($208.50 @ Amazon)
CPU Cooler: Noctua NH-L9A-AM5 CHROMAX.BLACK 33.84 CFM CPU Cooler ($54.95 @ Amazon)
Motherboard: MSI MPG B650I EDGE WIFI Mini ITX AM5 Motherboard ($260.00 @ MSI)
Memory: G.Skill Flare X5 32 GB (2 x 16 GB) DDR5-6000 CL32 Memory ($96.90 @ Amazon)
Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy)
Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy)
Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy)
Storage: Western Digital Red Plus 12 TB 3.5" 7200 RPM Internal Hard Drive ($229.99 @ Best Buy)
Video Card: ASRock Low Profile Arc A380 6 GB Video Card ($113.99 @ Newegg)
Case: Jonsbo N3 Mini ITX Desktop Case
Power Supply: Corsair SF600 600 W 80+ Platinum Certified Fully Modular SFX Power Supply ($225.00 @ Amazon)
Total: $1879.30
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2024-04-20 16:50 EDT-0400

I've got some spare M.2 NVMe SSDs I can throw in there too for an OS and maybe use as a cache.

Additionally, follow-up question: what's the go-to choice of OS for running a NAS?

In the absence of any other criteria, I recommend TrueNAS SCALE/Core. SCALE is the future from what they’re signaling, so if it’s a fresh setup, it’s worth considering just starting there.

With that beefy GPU, you could go substantially cheaper on the CPU IMO.

Question to consider: how important is data integrity to you? If that’s super important, consider whether you want ECC memory. That’ll help ensure, along with ZFS, that your bits stay correctly flipped.

I care about ECC memory because I back up my raw photo library and some important financial docs via my NAS (as well as offsite). This may not be a concern for you, especially if you’re just downloading Linux ISOs.

If you want ECC memory, your easiest route is usually through either Intel’s atom “enterprise” skus or a Xeon. You can do AMD Opteron, but when I was building in 2022, the options were pricier than the Xeons I was looking at.

MadFriarAvelyn
Sep 25, 2007

rufius posted:

Question to consider: how important is data integrity to you? If that’s super important, consider whether you want ECC memory. That’ll help ensure, along with ZFS, that your bits stay correctly flipped.

I care about ECC memory because I back up my raw photo library and some important financial docs via my NAS (as well as offsite). This may not be a concern for you, especially if you’re just downloading Linux ISOs.

If you want ECC memory, your easiest route is usually through either Intel’s atom “enterprise” skus or a Xeon. You can do AMD Opteron, but when I was building in 2022, the options were pricier than the Xeons I was looking at.

I was actually considering ECC memory, but PC Part Picker didn't list any while I was picking parts. Doing a quick Google search, I guess I need a specific tier of AMD motherboard to get support for it? I'm not strict about going AMD for the processor for this one, I just want one that won't flounder if I try something more complex with this in the future. If Intel has something that won't be a power-hungry gremlin, I'm OK with choosing something over there too if it makes getting access to ECC memory easier.

MadFriarAvelyn fucked around with this message at 23:59 on Apr 20, 2024

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I am not sure if it's commonly supported on consumer AM5 boards, but you can easily get ECC on AM4 with most ASRock boards or some ASUS/Gigabyte models. I don't know if MSI has any support for it. You just have to check the manufacturer specs page and see if ECC support is listed; all the ones I've seen that support it say "ECC & non-ECC" or something like that. Some of them even have it listed on the Newegg page. Here are a few example models with ECC:

https://www.newegg.com/asus-prime-b550-plus-ac-hes/p/N82E16813119665
https://www.newegg.com/asrock-b550m-pro4/p/N82E16813157939
https://www.newegg.com/gigabyte-b550m-ds3h-ac/p/N82E16813145250

Of course, you also have to acquire ECC UDIMMs which are not incredibly common. I am using a pair of this 16GB Kingston model in an X570 Taichi, which has been problem-free: https://www.provantage.com/kingston-technology-ksm32es8-16hc~7KINM2JY.htm?source=googleps

e: I don't think Intel has many advantages in this space. They have thankfully opened up ECC support on consumer CPUs starting with 12th gen, but unfortunately I believe you still need a server or workstation (W680 chipset) motherboard and those generally cost substantially more.

Eletriarnation fucked around with this message at 14:41 on Apr 21, 2024

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.
Ya - when I was building it was peak pandemic and it was hard to find those AM4 boards that supported ECC mem.

It was easier to find the server board and Xeon, which is how I ended up there. Pricing was fine, but go with AMD if you find the right support.

Anime Schoolgirl
Nov 28, 2002

If you live in the US or can otherwise get something shipped from a US address, Micron sells unbuffered ECC on their website. https://www.crucial.com/catalog/memory/server?module-type(-)ECC%20UDIMM(--)module-type(-)VLP%20ECC%20UDIMM

MadFriarAvelyn
Sep 25, 2007

Ok, parts ordered, with a few substitutions. Ended up going with a Fractal Node 304 for the case due to availability issues with the Jonsbo case I originally wanted, which led to a PSU and GPU swap for ones that'd better fit the new choice of case. Opted against ECC memory because apparently it's a crapshoot on AM5 motherboards and I don't want to be locked into the dying AM4 platform otherwise.

Wish me luck, goons. :ohdear:

Anime Schoolgirl
Nov 28, 2002

I'm not sure the CPU of a NAS is something you'd ever upgrade unless you were doing some madcap "i'm delivering content to 100 users on the LAN" setup.
