mekyabetsu
Dec 17, 2018

With ZFS, do mirrored vdevs need to be the same size? Let's say I have three mirrored vdevs set up like so:

vdev1: 2x 8 TB drives
vdev2: 2x 10 TB drives
vdev3: 2x 2 TB drives

I would end up with a single 20 TB pool. Right?

Also, it's not a problem to add a new pair of drives as a mirrored vdev after the pool has been created and is in use, correct? I understand that you aren't really meant to add drives to expand the size of a pool in a RAIDZ setup, but if I'm just using mirrored pairs of drives, adding a new vdev is a simple and expected use case, right?
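Just so I'm clear on the syntax, this is roughly what I'm picturing (pool name and device names made up):

code:
# create a pool out of two mirrored vdevs
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# later, grow the pool by adding another mirrored pair as a new vdev
zpool add tank mirror /dev/sdf /dev/sdg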

Sorry for the newbie questions. I'm slowly going through ZFS documentation, but there's a lot of it and I'm dumb. :(

mekyabetsu fucked around with this message at 16:18 on Apr 16, 2024

mekyabetsu
Dec 17, 2018

Arishtat posted:

Correct, you can expand the pool by adding new mirror vdevs.

One thing to watch out for: your disk configuration has one vdev with disks significantly smaller than the other two, which will likely result in an uneven distribution of data favoring the vdevs with larger drives.

https://jrs-s.net/2018/04/11/how-data-gets-imbalanced-on-zfs/

I read about this, but I don't understand why it's a problem. I mean, obviously more data is going to be written to the larger drives because they're... bigger.
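If it helps, my plan is to keep an eye on how data ends up spread across the vdevs with something like this (pool name made up), which I believe breaks out allocation per vdev instead of just showing the whole pool:

code:
# list capacity, allocation, and free space per vdev
zpool list -v tank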

mekyabetsu
Dec 17, 2018

IOwnCalculus posted:

I'm assuming the 2TB drives are old, because 2TB. Is an extra 2TB of storage worth the increased risk of losing the entire array if both 2TB drives die before you can finish replacing one of them? I would put them in a separate pool and use it for local backups of irreplaceable data instead of making it part of your main pool.

Yeah, I just used that as an example. I do have some smaller drives, but I'll likely just sell those and buy some larger 10+ TB drives to expand when needed.

Thanks to all for your help and answers!

mekyabetsu
Dec 17, 2018

I’m choosing an OS for my file server which will be running ZFS along with Plex and some other relatively lightweight home lab stuff. I assume Ubuntu plays nicely with ZFS and will be suitable for my needs? I was looking at Manjaro as well, but for a home server, I think I’d prefer something a little more stable (and familiar to me) like an Ubuntu LTS release.
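My understanding (possibly wrong) is that on an Ubuntu LTS release, getting ZFS is just a package install, something like:

code:
# install the ZFS userland tools; the kernel module ships with Ubuntu's kernel
sudo apt install zfsutils-linux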

mekyabetsu
Dec 17, 2018

Is there a preference or best practice for what type of disk partitions to use for ZFS? I decided to just delete all the partitions on the disks I'm using and let zpool decide for me, and I got this:

code:
Device           Start         End     Sectors  Size Type
/dev/sdb1         2048 15628036095 15628034048  7.3T Solaris /usr & Apple ZFS
/dev/sdb9  15628036096 15628052479       16384    8M Solaris reserved 1

Device           Start         End     Sectors  Size Type
/dev/sda1         2048 15628036095 15628034048  7.3T Solaris /usr & Apple ZFS
/dev/sda9  15628036096 15628052479       16384    8M Solaris reserved 1
which isn't what I expected, but I assume zpool knows best? Any reason why zpool defaults to "Solaris /usr & Apple ZFS" partitions?
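For what it's worth, the command I ran was just something along these lines (pool name made up), handing zpool the bare, unpartitioned devices:

code:
# create a mirrored pool from two whole disks and let zpool partition them itself
zpool create tank mirror /dev/sda /dev/sdb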

mekyabetsu
Dec 17, 2018

BlankSystemDaemon posted:

ZFS works best with whole disks without any partitioning

hifi posted:

The best practice is to give ZFS the entire disk, and that is what mine looks like as well. I assume it's something to do with Linux not understanding how ZFS works.

Yup, this is what I did when I created the pool. I ran “zpool create” with 2 drives that were unpartitioned, and that was the result. If that works for ZFS, it’s fine with me. I just wasn’t sure why it chose those particular partition types. I know ZFS was originally a Sun Solaris thing, so it’s probably related to that.

The 8M partitions were created automatically for what I assume is a very good reason.

Eletriarnation posted:

I think the partition type is literally just a label so the OS knows what it's working with, and doesn't affect anything about the actual layout or functionality of the partition - asking which is best is like asking which file extension is best for a particular type of file. As long as the OS recognizes what it's working with, you should be good.

This makes sense to me. Thank you! :)

mekyabetsu
Dec 17, 2018

BlankSystemDaemon posted:

I was phone-posting from bed when responding, so I didn't notice it then - but there's something you do want to take care of: Switch to using /dev/disk/by-id/ for your devices, instead of plain /dev/ devices.
You need to do this because Linux is the one Unix-like that doesn't understand that it shouldn't reassign drives between reboots (the reason it does this has to do with its floppy disk handling) - so there's a small risk that you'll trigger a resilver; typically this isn't a problem, but it does degrade the array, meaning that a URE could cause data loss.

Ah, okay. I saw the /dev/disk/by-id stuff mentioned, but I didn't understand why it was important. If each drive on my server has multiple IDs, does it matter which one I use? For example, here are the files in my server's /dev/disk/by-id/ directory that all symlink to /dev/sda:

code:
lrwxrwxrwx 1 root root  9 Apr 20 03:35 ata-WDC_WD80EFZX-68UW8N0_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-0ATA_WDC_WD80EFZX-68U_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-1ATA_WDC_WD80EFZX-68UW8N0_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-35000cca260f342d0 -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-SATA_WDC_WD80EFZX-68U_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 wwn-0x5000cca260f342d0 -> ../../sda
Also, will I be able to do this without recreating the pool? Because I just got done copying a ton of stuff to it. :(
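From what I've read so far, it sounds like an export followed by an import pointed at the by-id directory should switch the pool over to the stable names without touching the data, roughly like this (pool name made up), but I'd appreciate a sanity check:

code:
# export the pool, then re-import it using the persistent by-id device names
zpool export tank
zpool import -d /dev/disk/by-id tank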
