IOwnCalculus
Apr 2, 2003
As an aside, an individual vdev can be made up of differently-sized drives (e.g. an 8TB and a 10TB), with ZFS treating every drive in the vdev as if it were the size of the smallest one. Typically this is only done when you're expanding in place: replace one drive with a larger drive, resilver, replace the other drive with a larger drive, resilver again, then expand to fill the new usable space.
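A minimal sketch of that in-place expansion, assuming a hypothetical pool named "tank" and made-up by-id device names:

code:
# Swap the first small drive for a bigger one, then wait for the resilver
zpool replace tank ata-OLD_8TB_SERIAL /dev/disk/by-id/ata-NEW_10TB_SERIAL_1
zpool status tank    # repeat until the resilver shows as complete

# Do the same for the second drive
zpool replace tank ata-OLD_10TB_SERIAL /dev/disk/by-id/ata-NEW_10TB_SERIAL_2

# Once both resilvers finish, let the vdev grow into the new space
zpool set autoexpand=on tank
zpool online -e tank /dev/disk/by-id/ata-NEW_10TB_SERIAL_1 /dev/disk/by-id/ata-NEW_10TB_SERIAL_2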

I will third/fourth/whatever that mixed-size vdevs are not a problem for all but the most extreme "home" use cases. However, I'll throw out another concern. I'm assuming the 2TB drives are old, because 2TB. Is an extra 2TB of storage worth the increased risk of losing the entire array if both of the 2TB drives die before you can finish replacing one of them? I would put them in a separate pool and use it for local backups of irreplaceable data instead of making them part of your main pool.
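If you do split them off, something like this (pool names and device paths are made up):

code:
# Mirror the two old 2TB drives into their own pool; if both die, you
# lose a backup copy rather than the main array
zpool create backup mirror \
    /dev/disk/by-id/ata-OLD_2TB_SERIAL_1 \
    /dev/disk/by-id/ata-OLD_2TB_SERIAL_2

# Snapshot the irreplaceable data on the main pool and replicate it over
zfs snapshot tank/photos@nightly
zfs send tank/photos@nightly | zfs receive backup/photos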

IOwnCalculus
Apr 2, 2003
mekyabetsu posted:

Ah, okay. I saw the /dev/disk/by-id stuff mentioned, but I didn't understand why it was important. If each drive on my server has multiple IDs, does it matter which one I use? For example, here are the files in my server's /dev/disk/by-id/ directory that all symlink to /dev/sda:

code:
lrwxrwxrwx 1 root root  9 Apr 20 03:35 ata-WDC_WD80EFZX-68UW8N0_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-0ATA_WDC_WD80EFZX-68U_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-1ATA_WDC_WD80EFZX-68UW8N0_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-35000cca260f342d0 -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 scsi-SATA_WDC_WD80EFZX-68U_VLKMST1Y -> ../../sda
lrwxrwxrwx 1 root root  9 Apr 20 03:35 wwn-0x5000cca260f342d0 -> ../../sda
Also, will I be able to do this without recreating the pool? Because I just got done copying a ton of stuff to it. :(

Any of them should be equivalent, because none of those IDs can ever refer to a different disk. "sda" could be anything, but ata...VLKMST1Y will always be that disk no matter whether it gets picked up as sda or sdx. I used the IDs starting with 'scsi-S' for all of mine because that put my SAS and SATA drives in the same format, and it includes the full drive model / serial number in the device name so it's that much easier to know which drive has hosed off.

Yes, you can do this without recreating the pool from scratch. Export the pool, then re-import it with "zpool import -d /dev/disk/by-id/ [poolname]". If you care which symlink format zpool uses, delete all of the ones you don't want from /dev/disk/by-id after you export but before you re-import. They're just symlinks that get recreated every time the system boots, specifically for you to use with poo poo like this.
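Rough sequence, with a hypothetical pool named "tank" and the wwn-* format standing in as the unwanted one:

code:
# Export the pool so the device names can be re-resolved
zpool export tank

# Optional: remove the symlink formats you don't want ZFS to latch onto;
# udev recreates all of these on the next boot anyway
rm /dev/disk/by-id/wwn-*

# Re-import, telling ZFS to look for vdevs under /dev/disk/by-id/
zpool import -d /dev/disk/by-id/ tank

# Confirm the vdevs now show the persistent names
zpool status tank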

IOwnCalculus
Apr 2, 2003
Talorat posted:

If more than 3 drives fail in the new array

Talorat posted:

if either HBA fails, if the disk shelf fails, if a cable fails

These are two very different failure modes. Total and unrecoverable loss of three drives in the original vdev or four in the new vdev will cause you to lose the whole array, yes.

As long as the HBA/shelf/cable don't fail in a way that writes a huge amount of garbage to the disks, the worst you'll experience is downtime until you resolve the issue, plus possibly a small amount of data corruption. Remember that ZFS was originally built for enterprise systems with multiple disk shelves attached to a controller; its designers had to expect that at some point an entire shelf of disks would disappear for one reason or another.
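What recovery from a disappeared-shelf event typically looks like, sketched with an assumed pool name:

code:
# After fixing the HBA/cable/shelf, see what state ZFS left the pool in
zpool status tank

# Clear the error counters and resume I/O on the returned devices
zpool clear tank

# Scrub to verify every block and repair anything that got corrupted
zpool scrub tank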

IOwnCalculus
Apr 2, 2003
Nulldevice posted:

Your drive has bad sectors and needs to be replaced.

This, but also: 100k power-on hours? Give that thing a burial with honors.
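For anyone who wants to check their own drives, smartmontools will show both the hours and the bad sectors (device path is just an example):

code:
# Full SMART dump; look at Power_On_Hours, Reallocated_Sector_Ct,
# and Current_Pending_Sector
smartctl -a /dev/sda

# Quick pass/fail health summary
smartctl -H /dev/sda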

IOwnCalculus
Apr 2, 2003
PitViper posted:

If anything, this is reinforcing my appreciation of ZFS and how fault-tolerant it is, even in the face of my own abject idiocy.

Right? A regular RAID would've gone completely unrecoverable very early on in the process; here you're dealing with bitrot that's probably nigh-undetectable because it's such a small amount of data in a video file.
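This is exactly what a scrub is for; a sketch with an assumed pool name:

code:
# Force ZFS to read and checksum every block in the pool
zpool scrub tank

# -v lists any files with permanent (unrecoverable) errors by path,
# so you know exactly which video files took the hit
zpool status -v tank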

IOwnCalculus
Apr 2, 2003
fridge corn posted:

Hello. I have a question. My dad has a NAS server setup for his music collection and is having difficulty playing music from it. Previously he has been using Sonos, but he has run into problems with Sonos having a hard track limit (something like 64,000 songs, which is not nearly enough for his entire collection) and also their app is currently hosed from a recent update. He is wondering if there is a better solution to playing music directly off a NAS server than Sonos? Any insight would be greatly appreciated thanks!!

Plex with Plexamp, but that might be on the overkill side.

IOwnCalculus
Apr 2, 2003
BlankSystemDaemon posted:

With ZFS, there's nothing preventing you from replacing it with an NVMe SSD using the zpool replace command.
ZFS doesn't give a gently caress about what driver you're using, nor what the disk is.

I expect that Generic Monk accidentally added it as a new single-disk vdev.
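For illustration, the difference between the two commands, with made-up device names:

code:
# What was presumably intended: swap the old disk for the NVMe device
zpool replace tank sda nvme0n1

# What likely happened instead: this grafts the SSD on as a brand-new
# top-level single-disk vdev with no redundancy (ZFS warns about the
# mismatched replication level and makes you force it with -f)
zpool add -f tank nvme0n1

# Either way, zpool status shows where the device actually ended up
zpool status tank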
