|
As an aside, an individual vdev can be made up of differently-sized drives (e.g. an 8TB and a 10TB), with ZFS treating the vdev as if every drive were the size of the smallest one. Typically this is only done when you're expanding in place: replace one drive with a larger drive, resilver, replace the other drive with a larger drive, resilver again, then expand to fill the new usable space. I will third/fourth/whatever that mixed-size vdevs are not a problem for all but the most extreme "home" use cases. However, I'll throw out another concern: I'm assuming the 2TB drives are old, because 2TB. Is an extra 2TB of storage worth the increased risk of losing the entire array if both of the 2TB drives die before you can finish replacing one of them? I would put them in a separate pool and use it for local backups of irreplaceable data instead of making them part of your main pool.
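The in-place expansion described above boils down to a few zpool commands. This is a sketch only; the pool name "tank" and the device IDs are hypothetical, so substitute your own:

```shell
# Let the pool grow automatically once every drive in the vdev is larger
zpool set autoexpand=on tank

# Swap the first small drive for a bigger one, then wait for the resilver
zpool replace tank ata-OLD_8TB_SERIAL ata-NEW_10TB_SERIAL
zpool status tank   # wait until the resilver reports complete

# Repeat the replace/resilver for the remaining drive(s). If autoexpand
# was off, expand onto the new space manually per device instead:
zpool online -e tank ata-NEW_10TB_SERIAL
```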
|
# ¿ Apr 16, 2024 19:36 |
|
|
# ¿ May 16, 2024 07:31 |
|
mekyabetsu posted: "Ah, okay. I saw the /dev/disk/by-id stuff mentioned, but I didn't understand why it was important. If each drive on my server has multiple IDs, does it matter which one I use? For example, here are the files in my server's /dev/disk/by-id/ directory that all symlink to /dev/sda:"

Any of them should be equivalent, because none of those IDs can ever refer to a different disk. "sda" could be anything, but ata...VLKMST1Y will always be that disk, no matter whether it gets picked up as sda or sdx. I used the IDs starting with 'scsi-S' for all of mine because that got my SAS and SATA drives all in the same format, and it includes the full drive model / serial number in the drive name, so it's that much easier to know which drive has hosed off. Yes, you can do this without recreating the pool from scratch. Export the pool, then re-import it with "zpool import -d /dev/disk/by-id/ [poolname]". If you care which symlink format you want zpool to use, delete all of the ones you don't want from /dev/disk/by-id after you export but before you reimport. They're just symlinks that get recreated every time the system boots, specifically for you to use with poo poo like this.
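The export/re-import dance above is only a couple of commands; "tank" here is a hypothetical pool name:

```shell
# Nothing can be using the pool during the export
zpool export tank

# Re-import, telling ZFS to build vdev member names from the by-id symlinks
zpool import -d /dev/disk/by-id/ tank

# The member devices should now be listed under their by-id names
zpool status tank
```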
|
# ¿ Apr 20, 2024 17:27 |
|
Talorat posted: "If more than 3 drives fail in the new array"

Talorat posted: "if either HBA fails, if the disk shelf fails, if a cable fails"

These are two very different failure modes. Total and unrecoverable loss of three drives in the original vdev or four in the new vdev will cause you to lose the whole array, yes. As long as the HBA/shelf/cable don't fail in a way that results in writing a huge amount of garbage to the disks, the most you'll experience is downtime until you resolve the issue, plus possibly a small amount of data corruption. Remember that ZFS was originally built for enterprise systems with multiple disk shelves attached to a controller; they had to expect that at some point an entire shelf of disks would disappear for any reason.
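If a shelf or cable does drop out and later comes back, recovery is usually just clearing the transient errors and re-verifying the data. A sketch, with a hypothetical pool name "tank":

```shell
# After reattaching the shelf/cable, clear the transient error counters
zpool clear tank

# Then verify everything on disk against its checksums
zpool scrub tank
zpool status tank   # shows scrub progress and any corruption it finds
```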
|
# ¿ May 4, 2024 21:00 |
|
Nulldevice posted: "Your drive has bad sectors and needs to be replaced."

This, but also, 100k power-on hours? Give that thing a burial with honors.
|
# ¿ May 7, 2024 06:02 |
|
PitViper posted: "If anything, this is reinforcing my appreciation of ZFS and how fault-tolerant it is, even in the face of my own abject idiocy."

Right? A regular RAID would've gone completely unrecoverable very early in the process; here you're dealing with bitrot that's probably nigh-undetectable because it's such a small amount of data in a video file.
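Part of what makes this survivable is that ZFS will tell you exactly which files caught the bitrot. A sketch, with "tank" as a hypothetical pool name:

```shell
# -v lists any files with permanent (uncorrectable) errors, by path
zpool status -v tank

# After restoring or deleting the affected files, a scrub re-verifies the pool
zpool scrub tank
```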
|
# ¿ May 7, 2024 19:52 |
|
fridge corn posted: "Hello. I have a question. My dad has a NAS server setup for his music collection and is having difficulty playing music from it. Previously he has been using Sonos, but he has run into problems with Sonos having a hard track limit (something like 64,000 songs, which is not nearly enough for his entire collection) and also their app is currently hosed from a recent update. He is wondering if there is a better solution to playing music directly off a NAS server than Sonos? Any insight would be greatly appreciated thanks!!"

Plex with Plexamp, but that might be on the overkill side.
|
# ¿ May 9, 2024 23:43 |
|
|
BlankSystemDaemon posted: "With ZFS, there's nothing preventing you from replacing it with an NVMe SSD using the zpool replace command."

I expect that Generic Monk accidentally added it as a new single-disk vdev.
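For anyone following along, the difference between the two commands is easy to trip over. Pool and device names below are hypothetical:

```shell
# Intended: swap the old drive for the SSD within its existing vdev
zpool replace tank ata-OLD_DRIVE_SERIAL nvme-NEW_SSD_SERIAL

# The likely mistake: this grafts the SSD onto the pool as a brand-new
# single-disk top-level vdev (ZFS warns about the mismatched redundancy
# unless you force it with -f)
zpool add tank nvme-NEW_SSD_SERIAL
```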
|
# ¿ May 15, 2024 21:40 |