|
Talorat posted:To be clear, these things have no size limits on the drives as long as you don’t use the interposers, right? From what I have seen, there are no size limits. I recall some NetApp expansion shelves having goofy firmware on their expansion cards where they wouldn't recognize disks that weren't NetApp-branded (i.e., not running NetApp's firmware), but I thought you could flash the controllers to a standard firmware to get around it. Just look for a shelf that meets your needs, then Google the model number. Some other nerd has most likely already done the legwork of testing for you.
|
# ¿ Mar 23, 2024 20:03 |
|
Back to a project that went on the backburner: migrating to a new TrueNAS server. I have enough new/unused disks to create a 6-disk RAIDZ2 vdev that will hold the existing data. After that migration, I'll end up with some disks from the old box that I'll be reusing, so I'll add another 6-disk RAIDZ2 vdev to that new pool. After that, I would like to rebalance the pool. Anyone done any "in-place rebalancing"? https://github.com/markusressel/zfs-inplace-rebalancing
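For reference, roughly what that plan looks like in commands, as a sketch (the pool name and device names here are placeholders, not my actual layout):

    # build the new pool from the six unused disks
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5
    # after the data has been migrated over, add the old box's disks as a second vdev
    zpool add tank raidz2 da6 da7 da8 da9 da10 da11

The catch is that everything written before the zpool add sits entirely on the first vdev, hence wanting to rebalance afterwards.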
|
# ¿ Mar 25, 2024 01:33 |
|
BlankSystemDaemon posted:That script just does what zfs send/receive does, but worse, because it re-computes all checksums and takes significantly longer. How would send/receive help me in this situation, though, without bringing in some temporary storage as a dumping ground for a two-transfer migration? I don't see the benefit unless I am missing some feature of ZFS send/receive (which I admittedly do not use). Edit: Oooo, are you suggesting zfs send/receive within the same pool? Same concept as the script, so every file does a copy then a delete (of the original)? I was not aware send/receive could operate within a single pool (if that is what you were hinting at). Double edit: Like this? https://forum.proxmox.com/threads/zfs-send-recv-inside-same-pool.119983/post-521105
|
# ¿ Mar 25, 2024 02:16 |
|
BlankSystemDaemon posted:Yep, got it in one; zfs send -R tank/olddataset@snapshot | mbuffer | zfs receive tank/newdataset, then once it’s done you delete the old one and rename the new one. Neato. Gracias. I'll do some testing before migrating data and letting it rip on the actual "final" disk layout.
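For my own notes, the full sequence would look something like this (dataset names and the mbuffer size are placeholders, assuming a pool called tank):

    # snapshot the source, stream it into a new dataset in the same pool,
    # then swap the names once it completes
    zfs snapshot -r tank/olddataset@migrate
    zfs send -R tank/olddataset@migrate | mbuffer -m 1G | zfs receive tank/newdataset
    zfs destroy -r tank/olddataset
    zfs rename tank/newdataset tank/olddataset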
|
# ¿ Mar 25, 2024 14:25 |
|
Yaoi Gagarin posted:why even make a second pool, you could put those drives in as a new vdev in the original pool? Because Wibla will call you dumb.
|
# ¿ Mar 28, 2024 03:40 |
|
Talorat posted:Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point? vdev = a single disk or a group of disks arranged as a mirror, RAIDZ, RAIDZ2, etc. Pool (zpool) = a collection of one or more vdevs. If you create a pool with multiple vdevs and one vdev suffers a catastrophic failure (more disk failures than your level of parity), your pool's data is gone. e:fb
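As a concrete sketch (device names invented), a single pool built from two RAIDZ2 vdevs in one go:

    # one pool, two RAIDZ2 vdevs of six disks each
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11

And yes, that's still one pool with one mount point (/tank by default); writes get striped across the vdevs, but there's no redundancy between vdevs, which is why losing a whole vdev takes the pool with it.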
|
# ¿ Mar 28, 2024 08:19 |
|
BlankSystemDaemon posted:Just an observation, but if you aren't willing to switch to ZFS, it probably means you either don't have backups, or don't trust them - so you might also wanna address that. Or they are using a different, mature solution. Not everyone is in the same ZFS cult as you. The backhanded poo poo that comes out of you is amazing.
|
# ¿ Apr 14, 2024 05:39 |
|
Hughlander posted:To be fair... Yeah, I didn't even look at the context before replying; that's on me. But this isn't the first time I've seen similar responses/comments.
|
# ¿ Apr 14, 2024 09:32 |
|
BlankSystemDaemon posted:I'm sorry I phrased myself so poorly, it wasn't my intention to come off backhanded, but I can see how I did. I also apologize; I didn't mean to come off so pissy, it just seemed very blunt when I read it without scrolling back. So I'm the dumb-looking one here. I do appreciate your ZFS knowledge bombs in here. Now let's carry on with nerd storage chat.
|
# ¿ Apr 14, 2024 12:22 |