|
Oysters Autobio posted:Was considering an HBA card but now if this sort of setup makes it easy then I probably will get one then.

I was told to get LSI cards and avoid Adaptec ones, which was advice I followed but don't really know the reasons behind. You'll want one that's capable of being flashed to IT mode, but you'll probably find the seller advertising that they pre-flashed it and you won't need to do it yourself.

Different cards use different numbers of PCIe lanes. I think you can use an x8 card in an x4 slot, but I'm unsure whether that only limits total bandwidth (combined max speed of all drives, capped by the number of lanes and the PCIe version), or whether it causes errors or weirdness if you approach that speed.

You connect hard drives to the HBA with breakout cables that let you hook up 4 drives to each port on the HBA. The connector type varies between HBAs, so make sure you get the right cable.

The naming convention of LSI HBAs is pretty simple. An LSI 92## card should be a PCIe gen 2 model, and a 93## will be gen 3. Gen 2 is still probably fine for spinning drives and those cards are extremely cheap, but gen 3 ones have come down in price recently. After the 9### model number they have another number and a letter: the number is how many drives it supports, and the letter tells you whether the connections are internal or external. So a 9300-8i is a PCIe gen 3 card with 2 internal connectors that you can connect 8 drives to. Don't assume PCIe lane requirements from the number of drive connections; look up the specific model's details.

You can use more drives than an HBA supports with a SAS expander card. These are PCIe cards that just need power and a connection to the HBA, and they act as the hard drive equivalent of a network switch.

EDIT: As a final note, some HBAs were designed for servers with the expectation that there would be airflow over them and can overheat, so a common recommendation is to zip tie a tiny fan onto the heatsink. Some don't need this, but I don't know anywhere you could check.
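To put rough numbers on the lane question, here's some back-of-the-envelope math. The per-lane figures are approximate post-encoding-overhead values, and the drive speed is a generous guess for a spinner; none of this comes from a specific card's spec sheet.

```shell
# Approximate usable bandwidth per PCIe lane, in MB/s
gen2_lane=500    # PCIe 2.0
gen3_lane=985    # PCIe 3.0
lanes=4          # e.g. an x8 card dropped into an x4 slot
drives=8
drive_speed=250  # MB/s, a generous sequential figure for a spinning disk

slot_bw=$((gen2_lane * lanes))    # what the narrowed slot can move
demand=$((drives * drive_speed))  # worst case: all drives streaming at once
echo "slot: ${slot_bw} MB/s, worst-case demand: ${demand} MB/s"
```

By this math a gen 2 x8 card in an x4 slot lands right at the edge for 8 spinners, and only when all of them stream sequentially at the same time; random I/O workloads would sit far below the cap.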
|
# ? Mar 23, 2024 21:45 |
|
|
Hmm ok I'll take a look. The one appealing thing about the Adaptec HBA I was looking at was that it wouldn't require any BIOS flashing, which I'm not keen on because I'm looking to assemble everything over the next week except for the HDDs I'm waiting on, so it'd be nice if I don't have to redo anything when I get the HBA, unless I'm misunderstanding something with them.
|
# ? Mar 24, 2024 03:00 |
|
Oysters Autobio posted:Hmm ok I'll take a look. The one appealing thing about the Adaptec HBA I was looking at was that it wouldn't require any BIOS flashing, which I'm not keen on because I'm looking to assemble everything over the next week except for the HDDs I'm waiting on, so it'd be nice if I don't have to redo anything when I get the HBA, unless I'm misunderstanding something with them. You can also buy a Broadcom HBA that is designed to be an HBA and not a hardware RAID controller, like a 9300 or 9400, and just use that. No flashing required.
|
# ? Mar 24, 2024 03:08 |
|
Back to a project that went on the backburner: migration to a new TrueNAS server. I have enough new/unused disks to create a 6-disk RAIDZ2 vdev that will hold the existing data. After that migration of data, I'll end up with some disks from the old box that I'll be reusing, so I'll add another 6-disk RAIDZ2 vdev to that new pool. After that, I would like to rebalance the pool. Anyone done any "in-place rebalancing"? https://github.com/markusressel/zfs-inplace-rebalancing
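For reference, the core of what that script does per file is just copy-then-replace, so the rewritten copy gets allocated across the pool's current vdev layout. This is only a minimal sketch of the idea (the real script also preserves attributes and verifies checksums), and the function name is made up:

```shell
# Copy the file so ZFS allocates fresh blocks across all current vdevs,
# then swap the copy into place over the original.
rebalance_file() {
  f="$1"
  cp -p "$f" "$f.rebalance.tmp"   # new copy lands on the expanded layout
  mv "$f.rebalance.tmp" "$f"      # replace the original
}
```

One caveat worth knowing: any snapshot still referencing the old blocks keeps the old copies alive, so the space only actually rebalances once those snapshots are destroyed.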
|
# ? Mar 25, 2024 01:33 |
That script just does what zfs send/receive does, but worse, because it re-computes all checksums and takes significantly longer.
|
|
# ? Mar 25, 2024 01:35 |
|
BlankSystemDaemon posted:That script just does what zfs send/receive does, but worse because it re-computes all checksums, and takes significantly longer. How would send/receive help me in this current situation, though, without bringing in some temporary storage as a dumping ground for a two-transfer migration? I don't see any benefit, unless I am missing some feature of ZFS send/receive (which I admittedly do not use). Edit: Oooo, are you suggesting zfs send/receive within the same pool? Same concept as the script, so every file does a copy then a delete (of the original)? I was not aware send/receive could operate within a single pool (if that is what you were hinting at). Double edit: Like this? https://forum.proxmox.com/threads/zfs-send-recv-inside-same-pool.119983/post-521105 Moey fucked around with this message at 07:16 on Mar 25, 2024 |
# ? Mar 25, 2024 02:16 |
|
THF13 posted:I was told to get LSI cards and avoid Adaptec ones, which was advice I followed but don't really know the reasons behind. I decided to go with the Adaptec, mainly because of reviews on them like here. Snagged a used one off ebay for around $35USD. Seems like the only downside I could see with these Adaptecs is very much needing to attach a 40mm fan to prevent overheating. But looking at other options, it's a really decent price point for something that supports up to 16 HDDs, is on PCIe 3.0, and doesn't require re-flashing when switching between IR/IT modes. Someone in that thread also showed an easy way to use two unused threaded holes on the heatsink to mount a small 40mm fan, so I like that over any kind of zipties or anything. Oysters Autobio fucked around with this message at 04:19 on Mar 25, 2024 |
# ? Mar 25, 2024 04:10 |
Moey posted:How would send/receive help me in this current situation though? Without bringing in some temporary storage as a dumping ground for a two-transfer migration. Yep, got it in one; zfs send -R tank/olddataset@snapshot | mbuffer | zfs receive newdataset, then once it’s done you delete the old one and rename the new one. If you’re smart about it, you enable zpool checkpoint until you’re satisfied that everything made it over - that way, you can revert administrative changes like dataset removal. Just don’t forget to turn it off again. BlankSystemDaemon fucked around with this message at 11:37 on Mar 25, 2024 |
|
# ? Mar 25, 2024 11:30 |
|
BlankSystemDaemon posted:Yep, got it in one; zfs send -R tank/olddataset@snapshot | mbuffer | zfs receive newdataset, then once it’s done you delete the old one and rename the new one. Neato. Gracias. I'll do some testing before migrating data and letting it rip on the actual "final" disk layout.
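Spelled out, the whole same-pool flow (including the zpool checkpoint safety net) might look like the sketch below. Pool and dataset names are made up, and DRY_RUN=1 makes it print the commands instead of executing them, since the zfs/zpool steps obviously need a real system and real disks.

```shell
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

POOL=tank                 # hypothetical pool/dataset names
OLD=$POOL/olddataset
NEW=$POOL/newdataset

run zpool checkpoint "$POOL"          # safety net: admin changes become revertable
run zfs snapshot -r "$OLD@migrate"
run sh -c "zfs send -R $OLD@migrate | zfs receive $NEW"
# ...verify everything made it over, then clean up:
run zfs destroy -r "$OLD"
run zfs rename "$NEW" "$OLD"
run zpool checkpoint -d "$POOL"       # don't forget: discard the checkpoint
```

While a checkpoint exists the pool can't free the space the old dataset occupied, which is another reason not to leave it enabled longer than needed.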
|
# ? Mar 25, 2024 14:25 |
Moey posted:Neato. Gracias. One trick I was taught early was to truncate a small handful of files, give them GEOM gate devices so they're exposed via devfs the same way memory devices are, and create a testing pool to try commands on. I still periodically do it if it's been a while since I've done some administrative task and want to make sure I'm doing it right. This, of course, goes hand in hand with using the -n flag at least once before running it without, on any administrative command.
|
|
# ? Mar 25, 2024 15:11 |
|
BlankSystemDaemon posted:One trick I was taught early was to truncate a small handful of files, give them GEOM gate devices so they're exposed via devfs the same way memory devices are, and create a testing pool to try commands on. Something about playing with a large stack of real disks (before putting them to their final use) feels good, though. I can't explain why.
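For anyone who wants to try the file-backed test pool trick: on Linux you can skip the GEOM gate step entirely, since zpool accepts plain file paths. The truncate part below actually runs anywhere; the zpool lines are left commented because they assume ZFS is installed and you have root, and the pool name is made up.

```shell
mkdir -p /tmp/zfs-playground
for i in 1 2 3 4; do
  # Sparse files: 1 GiB apparent size, almost no disk space actually used
  truncate -s 1G "/tmp/zfs-playground/disk$i"
done
ls -lh /tmp/zfs-playground
# zpool create scratch raidz2 /tmp/zfs-playground/disk[1-4]
# ...rehearse whatever admin commands you like, then:
# zpool destroy scratch
```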
|
# ? Mar 25, 2024 23:09 |
Computer viking posted:Something about playing with a large stack of real disks (before putting them to their final use) feels good, though. I can't explain why. But truncate can do arbitrary-sized files???
|
|
# ? Mar 26, 2024 01:40 |
|
BlankSystemDaemon posted:But truncate can do arbitrary-sized files??? Sure, but they don't make fun disk access noises.
|
# ? Mar 26, 2024 13:26 |
Computer viking posted:Sure, but they don't make fun disk access noises. If you can hear the disk access noises, you should be wearing hearing protection against the fan noise from all the fans in the rack. BlankSystemDaemon fucked around with this message at 15:20 on Mar 26, 2024 |
|
# ? Mar 26, 2024 15:18 |
|
BlankSystemDaemon posted:If you can hear the disk access noises, you should be wearing hearing protection against the fan noise from all the fans in the rack Will someone pick up the phone?
|
# ? Mar 26, 2024 17:56 |
|
IOwnCalculus posted:Will someone pick up the phone? HELLO
|
# ? Mar 26, 2024 23:52 |
|
Is there any way to merge two discrete zfs pools so that to the filesystem they appear as a single mount point? I’d rather not go to the trouble of moving specific files and folders to this new pool. Alternatively, any way to hardlink across filesystem boundaries?
|
# ? Mar 28, 2024 01:31 |
|
Can't hardlink, but you could symlink, or use a bind mount. But if you want to "merge" the pools - why even make a second pool, when you could put those drives in as a new vdev in the original pool?
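Both of those options in one sketch (the paths are made up stand-ins for real pool mountpoints, and the bind mount line is commented because it needs root):

```shell
# Stand-ins for two pools' mountpoints
mkdir -p /tmp/pool2/media /tmp/pool1

# Option 1: symlink - most software follows it transparently
ln -sfn /tmp/pool2/media /tmp/pool1/media

# Option 2: bind mount - indistinguishable from a real directory even to
# tools that refuse to follow symlinks, but requires root:
# mount --bind /tmp/pool2/media /tmp/pool1/media
```

A bind mount also doesn't persist across reboots on its own; it would need an fstab entry or mount unit.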
|
# ? Mar 28, 2024 02:57 |
|
Yaoi Gagarin posted:why even make a second pool, you could put those drives in as a new vdev in the original pool? Because Wibla will call you dumb.
|
# ? Mar 28, 2024 03:40 |
|
I thought that was BSD's job. Multiple vdevs in one pool will lock you into a certain drive/pool layout though, be aware of that.
|
# ? Mar 28, 2024 04:23 |
|
Yaoi Gagarin posted:Can't hardlink but you could symlink, or use a bind mount. Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point?
|
# ? Mar 28, 2024 06:59 |
Wibla posted:I thought that was BSD's job Talorat posted:Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point? If you add a vdev to an existing pool, you expand the pool, and data will be distributed across the vdevs such that they should end up being approximately equally full. See zfsconcepts(7). EDIT: Looking at it, I think this article from Klara explains it best. BlankSystemDaemon fucked around with this message at 08:25 on Mar 28, 2024 |
|
# ? Mar 28, 2024 08:17 |
|
Talorat posted:Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point? vdev = a single drive or set of drives: single disk, mirror, RAIDZ, RAIDZ2... Pool (zpool) = a collection of one or more vdevs. If you create a pool with multiple vdevs and one vdev suffers a catastrophic failure (more disk failures than your level of parity), your pool's data is gone. e:fb
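In command form, that vdev/pool relationship looks like the sketch below. Device names are placeholders, DRY_RUN=1 makes it print rather than execute, and note zpool's own -n flag, which previews what a command would do without doing it:

```shell
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# A pool that starts as one 6-disk RAIDZ2 vdev...
run zpool create tank raidz2 sda sdb sdc sdd sde sdf
# ...expanded later with a second RAIDZ2 vdev; -n previews the new layout:
run zpool add -n tank raidz2 sdg sdh sdi sdj sdk sdl
```

After the add, the pool mounts exactly as before at a single mountpoint; the extra capacity just appears.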
|
# ? Mar 28, 2024 08:19 |
|
Would you mind pasting the output of `zpool status -v` here?
|
# ? Mar 28, 2024 11:19 |
|
As for adding another vdev to a pool: it's nice to avoid adding new vdevs to almost-full pools, for performance reasons. The pool will prioritize the new vdev until the vdevs are about equally full, which slows down writes compared to spreading them equally over the entire pool. Reading that data back in the future will also be slower (since it's spread over fewer disks), but depending on your access pattern that may not be a problem. For bulk storage it's probably fine, doubly so if it's only connected over a Gbit network.
|
# ? Mar 28, 2024 13:26 |
|
Computer viking posted:As for adding another vdev to a pool: It's nice to avoid adding new vdevs to almost full pools, for performance reasons: The pool will prioritize the new vdev until they're about equally full, which slows down writes compared to spreading them equally over the entire pool. Reading back that data in the future will also be slower (since it's spread over fewer disks), but depending on your access pattern that may not be a problem. For bulk storage that's probably not a problem, doubly so if it's just connected over Gbit network. This is absolutely a thing and while it's not problematic for people with hoards of Linux ISOs, actual production data is a whole different ballgame. I've seen the results from adding a single 2-drive mirror vdev to a nearly-full production pool that was already made up of ~20 vdevs; it was not pretty.
|
# ? Mar 28, 2024 21:04 |
|
Does it give the performance of the single mirror vdev, or does it end up being even worse?
|
# ? Mar 28, 2024 21:26 |
|
|
If I remember right, it was the performance of the single vdev - amplified heavily by the fact that it was a pair of spinning disks trying to simultaneously handle the bulk of incoming writes and also the vast majority of the reads because the newest data was the most popular.
|
# ? Mar 28, 2024 22:17 |