THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.

Oysters Autobio posted:

I was considering an HBA card, but if this sort of setup makes it easy then I probably will get one.

This kind of hardware expansion is really new to me. What do I need to look for in my build, compatibility- and/or performance-wise, when picking an HBA card? I'm seeing a decently priced Adaptec ASR-71605 6Gb/s SAS/SATA PCIe RAID card but have zero clue how to match these to my mobo and build form factor.

I was told to get LSI cards and avoid Adaptec ones, which was advice I followed but don't really know the reasons behind.

You'll want one that's capable of being flashed to IT mode, but you'll probably find sellers advertising that it's pre-flashed, so you won't need to do it yourself.

Different cards use different numbers of PCIe lanes. I think you can use an x8 card in an x4 slot, but I'm unsure whether that only limits total bandwidth (the combined max speed of all drives, based on the number of lanes and PCIe version), or whether it will cause errors or weirdness if you approach that limit.

You connect hard drives to the HBA with breakout cables that let you hook up 4 drives to each port on the HBA. The connector type differs depending on the HBA, so make sure you have the right cable.

The naming convention of LSI HBAs is pretty simple. An LSI 92## card should be a PCIe gen 2 version, and a 93## will be gen 3. Gen 2 cards are still probably fine for spinning drives and are extremely cheap, but gen 3 ones have come down in price recently.
After the 9### model number there will be another number and a letter. The number is how many drives it supports directly, and the letter indicates whether the connectors are internal (i) or external (e).
So a 9300-8i is a PCIe gen 3 card with 2 internal connectors that you can connect 8 drives to.
Don't assume PCIe lane requirements from the number of hard drive connections; look up the specific model's details.

You can use more drives than an HBA card supports with a SAS expander card. These are PCIe cards that just need power and a connection to the HBA, and act as the hard drive equivalent of a network switch.

EDIT:
As a final note, some HBAs were designed for servers with the expectation of constant airflow over them and can overheat without it, so a common recommendation is to zip tie a tiny fan onto the heatsink. Some don't need this, but I don't know anywhere you could check.
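If you want to sanity-check the lane question and the firmware mode once a card is installed, something like the following should work on Linux. Just a rough sketch: 1000: is the LSI/Broadcom PCI vendor ID, and sas2flash/sas3flash are the LSI/Broadcom flashing utilities for SAS2 (92xx-era) and SAS3 (93xx-era) cards respectively.

# Negotiated PCIe link width/speed for LSI/Broadcom devices
# (LnkCap = what the card supports, LnkSta = what this slot actually negotiated)
lspci -vv -d 1000: | grep -E 'LnkCap|LnkSta'

# List controllers and their firmware; IT-mode firmware normally shows "IT"
# in the firmware/product name
sas2flash -listall    # SAS2 (92xx-era) cards
sas3flash -listall    # SAS3 (93xx-era) cards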


Oysters Autobio
Mar 13, 2017
Hmm, ok, I'll take a look. The one appealing thing about the Adaptec HBA I was looking at was that it wouldn't require any BIOS flashing, which I'm not keen on doing, because I'm looking to assemble everything over the next week except for the HDDs I'm waiting on. It'd be nice if I don't have to redo anything when I get the HBA, unless I'm misunderstanding something with them.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Oysters Autobio posted:

Hmm, ok, I'll take a look. The one appealing thing about the Adaptec HBA I was looking at was that it wouldn't require any BIOS flashing, which I'm not keen on doing, because I'm looking to assemble everything over the next week except for the HDDs I'm waiting on. It'd be nice if I don't have to redo anything when I get the HBA, unless I'm misunderstanding something with them.

You can also buy a Broadcom HBA that is designed to be an HBA and not a hardware RAID controller, like a 9300 or 9400, and just use that. No flashing required.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Back to a project that went on the backburner. Migration to new TrueNAS server.

I have enough new/unused disks to create a 6-disk RAIDZ2 vdev that will hold the existing data.

After that data migration, I'll end up with some disks from the old box that I'll be reusing, so I'll be adding another 6-disk RAIDZ2 vdev to that new pool.

After that, I would like to rebalance the pool. Has anyone done any "in-place rebalancing"?

https://github.com/markusressel/zfs-inplace-rebalancing

BlankSystemDaemon
Mar 13, 2009



That script just does what zfs send/receive does, but worse because it re-computes all checksums, and takes significantly longer.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

BlankSystemDaemon posted:

That script just does what zfs send/receive does, but worse because it re-computes all checksums, and takes significantly longer.

How would send/receive help me in this situation, though, without bringing in some temporary storage as a dumping ground for a two-transfer migration?

I don't see any benefit, unless I am missing some feature of ZFS send/receive (which I admittedly do not use).

Edit:

Oooo, are you suggesting zfs send/receive within the same pool?

Same concept as the script, so every file gets copied and then the original deleted?

I was not aware send/receive could operate within a single pool (if that is what you were hinting at).

Double edit:

Like this? https://forum.proxmox.com/threads/zfs-send-recv-inside-same-pool.119983/post-521105

Moey fucked around with this message at 07:16 on Mar 25, 2024

Oysters Autobio
Mar 13, 2017

THF13 posted:

I was told to get LSI cards and avoid Adaptec ones, which was advice I followed but don't really know the reasons behind.

You'll want one that's capable of being flashed to IT mode, but you'll probably find sellers advertising that it's pre-flashed, so you won't need to do it yourself.

Different cards use different numbers of PCIe lanes. I think you can use an x8 card in an x4 slot, but I'm unsure whether that only limits total bandwidth (the combined max speed of all drives, based on the number of lanes and PCIe version), or whether it will cause errors or weirdness if you approach that limit.

You connect hard drives to the HBA with breakout cables that let you hook up 4 drives to each port on the HBA. The connector type differs depending on the HBA, so make sure you have the right cable.

The naming convention of LSI HBAs is pretty simple. An LSI 92## card should be a PCIe gen 2 version, and a 93## will be gen 3. Gen 2 cards are still probably fine for spinning drives and are extremely cheap, but gen 3 ones have come down in price recently.
After the 9### model number there will be another number and a letter. The number is how many drives it supports directly, and the letter indicates whether the connectors are internal (i) or external (e).
So a 9300-8i is a PCIe gen 3 card with 2 internal connectors that you can connect 8 drives to.
Don't assume PCIe lane requirements from the number of hard drive connections; look up the specific model's details.

You can use more drives than an HBA card supports with a SAS expander card. These are PCIe cards that just need power and a connection to the HBA, and act as the hard drive equivalent of a network switch.

EDIT:
As a final note, some HBAs were designed for servers with the expectation of constant airflow over them and can overheat without it, so a common recommendation is to zip tie a tiny fan onto the heatsink. Some don't need this, but I don't know anywhere you could check.

I decided to go with the Adaptec, mainly because of reviews on them like here. Snagged a used one off eBay for around $35 USD.

Seems like the only downside I could see with these Adaptec cards is very much needing to attach a 40mm fan to prevent overheating. But looking at other options, it's a really decent price point for something that supports up to 16 HDDs, is on PCIe 3.0, and doesn't require re-flashing when switching between IR/IT modes. Someone in that thread also showed an easy way to use two unused threaded holes on the heatsink to mount a small 40mm fan, so I like that over any kind of zip ties or anything.

Oysters Autobio fucked around with this message at 04:19 on Mar 25, 2024

BlankSystemDaemon
Mar 13, 2009



Moey posted:

How would send/receive help me in this situation, though, without bringing in some temporary storage as a dumping ground for a two-transfer migration?

I don't see any benefit, unless I am missing some feature of ZFS send/receive (which I admittedly do not use).

Edit:

Oooo, are you suggesting zfs send/receive within the same pool?

Same concept as the script, so every file gets copied and then the original deleted?

I was not aware send/receive could operate within a single pool (if that is what you were hinting at).

Double edit:

Like this? https://forum.proxmox.com/threads/zfs-send-recv-inside-same-pool.119983/post-521105
Yep, got it in one; zfs send -R tank/olddataset@snapshot | mbuffer | zfs receive tank/newdataset, then once it's done you delete the old one and rename the new one.

If you’re smart about it, you enable zpool checkpoint until you’re satisfied that everything made it over - that way, you can revert administrative changes like dataset removal.
Just don’t forget to turn it off again.
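In case it helps anyone following along, here's a minimal sketch of the whole dance. Pool and dataset names are placeholders, and the mbuffer size is arbitrary:

zpool checkpoint tank                      # safety net for administrative changes
zfs snapshot -r tank/olddataset@migrate
zfs send -R tank/olddataset@migrate | mbuffer -m 1G | zfs receive tank/newdataset
# verify everything made it over, then:
zfs destroy -r tank/olddataset
zfs rename tank/newdataset tank/olddataset
zpool checkpoint -d tank                   # discard the checkpoint once you're satisfied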

BlankSystemDaemon fucked around with this message at 11:37 on Mar 25, 2024

Moey
Oct 22, 2010

I LIKE TO MOVE IT

BlankSystemDaemon posted:

Yep, got it in one; zfs send -R tank/olddataset@snapshot | mbuffer | zfs receive tank/newdataset, then once it's done you delete the old one and rename the new one.

If you’re smart about it, you enable zpool checkpoint until you’re satisfied that everything made it over - that way, you can revert administrative changes like dataset removal.
Just don’t forget to turn it off again.

Neato. Gracias.

I'll do some testing before migrating data and letting it rip on the actual "final" disk layout.

BlankSystemDaemon
Mar 13, 2009



Moey posted:

Neato. Gracias.

I'll do some testing before migrating data and letting it rip on the actual "final" disk layout.
One trick I was taught early was to truncate a small handful of files, give them GEOM gate devices so they're exposed via devfs the same way memory devices are, and create a testing pool to try commands on.

I still periodically do it if it's been a while since I've done some administrative task and want to make sure I'm doing it right.
This, of course, goes hand in hand with using the -n flag at least once before running it without, on any administrative command.
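For anyone who wants to try that, a rough FreeBSD sketch (paths, sizes, and unit numbers are arbitrary; mdconfig-backed memory disks work just as well):

# make a few sparse backing files
truncate -s 1G /tmp/disk0 /tmp/disk1 /tmp/disk2 /tmp/disk3 /tmp/disk4 /tmp/disk5
# expose them as GEOM gate devices (/dev/ggate0 ... /dev/ggate5)
for i in 0 1 2 3 4 5; do ggatel create -u $i /tmp/disk$i; done
# build a throwaway pool to practice on
zpool create testpool raidz2 ggate0 ggate1 ggate2 ggate3
# dry-run the command you're unsure about before doing it for real
zpool add -n testpool mirror ggate4 ggate5
# tear it all down when you're done
zpool destroy testpool
for i in 0 1 2 3 4 5; do ggatel destroy -u $i; done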

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

One trick I was taught early was to truncate a small handful of files, give them GEOM gate devices so they're exposed via devfs the same way memory devices are, and create a testing pool to try commands on.

I still periodically do it if it's been a while since I've done some administrative task and want to make sure I'm doing it right.
This, of course, goes hand in hand with using the -n flag at least once before running it without, on any administrative command.

Something about playing with a large stack of real disks (before putting them to their final use) feels good, though. I can't explain why.

BlankSystemDaemon
Mar 13, 2009



Computer viking posted:

Something about playing with a large stack of real disks (before putting them to their final use) feels good, though. I can't explain why.
But truncate can do arbitrary-sized files???

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

But truncate can do arbitrary-sized files???

Sure, but they don't make fun disk access noises. :colbert:

BlankSystemDaemon
Mar 13, 2009



Computer viking posted:

Sure, but they don't make fun disk access noises. :colbert:
If you can hear the disk access noises, you should be wearing hearing protection against the fan noise from all the fans in the rack :c00lbert:

BlankSystemDaemon fucked around with this message at 15:20 on Mar 26, 2024

IOwnCalculus
Apr 2, 2003





BlankSystemDaemon posted:

If you can hear the disk access noises, you should be wearing hearing protection against the fan noise from all the fans in the rack :c00lbert:

Will someone pick up the phone?

Shumagorath
Jun 6, 2001

IOwnCalculus posted:

Will someone pick up the phone?
JBOD
HELLO

Talorat
Sep 18, 2007

Hahaha! Aw come on, I can't tell you everything right away! That would make for a boring story, don't you think?
Is there any way to merge two discrete zfs pools so that to the filesystem they appear as a single mount point? I’d rather not go to the trouble of moving specific files and folders to this new pool. Alternatively, any way to hardlink across filesystem boundaries?

Yaoi Gagarin
Feb 20, 2014

Can't hardlink but you could symlink, or use a bind mount.

But if you want to "merge" the pools - why even make a second pool, you could put those drives in as a new vdev in the original pool?
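On the symlink/bind mount option, roughly (paths are made up; the bind mount is the Linux spelling, FreeBSD would use a nullfs mount instead):

# symlink: cheapest, but anything that resolves paths will see it's a link
ln -s /pool2/media/movies /pool1/media/movies

# bind mount (Linux): /pool2/media shows up under /pool1/media transparently
mkdir -p /pool1/media/more
mount --bind /pool2/media /pool1/media/more
# persist it across reboots
echo '/pool2/media /pool1/media/more none bind 0 0' >> /etc/fstab

# FreeBSD equivalent:
# mount -t nullfs /pool2/media /pool1/media/more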

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Yaoi Gagarin posted:

why even make a second pool, you could put those drives in as a new vdev in the original pool?

Because Wibla will call you dumb.

Wibla
Feb 16, 2011

I thought that was BSD's job :smith:

Multiple vdevs in one pool will lock you into a certain drive/pool layout though, be aware of that.

Talorat
Sep 18, 2007

Hahaha! Aw come on, I can't tell you everything right away! That would make for a boring story, don't you think?

Yaoi Gagarin posted:

Can't hardlink but you could symlink, or use a bind mount.

But if you want to "merge" the pools - why even make a second pool, you could put those drives in as a new vdev in the original pool?

Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point?

BlankSystemDaemon
Mar 13, 2009



Wibla posted:

I thought that was BSD's job :smith:

Multiple vdevs in one pool will lock you into a certain drive/pool layout though, be aware of that.
Look buddy, I just work manual-page-at-people here.

Talorat posted:

Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point?
A ZFS pool consists of vdevs, each of which is its own RAID configuration, and data is spanned across multiple vdevs.
If you add a vdev to an existing pool, you expand the pool, and new data will be distributed across the vdevs such that they should end up approximately equally full.

See zfsconcepts(7).
EDIT: Looking at it, I think this article from Klara explains it best.
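To make the single-mount-point part concrete, a quick sketch (pool and device names are placeholders):

# existing pool: one raidz2 vdev, mounted at /tank
zpool status tank
# add a second raidz2 vdev; the pool grows, the mountpoint stays /tank
zpool add tank raidz2 da6 da7 da8 da9 da10 da11
# show the per-vdev layout and how full each vdev is
zpool list -v tank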

BlankSystemDaemon fucked around with this message at 08:25 on Mar 28, 2024

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Talorat posted:

Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point?

vdev = a single drive or a set of drives: single disk, mirror, RAIDZ, RAIDZ2...

Pool (zpool) = a collection of one or more vdevs

If you create a pool with multiple vdevs and one vdev suffers a catastrophic failure (more disk failures than your level of parity), your pool's data is gone.


e:fb

Yaoi Gagarin
Feb 20, 2014

Would you mind pasting the output of `zpool status -v` here?

Computer viking
May 30, 2011
Now with less breakage.

As for adding another vdev to a pool: it's nice to avoid adding new vdevs to almost-full pools, for performance reasons. The pool will prioritize the new vdev until the vdevs are about equally full, which slows down writes compared to spreading them equally over the entire pool. Reading back that data in the future will also be slower (since it's spread over fewer disks), but depending on your access pattern that may not matter. For bulk storage it's probably not a problem, doubly so if the box is just connected over a Gbit network.
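If you want to watch this effect on a live pool, per-vdev capacity and I/O are visible with something like the following (a sketch; the pool name is a placeholder). A freshly added vdev will show most of the write activity until it catches up:

# per-vdev allocated/free space plus read/write ops, sampled every 5 seconds
zpool iostat -v tank 5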

IOwnCalculus
Apr 2, 2003





Computer viking posted:

As for adding another vdev to a pool: it's nice to avoid adding new vdevs to almost-full pools, for performance reasons. The pool will prioritize the new vdev until the vdevs are about equally full, which slows down writes compared to spreading them equally over the entire pool. Reading back that data in the future will also be slower (since it's spread over fewer disks), but depending on your access pattern that may not matter. For bulk storage it's probably not a problem, doubly so if the box is just connected over a Gbit network.

This is absolutely a thing and while it's not problematic for people with hoards of Linux ISOs, actual production data is a whole different ballgame. I've seen the results from adding a single 2-drive mirror vdev to a nearly-full production pool that was already made up of ~20 vdevs; it was not pretty.

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
Does it give the performance of the single mirror vdev, or does it end up being even worse?


IOwnCalculus
Apr 2, 2003





If I remember right, it was the performance of the single vdev - amplified heavily by the fact that it was a pair of spinning disks trying to simultaneously handle the bulk of incoming writes and also the vast majority of the reads because the newest data was the most popular.
