|
Oysters Autobio posted:Was considering an HBA card but now if this sort of setup makes it easy then I probably will get one then. I was told to get LSI cards and avoid Adaptec ones, which was advice I followed but don't really know the reasons behind.

You'll want one that's capable of being flashed to IT mode, but you'll probably find the seller advertising that they pre-flashed it so you won't need to do it yourself.

Different cards use different numbers of PCIe lanes. I think you can use an x8 card in an x4 slot, but I'm unsure whether that only limits total bandwidth (combined max speed of all drives, based on lane count and PCIe version), or whether it causes errors or weirdness as you approach that limit.

You connect hard drives to the HBA with breakout cables that let you hook up 4 drives to each port on the HBA. The connector type differs between HBAs, so make sure you have the right one.

The naming convention of LSI HBAs is pretty simple. An LSI 92## card should be a PCIe gen 2 part, and a 93## will be gen 3. Gen 2 is still probably fine for spinning drives and those cards are extremely cheap, but gen 3 ones have come down in price recently. After the 9### model number they have another number and a letter: the number of drives supported, and whether the connections are internal or external. So a 9300-8i is a PCIe gen 3 card with 2 internal connectors that you can connect 8 drives to. Don't assume PCIe lane requirements from the number of drive connections; look up the specific model's details.

You can use more drives than an HBA supports with a SAS expander card. These are PCIe cards that just need power and a connection to the HBA, and act as the hard drive equivalent of a network switch.

EDIT: As a final note, some HBAs were designed for servers with the expectation that there would be airflow over them and can overheat, so a common recommendation is to zip-tie a tiny fan onto the heatsink. Some don't need this, but I don't know anywhere you could check.
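As a rough back-of-envelope for the x8-card-in-an-x4-slot question (the per-lane figure is an approximate post-encoding number for PCIe gen 2, and the drive speed is a generous sequential guess for a 7200rpm disk):

```shell
lanes=4          # card electrically limited to the x4 slot
per_lane=500     # MB/s per lane, roughly, for PCIe gen 2
drives=8
drive_speed=250  # MB/s, generous sequential figure for one spinner

echo "slot bandwidth: $(( lanes * per_lane )) MB/s"
echo "all drives sequential at once: $(( drives * drive_speed )) MB/s"
```

So even halved to x4, a gen 2 card roughly keeps up with 8 spinners going flat out, and real workloads almost never have every drive doing max sequential simultaneously.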
|
# ? Mar 23, 2024 21:45 |
|
|
|
Hmm ok I'll take a look. The one appealing thing about the Adaptec HBA I was looking at was that it wouldn't require any BIOS flashing, which I'm not keen on because I'm looking to assemble everything over the next week except for the HDDs I'm waiting on, so it'd be nice if I don't have to redo anything when I get the HBA, unless I'm misunderstanding something with them.
|
# ? Mar 24, 2024 03:00 |
|
Oysters Autobio posted:Hmm ok I'll take a look. The one appealing thing about the Adaptec HBA I was looking at was that it wouldn't require any flashing the BIOS, which I'm not keen on because I'm looking to assemble everything over the next week except for HDDs that I'm waiting on, so it'd be nice if I don't have to redo anything when I got the HBA, unless I'm misunderstanding something with them. You can also buy a Broadcom HBA that is designed to be an HBA and not a hardware RAID controller, like a 9300 or 9400 and just use that. No flashing required.
|
# ? Mar 24, 2024 03:08 |
|
Back to a project that went on the backburner: migration to a new TrueNAS server. I have enough new/unused disks to create a 6-disk RAIDZ2 vdev that will hold the existing data. After that migration of data, I'll end up with some disks from the old box that I'll be reusing, so I'll add another 6-disk RAIDZ2 vdev to that new pool. After that, I would like to rebalance the pool. Anyone done any "in-place rebalancing"? https://github.com/markusressel/zfs-inplace-rebalancing
|
# ? Mar 25, 2024 01:33 |
That script just does what zfs send/receive does, but worse because it re-computes all checksums, and takes significantly longer.
|
|
# ? Mar 25, 2024 01:35 |
|
BlankSystemDaemon posted:That script just does what zfs send/receive does, but worse because it re-computes all checksums, and takes significantly longer. How would send/receive help me in this current situation though? Without bringing in some temporary storage as a dumping ground for a two-transfer migration, I don't see any benefit? Unless I am missing some feature of ZFS send/receive (which I admittedly do not use). Edit: Oooo, are you suggesting zfs send/receive within the same pool? Same concept as the script, so every file does a copy then delete (of the original)? I was not aware send/receive could operate within a single pool (if that is what you were hinting at). Double edit: Like this? https://forum.proxmox.com/threads/zfs-send-recv-inside-same-pool.119983/post-521105 Moey fucked around with this message at 07:16 on Mar 25, 2024 |
# ? Mar 25, 2024 02:16 |
|
THF13 posted:I was told to get LSI cards and avoid Adaptec ones, which was advice I followed but don't really know the reasons behind. I decided to go with the Adaptec, mainly because of reviews on them like here. Snagged a used one off eBay for around $35 USD. Seems like the only downside I could see with these Adaptec cards is very much needing to attach a 40mm fan to prevent overheating. But looking at other options, it's a really decent price point for something that can support up to 16 HDDs, is on PCIe 3.0, and doesn't require re-flashing when switching between IR/IT modes. Someone in that thread also showed an easy way to use two unused threaded holes on the heatsink to mount a small 40mm fan, so I like that over any kind of zip ties or anything. Oysters Autobio fucked around with this message at 04:19 on Mar 25, 2024 |
# ? Mar 25, 2024 04:10 |
Moey posted:How would send/receive help me in this current situation though? Without bringing in some temporary storage as a dumping ground for a two-transfer migration. If you’re smart about it, you enable zpool checkpoint until you’re satisfied that everything made it over - that way, you can revert administrative changes like dataset removal. Just don’t forget to turn it off again. BlankSystemDaemon fucked around with this message at 11:37 on Mar 25, 2024 |
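For reference, the checkpoint dance looks like this (the pool name "tank" is a placeholder, and it's wrapped in a function so nothing here runs on its own):

```shell
# "tank" is a placeholder pool name; nothing runs until you call this yourself
checkpoint_guard() {
  zpool checkpoint tank       # take the checkpoint before any destructive admin ops
  # ...do the migration, verify everything made it over...
  zpool checkpoint -d tank    # discard it once you're satisfied
  # to roll back instead of discarding:
  #   zpool export tank && zpool import --rewind-to-checkpoint tank
}
```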
|
# ? Mar 25, 2024 11:30 |
|
BlankSystemDaemon posted:Yep, got it in one; zfs send -R tank/olddataset@snapshot | mbuffer | zfs receive newdataset, then once it’s done you delete the old one and rename the new one. Neato. Gracias. I'll do some testing before migrating data and letting it rip on the actual "final" disk layout.
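For anyone following along, here's the quoted pipeline spelled out. All dataset names (and the mbuffer size) are placeholders, and it's wrapped in a function so it doesn't run on definition:

```shell
# placeholder names throughout; sanity-check with zfs send -nv before the real run
migrate_dataset() {
  zfs snapshot -r tank/olddataset@migrate
  zfs send -R tank/olddataset@migrate | mbuffer -m 1G | zfs receive tank/newdataset
  # verify the copy, then swap:
  zfs destroy -r tank/olddataset
  zfs rename tank/newdataset tank/olddataset
}
```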
|
# ? Mar 25, 2024 14:25 |
Moey posted:Neato. Gracias. I still periodically do it if it's been a while since I've done some administrative task and want to make sure I'm doing it right. This, of course, goes hand in hand with using the -n flag at least once before running it without, on any administrative command.
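The -n habit, concretely (placeholder dataset names again; both commands print what would happen and touch nothing):

```shell
# dry runs only; nothing runs until you call this yourself
preview() {
  zfs destroy -nv tank/olddataset@migrate   # lists what would be destroyed and space reclaimed
  zfs send -nv -R tank/olddataset@migrate   # estimates the size of the send stream
}
```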
|
|
# ? Mar 25, 2024 15:11 |
|
BlankSystemDaemon posted:One trick I was taught early was to truncate a small handful of files, give them GEOM gate devices so they're exposed via devfs the same way memory devices are, and create a testing pool to try commands on. Something about playing with a large stack of real disks (before putting them to their final use) feels good, though. I can't explain why.
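Worth noting that zpool create also accepts plain files directly, so you can do the file-backed version of that trick without bothering with gate devices. A sketch (paths are arbitrary; the zpool half needs root and ZFS installed, so it's left commented):

```shell
# sparse backing files: 1 GiB apparent size, ~zero bytes actually allocated
mkdir -p /tmp/zfs-sandbox
for i in 0 1 2 3; do
  truncate -s 1G "/tmp/zfs-sandbox/disk$i.img"
done
ls -lsh /tmp/zfs-sandbox   # first column (allocated blocks) stays near zero

# with ZFS installed (and root), a throwaway pool straight on the files:
#   zpool create scratch raidz2 /tmp/zfs-sandbox/disk{0..3}.img
#   ...try commands to your heart's content...
#   zpool destroy scratch
```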
|
# ? Mar 25, 2024 23:09 |
Computer viking posted:Something about playing with a large stack of real disks (before putting them to their final use) feels good, though. I can't explain why.
|
|
# ? Mar 26, 2024 01:40 |
|
BlankSystemDaemon posted:But truncate can do arbitrary-sized files??? Sure, but they don't make fun disk access noises.
|
# ? Mar 26, 2024 13:26 |
Computer viking posted:Sure, but they don't make fun disk access noises. BlankSystemDaemon fucked around with this message at 15:20 on Mar 26, 2024 |
|
# ? Mar 26, 2024 15:18 |
|
BlankSystemDaemon posted:If you can hear the disk access noises, you should be wearing hearing protection against the fan noise from all the fans in the rack Will someone pick up the phone?
|
# ? Mar 26, 2024 17:56 |
|
IOwnCalculus posted:Will someone pick up the phone? HELLO
|
# ? Mar 26, 2024 23:52 |
|
Is there any way to merge two discrete zfs pools so that to the filesystem they appear as a single mount point? I’d rather not go to the trouble of moving specific files and folders to this new pool. Alternatively, any way to hardlink across filesystem boundaries?
|
# ? Mar 28, 2024 01:31 |
|
Can't hardlink but you could symlink, or use a bind mount. But if you want to "merge" the pools - why even make a second pool, you could put those drives in as a new vdev in the original pool?
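To make that concrete, a tiny illustration of both options. Temp directories stand in for the real pool mountpoints, /tank1 and /tank2 in the comments are made-up paths, and the bind mount needs root so it stays commented:

```shell
pool1=$(mktemp -d)   # stand-in for e.g. /tank1
pool2=$(mktemp -d)   # stand-in for e.g. /tank2
echo hello > "$pool2/file.txt"

# option 1: symlink - cheap, but tools see a link rather than a real directory
ln -s "$pool2" "$pool1/extra"
cat "$pool1/extra/file.txt"

# option 2: bind mount (needs root) - pool2's tree appears natively inside pool1's
#   mount --bind /tank2 /tank1/extra
# persistent via /etc/fstab:
#   /tank2  /tank1/extra  none  bind  0 0
```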
|
# ? Mar 28, 2024 02:57 |
|
Yaoi Gagarin posted:why even make a second pool, you could put those drives in as a new vdev in the original pool? Because Wibla will call you dumb.
|
# ? Mar 28, 2024 03:40 |
|
I thought that was BSD's job. Multiple vdevs in one pool will lock you into a certain drive/pool layout though, be aware of that.
|
# ? Mar 28, 2024 04:23 |
|
Yaoi Gagarin posted:Can't hardlink but you could symlink, or use a bind mount. Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point?
|
# ? Mar 28, 2024 06:59 |
Wibla posted:I thought that was BSD's job Talorat posted:Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point? If you add a vdev to an existing pool, you expand the pool, and data will be distributed across the span such that the vdevs should end up being approximately equally full. See zfsconcepts(7). EDIT: Looking at it, I think this article from Klara explains it best. BlankSystemDaemon fucked around with this message at 08:25 on Mar 28, 2024 |
|
# ? Mar 28, 2024 08:17 |
|
Talorat posted:Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point? vdev = a single drive or a set of drives: single disk, mirror, RAIDZ, RAIDZ2... Pool (zpool) = a collection of one or more vdevs. If you create a pool with multiple vdevs and one vdev suffers a catastrophic failure (more disk failures than your level of parity), your pool's data is gone. e:fb
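In command form, one pool made of two RAIDZ2 vdevs looks like this (device names are made up, and it's function-wrapped so nothing runs by itself):

```shell
# hypothetical device names; call this yourself if you actually want a pool
make_pool() {
  zpool create tank \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
  # later, to grow the pool with another vdev:
  #   zpool add tank raidz2 /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr
}
```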
|
# ? Mar 28, 2024 08:19 |
|
Would you mind pasting the output of `zpool status -v` here?
|
# ? Mar 28, 2024 11:19 |
|
As for adding another vdev to a pool: It's nice to avoid adding new vdevs to almost full pools, for performance reasons: The pool will prioritize the new vdev until they're about equally full, which slows down writes compared to spreading them equally over the entire pool. Reading back that data in the future will also be slower (since it's spread over fewer disks), but depending on your access pattern that may not be a problem. For bulk storage that's probably not a problem, doubly so if it's just connected over Gbit network.
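A toy illustration of why new writes pile onto the fresh vdev (the real allocator weighting is more involved, but free space dominates):

```shell
old_free=1    # TB free on the nearly-full existing vdev
new_free=10   # TB free on the freshly added vdev

# naive proportional-to-free-space split
echo "share of new writes landing on the new vdev: $(( 100 * new_free / (old_free + new_free) ))%"
```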
|
# ? Mar 28, 2024 13:26 |
|
Computer viking posted:As for adding another vdev to a pool: It's nice to avoid adding new vdevs to almost full pools, for performance reasons: The pool will prioritize the new vdev until they're about equally full, which slows down writes compared to spreading them equally over the entire pool. Reading back that data in the future will also be slower (since it's spread over fewer disks), but depending on your access pattern that may not be a problem. For bulk storage that's probably not a problem, doubly so if it's just connected over Gbit network. This is absolutely a thing and while it's not problematic for people with hoards of Linux ISOs, actual production data is a whole different ballgame. I've seen the results from adding a single 2-drive mirror vdev to a nearly-full production pool that was already made up of ~20 vdevs; it was not pretty.
|
# ? Mar 28, 2024 21:04 |
|
Does it give the performance of the single mirror vdev, or does it end up being even worse?
|
# ? Mar 28, 2024 21:26 |
|
If I remember right, it was the performance of the single vdev - amplified heavily by the fact that it was a pair of spinning disks trying to simultaneously handle the bulk of incoming writes and also the vast majority of the reads because the newest data was the most popular.
|
# ? Mar 28, 2024 22:17 |
|
I've been out of the loop on all things 'NAS' and am wondering what the current recommendation is for a low-power, barebones kit. LTT kind of re-ignited my desire to actually set this up again, pointing out that this exists (https://wiki.friendlyelec.com/wiki/index.php/CM3588_NAS_Kit). This is pretty compelling and it's actually available by some skeezy 3rd party here, but I'm not sure what the other options are. I always imagined this as a box, but if I can avoid a big box and can instead stick a set of M.2 drives onto a small board that's sipping ~30W of power, that would be highly preferable. My requirements are: • Small-ish at least • Low power • Has 4+ M.2 slots • No proprietary nonsense. Canine Blues Arooo fucked around with this message at 01:17 on Mar 31, 2024 |
# ? Mar 31, 2024 01:06 |
|
TopTon on AliExpress has a few N100-based boards that'll do what you want, and they run on a 12v barrel plug, drawing like 16w total.
|
# ? Apr 1, 2024 15:32 |
|
I am building a new NAS (TrueNAS Core) and am trying to figure out the best approach for balancing drives. I have 38 of the 12 TB drives and 34 of the 18 TB drives. How would you allocate these across vdevs? I was thinking something like: RAIDZ3 across the board: 3x 12-disk vdevs of 12 TB drives (36 drives) plus 2 spares, and 3x 11-disk vdevs of 18 TB drives (33 drives) plus 1 spare. And just put that all into a big pool. RAIDZ3 for 11-12 drives seems fine, not burning too many for parity, and it keeps the vdevs' spindle counts about the same (although usable space will be ~25% different between vdev types).
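Rough usable space for that layout, before ZFS metadata/slop overhead (just parity subtraction, so treat these as upper bounds):

```shell
# 3 vdevs of 12x12TB RAIDZ3: (12-3) data disks each
# 3 vdevs of 11x18TB RAIDZ3: (11-3) data disks each
small=$(( 3 * (12 - 3) * 12 ))
large=$(( 3 * (11 - 3) * 18 ))
echo "12TB vdevs: ${small} TB"
echo "18TB vdevs: ${large} TB"
echo "total:     $(( small + large )) TB raw usable"
```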
|
# ? Apr 2, 2024 01:00 |
|
Moey posted:vdev = Single or multiple sets of drives. Single disk, mirror, RAIDZ, RAIDZ2...... Got it! Thanks. So I guess the main disadvantage would be that since the new vdev is going to be an external DAS connected through a cable, if the DAS, HBA card, or cable failed, it would take down my entire pool until I was able to fix it.
|
# ? Apr 2, 2024 19:42 |
|
madsushi posted:I am building a new NAS (TrueNAS Core) and am trying to figure out the best approach for balancing drives. My inner OCD would want them all to be 11-disk vdevs, but I can't imagine it really matters when you're talking about that many spindles, even with spindle sizes that different between vdevs. Talorat posted:Got it! Thanks, so I guess the main disadvantage would be that since the new vDev is going to be an external DAS connected through a cable, if the DAS, HBA card or cable failed, I would nuke my entire pool until I was able to fix it. Yes, the pool requires all vdevs to be at least healthy enough to read in order to mount. Though (knock on wood) DAS/HBA/SAS cable failures have been extremely rare for me compared to drive failures.
|
# ? Apr 3, 2024 01:04 |
|
Why is my Synology constantly flipping over to battery power? It's on a 3-week old APC 850VA and it hasn't happened until like a week or two ago. No issues with electric anywhere in the house, and my utility essentially never goes down unless there's a storm or some idiot runs his car into a substation. edit: it did it again while I was typing this post. There's a clicking noise when it switches over. Henrik Zetterberg fucked around with this message at 23:09 on Apr 5, 2024 |
# ? Apr 5, 2024 23:07 |
|
Sounds like a problem with the UPS, I can't think of how anything about the load would trigger that. I would contact APC support and ask them about it; since the unit is so new, they may just replace it proactively. I have a 1500W APC unit and I hear it spontaneously click twice a couple seconds apart once in a while, which I assume is some kind of self test, but it doesn't generate alarms and is nowhere near that often - maybe weekly?
Eletriarnation fucked around with this message at 23:15 on Apr 5, 2024 |
# ? Apr 5, 2024 23:13 |
|
Henrik Zetterberg posted:Why is my Synology constantly flipping over to battery power? It's on a 3-week old APC 850VA and it hasn't happened until like a week or two ago. No issues with electric anywhere in the house, and my utility essentially never goes down unless there's a storm or some idiot runs his car into a substation. Could be low voltage/high voltage on your lines if there's AVR on the system. Mine will click over to battery power to keep it closer to 115V if it goes down to about 113 or up to about 119. It happens more to me in the summer when the AC kicks on but also if there's storms or just weird power in the area.
|
# ? Apr 5, 2024 23:27 |
|
The clicking when it switches over is normal; that's just mechanical relays, a big robot finger mashing a pushbutton, that's how they work. As for why it's happening every ~4 minutes: yeah, it could be overly sensitive voltage thresholds, or you have some kind of big motor load starting up and causing a voltage sag. I would suggest pulling up a live log on your phone or laptop or some mobile device, then standing next to your HVAC unit, then your refrigerator, and then any other chest freezers etc., and seeing if any of those cutting on/off correlates with the UPS switchovers. e: yeah, if it's like 5-10 seconds at a time, some compressor motor somewhere in your house almost certainly needs a new start capacitor shame on an IGA fucked around with this message at 01:08 on Apr 6, 2024 |
# ? Apr 6, 2024 01:02 |
|
The UPS logs will have detail on why it's switching. There are also usually knobs to tweak in the UPS settings -- like if it's undervoltage you can widen the tolerance.
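If it's an APC unit being monitored by apcupsd, a quick place to look (this assumes apcupsd is installed and talking to the UPS, so it's function-wrapped; the field names are apcupsd's):

```shell
# needs a live apcupsd daemon; call this yourself
ups_why() {
  apcaccess status | grep -E 'LINEV|LASTXFER|NUMXFERS|SENSE'
  # LINEV    - current line voltage
  # LASTXFER - reason for the last transfer to battery
  # NUMXFERS - transfer count since apcupsd started
  # SENSE    - sensitivity setting (lowering it widens the voltage tolerance)
}
```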
|
# ? Apr 6, 2024 03:48 |
|
I did a reboot on my Synology and it hasn't happened in 30 mins. Checking the PowerChute logs (if they exist) would have been my next step. edit: nevermind it started doing that poo poo again KS posted:There are also usually knobs to tweak in the UPS settings -- like if it's undervoltage you can widen the tolerance. Ahh this is good to know. Thanks! Henrik Zetterberg fucked around with this message at 04:07 on Apr 6, 2024 |
# ? Apr 6, 2024 03:58 |
|
|
|
FYI for using an Intel ARC card in Plex: I spent ages trying to figure out why HW transcode wasn't working in Unraid and it turns out HDR Tone Mapping is broken. Turning that off allowed Plex to use the GPU.
|
# ? Apr 6, 2024 04:18 |