|
Don't sleep on the Ultrastars either; they're pretty much the same drives. https://youtu.be/QDyqNry_mDo?si=Dh1F7i1YR37Rp1NC Recent video comparing WD's whole rainbow.
|
# ? May 13, 2024 15:36 |
|
|
# ? May 28, 2024 07:13 |
|
I see that Ultrastars have a rep for being fairly loud. This one's going in my computer and a NAS in a closet, so I'm a little more sensitive to noise, but is it possible to figure out how much of this is an older issue vs. one that's still relevant today? Interesting that the Golds are basically rebranded Ultrastars.
|
# ? May 13, 2024 15:48 |
|
A PR was just opened on the OpenZFS GitHub that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool too, so that if your special vdev craps out (for reasons, or simply for lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata as soon as this is deemed stable.
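Until that lands, the existing way to make a special vdev safe is to give it redundancy yourself. A minimal sketch; "tank" and the device paths are placeholders, not anyone's actual pool:

```shell
# Add a mirrored special vdev, so losing metadata requires both NVMe drives to fail.
# Pool name and /dev/disk/by-id paths below are hypothetical.
zpool add tank special mirror \
    /dev/disk/by-id/nvme-DRIVE_A \
    /dev/disk/by-id/nvme-DRIVE_B

# Optionally steer small file blocks (not just metadata) onto the special vdev:
# blocks <= 16K on this dataset will land on the NVMe mirror.
zfs set special_small_blocks=16K tank/data
```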
|
# ? May 13, 2024 17:54 |
|
|
# ? May 13, 2024 19:12 |
|
The loudest HDDs I own are a couple of WD Red 4TBs. Higher capacity red pros and the one exos I have are noticeably quieter. I'm sure that's partially a function of what they're installed in, but I can hear those 4tbs through the wall.
|
# ? May 13, 2024 19:35 |
|
Combat Pretzel posted:A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case that your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon this is deemed stable. That's brilliant, really
|
# ? May 13, 2024 21:09 |
|
|
# ? May 13, 2024 22:21 |
|
Combat Pretzel posted:I'm hoping that Microsoft (and whoever else "spies") are going to release data, whether these extreme coronal mass ejections this weekend led to an increased amount of system crashes or not. There was a project back in the day to use Watson (aggregated windows crash data) to try to identify solar flares. It didn’t work though, too much noise.
|
# ? May 13, 2024 23:29 |
|
The NAS has been moved into its new home and the server/networking closet is now
|
# ? May 14, 2024 00:46 |
|
Why does your NAS use so many paper towels? Leaky bit bucket?
|
# ? May 14, 2024 02:57 |
|
Extra kindling, so if the UPS battery goes up in flames I am assured the NAS is well and truly hosed. That's been my primary storage closet since I moved into this apartment and I needed to put them somewhere.
|
# ? May 14, 2024 03:02 |
|
How is only 2 paper towel rolls storage? This is the nas thread!
|
# ? May 14, 2024 03:44 |
|
Combat Pretzel posted:A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case that your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon this is deemed stable.

Ahh, perfect. That's exactly what I wanted to do. That setup is already supported in BTRFS, somewhat awkwardly. It's much more interesting for my use case than L2ARC, because I have approaching 7 million files on my NAS, and my biggest issue with the current setup is the lag accessing large directories when they've fallen out of dcache. If I can move 99% of the IOPS off spinning rust, this thing will be an absolute beast.

So much to configure. I've got a 40Gb ConnectX-3 to flash for it, more drive trays. tugm beat Newegg shipping, lol. It arrived here from China before the rest of the parts arrived from California. And, like an idiot, I forgot a heatsink. Whoops. The best-in-class SP3 will be here Sunday, x.x.

How stupid would I be to ziptie down an old Evo 212 to do the initial setup? With the motherboard flat, obviously. I'm guessing somewhere between 'Absolutely not' and 'Whatever you're smoking, please share'. I've got oldschool aluminum block heatsinks, so I may just set one on top of it with a fan. At least that won't risk tipping over like a tower. Or I could do the sane thing and wait a week...
|
# ? May 14, 2024 03:58 |
|
I'd just wait for the correct heatsink; otherwise you'll have to clean off the thermal goop and reapply it.
|
# ? May 14, 2024 04:03 |
|
Combat Pretzel posted:A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case that your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon this is deemed stable. That's pretty neat. I look forward to it eventually showing up in the TrueNAS Core feature set.
|
# ? May 14, 2024 05:35 |
|
Yaoi Gagarin posted:id just wait for the correct heatsink, otherwise you'll have to clean off the thermal goop and reapply it

but it's just sitting there, staring at me. demanding I do something. Reddit-tier jokes aside, holy poo poo, the Meshify 2 XL is the nicest case I've ever built in. The whole thing breaks apart without tools for access, but you do need a screwdriver to convert it to storage configuration (pictured here). Sadly it only comes with 2 of the 14 trays to max it out, and they want $10 each for the rest of them, yikes. They're stamped sheet metal and a screw!

I haven't ordered the new drives because lol, the rest of the stuff wasn't supposed to get here until the 21st. Overall I'm thinking something like a 64-96 TB Z2, but I'm eyeing the deal threads to watch for anything really good.

Hardest part is going to be data migration. 12TB of my current NAS is welded into the current system via bcache on the boot SSD, so I can't move those without taking the whole thing offline. Theoretically you can undo bcache, but I'd prefer not to. So I'm really not sure. I can't really down them completely for a week to move stuff around... The half-formed plan in my head right now is:

* bring up the new, empty array
* buy a second 40GbE DAC (I have another ConnectX-3 for it) so they're both on my 40GbE switch
* down the network share on the old machine
* export the drives via iSCSI/NBD
* bring them up on the new machine as Samba/NFS shares. Doing it as raw block rather than filesystem gets rid of a lot of the overhead and lets the massively larger and faster RAM cache on the new box cut down on IOPS required?
* sync to the new array from a snapshot while leaving the shares writable
* offline everything to sync the new changes
* export the shares from the new array

There's various issues with the obvious plan of "just put all the drives in the new box". First is that bcache fuckup. Second is I'd have to buy a lot of extra drive trays @$10/ea to mount them temporarily. Or... ugh, go across cases with the cables? Just asking for trouble. I dunno. Opinions on a better option? I suppose it's only an additional $60 for the trays, and I will be filling them with more drives eventually. Overall migration would be about the same minus the network connection. Snapshot, sync, offline, sync changes, export from new zvol, etc.

tl;dr: moving a 4x4 RAID5 + 3x10 RAID5 to ~6x16 Z2 with the least downtime and risk.
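The "sync from a snapshot while writable, then offline and do a final delta pass" part of that plan could look roughly like this. Paths and the snapshot name are made up for illustration; the source mount depends on whatever export you settle on:

```shell
# Initial bulk copy from a point-in-time snapshot while the old shares stay live.
# /mnt/old is the imported old array, /tank/data the new ZFS dataset (placeholders).
rsync -aHAX --info=progress2 /mnt/old/.snapshots/pre-migrate/ /tank/data/

# Later, after downing the shares: one final pass picks up everything that changed
# since the snapshot, deleting anything that was removed on the source.
rsync -aHAX --delete --info=progress2 /mnt/old/ /tank/data/
```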
|
# ? May 14, 2024 06:06 |
|
Is the ARM SBC that was too slow to build OpenWrt fast enough to run Zoneminder? So far it seems to be working! It's a RockPro64 with a 1TB NVMe drive. Because of reasons, it seems to be best for Zoneminder to store its videos in a separate partition. I guess you can't just tell it "use no more than 150 GB"? Seems odd, but a lot of guides say to make a separate partition. So I live booted an SD card, and used resize2fs and fdisk to shrink my original partition and make a new one, in place. It worked. But what if I get another camera or need to change the size of the video partition? Should I have moved to LVM, even for just a single SSD? That should make adjusting the partition size on the fly pretty easy (still need to shut down Zoneminder and unmount the fs tho). Should I backup my stuff and go through this all again to move to LVM? Anyone here done it on an ARM SBC running Armbian? I'm assuming it works...
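For what it's worth, resizing is the thing LVM makes easy. A sketch with hypothetical VG/LV names and sizes; growing ext4 works online, shrinking still needs the filesystem unmounted:

```shell
# Grow the video LV by 50G and resize the ext4 filesystem in one step.
# -r calls resize2fs for you; growing ext4 works while it's mounted.
lvextend -r -L +50G /dev/vg0/zoneminder

# Shrinking still requires unmounting first (so Zoneminder has to stop):
umount /var/cache/zoneminder
lvreduce -r -L -50G /dev/vg0/zoneminder
mount /var/cache/zoneminder
```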
|
# ? May 14, 2024 06:34 |
|
Combat Pretzel posted:A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case that your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon this is deemed stable. Lmao, I added a SATA SSD to my pool for this, only to realise I hadn't read the small print that you can't remove it, and it's now a single point of failure for the whole pool. Who knows, this might hit TrueNAS before either the drive dies or I nuke and rebuild the pool.
|
# ? May 14, 2024 12:19 |
|
Updated to the newest general release version of TrueNAS Scale and it does the two biggest things I’ve been waiting on: full Intel ARC GPU support and ZFS ARC can now use more than half the system memory. So now my 128GB of ram will see more usage.
|
# ? May 14, 2024 15:20 |
|
Generic Monk posted:Lmao I added a SATA ssd to my pool for this only to realise i hadn’t read the small print that you can’t remove it and it’s now a single point of failure for the whole pool. Who knows this might hit truenas before either the drive dies or I nuke and rebuild the pool If you can fit another SSD in there somehow (PCIe to M.2?), you should be able to turn it into a mirror with zpool attach; I think that works for all device types.
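Concretely, the attach looks something like this (pool and device names are hypothetical; read them off `zpool status` first):

```shell
# Find the exact name of the lone special device first.
zpool status tank

# Attach a second device to it, turning the single special device into a mirror.
# sda is the existing special SSD, sdb the newly added one (placeholders).
zpool attach tank sda sdb
```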
|
# ? May 14, 2024 16:09 |
|
I moved my desktop/"NAS" into a Fractal Define 7XL last year; maxed out, I have slots for ~11 3.5" disks in the front stack and 2 2.5" disks on the back of the motherboard tray. Fully loaded with 140mm fans it's enough to hear, but sitting next to my legs under the desk it's barely noticeable. Running 8 x 12TB disks currently, plus the system SSD. And yes, the trays are ridiculous, but the rubber grommets and standoff screws dampen drive noise a lot. As far as drive noise goes, I've got a mix of 4 WD Reds (EMAZ and EMFZ, don't know the difference) and 4 new HGST DC HC520 disks, and the HGSTs are not any louder than the Reds, even being 7200 vs 5400 rpm. Temps are right in line as well, 29-32C across the whole stack.
|
# ? May 14, 2024 17:56 |
|
|
# ? May 14, 2024 18:11 |
|
Kibner posted:Updated to the newest general release version of TrueNAS Scale and it does the two biggest things I’ve been waiting on: full Intel ARC GPU support and ZFS ARC can now use more than half the system memory. So now my 128GB of ram will see more usage. The latter part (ZFS ARC usage) has always been a tunable though, right? Does it just out of the box now come with a different default?
|
# ? May 14, 2024 19:23 |
|
movax posted:The latter part (ZFS ARC usage) has always been a tunable though, right? Does it just out of the box now come with a different default? It was tunable but could result in system stability issues, iirc. I found this after a brief search: https://github.com/openzfs/zfs/commit/518b4876022eee58b14903da09b99c01b8caa754 I believe that issue has been fixed (either as part of kernel or part of zfs, I don't know) but Scale now uses like 90% of my RAM.
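On Linux the cap is the `zfs_arc_max` module parameter (0 means the built-in default, historically half of RAM). It can be poked at runtime; the 100 GiB figure here is just an example value for a 128 GB box:

```shell
# Current cap in bytes (0 = built-in default):
cat /sys/module/zfs/parameters/zfs_arc_max

# Raise it to 100 GiB at runtime (value in bytes):
echo $((100 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Persist across reboots; the shell expands the arithmetic before writing,
# so the file ends up containing the literal byte count.
echo "options zfs zfs_arc_max=$((100 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
```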
|
# ? May 14, 2024 19:42 |
|
Pablo Bluth posted:Lets be honest, 50% of all NAS usage is for porn backup.... If I have, say, a really large directory of vacation photos spanning many, many years, scattered across all sorts of different folders with essentially no organizational structure, can I use AI or something like that to sort it out on a NAS?
|
# ? May 14, 2024 19:51 |
|
If your EXIF data is intact in terms of correct timestamps on the pictures, and maybe GPS tags, then you probably don't need any AI wizardry.
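Rough idea of the no-AI route: pull `DateTimeOriginal` out of EXIF and bucket files into year/month folders. Reading the tag needs an image library (Pillow is assumed in the comment below); the date-parsing and destination logic is plain stdlib:

```python
from datetime import datetime
from pathlib import Path

# EXIF stores DateTimeOriginal as "YYYY:MM:DD HH:MM:SS".
EXIF_FORMAT = "%Y:%m:%d %H:%M:%S"

def destination_for(photo_name: str, exif_datetime: str, root: str) -> Path:
    """Map a photo to <root>/<year>/<month>/ based on its EXIF timestamp."""
    taken = datetime.strptime(exif_datetime, EXIF_FORMAT)
    return Path(root) / f"{taken.year:04d}" / f"{taken.month:02d}" / photo_name

# Getting the timestamp itself would look something like this with Pillow
# (untested sketch, not stdlib):
#   from PIL import Image, ExifTags
#   exif = Image.open(path).getexif()
#   stamp = exif.get(ExifTags.Base.DateTimeOriginal)

print(destination_for("IMG_1234.jpg", "2019:07:04 12:30:00", "/mnt/photos/sorted"))
# → /mnt/photos/sorted/2019/07/IMG_1234.jpg
```

Files with no EXIF at all (as it turns out below) would need a fallback like file mtime, which is much less trustworthy after years of copying.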
|
# ? May 14, 2024 19:59 |
|
|
# ? May 14, 2024 20:05 |
|
I don't seem to have EXIF data; it's in folders that are named pseudo-randomly like this. I must have used some app to organize them a million years ago, but many folders have either hundreds of photos or just one.
|
# ? May 14, 2024 20:06 |
|
|
# ? May 14, 2024 20:07 |
|
I must have quoted the wrong post before.
|
# ? May 14, 2024 20:09 |
|
Nolgthorn posted:I must have quoted the wrong post before. It did seem very tongue in cheek. Or whatever you're into, I'm not judging.
|
# ? May 14, 2024 20:14 |
|
webcams for christ posted:okay so I found a good deal on an IT-Mode LSI 9300-16i. cross-posting here. I barely know what I'm doing
|
# ? May 14, 2024 21:09 |
|
webcams for christ posted:cross-posting here. I barely know what I'm doing That backplane should work - I think it has mini-SAS/SFF-8087 connectors, which are standard enough. The biggest problems will be physical fit (what are you putting it in?) and powering it - is that a PCIe power connector, or some weird and wonderful HPE design? Maybe the ever fun "same pin layout, different voltages"? (Though I think those connectors are usually keyed with square vs D-shaped pins to keep you from doing anything too destructive.) The 9300-16i apparently has SFF-8643 ("Mini-SAS HD") connectors, so for your cabling needs you need either SFF-8643 fanout cables or SFF-8643 to SFF-8087 cables depending on what you go for. Huge caveat: I'm partially guessing here and you should wait a bit for people to call out my mistakes before trusting this. Computer viking fucked around with this message at 23:07 on May 14, 2024 |
# ? May 14, 2024 23:04 |
|
Nolgthorn posted:I must have quoted the wrong post before. lol
|
# ? May 15, 2024 00:03 |
|
ryanrs posted:should I try LVM lol it worked

First I installed lvm2, which, among other things, rewrites your initramfs to teach it about LVM stuff. It's important to do this first. Then I booted from a fresh Armbian SD card and used resize2fs to scrunch down my existing NVMe root filesystem (only a few gigs), then rewrote the partition tables, careful not to change the start sector of the first partition. This freed up 95% of the SSD, which I turned into an LVM PV. Then I dd'ed the blocks to a new LV, and used tune2fs to increment the UUID of the old root partition. Swapped back to my normal SD card, and it found the root filesystem with the old UUID in the new LV, and booted right up. If it had failed, I would have decremented the original partition UUID, which probably(?) would have got me back to booting off the original partition.
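For anyone following along, the sequence above looks roughly like this. This is a reconstruction, not a paste of the actual session; device names and sizes are placeholders, and all of it runs from the live SD card with the NVMe root unmounted:

```shell
# 1. Shrink the existing root filesystem down to a few gigs (fs must be unmounted).
e2fsck -f /dev/nvme0n1p1
resize2fs /dev/nvme0n1p1 8G

# 2. Shrink the partition in fdisk (keeping the same start sector!), create a new
#    partition in the freed space, and turn it into LVM.
pvcreate /dev/nvme0n1p2
vgcreate vg0 /dev/nvme0n1p2
lvcreate -L 10G -n root vg0

# 3. Copy the root fs into the new LV, then change the old copy's UUID so the
#    boot process finds the LV copy instead of the original partition.
dd if=/dev/nvme0n1p1 of=/dev/vg0/root bs=4M status=progress
tune2fs -U random /dev/nvme0n1p1
```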
|
# ? May 15, 2024 03:37 |
|
Generic Monk posted:Lmao I added a SATA ssd to my pool for this only to realise i hadn’t read the small print that you can’t remove it and it’s now a single point of failure for the whole pool. Who knows this might hit truenas before either the drive dies or I nuke and rebuild the pool oof, WTF. Why is it a single point of failure when it's just a mirror of the on-disk metadata? Should be as trivial as "poof, gone, now we read from the spinners". I don't even understand the point of a critical SPoF 'mirror'. I thought bcache's "hi, I'm a write-through cache, but if I die I delete 100TB of data, teehee" was bad.
|
# ? May 15, 2024 05:26 |
|
AFAIK it is not a mirror of the metadata; when you make a special vdev, it holds all the metadata.
|
# ? May 15, 2024 05:40 |
|
Yaoi Gagarin posted:afaik it is not a mirror of the metadata, when you make a special vdev it has all the metadata. we were talking about this: Combat Pretzel posted:A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case that your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon this is deemed stable. but maybe Generic Monk was talking about the existing implementation, which is lmao levels of bad.
|
# ? May 15, 2024 05:47 |
Generic Monk posted:Lmao I added a SATA ssd to my pool for this only to realise i hadn’t read the small print that you can’t remove it and it’s now a single point of failure for the whole pool. Who knows this might hit truenas before either the drive dies or I nuke and rebuild the pool ZFS doesn't give a gently caress about what driver you're using, nor what the disk is.

Kibner posted:It was tunable but could result in system stability issues, iirc. I found this after a brief search: https://github.com/openzfs/zfs/commit/518b4876022eee58b14903da09b99c01b8caa754 That'll be a pretty big advantage, if it turns out to be possible - something that I'm not quite sure of. I think it's possible on FreeBSD and Illumos, but it doesn't seem likely to be possible on Linux.

Harik posted:we were talking about this: I'm interested to learn how Generic Monk managed to learn about the 'special' vdev, without learning that it should always have its own redundancy via mirroring (or even n-way mirroring), though. All the documentation I know of makes a big deal out of making it very explicit.

BlankSystemDaemon fucked around with this message at 12:16 on May 15, 2024 |
|
# ? May 15, 2024 12:03 |
|
|
Computer viking posted:Huge caveat: I'm partially guessing here and you should wait a bit for people to call out my mistakes before trusting this. Cool. The sales rep for the HP backplane was able to answer my questions pretty quickly, so I went ahead and pulled the trigger on it, along with the necessary cables. Thanks!
|
# ? May 15, 2024 20:26 |