Azhais
Feb 5, 2007
Switchblade Switcharoo
Don't sleep on the ultrastars either, they're the same drives pretty much

https://youtu.be/QDyqNry_mDo?si=Dh1F7i1YR37Rp1NC

Recent video comparing WD's whole rainbow

kliras
Mar 27, 2021
i see that ultrastars have a rep of being fairly loud. this one's going in my computer and a nas in a closet, so i'm a little more sensitive to noise, but it's hard to figure out how much of this is an older issue vs. still relevant today

interesting that gold are basically rebranded ultrastars

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon as this is deemed stable.
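
For reference, adding a mirrored special vdev today looks roughly like this (pool name, device names, and the small-blocks cutoff are just placeholders):
code:
  # add a mirrored metadata/special allocation class vdev (pool and device names are made up)
  zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
  # optionally also send small file blocks to it, per dataset
  zfs set special_small_blocks=32K tank/data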

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon as this is deemed stable.
Huh, that's pretty neat!

Rap Game Goku
Apr 2, 2008

Word to your moms, I came to drop spirit bombs


The loudest HDDs I own are a couple of WD Red 4TBs. Higher-capacity Red Pros and the one Exos I have are noticeably quieter. I'm sure that's partially a function of what they're installed in, but I can hear those 4TBs through the wall.

Wibla
Feb 16, 2011

Combat Pretzel posted:

A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon as this is deemed stable.

That's brilliant, really :allears:

Kung-Fu Jesus
Dec 13, 2003

Combat Pretzel posted:

A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon as this is deemed stable.
That rules and I already know I want to use it

Talorat
Sep 18, 2007

Hahaha! Aw come on, I can't tell you everything right away! That would make for a boring story, don't you think?

Combat Pretzel posted:

I'm hoping that Microsoft (and whoever else "spies") are going to release data on whether these extreme coronal mass ejections this weekend led to an increased number of system crashes or not.

There was a project back in the day to use Watson (aggregated Windows crash data) to try to identify solar flares. It didn't work, though; too much noise.

MadFriarAvelyn
Sep 25, 2007

The NAS has been moved into its new home, and the server/networking closet is no longer a terrible mess of ethernet cables hanging down from the ceiling. Move complete! :toot:

Zorak of Michigan
Jun 10, 2006


Why does your NAS use so many paper towels? Leaky bit bucket?

MadFriarAvelyn
Sep 25, 2007

Extra kindling so if the UPS battery goes up in flames I am assured the NAS is well and truly hosed.

That's been my primary storage closet since I moved into this apartment and I needed to put them somewhere.

FunOne
Aug 20, 2000
I am a slimey vat of concentrated stupidity

Fun Shoe
How is only 2 paper towel rolls enough storage?

This is the NAS thread!

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Combat Pretzel posted:

A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon as this is deemed stable.

ahh, perfect. that's exactly what I wanted to do. That setup is already supported (somewhat awkwardly) in BTRFS. It's much more interesting for my use case than L2ARC because I have close to 7 million files on my NAS, and my biggest issue with the current setup is the lag accessing large directories once they've fallen out of dcache.

If I can move 99% of the IOPS off spinning rust this thing will be an absolute beast. So much to configure. I've got a 40GbE ConnectX-3 to flash for it, more drive trays...



tugm beat newegg shipping, lol. It arrived here from China before the rest of the parts arrived from California. And, like an idiot, I forgot a heatsink. Whoops. The best-in-class SP3 heatsink will be here Sunday, x.x. How stupid would I be to ziptie down an old Evo 212 to do the initial setup? With the motherboard flat, obviously. I'm guessing somewhere in the range of 'Absolutely not' to 'Whatever you're smoking, please share'.

I've got oldschool aluminum block heatsinks, so I may just set one on top of it with a fan. At least that won't risk tipping over like a tower. Or I could do the sane thing and wait a week...

Yaoi Gagarin
Feb 20, 2014

i'd just wait for the correct heatsink, otherwise you'll have to clean off the thermal goop and reapply it

Arishtat
Jan 2, 2011

Combat Pretzel posted:

A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon as this is deemed stable.

That's pretty neat. I look forward to it eventually showing up in the TrueNAS Core feature set.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Yaoi Gagarin posted:

i'd just wait for the correct heatsink, otherwise you'll have to clean off the thermal goop and reapply it

but it's just sitting there, staring at me. demanding I do something.



reddit-tier jokes aside, holy poo poo the Meshify 2 XL is the nicest case I've ever built in. The whole thing breaks apart without tools for access, but you do need a screwdriver to convert it to the storage configuration (pictured here). Sadly it only comes with 2 of the 14 trays needed to max it out, and they want $10 each for the rest of them, yikes. They're stamped sheet metal and a screw!

I haven't ordered the new drives because lol, the rest of the stuff wasn't supposed to get here until the 21st. Overall I'm thinking something like a 64-96 TB Z2 but I'm eyeing the deal threads to watch for anything really good.

The hardest part is going to be data migration. 12TB of my current NAS is welded into the current system via bcache on the boot SSD, so I can't move those drives without taking the whole thing offline. Theoretically you can undo bcache, but I'd prefer not to. So I'm really not sure. I can't really down them completely for a week to move stuff around...

the half-formed plan in my head right now is:
* bring up the new, empty array
* buy a second 40gbe DAC (I have another connectx-3 for it) so they're both on my 40gbe switch
* down the network share on the old machine
* export the drives via iscsi/nbd
* bring them up on the new machine as samba/nfs shares.
Doing it as raw block rather than as a filesystem gets rid of a lot of the overhead and lets the massively larger and faster RAM cache on the new box cut down on the IOPS required?
* sync to the new array from a snapshot while leaving the shares writable.
* offline everything to sync the new changes.
* export the shares from the new array

There are various issues with the obvious plan of "just put all the drives in the new box". First is that bcache fuckup. Second is that I'd have to buy a lot of extra drive trays @ $10/ea to mount them temporarily. Or... ugh, go across cases with the cables? Just asking for trouble.

I dunno. Opinions on a better option? I suppose it's only an additional $60 for the trays, and I will be filling them with more drives eventually. Overall migration would be about the same minus the network connection. Snapshot, sync, offline, sync changes, export from new zvol, etc.
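
The snapshot-then-catch-up sync in that plan could be as simple as rsync over the 40GbE link; a rough sketch with made-up paths:
code:
  # bulk copy from a read-only snapshot while the old shares stay live (paths are made up)
  rsync -aHAX --info=progress2 /mnt/oldnas-snapshot/ /tank/data/
  # then take the shares offline and do one final catch-up pass against the live tree
  rsync -aHAX --delete /mnt/oldnas/ /tank/data/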

tl;dr: moving a 4x4TB raid5 + 3x10TB raid5 to a ~6x16TB Z2 with the least downtime and risk.

ryanrs
Jul 12, 2011

Is the ARM SBC that was too slow to build OpenWrt fast enough to run Zoneminder? So far it seems to be working! It's a RockPro64 with a 1TB NVME drive.

Because of reasons, it seems to be best for Zoneminder to store its videos in a separate partition. I guess you can't just tell it "use no more than 150 GB"? Seems odd, but a lot of guides say to make a separate partition.

So I live booted an SD card, and used resize2fs and fdisk to shrink my original partition and make a new one, in place. It worked.
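
That in-place shrink boils down to roughly this sequence (device names and sizes are only illustrative):
code:
  # run from the live-booted SD card, against the unmounted NVMe root (names/sizes illustrative)
  e2fsck -f /dev/nvme0n1p1          # required before shrinking
  resize2fs /dev/nvme0n1p1 32G      # shrink the filesystem first
  fdisk /dev/nvme0n1                # then shrink partition 1 (keep its start sector) and add partition 2
  mkfs.ext4 /dev/nvme0n1p2          # fresh partition for Zoneminder's video storage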


But what if I get another camera or need to change the size of the video partition? Should I have moved to LVM, even for just a single SSD? That should make adjusting the partition size on the fly pretty easy (still need to shut down zoneminder and unmount the fs tho).
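
With LVM, growing the video volume later is basically a one-liner (VG/LV names here are hypothetical):
code:
  # grow a hypothetical zoneminder LV by 50G and resize the ext4 filesystem in one step
  lvextend -r -L +50G /dev/vg0/zoneminder
  # shrinking still means stopping Zoneminder and unmounting first
  umount /var/lib/zoneminder && lvreduce -r -L -50G /dev/vg0/zoneminder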

Should I back up my stuff and go through this all again to move to LVM? Anyone here done it on an ARM SBC running Armbian? I'm assuming it works...

Generic Monk
Oct 31, 2011

Combat Pretzel posted:

A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon as this is deemed stable.

Lmao, I added a SATA SSD to my pool for this, only to realise I hadn’t read the small print: you can’t remove it, and it’s now a single point of failure for the whole pool. Who knows, this might hit TrueNAS before either the drive dies or I nuke and rebuild the pool.

Kibner
Oct 21, 2008

Acguy Supremacy
Updated to the newest general release version of TrueNAS Scale and it does the two biggest things I’ve been waiting on: full Intel Arc GPU support, and ZFS ARC can now use more than half the system memory. So now my 128GB of RAM will see more usage.

Computer viking
May 30, 2011
Now with less breakage.

Generic Monk posted:

Lmao, I added a SATA SSD to my pool for this, only to realise I hadn’t read the small print: you can’t remove it, and it’s now a single point of failure for the whole pool. Who knows, this might hit TrueNAS before either the drive dies or I nuke and rebuild the pool.

If you can fit another SSD in there somehow (PCIe to M.2?), you should be able to turn it into a mirror with zpool attach; I think that works for all device types.
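
Something along these lines, with made-up pool and device names:
code:
  # attach a second SSD to the existing single-device special vdev, turning it into a mirror
  # (pool and device names are made up)
  zpool attach tank sdb nvme0n1
  zpool status tank    # wait for the resilver to finish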

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I moved my desktop/"NAS" into a Fractal Define 7XL last year; maxed out, I have slots for ~11 3.5" disks in the front stack and 2 2.5" disks on the back of the motherboard tray. It's fully loaded with 140mm fans, which is enough to hear, but sitting next to my legs under the desk it's barely noticeable. Running 8 x 12TB disks currently, plus the system SSD. And yes, the trays are ridiculous, but the rubber grommets and standoff screws dampen drive noise a lot.

As far as drive noise goes, I've got a mix of 4 WD Reds (EMAZ and EMFZ, don't know the difference) and 4 new HGST DC HC520 disks, and the HGSTs are not any louder than the Reds, even being 7200 vs 5400 rpm. Temps are right in line as well, 29-32°C across the whole stack.

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.

Zorak of Michigan posted:

Why does your NAS use so many paper towels? Leaky bit bucket?
Let's be honest, 50% of all NAS usage is for porn backup...

movax
Aug 30, 2008

Kibner posted:

Updated to the newest general release version of TrueNAS Scale and it does the two biggest things I’ve been waiting on: full Intel Arc GPU support, and ZFS ARC can now use more than half the system memory. So now my 128GB of RAM will see more usage.

The latter part (ZFS ARC usage) has always been a tunable though, right? Does it just come with a different default out of the box now?

Kibner
Oct 21, 2008

Acguy Supremacy

movax posted:

The latter part (ZFS ARC usage) has always been a tunable though, right? Does it just come with a different default out of the box now?

It was tunable but could result in system stability issues, iirc. I found this after a brief search: https://github.com/openzfs/zfs/commit/518b4876022eee58b14903da09b99c01b8caa754

I believe that issue has been fixed (either as part of kernel or part of zfs, I don't know) but Scale now uses like 90% of my RAM.
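
On Linux the cap is the zfs_arc_max module parameter; setting it by hand looks roughly like this (the value is just an example, in bytes):
code:
  # runtime change (example: 96 GiB)
  echo 103079215104 > /sys/module/zfs/parameters/zfs_arc_max
  # persistent across reboots
  echo "options zfs zfs_arc_max=103079215104" > /etc/modprobe.d/zfs.conf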

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense

Pablo Bluth posted:

Let's be honest, 50% of all NAS usage is for porn backup...

If I had, like, a really large directory of vacation photos spanning many, many years that are in all sorts of different folders with essentially no organizational structure, could I use AI or something like that to sort that out on a NAS?

Thanks Ants
May 21, 2004

#essereFerrari


If your EXIF data is intact, in terms of correct timestamps on the pictures and maybe GPS tags, then you probably don't need any AI wizardry.
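
For instance, exiftool can shuffle a whole tree into date folders from the EXIF timestamps; a sketch with made-up paths:
code:
  # move photos into Year/Month folders based on their EXIF capture date (paths are made up)
  exiftool '-Directory<DateTimeOriginal' -d sorted/%Y/%m -r /mnt/photos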

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

movax posted:

The latter part (ZFS ARC usage) has always been a tunable though, right? Does it just come with a different default out of the box now?
Yeh. I've been running 52GB out of 64GB since Bluefin just fine.

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense


I don't seem to have EXIF data, and it's in folders that are named pseudo-randomly like this. I must have used some app to organize them a million years ago, but many folders have either hundreds of photos or just 1 photo.

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.

Nolgthorn posted:

If I had, like, a really large directory of vacation photos spanning many, many years that are in all sorts of different folders with essentially no organizational structure, could I use AI or something like that to sort that out on a NAS?
Lightroom has face recognition, so you could use something like that to organise by pornstar friends and family.

Nolgthorn
Jan 30, 2001

The pendulum of the mind alternates between sense and nonsense
I must have quoted the wrong post before.

Computer viking
May 30, 2011
Now with less breakage.

Nolgthorn posted:

I must have quoted the wrong post before.

It did seem very tongue in cheek.

Or whatever you're into, I'm not judging.

webcams for christ
Nov 2, 2005

webcams for christ posted:

okay so I found a good deal on an IT-Mode LSI 9300-16i.

my local marketplace seems to have a promising deal on a lot of Seagate Exos X18 SATA drives with less than 12k hours on them and warranties through 2026.

could I throw them in something like this HDD enclosure or could I repurpose something like this SAS case?

what would be the best way to get them all wired up in a cost-effective enclosure?

cross-posting here. I barely know what I'm doing

Computer viking
May 30, 2011
Now with less breakage.

webcams for christ posted:

cross-posting here. I barely know what I'm doing
Throwing them in a simple enclosure works fine; you just need some sort of fan-out cable. SAS typically uses a single plug for power and data, so you need to hook it up with power cables - try to get SATA power, because plugging 8 or more molex power plugs into low-quality wiggly connectors is deeply frustrating. Alternatively, there are SATA fan-out cables (like this) that sidestep that entire problem.

That backplane should work - I think it has mini-SAS/SFF-8087 connectors, which are standard enough. The biggest problems will be physical fit (what are you putting it in?) and powering it - is that a PCIe power connector, or some weird and wonderful HPE design? Maybe the ever fun "same pin layout, different voltages"? (Though I think those connectors are usually keyed with square vs D-shaped pins to keep you from doing anything too destructive.)

The 9300-16i apparently has SFF-8643 ("Mini-SAS HD") connectors, so for your cabling needs you need either SFF-8643 fanout cables or SFF-8643 to SFF-8087 cables depending on what you go for.

Huge caveat: I'm partially guessing here and you should wait a bit for people to call out my mistakes before trusting this.

Computer viking fucked around with this message at 23:07 on May 14, 2024

Internet Explorer
Jun 1, 2005





Nolgthorn posted:

I must have quoted the wrong post before.

lol

ryanrs
Jul 12, 2011

ryanrs posted:

should I try LVM

lol it worked

First I installed lvm2, which, among other things, rewrites your initramfs to teach it about LVM stuff. It's important to do this first.

Then I booted from a fresh Armbian SD card and used resize2fs to scrunch down my existing NVMe root filesystem (only a few gigs), then rewrote the partition table, careful not to change the start sector of the first partition. This freed up 95% of the SSD, which I turned into an LVM PV. Then I dd'ed the blocks to a new LV, and used tune2fs to increment the UUID of the old root partition.

Swapped back to my normal SD card, and it found the root filesystem with the old UUID in the new LV, and booted right up. If it had failed, I would have decremented the original partition UUID, which probably(?) would have got me back to booting off the original partition.
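
Condensed, the whole dance looks something like this (device, VG, and size values are hypothetical, with tune2fs -U random standing in for the manual UUID bump):
code:
  # from the fresh Armbian SD card, with the NVMe root unmounted (names/sizes hypothetical)
  e2fsck -f /dev/nvme0n1p1
  resize2fs /dev/nvme0n1p1 8G               # scrunch the existing root fs down
  fdisk /dev/nvme0n1                        # shrink partition 1 (same start sector), add partition 2
  pvcreate /dev/nvme0n1p2
  vgcreate vg0 /dev/nvme0n1p2
  lvcreate -n root -L 16G vg0
  dd if=/dev/nvme0n1p1 of=/dev/vg0/root bs=4M status=progress
  tune2fs -U random /dev/nvme0n1p1          # change the old copy's UUID so boot finds the LV copy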

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Generic Monk posted:

Lmao, I added a SATA SSD to my pool for this, only to realise I hadn’t read the small print: you can’t remove it, and it’s now a single point of failure for the whole pool. Who knows, this might hit TrueNAS before either the drive dies or I nuke and rebuild the pool.

oof WTF. Why is it a single point of failure when it's just a mirror of the on-disk metadata? Should be as trivial as "poof, gone, now we read from the spinners".

i don't even understand the point of a critical SPoF 'mirror'.

I thought bcache's "Hi I'm a write-through cache but if I die I delete 100tb of data teehee" was bad.

Yaoi Gagarin
Feb 20, 2014

afaik it is not a mirror of the metadata; when you make a special vdev, it holds all the metadata.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Yaoi Gagarin posted:

afaik it is not a mirror of the metadata, when you make a special vdev it has all the metadata.

we were talking about this:

Combat Pretzel posted:

A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon as this is deemed stable.

but maybe Generic Monk was talking about the existing implementation, which is lmao levels of bad.

BlankSystemDaemon
Mar 13, 2009



Generic Monk posted:

Lmao, I added a SATA SSD to my pool for this, only to realise I hadn’t read the small print: you can’t remove it, and it’s now a single point of failure for the whole pool. Who knows, this might hit TrueNAS before either the drive dies or I nuke and rebuild the pool.
With ZFS, there's nothing preventing you from replacing it with an NVMe SSD using the zpool replace command.
ZFS doesn't give a gently caress about what driver you're using, nor what the disk is.
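
i.e. roughly (pool and device names are placeholders):
code:
  # swap the SATA SSD special device for an NVMe one; ZFS resilvers the replacement automatically
  zpool replace tank sdb nvme0n1
  zpool status tank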

Kibner posted:

It was tunable but could result in system stability issues, iirc. I found this after a brief search: https://github.com/openzfs/zfs/commit/518b4876022eee58b14903da09b99c01b8caa754

I believe that issue has been fixed (either as part of kernel or part of zfs, I don't know) but Scale now uses like 90% of my RAM.
That reminds me, I wonder how the work to integrate ARC into FreeBSD's unified buffer cache is going.

That'll be a pretty big advantage, if it turns out to be possible - something that I'm not quite sure of.
I think it's possible on FreeBSD and Illumos, but it doesn't seem likely to be possible on Linux.

Harik posted:

we were talking about this:

but maybe Generic Monk was talking about the existing implementation, which is lmao levels of bad.
The PR adds the ability to also write allocation classes (i.e. both metadata and dedup data) onto the pool's regular vdevs, instead of only on the vdev used by the allocation classes.

I'm interested to learn how Generic Monk managed to learn about the 'special' vdev, without learning that it should always have its own redundancy via mirroring (or even n-way mirroring), though.
All the documentation I know of makes a big deal out of making it very explicit.

BlankSystemDaemon fucked around with this message at 12:16 on May 15, 2024

webcams for christ
Nov 2, 2005

Computer viking posted:

Huge caveat: I'm partially guessing here and you should wait a bit for people to call out my mistakes before trusting this.

Cool. The sales rep for the HP backplane was able to answer my questions pretty quickly, so I went ahead and pulled the trigger on it, along with the necessary cables. Thanks!
