Wibla
Feb 16, 2011

I only care about transcoding enough to have a P400 in my TrueNAS Scale box, assigned to the Plex container it runs - mostly because I already had the GPU in the machine.

movax: that's a fun throwback - Norco/Rosewill 20 bay case?

I have* a very similar build, but with a Supermicro X8DT motherboard and two s1366 Xeons...


*It hasn't been in use for a while; it used to hold 11x4TB in RAID6 (mdadm). I wiped the drives and pulled them out a while back, but haven't bothered actually taking the machine apart yet.

Wibla
Feb 16, 2011

That's the one! I tried to fit 8TB drives into mine, but apparently newer high-capacity drives are taller, so that was a no-go :sigh:

I never really had any problems with maintenance. Debian Linux + mdadm raid6 + XFS ... it just chugged along. Replaced two drives and added one over the life of the system, but that was entirely painless with so many drive bays available.
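
For reference, replacing a drive and growing the array were both basically one-liners - a rough sketch from memory, with md0, the sdX names and the mount point as placeholders:

code:
# replace a failed member (sdX = failed drive, sdY = replacement)
mdadm /dev/md0 --fail /dev/sdX --remove /dev/sdX
mdadm /dev/md0 --add /dev/sdY

# grow the RAID6 onto an extra drive, then expand the XFS filesystem on top
mdadm /dev/md0 --add /dev/sdZ
mdadm --grow /dev/md0 --raid-devices=12
xfs_growfs /srv/storage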

Wibla
Feb 16, 2011

What kind of expanders did you run? I have one of those HP expanders and it never gave me any problems beyond being stuck at SATA1 speeds (iirc). That's using 1.5TB, 4TB and 8TB drives, though...

Wibla
Feb 16, 2011

priznat posted:

Are the 7000-series Epycs at a pretty good price now? Just thinking about my own NAS upgrade, but I'd probably rather go with something lower-TDP if possible.

I'd go for a relatively new, cheap i5 or similar. There are some caveats if you want/need ECC, though. QSV is amazing for transcoding.

Ryzens are a lot better power-wise - when I bench-tested my Ryzen 3700X on a cheap B550 board with 32GB of RAM and an NVMe drive, it pulled 22W from the wall idling in Proxmox.

Wibla
Feb 16, 2011

Yeah the caveat is that you get to spend a fair chunk more money :v:

(At least over here in :norway: finding boards that support ECC is a pain in the rear end, and usually quite expensive)

Wibla
Feb 16, 2011

Computer viking posted:

My boyfriend is currently using an AM4 Ryzen on an ASRock desktop board with ECC RAM; ASRock unofficially supports ECC and it does seem to be reported correctly to the OS. Though I have no idea where he found the ECC sticks - it's not like Komplett has a selection of them.

(Intel, though - hah, no. Maybe on ebay from the great outlands.)

Yeah - I was referring to Intel :v:

I guess it might be time to grab an ASRock board and a cheap 5000-series CPU to replace my E5-2670 v3?
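
If anyone wants to sanity-check that ECC is actually active (and not just that the sticks have the extra chips), this is roughly what I'd look at on Linux - edac-util comes from the edac-utils package, so that last bit is optional:

code:
# what the memory controller reports - look for "Error Correction Type"
dmidecode -t memory | grep -i "error correction"

# is the EDAC driver loaded and reporting?
dmesg | grep -i edac
edac-util -v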

Wibla
Feb 16, 2011

Computer viking posted:

It's not a bad platform, though it's annoying that you have to choose between ECC and an embedded GPU; having both is apparently a Ryzen Pro feature. Not an issue for us, since we use it in a gaming PC - but annoying for a server. I guess you could throw in a cheap Intel Arc; I think they do transcoding decently well while being small and low power? (And for the sheer novelty of a reverse AMD/Intel setup.)

I have a P400 that I can use, so that's not a problem. But I need a SAS HBA and a 10GbE NIC as well; those might be tough to fit with the increasingly gimped PCIe layouts these boards come with...

Wibla
Feb 16, 2011

BlankSystemDaemon posted:

It's targeted to be in v2.3 and is expected to be out in a year's time or so.

As for melting down performance-wise, I've had pools get north of 95% full and still be capable of saturating NFS over 1Gbps, so I really don't know - ultimately, it depends on the fragmentation of the data.

And speaking of that, the value you see in zpool list is not how fragmented the data is, it's how fragmented the free space is - i.e. when you write data sequentially and then later delete part of it, there are going to be small free segments which ZFS can only use once it's no longer capable of writing all its data sequentially.

So is there a way to show data fragmentation?
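
For anyone following along, the column being discussed is FRAG in zpool list, and it's purely a free-space metric - something like this shows it alongside how full the pool is (pool name is just a placeholder):

code:
# FRAG = free-space fragmentation, CAP = how full the pool is
zpool list -o name,size,allocated,free,fragmentation,capacity tank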

Wibla
Feb 16, 2011

I would not worry about Linux having worse hardware support than FreeBSD.

Wibla
Feb 16, 2011

Those should be fine, lots of endurance. I've got a few of their cheaper 120-240GB drives and only one died, after hard use.

SSDs last a lot longer if you overprovision them a bit - I pulled this Kingston A2000 1TB out of a server with a heavy database load, but I had left about 20% of it unpartitioned and that seems to have helped a fair bit. The A2000 1TB is rated for 720 TBW.
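
The overprovisioning itself is nothing fancy - just partition less than the whole drive and leave the rest untouched. A rough sketch, with nvme0n1 and the 80% figure as placeholders; the blkdiscard first matters because the controller only benefits from space it knows is free:

code:
# tell the controller the whole drive is free (wipes everything!)
blkdiscard /dev/nvme0n1

# new GPT, but only use ~80% of the drive; the rest stays unpartitioned
parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart primary 0% 80%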

Wibla
Feb 16, 2011

Twerk from Home posted:

Hey ZFS long-timers, I've got an offsite storage box with 36 disks that I want to work and not lose sleep over. I also need to use at least ~65% of the physical raw capacity, so RAID 10 is out. Suggested topologies for vdevs? The naive immediate one seems to be 4x9-disk raidz2, but I could also get greedy with 3x12-disk raidz2, or get smarter with hot spares and do 3x11 raidz2 with 3 hot spares.

Thoughts? Does ZFS handle odd striping across disks like this well, or do people stick to 4/6/8 disk vdevs for a reason and I should do something much more aligned with common usage, like 4x8 raidz2 with 4 hot spares?

Well, it looks like striping does become a problem. Time to think about this a moment more: https://jro.io/capacity/.

I'm not married to ZFS for this yet, but I specifically preferred an alternative storage tech to the Ceph cluster that this will be the DR environment for.

3x11 RAIDZ2 with 3 hotspares would be my suggestion.
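
Roughly what that layout looks like at creation time - a sketch only, with sdX-style placeholder names (use /dev/disk/by-id paths on a real build):

code:
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk \
  raidz2 sdl sdm sdn sdo sdp sdq sdr sds sdt sdu sdv \
  raidz2 sdw sdx sdy sdz sdaa sdab sdac sdad sdae sdaf sdag \
  spare sdah sdai sdaj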

Wibla
Feb 16, 2011

Can also recommend surge protectors :v:

Are you sure it's not just the PSU?

Wibla
Feb 16, 2011

withoutclass posted:

Anecdotal but I've been running shucked Easy Store drives for probably 1.5-2 years now without any issues.

Same, both 8TB and 14TB.

Wibla
Feb 16, 2011

BlankSystemDaemon posted:

The headache of dealing with all the bizarre failure modes of drives that fail enterprise QA means I'm not interested in it - because even if they work fine, if they start exhibiting trouble, it might be the sort of trouble that's hard to root-cause without an extensive amount of time and effort.

Is this something you've actually seen? Because this smells a lot like a strawman :v:

I feel pretty comfortable running my shucked 14TB drives (bought from a Chia farmer, no less) in RAIDZ2, but I also follow the 3-2-1 backup strategy.

Wibla
Feb 16, 2011

BlankSystemDaemon posted:

Did you look at the video of Brendan Gregg shouting at disks in the datacenter?
That's what minor vibrations can cause. Now imagine there's a disk causing some issue, but you can't examine it because the bug is in the firmware, and even finding it is gonna take many hours of collecting stats that then have to be analyzed.

I've seen things you people wouldn't believe... Storage servers with thousands of disks causing the Halon siren to go off... I watched dtrace IO histograms glitter in the dark on console screens in production. All those moments will be lost in time, like tears in rain... Time to back up.

I'd like to hear about actual things that you've experienced in this regard, though, not a reference to a video that is at best tenuously related to the matter at hand.

Nice blade runner quote adaptation though :sun:

Wibla
Feb 16, 2011

Someone spreading FUD in the NAS thread? Say it isn't so!

Wibla
Feb 16, 2011

Next build I think I'll 3D print brackets and build in a Meshify compact or something similar.

Wibla
Feb 16, 2011

Epyc would make sense for more pci-e lanes, but do you really need them? The 5900X is a beast of a CPU and will handle a lot of load.

Wibla
Feb 16, 2011

If you're running RAID1 with mdadm now, you can grow that to RAID5.

docs

Do not - I repeat - do not bother with RAID levels 5 or 6 on btrfs.

E: I've usually done monthly scrubs on RAID5 / RAID6 arrays, but you can also tweak mdadm settings to reduce the performance impact.
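
A rough sketch of both the RAID1-to-RAID5 conversion and the scrub/throttle knobs mentioned above - md0, sdc and the speed number are placeholders:

code:
# convert the 2-disk RAID1 to RAID5, then add a third disk and reshape
mdadm --grow /dev/md0 --level=5
mdadm /dev/md0 --add /dev/sdc
mdadm --grow /dev/md0 --raid-devices=3
# (then grow the filesystem on top once the reshape finishes)

# kick off a scrub manually - same thing the monthly checkarray cron job does
echo check > /sys/block/md0/md/sync_action

# cap resync/scrub speed (KB/s) to reduce the performance impact
echo 50000 > /proc/sys/dev/raid/speed_limit_max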

Wibla fucked around with this message at 01:35 on Feb 25, 2024

Wibla
Feb 16, 2011

Double parity reduces the risk of data loss from drive failure quite substantially. Having experienced a second drive fail during a 16-drive (Seagate LP 1.5TB :aaa: ) RAID6 rebuild in the past, I am fully prepared to believe that statistic.

In any event, backups are king. RAID is not backup. RAID mainly helps with uptime.

ZFS muddies those waters a bit because you have snapshots, encryption, etc. built in, which make the data a lot more resilient to attack (as long as you know what you're doing), but in the event of catastrophic drive failures or a cryptolocker attack, you're still most likely looking at restoring from backups. Plan accordingly.
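
To illustrate the snapshot angle - a minimal sketch with made-up dataset and host names: a snapshot is read-only from the client side, a hold stops it being casually destroyed, and send/recv still gets a copy off the box, because snapshots on the same pool are not backups:

code:
# read-only point-in-time copy - a cryptolocker writing over SMB can't touch it
zfs snapshot tank/data@2024-02-25

# put a hold on it so it can't be destroyed without releasing the hold first
zfs hold keep tank/data@2024-02-25

# incremental send to another machine - this is the actual backup part
zfs send -i tank/data@2024-02-18 tank/data@2024-02-25 | ssh backuphost zfs recv backup/data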

Wibla
Feb 16, 2011

Newer PSUs are generally good enough to handle a short, transient spike like drives spinning up. I haven't worried about it for the last 15+ years :haw:

Wibla
Feb 16, 2011

I thought that was BSD's job :smith:

Multiple vdevs in one pool will lock you into a certain drive/pool layout though - be aware of that.
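
What "locked in" means in practice, sketched with placeholder names: once the pool is built from, say, 6-disk raidz2 vdevs, you grow it a whole vdev at a time (ideally the same shape), not a disk at a time:

code:
# add another 6-disk raidz2 vdev to an existing pool
zpool add tank raidz2 sdg sdh sdi sdj sdk sdl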

Wibla
Feb 16, 2011

Agrikk posted:

Do I ditch all of this as well as my 3-node VMware cluster and create a 4-node Proxmox cluster with Ceph?

I'd do this. But you need SSDs with power loss protection :v:

Wibla
Feb 16, 2011

School of How posted:

I was messing with TrueNAS for about an hour last night and wasn't able to get a single thing working.

Their whole Apps ecosystem is extremely poo poo.

I might just do proxmox + ZFS + LXC for samba the next time around, then run turnkeylinux containers for plex / qbittorrent etc.
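
The core of that setup is just a dataset on the host bind-mounted into the Samba container - a sketch with a made-up dataset name and container ID:

code:
# dataset on the host pool
zfs create tank/media

# bind-mount it into LXC container 101 (Samba then just shares /srv/media)
pct set 101 -mp0 /tank/media,mp=/srv/media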

Wibla
Feb 16, 2011

That's a solid 600mbit/s though? :v:

Wibla
Feb 16, 2011

Combat Pretzel posted:

A PR was just opened on the OpenZFS GitHub that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool too, so that in case your special vdev craps out (for reasons, or simply from lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata as soon as this is deemed stable.

That's brilliant, really :allears:
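
For context, adding a special vdev today looks roughly like this (device and dataset names are placeholders) - and until that failsafe lands you really want it mirrored, since losing the special vdev takes the pool with it:

code:
# mirrored metadata special device
zpool add tank special mirror nvme0n1 nvme1n1

# optionally route small blocks to it as well
zfs set special_small_blocks=64K tank/data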
