|
I only care about transcoding enough to have a P400 in my TrueNAS Scale box, assigned to the Plex container it runs - mostly because I already had the GPU in the machine. movax: that's a fun throwback - Norco/Rosewill 20-bay case? I have* a very similar build, but with a Supermicro X8DT motherboard and two s1366 Xeons... *It hasn't been in use for a while; it used to hold 11x4TB in RAID6 (mdadm). I wiped the drives and pulled them out a while back, but haven't bothered actually taking the machine apart yet.
|
# ¿ Dec 20, 2023 18:49 |
|
That's the one! I tried to fit 8TB drives into mine, but apparently newer high-capacity drives are taller, so that was a no-go. I never really had any problems with maintenance: Debian + mdadm RAID6 + XFS just chugged along. I replaced two drives and added one over the life of the system, but that was entirely painless with so many drive bays available.
|
# ¿ Dec 20, 2023 19:25 |
|
What kind of expanders did you run? I have one of those HP expanders and it never gave me any problems beyond being stuck at SATA1 speeds (iirc). That's using 1.5TB, 4TB and 8TB drives, though...
|
# ¿ Dec 20, 2023 19:32 |
|
priznat posted:are the 7000 series epycs pretty good price now? Just thinking about my own NAS upgrade, but probably would rather go with something lower TDP if possible I'd go for a relatively new, cheap i5 or similar, with some caveats if you want/need ECC. QSV is amazing for transcoding. Ryzens are a lot better power-wise - when I bench-tested my Ryzen 3700X on a cheap B550 board with 32GB RAM and an NVMe drive, it pulled 22W from the wall idling in Proxmox.
|
# ¿ Dec 20, 2023 23:22 |
|
Yeah, the caveat is that you get to spend a fair chunk more money (at least over here, finding boards that support ECC is a pain in the rear end, and they're usually quite expensive)
|
# ¿ Dec 20, 2023 23:29 |
|
Computer viking posted:My boyfriend is currently using an AM4 Ryzen on an ASRock desktop board with ECC RAM, they unofficially support ECC and it does seem to be reported correctly to the OS. Though I have no idea where he found the ECC sticks, it's not like Komplett has a selection of them. Yeah - I was referring to Intel. I guess it might be time to grab an ASRock board and a cheap 5000-series CPU to replace my E5-2670 v3?
|
# ¿ Dec 21, 2023 00:48 |
|
Computer viking posted:It's not a bad platform, though it's annoying that you have to choose between ECC and an embedded GPU; it's apparently a Ryzen Pro feature to have both. Not an issue for us, since we use it in a gaming PC - but annoying for a server. I guess you could throw in a cheap Intel Arc, I think they do transcoding decently well while being small and low power? (And for the sheer novelty of a reverse AMD/Intel setup.) I have a P400 that I can use, so that's not a problem. But I need a SAS HBA and a 10GbE NIC as well; that might be tough to fit in the increasingly gimped PCIe layouts these boards come with...
|
# ¿ Dec 21, 2023 01:34 |
|
BlankSystemDaemon posted:It's targeted to be in v2.3 and is expected to be out in a years time or so. So is there a way to show data fragmentation?
|
# ¿ Dec 21, 2023 17:05 |
|
I would not worry about Linux having worse hardware support than FreeBSD.
|
# ¿ Dec 23, 2023 00:20 |
|
Those should be fine - lots of endurance. I've got a few of their cheaper 120-240GB drives and only one died, after hard use. SSDs last a lot longer if you overprovision them a bit: I pulled this Kingston A2000 1TB out of a server with heavy database load, but I left about 20% unpartitioned and that seems to have helped a fair bit. The A2000 1TB is rated for 720 TBW.
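To put that endurance rating in perspective, here's a quick back-of-the-envelope sketch; the 50 GB/day write rate is just an assumed example workload, not a figure measured from my server:

```python
# Rough SSD lifetime estimate from the rated endurance (TBW).
# 720 TBW is the Kingston A2000 1TB rating mentioned above;
# the daily write volume is an assumed example, not a measurement.
RATED_TBW = 720          # terabytes written (manufacturer rating)
WRITES_GB_PER_DAY = 50   # assumed sustained write load

days = RATED_TBW * 1000 / WRITES_GB_PER_DAY
print(f"~{days / 365:.0f} years at {WRITES_GB_PER_DAY} GB/day")
```

Even a fairly write-heavy homelab won't come close to wearing that rating out before the drive is obsolete.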
|
# ¿ Dec 23, 2023 22:38 |
|
Twerk from Home posted:Hey ZFS long-timers, I've got an offsite storage box with 36 disks that I want to work and not lose sleep over. I also need to use at least ~65% of the physical raw capacity, so RAID 10 is out. Suggested topologies for vdevs? The naive immediate one seems to be 4x9 disk raidz2, but I could also get greedy with 3x12 disk raid z2 or get smarter with some hot spares with 3x11 raid z2 with 3 hot spares. 3x11 RAIDZ2 with 3 hotspares would be my suggestion.
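For comparison, the usable-capacity fractions of those layouts (counting parity overhead only, ignoring ZFS metadata and padding) work out like this:

```python
# Usable fraction of raw capacity for each proposed 36-disk layout.
# Each RAIDZ2 vdev loses two disks to parity; hot spares add nothing.
def usable_fraction(vdevs, disks_per_vdev, spares=0, parity=2):
    data_disks = vdevs * (disks_per_vdev - parity)
    total_disks = vdevs * disks_per_vdev + spares
    return data_disks / total_disks

layouts = {
    "4x9 raidz2":             (4, 9),
    "3x12 raidz2":            (3, 12),
    "3x11 raidz2 + 3 spares": (3, 11, 3),
}
for label, args in layouts.items():
    print(f"{label}: {usable_fraction(*args):.1%}")
```

All three clear the ~65% requirement; the hot-spare layout gives up some capacity in exchange for resilvers starting immediately on an unattended offsite box.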
|
# ¿ Dec 27, 2023 16:46 |
|
Can also recommend surge protectors. Are you sure it's not just the PSU?
|
# ¿ Jan 2, 2024 17:02 |
|
withoutclass posted:Anecdotal but I've been running shucked Easy Store drives for probably 1.5-2 years now without any issues. Same, both 8TB and 14TB.
|
# ¿ Jan 11, 2024 19:31 |
|
BlankSystemDaemon posted:The headache of dealing with all the bizarre failure modes of drives that fail enterprise QA mean I'm not interested in it - because even if they work fine, if they start exhibiting trouble, it might be the sort of trouble that can be hard to rootcause without an extensive amount of time and effort. Is this something you've actually seen? Because this smells a lot like a strawman. I feel pretty comfortable running my shucked 14TB drives (bought from a Chia farmer, no less) in RAIDZ2, but I also follow the 3-2-1 backup strategy.
|
# ¿ Jan 12, 2024 00:07 |
|
BlankSystemDaemon posted:Did you look at the video of Brendan Gregg shouting at disks in the datacenter? I'd like to hear about actual things you've experienced in this regard, though, not a reference to a video that is at best tenuously related to the matter at hand. Nice Blade Runner quote adaptation, though.
|
# ¿ Jan 12, 2024 00:36 |
|
Someone spreading FUD in the NAS thread? Say it isn't so!
|
# ¿ Jan 12, 2024 16:50 |
|
Next build I think I'll 3D print brackets and build in a Meshify compact or something similar.
|
# ¿ Jan 30, 2024 07:10 |
|
Epyc would make sense for more PCIe lanes, but do you really need them? The 5900X is a beast of a CPU and will handle a lot of load.
|
# ¿ Feb 18, 2024 17:25 |
|
If you're running RAID1 with mdadm now, you can grow that to RAID5. docs Do not - I repeat - do not bother with RAID levels 5 or 6 on btrfs. E: I've usually done monthly scrubs on RAID5/RAID6 arrays, but you can also tweak mdadm settings to reduce the performance impact. Wibla fucked around with this message at 01:35 on Feb 25, 2024 |
# ¿ Feb 25, 2024 01:33 |
|
Double parity reduces the risk of data loss from drive failure quite substantially. Having experienced a second drive failing during a 16-drive (Seagate LP 1.5TB) RAID6 rebuild in the past, I am fully prepared to believe that. In any event, backups are king: RAID is not backup, RAID mainly helps with uptime. ZFS muddies those waters a bit because you have snapshots, encryption etc. built in, which makes the data a lot more resilient to attack (as long as you know what you're doing), but in the event of catastrophic drive failures or a cryptolocker attack, you're still most likely looking at restoring from backups. Plan accordingly.
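To illustrate how much that second parity drive buys you during a rebuild, here's a toy binomial model. The 2% per-drive failure chance over the rebuild window is a made-up number purely for illustration, and the model assumes independent failures - which same-batch drives in a real array notoriously violate:

```python
from math import comb

# Toy model: after one drive dies in a 16-drive array, 15 survivors
# remain. Assume each independently fails during the rebuild window
# with probability p (an illustrative, made-up figure).
p, survivors = 0.02, 15

def p_at_least(k, n, p):
    """Probability of at least k failures among n drives."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"single parity dies with 1 more failure: {p_at_least(1, survivors, p):.1%}")
print(f"double parity needs 2 more failures:    {p_at_least(2, survivors, p):.1%}")
```

Roughly 26% vs 3.5% with these made-up numbers - nearly an order of magnitude, which matches the intuition that the second parity drive matters most during rebuilds.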
|
# ¿ Feb 25, 2024 22:28 |
|
Newer PSUs are generally good enough to handle a short, transient spike like drives spinning up. I haven't worried about it for the last 15+ years.
|
# ¿ Mar 1, 2024 18:02 |
|
I thought that was BSD's job. Multiple vdevs in one pool will lock you into a certain drive/pool layout though - be aware of that.
|
# ¿ Mar 28, 2024 04:23 |
|
Agrikk posted:Do I ditch all of this as well as my 3-node vmware cluster and create a 4-node ProxMox cluster with Ceph? I'd do this. But you need SSDs with power loss protection.
|
# ¿ Apr 15, 2024 06:56 |
|
School of How posted:I was messing with TrueNAS for about an hour last night and wasn't able to get a single thing working. Their whole Apps ecosystem is extremely poo poo. I might just do Proxmox + ZFS + LXC for Samba the next time around, then run turnkeylinux containers for Plex / qbittorrent etc.
|
# ¿ Apr 24, 2024 17:26 |
|
That's a solid 600 Mbit/s though?
|
# ¿ May 4, 2024 23:31 |
|
Combat Pretzel posted:A PR was just opened on the OpenZFS Github that'll enable a failsafe mode on metadata special devices, meaning all writes get passed through to the pool, too, so that in case that your special vdev craps out (for reasons, or simply lacking redundancy), your pool will be safe anyway. Guess I'll be using a spare NVMe slot for metadata, as soon this is deemed stable. That's brilliant, really.
|
# ¿ May 13, 2024 21:09 |