Wibla
Feb 16, 2011

Last I checked, in Norway it was cheaper (per TB) to just buy WD HC550 18TB drives than to buy some external drive. YMMV though.

How many drives do you have in that array?

Wibla
Feb 16, 2011

My new (to me) fileserver/vm host turned up today!

Going to do some transcode testing with plex in a test VM using the Quadro P400, and test some other stuff before I actually move the drives + HBA(s) over, but it looks promising so far.

Wibla
Feb 16, 2011

Motronic posted:

What is it that you need for Plex transcoding on a GPU to work? I thought I looked into this before and my 2670s did not support it. I'd love to be wrong. (yours are marked as "v3" and mine are "0", so that might be the difference)

You need Quick Sync (on an Intel iGPU) or a supported NVIDIA GPU.
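A quick way to sanity-check what you've got, assuming a Linux box with ffmpeg installed:
code:
# Intel Quick Sync: the iGPU should expose a render node
ls /dev/dri/renderD*
# NVIDIA: check that your ffmpeg build includes NVENC encoders
ffmpeg -hide_banner -encoders | grep nvenc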

Mr. Crow posted:

You can definitely use a gpu to transcode if you're paying for it in plex (:lol:)

https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/

Alternatively, Jellyfin is completely free and open source and does it too: https://jellyfin.org/docs/general/administration/hardware-acceleration.html

Also it's faster and snappier with large libraries.

Neither will work in a container without jumping through hoops.

I paid €74.99 for lifetime plex pass sometime in the dark ages, so that's covered :haw:

I'm not looking forward to messing with passthrough to make it work, but apparently it's not that painful with Quadro cards?
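If it comes to it, the usual Linux/KVM recipe is enabling the IOMMU (intel_iommu=on on the kernel command line) and binding the card to vfio-pci so the host driver leaves it alone. A minimal sketch - 10de:1cb3 should be the P400's vendor:device ID, but check yours with lspci -nn:
code:
# /etc/modprobe.d/vfio.conf
# (IDs come from lspci -nn; 10de:1cb3 assumed here)
options vfio-pci ids=10de:1cb3
softdep nvidia pre: vfio-pci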

Wibla
Feb 16, 2011

New box is up and running (sans drives for now), testing 10gbit performance between two machines using a DAC cable and woah :haw:

code:
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.05  sec  10.9 GBytes  9.35 Gbits/sec                  receiver
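That's iperf3, if anyone wants to repeat it - server on one box, client on the other (the address is just an example):
code:
# receiving machine
iperf3 -s
# sending machine
iperf3 -c 10.0.0.2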
Got GPU passthrough running as well, confirmed that the P400 works in plex inside a VM.
code:
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1543      C   ...diaserver/Plex Transcoder      552MiB |
+-----------------------------------------------------------------------------+
Idle power draw with no drives is around 84W after I turned off all the dumb auto-overclocking stuff. It'll probably be around 150W after I add drives.

Wibla
Feb 16, 2011

Torrents are hard on CPU/RAM and disk access. Not really surprising that it struggles.

Wibla
Feb 16, 2011

I just snipped the +3.3V wire going to my SATA power connectors (newer shucked drives treat +3.3V on that pin as power-disable and won't spin up) :haw:

Wibla
Feb 16, 2011

SSD for downloading, HDD for seeding :v:

Wibla
Feb 16, 2011

The world is trying to tell you to just go with 10gbe :sun:

Wibla
Feb 16, 2011

BlankSystemDaemon posted:

And by 10GbE you mean using SFP+, not RJ45, hopefully?

RJ45 10GbE takes an absolutely dumb amount of power.

Using SFP+, yes. Like god intended.

Wibla
Feb 16, 2011

I'm still using mdraid + XFS and it just works with no drama... :shrug:

Wibla
Feb 16, 2011

I have a very early ConnectX 10G card in my Win10 PC and it works fine.

Wibla
Feb 16, 2011

ilkhan posted:

I'm getting a bunch of data and I/O errors on my DS413 Synology, and have ordered a replacement 1522+ model and a fresh set of 4x8TB drives (3 in RAID 5, plus a warm spare probably. Don't need more space than that.)

What's the best way to transfer the data? Gbit is going to take a while, but I can't see any real alternative at this point.

A hot spare will do nothing for you when the second drive fails during rebuild, while RAID6 will still have your data available. Use RAID6. And take backups.

Transferring 16TB of data over gigabit is only a day and a half or so (16TB at ~110MB/s works out to roughly 40 hours), and there aren't many other alternatives unless you buy a larger NAS and move your old drives over, then transfer the data to a new array in the same NAS.

Wibla
Feb 16, 2011

Why bother paying for Unraid when TrueNAS Scale exists? :v:

Wibla
Feb 16, 2011

Lead acid should be dead, but it's sadly still hanging on.

Wibla
Feb 16, 2011

Motronic posted:

Edit: I also don't understand why you or anyone else is worried about true sinewave output. Your 90 to 250 volt 50 to 60 hz switching power supply does not give one poo poo what you throw at it.

It matters if your PSU has power factor correction, which almost all modern switch-mode power supplies do.

Wibla
Feb 16, 2011

AGM is poo poo for any sort of deep cycling, get LiFePO4. AGM cycled to more than 50% DoD will last maybe 500 cycles; LiFePO4 cycled to 80% DoD will last 5000+.

Wibla
Feb 16, 2011

Klyith posted:

Fail rate doesn't really start to climb until the 5th year:

This graph means: start looking for drive deals after about 4 years, then buy when convenient. Running raid6/raidz2 and having backups will help with peace of mind.

Wibla
Feb 16, 2011

priznat posted:

Would the WD Gold be the equivalent of the HGST Ultrastar at this point?

I have a few 6TB Ultrastars and may want to get some more 6TB drives to add some capacity. The Red Pro 6TBs seem pretty well priced these days, and I probably don't need the 7200rpm.

How many of those do you have, and how many hours have you put on them?

Wibla
Feb 16, 2011

priznat posted:

I have 4, and they’re around 38k hours I think (about 4 years)

I would get 4x 12, 14, or 16TB drives and replace those 6TB drives instead of expanding an array with four-year-old drives.

Wibla
Feb 16, 2011

Won't a new 6TB drive more than likely be SMR? I'd just go larger and get two CMR drives. Adding an old drive seems like asking for trouble down the road when it inevitably dies.

Wibla
Feb 16, 2011

I would just do a four-drive raid6 with mdadm or raidz2 with ZFS and call it good. mdadm can be expanded anytime; ZFS will have that feature soon (tm).

OpenZFS 3.0 was tentatively slated for a 2022 release; I don't think they'll make it, but work is being done on growing raidz vdevs.
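For reference, growing an md RAID6 by one disk is only a couple of commands (a minimal sketch; /dev/md0, /dev/sde and the mountpoint are placeholders):
code:
mdadm --add /dev/md0 /dev/sde           # new disk goes in as a spare
mdadm --grow /dev/md0 --raid-devices=5  # reshape from 4 to 5 members
xfs_growfs /srv/storage                 # grow the filesystem once the reshape finishes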

Wibla
Feb 16, 2011


Temp setup (i5-2500K) with TrueNAS Scale on bare metal, pushing data from my old fileserver.

Wibla
Feb 16, 2011

Ihmemies posted:

TrueNAS Scale is based on Debian Linux. So I'll just do SMB datasets, fine. Now if only I knew how to figure out whether TrueNAS made an ashift 9, 12, or 13 pool. 9 sucks on an 18TB disk...

From my TrueNAS Scale install:
code:
# zpool get all|grep ashift
boot-pool  ashift                         12                             local
data       ashift                         12                             local
scratch    ashift                         12                             local
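You can also ask for just that one property across all pools:
code:
# zpool get ashift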

Wibla
Feb 16, 2011

I'm putting logs and poo poo on the boot-pool. Dedicating a 1TB SSD to torrents, apps/VMs and other poo poo that slams the drives with random read/write. Don't need a bunch of random I/O hitting spinning rust.

Finished transferring 44TB last night; that only took most of the weekend :v:

Apparently rsync is CPU-limited and topped out at 160-180MB/s per instance on my temporary setup with the i5-2500K. Managed to run 2-3 instances in parallel for most of it, but half of the dataset was one folder, so that took the longest.

One of these days I'll learn how to use rclone.
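From what I've read, the local-to-local case is a one-liner (a sketch with made-up paths; --transfers controls how many files it copies in parallel):
code:
rclone copy /mnt/old/media /mnt/tank/media --transfers 8 --progress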

Wibla
Feb 16, 2011

My new NAS idles at 134W :haw:

That's with a Xeon E5-2670 v3, 128GB RAM, a Quadro P400, an Intel 10GbE card (SFP+), an LSI 9211-8i, 8x14TB drives (shucked WD Elements, so 7200RPM) in raidz2, and a couple of SSDs.

I'm very happy with that.

Wibla
Feb 16, 2011

Adolf Glitter posted:

134W, really? I don't want to piss in your cornflakes, but that seems too low.

That's with the drives spinning, measured at the wall.

Wibla
Feb 16, 2011

It's extremely unlikely that you will experience data loss as long as you're not actively writing data when the power disappears.

Wibla
Feb 16, 2011

Incessant Excess posted:

If I have 3 drives in raid5, can I add a fourth that's smaller or nah?

Nope.

Wibla
Feb 16, 2011

Mofabio posted:

I'm thinking of getting rid of RAID. I've been managing mdadm and now ZFS clusters on DIY NASes and I'm starting to realize that the downsides (even drive wear so everything fails around the same time, the potential for loving a whole cluster at once by user error, can't physically just move a drive to another computer and use it there, can't just give a drive to a friend, difficult expandability, can't stagger backups across the drives meaning the least-important data is as protected as the most-important in the cluster) outweigh the upsides (automated redundancy vs an rsync tangle, JBOD).

I'm using an SBC that only has power for 2 SATA HDDs, so part of the reason I'm thinking of switching to single-drive ZFS is so only one drive has to spin up at a time, rather than slamming the rail accelerating four+ platters at once.

Anyone else "downgrade" to just a bunch of drives?

Just out of interest, how many years do you generally run a set of drives before replacing them with a new set / upgrading to a new server?

Wibla
Feb 16, 2011

Incessant Excess posted:

Looking into new NAS options, I noticed a current 6-bay QNAP NAS costs about the same as this setup running Unraid or TrueNAS would:

CPU: Intel Core i3-10100T
CPU Cooler: be quiet! Pure Rock Slim 2
Motherboard: MSI B560M-A PRO Micro ATX LGA1200
Memory: G.Skill Value 32 GB (2 x 16 GB) DDR4-2666
Storage: KIOXIA EXCERIA G2 1 TB PCIe 3.0 x4 NVMe
Storage: 4x 10TB + 2x 16TB HDDs
Case: Nanoxia Deep Silence 2 White ATX Mid Tower
PSU: Cooler Master V550 Gold V2 550 W 80+ Gold

Are there any pitfalls I should be aware of when self-building a NAS? I've built PCs before, but I'm not sure about the particulars of building a server.

Get a small SSD for boot with TrueNAS, and use the 1TB NVMe for VMs/apps and as a dump for downloads etc.

For what it's worth - 4x10 + 2x16 will not be used very efficiently in TrueNAS.

Wibla
Feb 16, 2011

Incessant Excess posted:

Is it bad practice to keep the OS and the apps/docker containers on the same drive? I assumed I could have both on the nvme.

Would the drives be used less efficiently than raid? ~50 TB usable?

Would the board and CPU above support ECC RAM, and do I need that?

TrueNAS Scale won't let you do stuff with the boot drive, iirc, so a small SSD for boot/system is recommended. I'm running a 1TB SATA SSD as a "scratch" drive for apps + downloads, and an older 120GB SSD for boot/system.

I wouldn't combine 4x10 and 2x16 in the same vdev/raid array - raidz sizes every member to the smallest drive in the vdev, so you'd be missing out on 6TB per 16TB drive.

You'd have to check whether that motherboard supports ECC RAM, but ECC RAM is not a requirement.
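If the box is already up and running, dmidecode will tell you what the memory controller reports (a hedged aside; output varies by board):
code:
# look for "Error Correction Type" under the Physical Memory Array section
sudo dmidecode --type memory | grep -i "error correction"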

Wibla
Feb 16, 2011

Tatsujin posted:

I'm already down to <5 TB free on my 8x6TB RAID6 NAS I built two years ago. Unfortunately, the Fractal Node 804 case only has space for one more 3.5" and 2.5" drive. I already have a 2.5" 128 GB SSD boot drive and a 3.5" 14 TB partial backup drive, in addition to the NAS storage running off an LSI 9211-8i. I'm trying to determine what would be the best upgrade path in terms of storage capacity/performance and cost. Primary use case is media storage that is written once and then read many times.

Possible upgrades:

    * Replace the existing drives with larger ones one at a time (sucks, as while they're hot-swappable there are no backplane/drive trays).
    * Surplus 2U rackmount server with at least 12x 3.5" bays
    * Surplus 2U rackmount DAS with at least 12x 3.5" bays that connect to existing NAS via USB 3.0 or an external SATA/SAS RAID controller
    * Some SMB-level offering from QNAP/Synology with at least 12x 3.5" bays

What OS are you running? I assume software raid since you're running a SAS HBA?

I'd get 8x 14-16TB, whatever is cheaper per TB, along with another 9211 from eBay, then migrate the data over from your old array. 6TB drives are probably old enough at this point that it's time to retire them anyway. Your current PSU will more than likely be able to power 16 drives (as long as it's >500W), and most m-ATX boards will have enough slots for two raid controllers.

Here's how it looked when I migrated servers, though I used 10GbE between machines instead of doing in-system copying:

There's a fan behind the drives :v:

Wibla
Feb 16, 2011

I usually try to retire drives after 5-6 years, or at least make sure they're not holding anything I care about.

That said I generally fill an array in 2-3 years, so I get 2-3 years of backup duty out of a set of drives after I've phased them out of the "prod" array.

Right now I have two (three) fileservers: 11x4TB (entire box being retired - it's an old dual X5675 setup, and most drives have 5-6 years of runtime), 9x8TB (not re-assembled after my main fileserver got upgraded, though I have all the parts), and an 8x14TB box that lives in my apartment.

Wibla
Feb 16, 2011

Less Fat Luke posted:


So much room for activities!

You'd want internal PCIe LSI HBA cards - 9211, 9240 and so on. I usually go on eBay and just search for "LSI IT mode", which turns up cards already flashed to IT (initiator target) mode, where the card won't do any hardware RAID. There are lots of clones, so make sure the seller has good ratings.

If you need to expand, the gold standard is the Intel RES2SV240 expander - it can be powered by the PCIe slot or molex directly and has 6 ports (1 used for upstream).

Edit: also, on the PSU - honestly, drives don't use that much, but I went overkill with an EVGA 1000W G3, mostly for the absolute plethora of SATA power connectors it has.

Post a pic with everything cabled up :sun:

Wibla
Feb 16, 2011

Enos Cabell posted:

Really wish I'd labeled my drives like this when I set up the server! I'm going to have to pull them one at a time when I start replacing.


Don't have to print the whole serial number either :v:

I bought some SATA power cables that have the plugs in a string, but they "feed from the top", so it just became a mess. Sigh.

Wibla
Feb 16, 2011

No. Have you run "top" to see what's using so much CPU? Assuming you're on *nix.

Wibla
Feb 16, 2011

Basic burn-in tests are mostly just good for finding DOA drives. I've experienced one DOA drive since I started building computers in the 90s. My Seagate LP-based array had failures when temps got high - that's something others have documented as well.

Wibla
Feb 16, 2011

Eaton or bust for UPS.

Be prepared to pay accordingly.

Wibla
Feb 16, 2011

jawbroken posted:

And if you lose power that often then perhaps you should consider just getting a whole-house generator or battery.

At that point I'd look into a few kWh of LiFePO4 batteries and one of those integrated hybrid off-grid inverters. Maybe add some solar as well :v:

Wibla
Feb 16, 2011

Segment poo poo like that away from each other, yikes.
