|
Last I checked in Norway, it was cheaper (per TB) to just buy WD HC550 18TB drives than buying some external drive. YMMV though. How many drives do you have in that array?
|
# ¿ May 17, 2022 17:03 |
|
|
My new (to me) fileserver/vm host turned up today! Going to do some transcode testing with plex in a test VM using the Quadro P400 and test some other stuff before I actually move the drives + HBA(s) over, but it looks promising for now.
|
# ¿ May 18, 2022 16:01 |
|
Motronic posted: What is it that you need for Plex transcoding on a GPU to work? I thought I looked into this before and my 2670s did not support it. I'd love to be wrong. (yours are marked at "v3" and mine are "0", so that might be the difference)

You need Quick Sync (on an Intel iGPU) or a supported nVidia GPU.

Mr. Crow posted: You can definitely use a gpu to transcode if your paying for it in plex ()

I paid €74.99 for lifetime Plex Pass sometime in the dark ages, so that's covered. I'm not looking forward to messing with passthrough to make it work, but apparently it's not that painful with Quadro cards?
|
# ¿ May 18, 2022 19:43 |
|
New box is up and running (sans drives for now), testing 10gbit performance between two machines using a DAC cable and woah:
code:
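The output above didn't survive the scrape, but it presumably came from something like iperf3. A minimal sketch of that kind of point-to-point test, with "newbox" as a placeholder hostname:

```shell
# On the receiving box (placeholder hostname "newbox"), start a server:
iperf3 -s

# On the sending box, run a 30-second test against it:
iperf3 -c newbox -t 30

# If a single TCP stream won't saturate the link, add parallel streams:
iperf3 -c newbox -t 30 -P 4
```

A single stream over a DAC between two decent machines should get close to line rate; parallel streams mostly matter with slower CPUs or longer links.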
|
# ¿ May 19, 2022 18:05 |
|
Torrents are hard on CPU/RAM and disk access. Not really surprising that it struggles.
|
# ¿ May 25, 2022 14:13 |
|
I just snipped the +3.3V wire going to my SATA power connectors.
|
# ¿ May 28, 2022 19:12 |
|
SSD for downloading, HDD for seeding
|
# ¿ Aug 14, 2022 18:29 |
|
The world is trying to tell you to just go with 10gbe
|
# ¿ Aug 16, 2022 06:15 |
|
BlankSystemDaemon posted:And by 10GbE you mean using SFP+, not RJ45, hopefully? Using SFP+, yes. Like god intended.
|
# ¿ Aug 16, 2022 08:05 |
|
I'm still using mdraid + XFS and it just works with no drama...
|
# ¿ Aug 19, 2022 01:49 |
|
I have a very early connect-x 10g card in my win10 pc and it works fine.
|
# ¿ Sep 2, 2022 08:13 |
|
ilkhan posted: I'm getting a bunch of data and I/O errors on my DS413 Synology, and have ordered a replacement 1522+ model and a fresh set of drives 4x8TB drives (3 in raid 5, plus a warm spare probably. Don't need more space than that.)

A hot spare will do nothing for you when the second drive fails during rebuild, while RAID6 will still have your data available to you. Use RAID6. And take backups. Transferring 16TB of data over gigabit is only 1.5 days or so, and there aren't many other alternatives unless you buy a larger NAS and move your old drives over, then transfer the data to a new array in the same NAS.
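That 1.5-day figure is easy to sanity-check, assuming roughly 112MiB/s of real-world gigabit throughput after protocol overhead:

```shell
# 16TB over gigabit ethernet: real-world throughput is roughly
# 112MiB/s once TCP/SMB overhead is accounted for
bytes=$((16 * 1000 ** 4))          # 16TB in bytes (decimal TB)
rate=$((112 * 1024 * 1024))        # ~112MiB/s in bytes per second
seconds=$((bytes / rate))
echo "$((seconds / 3600)) hours"   # ~37-38 hours, i.e. about 1.5 days
```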
|
# ¿ Sep 10, 2022 23:11 |
|
Why bother paying for Unraid when TrueNAS Scale exists?
|
# ¿ Sep 16, 2022 13:08 |
|
lead acid should be dead, but it's sadly still hanging on.
|
# ¿ Sep 18, 2022 15:24 |
|
Motronic posted: Edit: I also don't understand why you or anyone else is worried about true sinewave output. Your 90 to 250 volt 50 to 60 hz switching power supply does not give one poo poo what you throw at it.

It matters if your PSU has power factor correction, which almost all modern switch-mode power supplies do.
|
# ¿ Sep 18, 2022 19:16 |
|
AGM is poo poo for any sort of deep cycling; get LiFePO4. AGM cycled more than 50% DoD will last maybe 500 cycles, LiFePO4 cycled 80% DoD will last 5000+.
|
# ¿ Sep 22, 2022 06:29 |
|
Klyith posted: Fail rate doesn't really start to climb until the 5th year:

This graph means: start looking for drive deals after about 4 years, then buy when convenient. Running raid6/raidz2 and having backups will help for peace of mind.
|
# ¿ Sep 29, 2022 05:54 |
|
priznat posted:Would a comparable to hgst ultrastar be the wd gold at this point? How many of those do you have, and how many hours have you put on them?
|
# ¿ Oct 1, 2022 09:27 |
|
priznat posted: I have 4, and they’re around 38k hours I think (about 4 years)

I would get four 12, 14, or 16TB drives and replace those 6TB drives instead of expanding an array with four-year-old drives.
|
# ¿ Oct 1, 2022 19:09 |
|
Won't a new 6TB drive more than likely be SMR? I'd just go larger and get two CMR drives. Adding an old drive seems like asking for trouble down the road when it inevitably dies.
|
# ¿ Oct 2, 2022 13:37 |
|
I would just do a four-drive raid6 with mdadm or raidz2 with ZFS and call it good. mdadm can be expanded anytime, ZFS will have that feature soon (tm). OpenZFS 3.0 was tentatively slated for a 2022 release, I don't think they'll make it, but work is being done on growing raidz vdevs.
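For reference, growing an mdadm array really is just a couple of commands. A sketch assuming a RAID6 at /dev/md0, a new disk at /dev/sde, and XFS mounted at /mnt/array (all placeholder names):

```shell
# Add the new disk to the array (it joins as a spare first)
mdadm --add /dev/md0 /dev/sde

# Reshape the RAID6 from 4 to 5 active members
mdadm --grow /dev/md0 --raid-devices=5

# The reshape runs in the background; watch progress with:
cat /proc/mdstat

# Once it finishes, grow the filesystem on top. XFS can only be
# grown while mounted:
xfs_growfs /mnt/array
```

The reshape can take a day or more on big drives, but the array stays online and usable throughout.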
|
# ¿ Oct 13, 2022 15:27 |
|
Temp setup (i5-2500k) with truenas scale on bare metal, pushing data from my old fileserver.
|
# ¿ Oct 14, 2022 20:20 |
|
Ihmemies posted: Truenas scale is based on Debian linux. So I'll just do SMB datasets, fine. Now if I knew how to figure out if truenas made ashift 9, 12 or 13 pool. 9 sucks on a 18TB disk..

From my TrueNAS Scale install:
code:
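For anyone else wondering how to check: OpenZFS exposes ashift both as a pool property and per vdev. A sketch, with "tank" as a placeholder pool name:

```shell
# Pool-level ashift property (0 means it was auto-detected at creation)
zpool get ashift tank

# Per-vdev ashift, which is what actually applies on disk
zdb -C tank | grep ashift
```

On a modern 4Kn/512e drive you want to see ashift=12 (4KiB sectors); 9 means 512-byte sectors and kills performance on big disks.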
|
# ¿ Oct 15, 2022 13:49 |
|
I'm putting logs and poo poo on the boot-pool. Dedicating a 1TB SSD to torrents, apps/VMs and other poo poo that slams the drives with random read/write; don't need a bunch of random IO hitting spinning rust. Finished transferring 44TB last night, and that only took most of the weekend. Apparently rsync is CPU-limited and topped out at 160-180MB/s per thread on my temporary setup using an i5-2500K. Managed to run 2-3 threads for most of it, but half of the dataset was one folder, so that took the longest. One of these days I'll learn how to use rclone.
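The "2-3 threads" above just means several rsync processes running side by side, since a single rsync can't fill a 10GbE pipe. A rough sketch of that pattern, with hypothetical host and path names:

```shell
# One rsync per top-level directory, three at a time.
# "oldbox" and the /mnt paths are placeholders.
ssh oldbox ls /mnt/oldpool/media | \
  xargs -n1 -P3 -I{} \
  rsync -a --info=progress2 "oldbox:/mnt/oldpool/media/{}" /mnt/tank/media/
```

The obvious caveat is the one mentioned in the post: parallelism only helps if the data is split across multiple directories, so one giant folder still serializes onto a single process.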
|
# ¿ Oct 17, 2022 05:57 |
|
My new NAS idles at 134W. That's with a Xeon E5-2670 v3, 128GB RAM, Quadro P400, Intel 10GbE card (SFP+), LSI 9211-8i, 8x14TB (shucked WD Elements, so 7200RPM) in raidz2, and a couple of SSDs. I'm very happy with that.
|
# ¿ Oct 17, 2022 19:44 |
|
Adolf Glitter posted: 134, Really? I Don't want to piss in your cornflakes, but that seems too low

That's with the drives spinning, measured at the wall.
|
# ¿ Oct 17, 2022 22:26 |
|
It's extremely unlikely that you will experience data loss as long as you're not actively writing data when the power disappears.
|
# ¿ Nov 10, 2022 07:48 |
|
Incessant Excess posted:If I have 3 drives in raid5, can I add a fourth that's smaller or nah? Nope.
|
# ¿ Nov 24, 2022 16:51 |
|
Mofabio posted: I'm thinking of getting rid of RAID. I've been managing mdadm and now ZFS clusters on DIY NASes and I'm starting to realize that the downsides (even drive wear so everything fails around the same time, the potential for loving a whole cluster at once by user error, can't physically just move a drive to another computer and use it there, can't just give a drive to a friend, difficult expandability, can't stagger backups across the drives meaning the least-important data is as protected as the most-important in the cluster) outweigh the upsides (automated redundancy vs an rsync tangle, JBOD).

Just out of interest, how many years do you generally run a set of drives before replacing them with a new set / upgrading to a new server?
|
# ¿ Nov 25, 2022 19:42 |
|
Incessant Excess posted: Looking into new NAS options, I noticed a current 6 bay QNAP nas costs about the same as this setup would be running unraid or truenas:

Get a small SSD for boot with TrueNAS, use the 1TB NVMe for VMs/apps and dump for downloads etc. For what it's worth, 4x10 + 2x16 will not be used very efficiently in TrueNAS.
|
# ¿ Dec 11, 2022 13:14 |
|
Incessant Excess posted: Is it bad practice to keep the OS and the apps/docker containers on the same drive? I assumed I could have both on the nvme.

TrueNAS Scale won't let you do stuff with the boot drive, IIRC, so a small SSD for boot/system is recommended. I'm running a 1TB SATA SSD as a "scratch" drive for apps + downloads, and an older 120GB SSD for boot/system. I wouldn't combine 4x10 and 2x16 in the same vdev/raid array; you'd be missing out on 6TB per 16TB drive. You'd have to check whether that motherboard supports ECC RAM, but ECC RAM is not a requirement.
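To put numbers on that inefficiency, assuming all six drives go into a single raidz2 vdev (ZFS sizes every member to the smallest disk):

```shell
# 6-wide raidz2 of 4x10TB + 2x16TB: every member is treated as
# the smallest disk, so each 16TB drive contributes only 10TB
usable=$(( (6 - 2) * 10 ))   # data disks * smallest size, in TB
wasted=$(( 2 * (16 - 10) ))  # capacity stranded on the 16TB drives
echo "${usable}TB usable, ${wasted}TB wasted"
```

So roughly 40TB usable out of 72TB raw, with 12TB simply unusable; the 16TB drives would be better off as their own mirror.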
|
# ¿ Dec 11, 2022 14:12 |
|
Tatsujin posted: I'm already down to <5 TB free on my 8x6TB RAID6 NAS I built two years ago. Unfortunately, the Fractal Node 804 case only has space for one more 3.5" and 2.5" drive. I already have a 2.5" 128 GB SSD boot drive and 3.5" 14 TB partial backup drive in addition to the NAS storage running off a LSI 9211-8i. I'm trying to determine what would be the best upgrade path in terms of storage capacity/performance and cost. Primary use case is media storage that is written once and then read many times.

What OS are you running? I assume software raid since you're running a SAS HBA? I'd get 8x14-16TB, whatever is cheaper per TB, along with another 9211 from ebay, then migrate the data over from your old array. 6TB drives are probably old enough at this point that it's time to retire them anyway. Your current PSU will more than likely be able to power 16 drives (as long as it's >500W) and most m-ATX boards will have enough slots for two raid controllers.

Here's from when I migrated servers, though I used 10GbE ethernet between machines instead of doing in-system copying: (There's a fan behind the drives)
|
# ¿ Jan 20, 2023 16:05 |
|
I usually try to retire drives after 5-6 years, or at least make sure a set isn't holding anything I care about. That said, I generally fill an array in 2-3 years, so I get 2-3 years of backup duty out of a set of drives after I've phased them out of the "prod" array. Right now I have two (three) fileservers: 11x4TB (entire box being retired, it's an old dual X5675 setup, most drives have 5-6 years of runtime), 9x8TB (not re-assembled after my main fileserver got upgraded, have all the parts though), and an 8x14TB box that lives in my apartment.
|
# ¿ Jan 20, 2023 18:35 |
|
Less Fat Luke posted:
Post a pic with everything cabled up
|
# ¿ Jan 20, 2023 22:28 |
|
Enos Cabell posted: Really wish I'd labeled my drives like this when I set up the server! I'm going to have to pull them one at a time when I start replacing.

You don't have to print the whole serial number either. I bought some SATA power cables that have the plugs in a string, but they "feed from the top", so it just became a mess. Sigh.
|
# ¿ Jan 20, 2023 23:02 |
|
No. Have you run "top" to see what's using so much cpu? Assuming you're on *nix.
|
# ¿ Jan 30, 2023 07:28 |
|
Basic burn-in tests are mostly just suitable for finding DOA drives, and I've experienced one DOA drive since I started building computers in the 90s. My Seagate LP-based array had failures when temps got high; that's something others have documented as well.
|
# ¿ Feb 1, 2023 09:40 |
|
Eaton or bust for UPS. Be prepared to pay accordingly.
|
# ¿ Feb 24, 2023 15:54 |
|
jawbroken posted: And if you lose power that often then perhaps you should consider just getting a whole-house generator or battery.

At that point I'd look into a few kWh of LiFePO4 batteries and one of those integrated hybrid off-grid inverters. Maybe add some solar as well.
|
# ¿ Feb 24, 2023 17:28 |
|
|
Segment poo poo like that away from each other, yikes.
|
# ¿ Feb 28, 2023 15:52 |