|
codo27 posted:You had me at first there. I'm just rocking a couple of 3tb Reds mirrored in a lovely old Buffalo Linkstation but the vast majority are movies & tv. I'd say my essential data might only total inside a TB Yeah, I prooooooooobably am actually somewhere around 1TB of data I actually give a poo poo about, but trimming that down from 3TB would be more effort than I'm willing to do at this point. So it goes.
|
# ? Aug 12, 2021 17:47 |
|
|
# ? May 26, 2024 04:07 |
|
The Seagates I bought have a 1.5% failure rate but that's not even the worst on the chart
|
# ? Aug 12, 2021 17:53 |
|
Munkeymon posted:The Seagates I bought have a 1.5% failure rate but that's not even the worst on the chart Just don't be this guy.
|
# ? Aug 12, 2021 18:32 |
|
I mean if you're buying Seagate still then it's on you
|
# ? Aug 12, 2021 18:40 |
|
Yeah, documents, photos, and stuff like that get backed up to B2 on a schedule. Then I have an unshucked easystore for that stuff, plus whatever will fit / or be a pain in the rear end to reacquire. Like I just throw all my music on there because the vast majority of it was ripped by me off CD/Vinyl and while I still have those in storage I sure as poo poo don't want to re-rip and musicbrainz hundreds of discs again. I just have a little bash script to automate that process, and it serves to track what stuff I'm doing that for.
|
# ? Aug 12, 2021 21:00 |
|
Incessant Excess posted:Anyone here know a good resource for NAS reviews? I'm looking to replace my seemingly broken Synology DS918+ and am wondering if the Qnap TS653D is a good pick, mainly interested in running various Docker containers as well as Plex transcoding.

ServeTheHome does a ton of NAS-oriented reviews of everything from amateur to prosumer to actual server gear. And what you can't find on the site, people often are discussing on their forum. They also have a "hot deals" subforum where people post some good deals on hardware.

While I haven't read an overall review of that unit, it looks pretty decent. I really like those Gemini Lake processors for NAS use: they are power efficient, they use standard x86 binaries/distros, they're reasonably fast (between a Core 2 Quad and a Nehalem i5 in performance), they have a very good media block for transcoding, they have HDMI 2.0b for 4K60 output (no HDR though) if you want to use them as a combo NAS/HTPC, and Intel's open-source Linux/Unix drivers are extremely good. That unit also has 2.5GbE, which is a nice feature to have at this point, and a PCIe expansion slot to give you some expansion of your choice (there's lots of things you could do with it).

The only bad thing I have to say about it is that the predecessors (Apollo Lake) were known to die after a bit (a long-standing bug in the Atom series), and the Gemini Lake processors still have at least one erratum where there's a limit on how much the USB 3.0 ports can write: they have an expected lifetime of something like 12TB of writes. Not sure if that still applies to Gemini Lake Plus. But personally I have done a ton of writes on my Gemini Lake NUCs and run them for extended periods of time without shutdown and haven't noticed any issue.

Paul MaudDib fucked around with this message at 22:41 on Aug 12, 2021 |
# ? Aug 12, 2021 21:05 |
|
Thank you for your detailed response, just the kind of information i was hoping someone could provide. Much appreciated!
|
# ? Aug 12, 2021 21:46 |
|
codo27 posted:I mean if you're buying seagate still then its on you I thought Seagate was
|
# ? Aug 13, 2021 19:01 |
|
Munkeymon posted:I thought Seagate was They are fine for normal human / home NAS use. But IT people are like cats: you scare them with something once and they never forget for the rest of their lives. Just look at the Backblaze Q2 report: there's a Toshiba drive (usually looked at as a bastion of high quality HDDs) with a 4% failure rate, which is the second highest on the chart. So are we now saying Toshiba is bad? Of course not--especially since the sample size for those is tiny (<100 drives). Same with the chart-leading Seagate with a 5.5% failure rate--only 1,600 drives there. There are plenty of Seagate drives in that chart that have competitive failure rates with everyone else. It should say something that a company like Backblaze opts to continue using large numbers of Seagates, even given the somewhat elevated failure rates of some of them, while they basically don't use Western Digital at all.
|
# ? Aug 13, 2021 20:06 |
|
SolusLunes posted:For the media, just periodically export a list of filenames to your properly backed up directories so you can fetch it again when you rebuild your server. That's a good idea, I assume I can just run some kind of recursive ls command and > to a text file on daily, weekly and monthly cron jobs? What's a good format for something like that in sh/bash?
|
# ? Aug 14, 2021 02:46 |
Takes No Damage posted:That's a good idea, I assume I can just run some kind of recursive ls command and > to a text file on daily, weekly and monthly cron jobs? What's a good format for something like that in sh/bash? You could use a command like "find /path/to/whatever", I think the output would be a suitable format for this
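A minimal sketch of what that could look like as a script cron calls daily/weekly/monthly (the function name, paths, and filename scheme here are all made up — adjust to taste):

```shell
#!/bin/sh
# snapshot_list SRC DEST_DIR: write a sorted recursive file listing of SRC
# into DEST_DIR, one dated snapshot file per run.
snapshot_list() {
    src="$1"
    dest_dir="$2"
    mkdir -p "$dest_dir"
    # find emits one path per line, which sorts and diffs cleanly later
    find "$src" -type f | sort > "$dest_dir/files-$(date +%F).txt"
}
```

A crontab entry like `0 4 * * * /path/to/snapshot.sh /srv/media /srv/backup/lists` would then produce one dated listing per day; if the listings land in a directory that's already backed up to B2, they ride along with everything else.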
|
|
# ? Aug 14, 2021 04:58 |
|
Having set up my Synology NAS again after a software issue, I'd like to do what I always should have done and back up some configuration files (Docker containers, DSM) somewhere outside of my NAS. Is there a recommended way to do this? Cloud Sync and upload to Google Drive?
|
# ? Aug 14, 2021 17:02 |
|
Should just be a matter of automating a config backup and rsyncing it wherever. Digging out the command might be a pain though if it’s not a selectable dropdown somewhere.
|
# ? Aug 14, 2021 18:33 |
Warbird posted:Should just be a matter of automating a config backup and rsyncing it wherever. Digging out the command might be a pain though if it’s not a selectable dropdown somewhere. Yeah on a synology this stuff lives in a weird directory tree. Probably under /volume#/@appstore/[programname] but after you find it the relevant files are accessible like they are on Debian or whatever.
|
|
# ? Aug 14, 2021 20:17 |
|
My ZFS system hit 50% capacity and I think it now has issues with anything that writes. When I attempt a transfer of a large file (3 gigs or bigger, maybe less), the transfer will start fast at ~70MBps, but after a gig or so will drop to ~10MBps, and after the third gig will just go down to nothing. Depending on the transfer method, the transfer will either wait at nothing and come back up to full speed (nc, which is what I have been using for a while) or just kill the connection because it thinks the server is unresponsive. I think scp has a keep-alive ping it can do, so it won't auto-kill the transfer.

The ZFS properties that matter, which I was changing, are logbias and sync. I set logbias to throughput and quickly changed it back to latency (the default), since throughput writes directly to the ZFS blocks instead of using other methods like sending to memory or the ZIL. I think this is when I noticed SMB was hosed, because I was relying on sonarr and radarr for all transfers. Setting logbias back to the default did nothing to resolve the smb/scp/transfer problem of speed dropping to nothing. Recently I set sync from standard to always. Now transfers will always complete, just not as fast.

I watch the transfer stats via zfs iostat and htop, and the system never writes anything to memory similar in size to the file being transferred, the ZIL is untouched, and all the hard drives are pegged on write speed. Any ideas?
|
# ? Aug 15, 2021 04:06 |
What's the free space fragmentation of the pool?
|
|
# ? Aug 15, 2021 08:41 |
|
Have there been any major thoughts on the TrueNAS Mini-x / Mini-x+? I like the TrueNAS software but also don't think I could buy the same hardware for less than they sell it on their site due to COVID pricing at the moment. Other options are some of the Synology systems, which seem nice for a plug and play experience.
|
# ? Aug 15, 2021 14:08 |
iXsystems use Ablecom as their ODM for the cases, and Ablecom has a list of all the products they make, including the new 5-bay ones. However, sourcing the cases is usually quite difficult as Ablecom only sells in batches of 100 units at a minimum - the only way I know of that people have done it successfully is through the group-purchase sites which order them and then hope to sell them. EDIT: Supermicro used to OEM the 4-bay variant as 721TQ-250B, so you may be able to find some new old stock. BlankSystemDaemon fucked around with this message at 16:15 on Aug 15, 2021 |
|
# ? Aug 15, 2021 16:08 |
|
BlankSystemDaemon posted:iXsystems use Ablecom as their ODM for the cases, and Ablecom has a list of all the products they make, including the new 5-bay ones. I actually have that Supermicro chassis. They're a lot more expensive on the secondhand market than when I paid for mine ($160); now they're around $350 last I checked on eBay. They are great cases, however. I built a TrueNAS server in mine, used 4TB Toshiba drives in the drive cages (they're hot swap), and there's room for two 2.5" SSDs for the boot pool. You just need a power extension to reach the one on the right side of the case. I put in an HBA for the backplane, and you just plug in two molex ports to power the drives in the cage. It comes with a PSU built in, so that's one less thing to worry about. The board mount slides out, which makes it easier to work on. Just bear in mind you'll need an adapter to hook up most motherboards to the case switches and lights and such; that was about $8 on Amazon. Passively cooled CPUs might not be the best way to go, given the case-fan-to-cooler height: the cooler has to fit under the drive cage. Overall it's easy to build in and I would definitely recommend it for a small NAS build.
|
# ? Aug 15, 2021 18:10 |
Very recently (this week) I got an HPE Microserver Gen10+ with the iLO5 enablement kit, a Xeon 2224, and 16GB ECC memory (and I bought another 16GB ECC for $92.42) - so I've been using that as my primary server. The only real downside is the lack of a backplane, which I think HPE could've included without affecting the price too much - as it is, it means there's no hot-plug support.
|
|
# ? Aug 15, 2021 18:49 |
|
BlankSystemDaemon posted:What's the free space fragmentation of the pool? code:
|
# ? Aug 15, 2021 18:56 |
|
My gut reaction to 14% fragmentation was "isn't that a lot", but then I pulled the same on my pool:
code:
Any drives throwing errors in dmesg?
|
# ? Aug 15, 2021 19:00 |
EVIL Gibson posted:
Free space fragmentation, as the name suggests, gives an indication of how much of the free space is fragmented - ie. assuming all records are written at the maximum allowed size, how many of them will be fragmented, preventing them from being written contiguously and sequentially. There's therefore quite a correlation between free space fragmentation combined with low free space and decreased write speeds, because ZFS has to work harder to allocate space on disks. In your case, however, that basically can't be the reason - and the only thing I know of that can cause behavior like what you're seeing is that one of your drives may be silently timing out as a result of an internal failure. If you're on FreeBSD, do you have kern.cam.(ada|da).retry_count=0 and kern.cam.(ada|da).default_timeout=<30? Similarly, have you tried observing disk access patterns through gstat(8)? It provides a much lower-level overview of which disks may or may not be exhibiting numbers outside the normal/expected ranges. It'd of course be best if you had historical data through Prometheus (which has a gstat exporter), but even without it, as long as you have enough disks that are performing as one might expect, you should still be able to see if a disk isn't behaving like it should.
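For persistence, those tunables would live in /etc/sysctl.conf - a sketch using the knob names above (the timeout value is just illustrative; verify both names and defaults against your FreeBSD release):

```
# /etc/sysctl.conf -- fail fast instead of letting a dying disk stall the pool
kern.cam.ada.retry_count=0
kern.cam.da.retry_count=0
# Command timeout in seconds (default 30); lower so hangs surface quickly
kern.cam.ada.default_timeout=10
kern.cam.da.default_timeout=10
```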
|
|
# ? Aug 15, 2021 19:12 |
|
I have a Synology NAS with a storage pool in SHR, if I want to expand that I can do so by adding a drive that's as big or bigger as the smallest drive inside that pool, right? To give a concrete example, I have a pool made up of 2x 16TB and 2x 10TB drives, can I add another 10TB drive to this pool or does it need to be 16TB? I looked at the documentation and I believe adding a 10TB drive should be possible but I just want to make absolutely certain.
|
# ? Aug 17, 2021 09:39 |
|
Synology have a calculator for this. https://www.synology.com/en-us/support/RAID_calculator Putting in your example, yes. You'll have 46TB of available space and 16TB for redundancy.
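As a back-of-the-envelope check that doesn't need the website: SHR-1 usable space is well approximated as the sum of the drives minus the largest one, since the largest drive's worth of capacity is held back for single-drive redundancy. A sketch (helper name and whole-TB sizes are just for illustration):

```shell
# shr1_usable SIZE...: approximate usable TB of an SHR-1 pool,
# i.e. total capacity minus the largest single drive.
shr1_usable() {
    total=0
    largest=0
    for size in "$@"; do
        total=$((total + size))
        if [ "$size" -gt "$largest" ]; then largest=$size; fi
    done
    echo $((total - largest))
}
```

`shr1_usable 16 16 10 10 10` prints 46, matching the calculator's 46TB usable for the pool above.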
|
# ? Aug 17, 2021 10:10 |
|
Unfortunately the calculator doesn't really answer my question, as it's not about total storage space but about software restrictions on how you can expand an existing pool. For example, in the calculator you can add a 16TB drive as the first drive and then add a 10TB drive as the second drive, which is something I know first hand SHR doesn't actually let you do in a real world scenario.
|
# ? Aug 17, 2021 10:20 |
|
Oh right, sorry I misread. Well, as you already have 10TB drives in there, then yes, you can add more 10TB drives.Synology posted:If an SHR storage pool is composed of three drives (2 TB, 1.5 TB, and 1 TB), we recommend that the newly-added drive should be at least 2 TB for better capacity usage. You can consider adding 1.5 TB and 1 TB drives, but please note that some capacity of the 2 TB drive will remain unused.
|
# ? Aug 17, 2021 10:30 |
|
I have built a server that I want to run some sort of expandable RAID on (with at least 3 12TB+ drives). I am currently running OMV, which allows for software RAID, which I think is expandable (except for JBOD and 0), but I could use ZFS-based RAID instead? From my research ZFS has many benefits but it's not easily growable, is that correct?
|
# ? Aug 19, 2021 15:42 |
|
hogofwar posted:I have built a server that I want to run some sort of expandable RAID on (with 3 12tb+ drives at least), I am currently running OMV, which allows for software RAID which I think is expandable (except for JBOD and 0), but I could use ZFS-based RAID? From my research using ZFS has many benefits but it's not easily growable, is that correct? If you want a storage solution with redundancy where you can later add one disk at a time to increase capacity, ZFS is probably not (yet) for you.
|
# ? Aug 19, 2021 15:52 |
|
hogofwar posted:I have built a server that I want to run some sort of expandable RAID on (with 3 12tb+ drives at least), I am currently running OMV, which allows for software RAID which I think is expandable (except for JBOD and 0), but I could use ZFS-based RAID? From my research using ZFS has many benefits but it's not easily growable, is that correct? Yes, correct. Main option for expanding one at a time is UnRAID, there are a few others but I haven't tried them so someone else might want to chime in.
|
# ? Aug 19, 2021 16:40 |
|
Keito posted:If you want a storage solution with redundancy where you can later add one disk at a time to increase capacity, ZFS is probably not (yet) for you. Matt Zerella posted:Yes, correct. To add to this, unRAID's storage is pretty much jbod with parity disk (up to 2). So it has the advantage over other raid levels, that if you lose your parity disk and a data disk, you only lose the data that was on the disk you lost. unRAID does not have as good of read/write performance as raid 5 type systems, but you can mitigate this with optional ssd cache drives.
|
# ? Aug 19, 2021 18:12 |
|
I forgot, what's the deal with Mellanox infiniband drivers on windows? iirc there was some thing where the old driver (OFED) didn't support newer versions of windows. I have a ConnectX-2 card; I tried a while back and even in non-RDMA mode (edit: IPoIB) I couldn't get it to come up in infiniband mode at all under windows. May have been my fault (some configuration I missed?) but it worked perfectly under Linux. I ended up just using it in Ethernet mode. If I wanted to do RDMA samba on windows (for the full 40gb/s speed) I'd obviously need an enterprise key for Windows, but as far as adapters, if I just ponied up for a ConnectX-3 generation card, would it actually just plug and play under windows?

Realistically though I think I'm just gonna do ethernet instead and not worry too much about infiniband anymore, it's just way easier to only have one network. Mikrotik now has a big-boy version of their desktop switches for a very reasonable price ($459 for 24 10GbE SFP+ ports, plus two 40GbE ports for uplinks or connecting between switches). With consumer gear that does leave you in the unfortunate position of needing SFP+ base-t modules at about $50 a pop for 10gbe/multi-gig (and of course there's some compatibility concerns), but the options for anything with native base-t ports are still pretty bleak.

I'd actually like a mix of both, or at least a couple SFP links for my server and some connections between switches, but there aren't too many great options that have both. QNAP has an interesting one with 12 ports, 4 of them dedicated SFP, where you can mix-and-match the other 8 between SFP and base-t, but it's $619 for 12 ports and it's unmanaged (so no connection bonding for dual 10gbe to my NAS). Netgear has a nice-looking 10-port (2x SFP 10gb, 4x 10gb base-t multigig, 4x 2.5gb base-t multigig), but it's out of stock everywhere, and the TP-Link alternative is 12-port, but other than the 2x SFP links they're all 2.5gbit multi-gig.
Paul MaudDib fucked around with this message at 06:42 on Aug 20, 2021 |
# ? Aug 20, 2021 00:02 |
|
Here's my old Infiniband-at-home trip report post:Sheep posted:I just got an Infiniband network running at home using two MHGH28-XTC cards - probably one of the more painful setups I've ever had to deal with (and one card having a broken firmware didn't help), but it's nice finally having a setup where my RAID array is the new bottleneck and I can move stuff around without destroying the LAN for everyone. As you found, it "just works" on Linux, so what I wound up doing was pulling the card from the Windows box and putting it in a spare miniITX chassis with a LFF backplate and using that as a staging system instead. Much better experience all around. ConnectX-3 seem to have a native Windows 10 client so you should be good there if you were to go that route. Sheep fucked around with this message at 06:48 on Aug 20, 2021 |
# ? Aug 20, 2021 06:41 |
|
Has mikrotik/qnap's security record improved any lately?
|
# ? Aug 20, 2021 07:28 |
|
I know that qnap got hit with a zero day a few months ago.
|
# ? Aug 20, 2021 08:52 |
|
They also got caught in April with hardcoded admin passwords in the firmware.
|
# ? Aug 20, 2021 09:00 |
Sheep posted:Here's my old Infiniband-at-home trip report post:
|
|
# ? Aug 20, 2021 10:55 |
Trip report with FreeBSD on my new HPE Microserver Gen10+ with a Xeon E-2224 @ 3.4GHz (boost to 4.6GHz for 30+ seconds), 32GB memory, a 10G SFP+ X520 NIC and 3x 6TB + 1x 8TB (both because I didn't have four of any single size, and to prove that ZFS can expand in ways people don't seem to think it can, even if you don't have SAS backplanes and all sorts of nonsense).

While the old HP Microserver Gen7 N36L @ 1.3GHz could do wirespeed ethernet with a bit of buffer tweaking for bulk transfers, one thing I'm not sure I realized was how much all the small transfers involved in listing directories and such were affected. The new Gen10+ is blazing fast without any noticeable lag for listing even huge directories over 1Gbps RJ45; it feels indistinguishable from browsing local SSD storage on my ThinkPad T420 - and that's without any kind of tweaking. I have my HP Proliant DL380p Gen8 connected via Intel X520 too, so when I next find the need to boot it to build something for the jails on the Microserver with Poudriere, I plan on doing some network-to-network and disk-to-disk performance tests over 10G SFP+ before and after tweaking.

EDIT: Obviously part of the reason for the speed is that, disregarding the sheer clock speed difference, the N36L is a core from 2007 while the E-2224 was released Summer 2019 - so even if we assume AMD had parity with Intel back then (which they definitely didn't), it's more than a 100% improvement in instructions per clock. Another factor is that the memory has gone from DDR3-800 to DDR4-2666 - so memory speed has more than tripled.

EDIT2: Oh, and looking at the power meter I've hooked up to it, at ACPI C2 idle levels, it uses less power than the old Microserver. BlankSystemDaemon fucked around with this message at 19:42 on Aug 20, 2021 |
|
# ? Aug 20, 2021 19:38 |
|
Mega Comrade posted:They also got caught in April with hardcoded admin passwords in the firmware. Yikes. Is asustor recommended over qnap nowadays in the world of set it and forget it boxes for those not looking to pay the premium for synology?
|
# ? Aug 20, 2021 20:24 |
|
|
|
BlankSystemDaemon posted:EDIT: Obviously part of the reason for the speed is that, disregarding the sheer clock speed difference, the N36L is a core from 2007 while the E-2224 was released Summer 2019 - so even if we assume AMD had parity with Intel back then (which they definitely didn't), it's more than 100% improvement in instructions per clock. Going from the N36L to the X3421 Gen10 was a world of difference for me, I can't imagine what the Gen10 Plus is like.
|
# ? Aug 20, 2021 20:38 |