SolusLunes
Oct 10, 2011

I now have several regrets.

:barf:

codo27 posted:

You had me at first there. I'm just rocking a couple of 3tb Reds mirrored in a lovely old Buffalo Linkstation but the vast majority are movies & tv. I'd say my essential data might only total inside a TB

Yeah, I prooooooooobably actually have somewhere around 1TB of data I give a poo poo about, but trimming that down from 3TB would be more effort than I'm willing to put in at this point. So it goes.


Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.




The Seagates I bought have a 1.5% failure rate but that's not even the worst on the chart :shepface:

Sheep
Jul 24, 2003

Munkeymon posted:

The Seagates I bought have a 1.5% failure rate but that's not even the worst on the chart :shepface:

Just don't be this guy.

codo27
Apr 21, 2008

I mean if you're still buying Seagate then it's on you

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home
Yeah, documents, photos, and stuff like that get backed up to B2 on a schedule.

Then I have an unshucked easystore for that stuff, plus whatever else will fit / would be a pain in the rear end to reacquire. Like I just throw all my music on there because the vast majority of it was ripped by me off CD/Vinyl, and while I still have those in storage I sure as poo poo don't want to re-rip and musicbrainz hundreds of discs again.

I just have a little bash script to automate that process, and it doubles as a record of what stuff I'm doing that for.
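Roughly this kind of thing, for anyone curious - a minimal sketch, and the paths are made up for the example rather than my actual layout:

code:
#!/bin/bash
# copy the replaceable-but-annoying stuff onto the unshucked easystore and
# keep a running list of which directories are covered (paths are placeholders)
set -eu
DEST=/mnt/easystore
: > "$DEST/tracked-dirs.txt"
for dir in /srv/music /srv/rips; do
    rsync -a "$dir" "$DEST/"
    echo "$dir" >> "$DEST/tracked-dirs.txt"
done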

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Incessant Excess posted:

Anyone here know a good resource for NAS reviews? I'm looking to replace my seemingly broken Synology DS918+ and am wondering if the Qnap TS653D is a good pick, mainly interested in running various Docker containers as well as Plex transcoding.

ServeTheHome does a ton of NAS-oriented reviews of everything from amateur to prosumer to actual server gear. And what you can't find on the site, people are often discussing on their forum. They also have a "hot deals" subforum where people post good deals on hardware.

while I haven't read an overall review of that unit, it looks pretty decent. I really like those Gemini Lake processors for NAS use - they are power efficient, they use standard x86 binaries/distros, they're reasonably fast (somewhere between a Core 2 Quad and a Nehalem i5 in performance), they have a very good media block for transcoding, they have HDMI 2.0b for 4K60 output (no HDR though) if you want to use them as a combo NAS/HTPC, and Intel's open-source Linux/Unix drivers are extremely good. That unit also has 2.5 GbE, which is a nice feature to have at this point, and a PCIe expansion slot to give you some expansion of your choice (there's lots of things you could do with it).

the only bad thing I have to say about it is that its predecessor (Apollo Lake) was known to die after a while (a long-standing bug in the Atom series), and the Gemini Lake processors still have at least one erratum where the USB 3.0 ports have a limit on how much they can write - an expected lifetime of something like 12 TB of writes. Not sure if that still applies to Gemini Lake Plus. But personally I have done a ton of writes on my Gemini Lake NUCs, run them for extended periods without shutdown, and haven't noticed any issues.

Paul MaudDib fucked around with this message at 22:41 on Aug 12, 2021

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
Thank you for your detailed response, just the kind of information I was hoping someone could provide. Much appreciated!

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



codo27 posted:

I mean if you're still buying Seagate then it's on you

I thought Seagate was good OK-ish again. Welp!

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Munkeymon posted:

I thought Seagate was good OK-ish again. Welp!

They are fine for normal human / home NAS use. But IT people are like cats: you scare them with something once and they never forget for the rest of their lives.

Just look at the Backblaze Q2 report: there's a Toshiba drive (usually looked at as a bastion of high quality HDDs) with a 4% failure rate, which is the second highest on the chart. So are we now saying Toshiba is bad? Of course not--especially since the sample size for those is tiny (<100 drives). Same with the chart-leading Seagate with a 5.5% failure rate--only 1,600 drives there.

There are plenty of Seagate drives in that chart that have competitive failure rates with everyone else. It should say something that a company like Backblaze opts to continue using large numbers of Seagates, even given the somewhat elevated failure rates of some of them, while they basically don't use Western Digital at all.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

SolusLunes posted:

For the media, just periodically export a list of filenames to your properly backed up directories so you can fetch it again when you rebuild your server.

That's a good idea. I assume I can just run some kind of recursive ls command and > to a text file on daily, weekly and monthly cron jobs? What's a good format for something like that in sh/bash?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Takes No Damage posted:

That's a good idea. I assume I can just run some kind of recursive ls command and > to a text file on daily, weekly and monthly cron jobs? What's a good format for something like that in sh/bash?

You could use a command like "find /path/to/whatever" - I think the output would be a suitable format for this.
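Something like this in cron would cover the daily/weekly/monthly listings - just a sketch, and /tank/media and /srv/backups are placeholders for your media share and a directory that actually gets backed up:

code:
# example /etc/cron.d/media-list entries (paths are placeholders)
0 3 * * *  root  find /tank/media -type f > /srv/backups/media-list-daily.txt
0 4 * * 0  root  find /tank/media -type f > /srv/backups/media-list-weekly.txt
0 5 1 * *  root  find /tank/media -type f > /srv/backups/media-list-monthly.txt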

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
Having set up my Synology NAS again after a software issue, I'd like to do what I always should have done and back up some configuration files (docker containers, DSM) somewhere outside my NAS. Is there a recommended way to do this? Cloud Sync and upload to Google Drive?

Warbird
May 23, 2012

America's Favorite Dumbass

Should just be a matter of automating a config backup and rsyncing it wherever. Digging out the command might be a pain though if it’s not a selectable dropdown somewhere.

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe

Warbird posted:

Should just be a matter of automating a config backup and rsyncing it wherever. Digging out the command might be a pain though if it’s not a selectable dropdown somewhere.

Yeah, on a Synology this stuff lives in a weird directory tree. Probably under /volume#/@appstore/[programname], but once you find it the relevant files are accessible like they are on Debian or whatever.
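If you just want the container configs off the box on a schedule, a minimal sketch would be something like the below - /volume1/docker and the backupbox host are placeholders, so check where your containers actually keep their configs first:

code:
#!/bin/sh
# run from DSM Task Scheduler or cron; copies container configs off the NAS
# /volume1/docker and backupbox:/backups/ are example names, not real paths
rsync -az /volume1/docker/ backupbox:/backups/synology-docker/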

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
My ZFS system hit 50% capacity and I think it now has issues with anything that writes.

When I attempt to do one of the below things with a large file (3 gigs or bigger, maybe less)

  • transfer over smb
  • transfer over sftp, ftps, or scp
  • move from a non-zfs to a zfs directory on the same system
  • doing a netcat (nc) connection and sending the file over that

the transfer starts fast at ~70MB/s, drops to ~10MB/s after a gig or so, and after the third gig goes down to basically nothing.

Depending on the transfer method, the transfer will either sit at nothing and eventually come back up to full speed (nc, which is what I have been doing for a while) or just kill the connection because it thinks the server is unresponsive. I think scp has a keep-alive ping it can do, so it won't auto-kill the transfer.

The ZFS properties I was changing that matter here are logbias and sync.

I think I set logbias to throughput and quickly changed it back to latency (the default), since with throughput, as I read it, writes go directly to the ZFS blocks instead of going through other methods like memory or the ZIL first. I think this is when I noticed SMB was hosed, because I was relying on sonarr and radarr for all transfers.

Setting logbias back to the default did nothing to resolve the smb/scp/transfer problem of speeds dropping to nothing.

Recently I set sync from standard to always. Now transfers always complete, just not as fast.

I watch the transfer stats via zpool iostat and htop, and the system never buffers anything in memory close to the size of the file being transferred, the ZIL is untouched, and all the hard drives are pegged on write speed.


any ideas?

BlankSystemDaemon
Mar 13, 2009



What's the free space fragmentation of the pool?

KKKLIP ART
Sep 3, 2004

Have there been any major thoughts on the TrueNAS Mini-x / Mini-x+? I like the TrueNAS software but also don't think I could buy the same hardware for less than they sell it on their site due to COVID pricing at the moment. Other options are some of the Synology systems, which seem nice for a plug and play experience.

BlankSystemDaemon
Mar 13, 2009



iXsystems use Ablecom as their ODM for the cases, and Ablecom has a list of all the products they make, including the new 5-bay ones.
However, sourcing the cases is usually quite difficult, as Ablecom only sells in batches of 100 units at a minimum - the only way I know of that people have done it successfully is through the group-purchase sites, which order them and then hope to sell them.

EDIT: Supermicro used to OEM the 4-bay variant as the 721TQ-250B, so you may be able to find some new-old stock.

BlankSystemDaemon fucked around with this message at 16:15 on Aug 15, 2021

Nulldevice
Jun 17, 2006
Toilet Rascal

BlankSystemDaemon posted:

iXsystems use Ablecom as their ODM for the cases, and Ablecom has a list of all the products they make, including the new 5-bay ones.
However, sourcing the cases is usually quite difficult, as Ablecom only sells in batches of 100 units at a minimum - the only way I know of that people have done it successfully is through the group-purchase sites, which order them and then hope to sell them.

EDIT: Supermicro used to OEM the 4-bay variant as the 721TQ-250B, so you may be able to find some new-old stock.

I actually have that Supermicro chassis. They're a lot more expensive on the secondhand market now than when I paid for mine ($160) - they're around $350 last I checked on eBay. They are great cases, however. I built a TrueNAS server in mine, used 4TB Toshiba drives in the drive cages (they're hot swap), and there's room for two 2.5" SSDs for the boot pool. You just need a power extension to reach the connector on the right side of the case. I put in an HBA for the backplane, and you just plug in two Molex connectors to power the drives in the cage. It does come with a PSU built in, so that's one less thing to worry about. The board mount slides out, which makes it easier to work on. Just bear in mind you'll need an adapter to hook up most motherboards to the case switches and lights and such - that was about $8 on Amazon. Passively cooled CPUs might not be the best way to go given the clearance between the case fan and the CPU cooler, since the cooler has to fit under the drive cage. Overall it's an easy case to build in and I would definitely recommend it for a small NAS build.

BlankSystemDaemon
Mar 13, 2009



Very recently (this week) I got a HPE Microserver Gen10+ with the ILO5 enablement kit, a Xeon 2224, and 16GB ECC memory (and I bought another 16GB ECC for $92.42) - so I've been using that as my primary server.
Only real downside is the lack of a backplane, which I think HPE could've put in without affecting the price too much - as it is, it means that there's no hot-plug support.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

BlankSystemDaemon posted:

What's the free space fragmentation of the pool?

code:
root@nektulos ~# zpool get capacity,size,health,fragmentation
NAME    PROPERTY       VALUE     SOURCE
neriak  capacity       50%       -
neriak  size           45.4T     -
neriak  health         ONLINE    -
neriak  fragmentation  14%       -
root@nektulos ~#
Not anywhere close to the 80% capacity where I hear the performance issues start stacking up

IOwnCalculus
Apr 2, 2003





My gut reaction to 14% fragmentation was "isn't that a lot" but then I pulled the same on my pool:

code:
$ zpool get capacity,size,health,fragmentation tank
NAME  PROPERTY       VALUE     SOURCE
tank  capacity       83%       -
tank  size           149T      -
tank  health         ONLINE    -
tank  fragmentation  28%       -
Welp. Still not seeing any performance issues.

Any drives throwing errors in dmesg?
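For anyone following along, the quick checks I'd start with - smartctl is from smartmontools, which hasn't come up in this thread, so treat it as a generic suggestion rather than anything specific to this setup:

code:
# look for I/O errors, timeouts or resets in the kernel log
dmesg | grep -iE 'error|timeout|reset'
# then pull SMART data for any suspect disk (device name is just an example)
smartctl -a /dev/sda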

BlankSystemDaemon
Mar 13, 2009



EVIL Gibson posted:

code:
root@nektulos ~# zpool get capacity,size,health,fragmentation
NAME    PROPERTY       VALUE     SOURCE
neriak  capacity       50%       -
neriak  size           45.4T     -
neriak  health         ONLINE    -
neriak  fragmentation  14%       -
root@nektulos ~#
Not anywhere close to the 80% capacity where I hear the performance issues start stacking up
That 80% is something someone pulled out of their rear end once, before spacemap v2 had been added (which is what gives the option of reporting free space fragmentation).
Free space fragmentation, as the name suggests, gives an indication of how much of the free space is fragmented - i.e. assuming all records are written at the maximum allowed size, how many of them will be fragmented, preventing them from being written contiguously and sequentially.
Therefore, high free space fragmentation combined with low free space correlates quite strongly with decreased write speeds, because ZFS has to work harder in order to allocate space on the disks.

In your case, however, that basically can't be the reason - and the only thing I know of that can cause behavior like what you're seeing is that one of your drives may be silently timing out as a result of an internal failure.

If you're on FreeBSD, do you have kern.cam.(ada|da).retry_count=0 and kern.cam.(ada|da).default_timeout=<30?
Similarly, have you tried observing disk access patterns through gstat(8)? It provides a much lower-level overview of which disks may or may not be exhibiting numbers outside the normal/expected ranges - it'd of course be best if you had historical data through Prometheus (which has a gstat exporter), but even without it, as long as you have enough disks that are performing as expected, you should still be able to see if one disk isn't behaving like it should.
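Roughly what that looks like in practice - the tunable names are the ones from the paragraph above, whether they're settable at runtime or only as loader tunables may depend on your release, and the gstat flags are just the ones I'd reach for:

code:
# check, then set, the CAM retry/timeout tunables (ada shown; use da for da(4) devices)
sysctl kern.cam.ada.retry_count kern.cam.ada.default_timeout
sysctl kern.cam.ada.retry_count=0 kern.cam.ada.default_timeout=5

# watch per-disk busy%, latency and queue depth while reproducing the stall
gstat -p -I 1s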

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
I have a Synology NAS with a storage pool in SHR; if I want to expand it, I can do so by adding a drive that's as big as or bigger than the smallest drive in that pool, right?

To give a concrete example, I have a pool made up of 2x 16TB and 2x 10TB drives, can I add another 10TB drive to this pool or does it need to be 16TB? I looked at the documentation and I believe adding a 10TB drive should be possible but I just want to make absolutely certain.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Synology have a calculator for this.


https://www.synology.com/en-us/support/RAID_calculator

Putting in your example, yes. You'll have 46TB of available space and 16TB for redundancy.
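For reference, the arithmetic behind those numbers: 2×16TB + 3×10TB = 62TB raw, SHR reserves one largest-drive's worth (16TB) for redundancy, and 62 − 16 = 46TB is what's left usable.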

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
Unfortunately the calculator doesn't really answer my question, as it's not about total storage space but about software restrictions on how you can expand an existing pool. For example, in the calculator you can add a 16TB drive as the first drive and then add a 10TB drive as the second drive, which is something I know first hand SHR doesn't actually let you do in a real world scenario.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Oh right, sorry, I misread. Well, as you already have 10TB drives in there, then yes, you can add more 10TB ones.


Synology posted:

If an SHR storage pool is composed of three drives (2 TB, 1.5 TB, and 1 TB), we recommend that the newly-added drive should be at least 2 TB for better capacity usage. You can consider adding 1.5 TB and 1 TB drives, but please note that some capacity of the 2 TB drive will remain unused.

hogofwar
Jun 25, 2011

'We've strayed into a zone with a high magical index,' he said. 'Don't ask me how. Once upon a time a really powerful magic field must have been generated here, and we're feeling the after-effects.'
'Precisely,' said a passing bush.
I have built a server that I want to run some sort of expandable RAID on (with at least three 12TB+ drives). I am currently running OMV, which allows for software RAID, which I think is expandable (except for JBOD and 0), but I could use ZFS-based RAID instead? From my research ZFS has many benefits, but it's not easily growable - is that correct?

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?

hogofwar posted:

I have built a server that I want to run some sort of expandable RAID on (with at least three 12TB+ drives). I am currently running OMV, which allows for software RAID, which I think is expandable (except for JBOD and 0), but I could use ZFS-based RAID instead? From my research ZFS has many benefits, but it's not easily growable - is that correct?

If you want a storage solution with redundancy where you can later add one disk at a time to increase capacity, ZFS is probably not (yet) for you.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

hogofwar posted:

I have built a server that I want to run some sort of expandable RAID on (with at least three 12TB+ drives). I am currently running OMV, which allows for software RAID, which I think is expandable (except for JBOD and 0), but I could use ZFS-based RAID instead? From my research ZFS has many benefits, but it's not easily growable - is that correct?

Yes, correct.

The main option for expanding one drive at a time is UnRAID; there are a few others, but I haven't tried them, so someone else might want to chime in.

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry

Keito posted:

If you want a storage solution with redundancy where you can later add one disk at a time to increase capacity, ZFS is probably not (yet) for you.



Matt Zerella posted:

Yes, correct.

The main option for expanding one drive at a time is UnRAID; there are a few others, but I haven't tried them, so someone else might want to chime in.

To add to this, unRAID's storage is pretty much JBOD with a parity disk (up to 2). So it has the advantage over other RAID levels that if you lose your parity disk and a data disk, you only lose the data that was on the data disk you lost.

unRAID does not have read/write performance as good as RAID 5-type systems, but you can mitigate this with optional SSD cache drives.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I forgot, what's the deal with Mellanox infiniband drivers on windows? iirc there was some thing where the old driver (OFED) didn't support newer versions of windows.

I have a ConnectX-2 card; I tried a while back and even in non-RDMA mode (edit: IPoIB) I couldn't get it to come up in infiniband mode at all under windows. May have been my fault (some configuration I missed?) but it worked perfectly under Linux. I ended up just using it in Ethernet mode.

if I wanted to do RDMA samba on windows (for the full 40gb/s speed) I'd obviously need an enterprise key for Windows, but as far as adapters, if I just ponied up for a connectx-3 generation card, would it actually just plug and play under windows?

realistically though I think I'm just gonna do ethernet instead and not worry too much about infiniband anymore, it's just way easier to only have one network.

Mikrotik now has a big-boy version of their desktop switches for a very reasonable price ($459 for 24 10GbE SFP+ ports, plus two 40GbE ports for uplinks or connecting between switches). With consumer gear that does leave you in the unfortunate position of needing SFP+ base-T modules at about $50 a pop for 10GbE/multi-gig (and of course there are some compatibility concerns), but the options for anything with native base-T ports are still pretty bleak. I'd actually like a mix of both, or at least a couple of SFP links for my server and some connections between switches, but there aren't too many great options that have both. QNAP has an interesting one where it has 12 ports with 4 dedicated SFP and you can mix-n-match the other 8 between SFP and base-T, but it's $619 for 12 ports and it's unmanaged (so no connection bonding for dual 10GbE to my NAS). Netgear has a nice-looking 10-port (2x SFP 10GbE, 4x 10GbE base-T multi-gig, 4x 2.5GbE base-T multi-gig), but it's out of stock everywhere, and the TP-Link alternative is 12-port but other than the 2x SFP links they're all 2.5Gbit multi-gig.

Paul MaudDib fucked around with this message at 06:42 on Aug 20, 2021

Sheep
Jul 24, 2003
Here's my old Infiniband-at-home trip report post:

Sheep posted:

I just got an Infiniband network running at home using two MHGH28-XTC cards - probably one of the more painful setups I've ever had to deal with (and one card having a broken firmware didn't help), but it's nice finally having a setup where my RAID array is the new bottleneck and I can move stuff around without destroying the LAN for everyone.

Haven't bothered trying SRP or anything but IPoIB works well enough that it's not a big deal. Might get a second cable and hook up port 2 and see what kind of speeds I can get ramdisk-to-ramdisk.

:feelsgood:

Edit: here's a rundown on getting IPoIB going in case anyone else is a masochist:
When using cards this old, you need to run a super old driver version - for Windows 10 I had to dig up MLNX_VPI_WinOF-3_2_0_wlh_x64 and run the installer in compatibility mode with Windows 7. No other driver version worked, period.

For CentOS, you can just yum groupinstall -y "Infiniband Support". Unless you somehow have a physical IB switch in the mix you'll need the opensm package as well. Once that's all done reboot and chkconfig both rdma and opensm on - you should be able to see the card as a normal network interface and configure it as such. Once opensm comes up and everything polls in (~60 seconds) you'll get link up on both ends (and SUBNET UP messages in /var/log/opensm.log) and are more or less good to go as far as the hard stuff.

You can get some useful info such as Port GUID and what not by running ibstat if you install the infiniband-diags package.

As you found, it "just works" on Linux, so what I wound up doing was pulling the card from the Windows box and putting it in a spare miniITX chassis with a LFF backplate and using that as a staging system instead. Much better experience all around. ConnectX-3 seem to have a native Windows 10 client so you should be good there if you were to go that route.
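Condensed copy-paste version of the CentOS bits from the quoted post, in case anyone wants a starting point - the group/package/service names are as described above, so double-check them against your release:

code:
# CentOS-era IPoIB bring-up, per the quoted post
yum groupinstall -y "Infiniband Support"
yum install -y opensm infiniband-diags   # opensm only needed without a hardware IB switch
chkconfig rdma on
chkconfig opensm on
reboot
# after reboot, configure the IB interface like a normal NIC, then sanity-check with:
ibstat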

Sheep fucked around with this message at 06:48 on Aug 20, 2021

Impotence
Nov 8, 2010
Lipstick Apathy
Has mikrotik/qnap's security record improved any lately?

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
I know that QNAP got hit with a zero-day a few months ago.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
They also got caught in April with hardcoded admin passwords in the firmware.

BlankSystemDaemon
Mar 13, 2009



Sheep posted:

Here's my old Infiniband-at-home trip report post:

As you found, it "just works" on Linux, so what I wound up doing was pulling the card from the Windows box and putting it in a spare miniITX chassis with a LFF backplate and using that as a staging system instead. Much better experience all around. ConnectX-3 seem to have a native Windows 10 client so you should be good there if you were to go that route.
If memory serves me correctly (it's been a while), on FreeBSD it's configured by doing service opensm enable and service opensm start, then configuring the interfaces and bringing them up with ifconfig like anything else.
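i.e. something along these lines - ib0 and the address are just example values:

code:
# enable and start the subnet manager, then treat the IB interface like any other NIC
service opensm enable
service opensm start
ifconfig ib0 inet 192.168.50.1/24 up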

BlankSystemDaemon
Mar 13, 2009



Trip report with FreeBSD on my new HPE Microserver Gen10+ with a Xeon E-2224 @ 3.4GHz (boost to 4.6GHz for +30 seconds), 32GB memory, a 10G SFP+ X520 NIC and 3x 6TB + 1x 8TB (both because I didn't have four of any single size, and to prove that ZFS can expand in ways people don't seem to think it can, even if you don't have SAS backplanes and all sorts of nonsense).

While the old HP Microserver Gen7 N36L @ 1.3GHz could do wirespeed ethernet for bulk transfers with a bit of buffer tweaking, one thing I don't think I'd realized was how much all the small transfers involved in listing directories and such were affected.
The new Gen10+ is blazing fast, without any noticeable lag when listing even huge directories over 1Gbps RJ45 - it feels indistinguishable from browsing local SSD storage on my ThinkPad T420, and that's without any kind of tweaking.

I have my HP Proliant DL380p Gen8 connected via Intel X520 too, so when I next find the need to boot it to build something for the jails on the Microserver with Poudriere, I plan on doing some network-to-network and disk-to-disk performance tests over 10G SFP+ before and after tweaking.

EDIT: Obviously part of the reason for the speed is that, disregarding the sheer clock speed difference, the N36L is a core from 2007 while the E-2224 was released Summer 2019 - so even if we assume AMD had parity with Intel back then (which they definitely didn't), it's more than 100% improvement in instructions per clock.
Another factor is that the memory speed has gone from DDR3-800 to DDR4-2666 - so it's more than tripled the memory speed.

EDIT2: Oh, and looking at the power meter I've hooked up to it, at ACPI C2 idle levels, it uses less power than the Microserver.

BlankSystemDaemon fucked around with this message at 19:42 on Aug 20, 2021

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.

Mega Comrade posted:

They also got caught in April with hardcoded admin passwords in the firmware.

Yikes. Is Asustor recommended over QNAP nowadays in the world of set-it-and-forget-it boxes, for those not looking to pay the premium for Synology?


Sheep
Jul 24, 2003

BlankSystemDaemon posted:

EDIT: Obviously part of the reason for the speed is that, disregarding the sheer clock speed difference, the N36L is a core from 2007 while the E-2224 was released Summer 2019 - so even if we assume AMD had parity with Intel back then (which they definitely didn't), it's more than 100% improvement in instructions per clock.
Another factor is that the memory speed has gone from DDR3-800 to DDR4-2666 - so it's more than tripled the memory speed.

EDIT2: Oh, and looking at the power meter I've hooked up to it, at ACPI C2 idle levels, it uses less power than the Microserver.

Going from the N36L to the X3421 Gen10 was a world of difference for me, I can't imagine what the Gen10 Plus is like.
