BlankSystemDaemon
Mar 13, 2009



CopperHound posted:

I forgot I have a third option that gives me a high chance of putting my backups to the test: Creating a degraded pool! :v:
That's even dumber than it needs to be - if you really want to do this, load geom_nop.ko, use gnop(8) to create a gnop device, and use that.
It turns all writes into zeroes, returns zeroes on all reads, and doesn't take up actual diskspace like a truncated file would.
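
A rough sketch of how I'd wire that up - device names and sizes are made up, and check mdconfig(8)/gnop(8) before copying, since this is from memory:

code:
kldload geom_nop
mdconfig -a -t swap -s 10T -u 0     # sparse swap-backed fake disk, shows up as /dev/md0
gnop create md0                     # nop provider on top, /dev/md0.nop
zpool create tank raidz2 da0 da1 da2 md0.nop
zpool offline tank md0.nop          # pool now runs degraded without burning a real disk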

VostokProgram posted:

Could you in theory write a script to rebalance a zfs pool by disabling allocations on the more full vdevs and then cp-ing a bunch of files around until the vdevs are mostly balanced?
If you're going to be moving files around anyway, it's easier to just, for example, zfs rename tank/freebsdisos tank/freebsdisos1, then zfs send tank/freebsdisos1 | mbuffer -m 1G -s 128k | zfs receive tank/freebsdisos.
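
Spelled out a bit more - zfs send operates on snapshots, so there's one extra step the one-liner glosses over:

code:
zfs rename tank/freebsdisos tank/freebsdisos1
zfs snapshot tank/freebsdisos1@move
zfs send tank/freebsdisos1@move | mbuffer -m 1G -s 128k | zfs receive tank/freebsdisos
zfs destroy -r tank/freebsdisos1    # once you've verified the new copy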

However, there's always the issue of free/unallocated space, because what happens on vdev removal is that that space instantly disappears - which has implications for whether it's even possible to allocate if you suddenly run out of space after you've started the vdev removal (it can't happen before that, as the removal needs enough free space on the remaining devices to even start).

So, what I'm saying is, I'm going to let someone else test it, or deliberately not use a production setup if I'm ever going to test it.


Devian666
Aug 20, 2008

Take some advice Chris.

Fun Shoe

Corin Tucker's Stalker posted:

It's been a few days since I got the QNAP TS-230 as my first NAS. Here are a few thoughts from a total newcomer and partial idiot.

Setup was easy. Being farted in the face with a million notices when I first launched QTS was unpleasant. I got a little lost in the weeds looking at apps I didn't need, launching things I didn't understand that created folders I didn't want, and went to bed slightly regretting the purchase.

Then the next day I thought... what specific uses did I buy this for? So I focused on those things. Three of the four uses were simply SMB shares, so I focused on setting those up. Easy. And on the first attempt I was able to share games to both my PS2 and MiSTer FPGA. Now this thing feels like magic.

So far I only have one hiccup, which is that both the PS2 and VLC on Xbox (I think) require SMB 1. I know how to enable it. It's just odd having to do so, as it's apparently a security risk. I did set a limit on login attempts and only allow connections from my local network, though, so hopefully that helps.

As you've found out, it's best to focus on getting the box running first, then play around with apps and other features later.

DLNA and similar network features are security risks. However, if you do not allow external connections to the NAS, they are unlikely to be an issue.

CerealKilla420
Jan 3, 2014

"I need a handle man..."
So I got my DS920+ and I put one of my 14tb shucked WD Red drives in it.

I've played around with the settings, but no matter what I do I cannot seem to get the file transfer speed to top 80MB/s. I've looked on the Synology forums and have played with all of the different network settings as recommended, to no avail.

I should also note that my router is gigabit (1000Mbps, full duplex), the MTU is 1500, and both my computer and the NAS are connected via Ethernet to the same router.

Also, in Resource Monitor I still have a tremendous amount of network and drive headroom. Transfer speed holds consistently at exactly 80MB/s.

Any advice here? It's not the raid situation because I only have the one drive at the moment.

BlankSystemDaemon
Mar 13, 2009



First step to figuring out these kinds of issues is to begin rootcausing, by simply running tests locally on the Synology; so far as I'm aware, you should still be able to get ssh access to a regular busybox Unix-like shell.

Once you've proven that your storage can handle it, the next step is to check the Samba configuration, et cetera, ad nauseam.
Things that can make a big difference are TCP_NODELAY (which ensures that Samba doesn't try to opportunistically back off if it senses that the client can't keep up; the heuristics for this back-off are not very good if you're trying to utilize all of your bandwidth), as well as SO_RCVBUF and SO_SNDBUF (which increase the receive and send buffer sizes, ensuring bigger chunks of disk I/O are being worked with).
EDIT: There's also IPTOS_LOWDELAY, which gets mentioned in a lot of tuning guides, but I'm not completely sure it means anything nowadays, as I don't know of any gear that respects the Type of Service field of the IP packet.
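
As a concrete starting point - the volume path is a guess and modern Samba defaults are often already sane, so treat this as a sketch rather than gospel:

code:
# local disk test over ssh, taking the network out of the equation
dd if=/dev/zero of=/volume1/testfile bs=1M count=4096 conv=fdatasync
dd if=/volume1/testfile of=/dev/null bs=1M

# the smb.conf knobs mentioned above, in the [global] section
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072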

Also, I'm just gonna quote a nerd who posted about this previously:

BlankSystemDaemon posted:

Even if you're using NFS over UDP (which neither SMB nor Samba offers), Gigabit Ethernet tops out at 125MBps - and since you're probably using TCP, which has an overhead of about 7MBps at Gigabit wirespeed, you should be right around 118MBps.
Even with 9k jumboframes, wirespeed tops out at around 123MBps.
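
For anyone who wants the back-of-the-envelope math behind those numbers (assuming the usual 38 bytes of Ethernet framing plus 40 bytes of TCP/IP headers per packet):

code:
1 Gbit/s / 8 bits per byte                = 125 MBps raw wirespeed
1500 MTU: 1460 payload / 1538 on the wire = ~95% efficiency -> ~118 MBps
9000 MTU: 8960 payload / 9038 on the wire = ~99% efficiency -> ~123 MBps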

BlankSystemDaemon fucked around with this message at 17:07 on Dec 1, 2021

CerealKilla420
Jan 3, 2014

"I need a handle man..."

BlankSystemDaemon posted:

First step to figuring out these kinds of issues is to begin rootcausing, by simply running tests locally on the Synology; so far as I'm aware, you should still be able to get ssh access to a regular busybox Unix-like shell.

Once you've proven that your storage can handle it, the next step is to check the Samba configuration, et cetera, ad nauseam.
Things that can make a big difference are TCP_NODELAY (which ensures that Samba doesn't try to opportunistically back off if it senses that the client can't keep up; the heuristics for this back-off are not very good if you're trying to utilize all of your bandwidth), as well as SO_RCVBUF and SO_SNDBUF (which increase the receive and send buffer sizes, ensuring bigger chunks of disk I/O are being worked with).
EDIT: There's also IPTOS_LOWDELAY, which gets mentioned in a lot of tuning guides, but I'm not completely sure it means anything nowadays, as I don't know of any gear that respects the Type of Service field of the IP packet.

Also, I'm just gonna quote a nerd who posted about this previously:

Ok so these speeds are pretty normal?

So if I wanted to improve the speed by any significant amount, I would pretty much need to buy a 10Gbit router and PCI card, right?

Dang... That's pretty rough - I guess I'll have to make the jump eventually. It's going to take 20 hours to move my 8TB of data over to this thing but I guess I'll just have to let it run its course.

BlankSystemDaemon
Mar 13, 2009



CerealKilla420 posted:

Ok so these speeds are pretty normal?

So if I wanted to improve the speed by any significant amount, I would pretty much need to buy a 10Gbit router and PCI card, right?

Dang... That's pretty rough - I guess I'll have to make the jump eventually. It's going to take 20 hours to move my 8TB of data over to this thing but I guess I'll just have to let it run its course.
That's not really what I was trying to say, no.

My point is you need to figure out what the bottleneck is, instead of guessing - but that the upper limit you're likely to reach is ~123MBps.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I've been thinking about migrating my home NAS setup from mdadm on CentOS 7, which is still supported for a few years but seems like possibly a dead end, to ZFS on Ubuntu Server. Here are the specs:

Supermicro X8SIL-F - Intel 3420 chipset, LGA1156, uATX
Xeon L3426 - 4C/8T, turbo up to 3.2GHz
16GB (4x4) ECC 1333MHz DDR3
HP CN1100E 10Gb NIC
LSI SAS2008 RAID controller in IT mode - the whole RAID is connected to this, other drives to the onboard SATA
1x Crucial 240GB BX200 - OS disk, XFS for now
6x WD Red 10TB SATA (shucked) - currently in mdadm RAID 6 with XFS on top
1x Seagate 3TB NAS drive - standalone XFS, will be used unchanged in new system at least until RAID is restored

Currently my plan is:

0) Set up another old desktop on an Unraid trial with all my other spare HDDs, which with one parity disk should have enough space.
1) Back up the entire contents of the mdadm array to Unraid. The standalone drive will be used unchanged on the new system.
2) Shut down the NAS, take out the system SSD and back it up onto another system (or just use another SSD entirely, if I can find one).
3) Install Ubuntu fresh onto the NAS. I'm thinking Ubuntu Desktop LTS with a minimal install over Server, since I like having a DE available.
4) Set up the six drives as a RAIDZ2.
5) Copy back the contents of the Unraid system.
6) Proceed to set up Samba etc.

I assume Ubuntu doesn't have any problem using XFS drives even if ext4 is more of the default, and while the 11-year-old hardware might not make the most performant ZFS NAS out there, it should be fine for a few Samba users. I see some old statements about wanting 1GB RAM per 1TB of storage, but I also thought that was discussed in this thread and isn't current advice anymore. Are there any problems I've missed here?
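
For step 4, I'm picturing something along these lines - the by-id names below are placeholders, and ashift=12 is an assumption for 4K-sector drives:

code:
sudo zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-WDC_WD100_DISK1 /dev/disk/by-id/ata-WDC_WD100_DISK2 \
    /dev/disk/by-id/ata-WDC_WD100_DISK3 /dev/disk/by-id/ata-WDC_WD100_DISK4 \
    /dev/disk/by-id/ata-WDC_WD100_DISK5 /dev/disk/by-id/ata-WDC_WD100_DISK6
sudo zfs set compression=lz4 tank
sudo zfs set atime=off tank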

Eletriarnation fucked around with this message at 18:47 on Dec 1, 2021

BlankSystemDaemon
Mar 13, 2009



ZFS's defaults still work on the 2005-era systems it was built for (1-1.4GHz 64-bit SPARC CPUs with between 4 and 8 cores) - so I wouldn't be too worried about performance.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I've been looking to reactivate my NAS after cleaning the dust off it. One thing that always pissed me off back then was that despite having fancy Mellanox cards, I could never use RDMA for iSCSI, because Microsoft was being a bitch and cockblocked manufacturers in favor of their own SMB Direct stuff, which however isn't block storage.

I've been testing NVMe-oF against a ZVOL for giggles (only via TCP so far), by using the nvmet driver on Linux and the Starwind initiator (since both supposedly do RDMA), and surprisingly enough it seems to work despite not being an actual NVMe device.
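
For anyone curious, the nvmet side is all configfs - this is roughly what mine looks like, with the subsystem name, address, and zvol path made up for the example:

code:
modprobe nvmet
modprobe nvmet-tcp
cd /sys/kernel/config/nvmet
mkdir subsystems/testnqn
echo 1 > subsystems/testnqn/attr_allow_any_host
mkdir subsystems/testnqn/namespaces/1
echo /dev/zvol/tank/nvmetest > subsystems/testnqn/namespaces/1/device_path
echo 1 > subsystems/testnqn/namespaces/1/enable
mkdir ports/1
echo tcp > ports/1/addr_trtype
echo ipv4 > ports/1/addr_adrfam
echo 192.168.1.2 > ports/1/addr_traddr
echo 4420 > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/testnqn ports/1/subsystems/testnqn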

Anyhow, this shucked WD drive thing, how much of a lottery is it? I'd certainly like cheap drives for a three drive mirror for some read IOPS when hitting the disk.

Rescue Toaster
Mar 13, 2003
I've shucked eight of the 8TB WD externals and I think 6/8 were the helium-filled ones, across 4 different models. One failed in badblocks testing, and one is a cold replacement. No failures in operation after 3 years or so. I'm OK with them not being identical, since in the long run that seems less likely to lead to weird cluster failures. I clipped the 3.3V lines on my PSU's SATA power cables, otherwise they wouldn't spin up. They all supported the timeout setting thing you need for RAID, whose name escapes me at the moment.

The non-helium one definitely runs a few degrees warmer than the others but not egregiously so.
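
The timeout thing being half-remembered there is presumably SCT Error Recovery Control (what WD brands as TLER); with smartmontools it's along the lines of:

code:
smartctl -l scterc /dev/sdX        # show the current read/write recovery limits
smartctl -l scterc,70,70 /dev/sdX  # cap both at 7.0 seconds (units of 100ms)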

Rescue Toaster fucked around with this message at 21:49 on Dec 1, 2021

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Won't the helium ones have a shorter lifespan, since that poo poo escapes through every gasket known to man?

CopperHound
Feb 14, 2012

Combat Pretzel posted:

Won't the helium ones have a shorter lifespan, since that poo poo escapes through every gasket known to man?
I might be wrong about this, but I assume there would need to be a pressure gradient for this to happen.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

CopperHound posted:

I might be wrong about this, but I assume there would need to be a pressure gradient for this to happen.

Since the atmospheric partial pressure of helium is very low there would be a gradient by definition. Not sure about the measured pressure though. Interesting topic! Does seem like a waste of helium in some ways.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

CopperHound posted:

I might be wrong about this, but I assume there would need to be a pressure gradient for this to happen.

helium is actually such a small molecule that it will migrate right through the metal of the drive and leak out over time, even without a pressure gradient. It's not fast but they do lose pressure on the order of a decade or two.

one of the SMART sensors actually monitors how much helium it has left, not quite sure how that works

Paul MaudDib fucked around with this message at 22:26 on Dec 1, 2021

CopperHound
Feb 14, 2012

Paul MaudDib posted:

helium is actually such a small molecule that it will migrate right through the metal of the drive and leak out over time, even without a pressure gradient. It's not fast but they do lose pressure on the order of a decade or two.

one of the SMART sensors actually monitors how much helium it has left, not quite sure how that works
Fascinating. Does that mean the drive will have more of a negative relative pressure over time?

BlankSystemDaemon
Mar 13, 2009



CopperHound posted:

Fascinating. Does that mean the drive will have more of a negative relative pressure over time?
Yes, but that's not really going to hurt it.
The reason helium is used is that it allows the ride-height of the head to be lower than the standard 0.9 nanometers, which you can't do with a normal air mix, as the size of the molecules interferes with the ride-height.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Paul MaudDib posted:

one of the SMART sensors actually monitors how much helium it has left, not quite sure how that works

You know how helium makes air go through your vocal cords faster, and gives you a higher pitched voice?

A tiny microphone in the drive can tell if the frequency has decreased from the sound of the drive motor spinning, and calculates how much helium is left

tuyop
Sep 15, 2006

Every second that we're not growing BASIL is a second wasted

Fun Shoe

BlankSystemDaemon posted:

That's not really what I was trying to say, no.

My point is you need to figure out what the bottleneck is, instead of guessing - but that the upper limit you're likely to reach is ~123MBps.

This is what my Synology reads over the LAN, so there's definitely some kind of bottleneck happening.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Bob Morales posted:

You know how helium makes air go through your vocal cords faster, and gives you a higher pitched voice?

A tiny microphone in the drive can tell if the frequency has decreased from the sound of the drive motor spinning, and calculates how much helium is left

That is so cool!

CerealKilla420
Jan 3, 2014

"I need a handle man..."

BlankSystemDaemon posted:

That's not really what I was trying to say, no.

My point is you need to figure out what the bottleneck is, instead of guessing - but that the upper limit you're likely to reach is ~123MBps.

Ok that makes sense... After running a couple of local tests, it turns out my drive speeds were being throttled by my RAID config (two slow 2TB SMR drives RAID 0'd together along with my two 14TB CMR drives) - with that setup I was topping out at around 150MB/s, which is about the upper end of what the slower drives can do (which makes sense). I decided to RAID 1 the two slow drives together so I can back up documents and all of my music, and then put my two 14TB CMR drives in SHR for all of my larger video files. Local read/write operations on this faster storage pool top out at about 285MB/s, which is exactly what I expected.

I'm still only getting between 85-90MB/s even when moving files directly to the faster storage pool, which, considering the limitations of my networking setup, is fine.

I finally moved my largest chunks of data over so I should be good to go from here.


I just didn't realize that transferring files from my computer to the NAS would be so slow without 10 gigabit networking. I guess now I understand why most people run programs in Docker that automatically download/pull files directly to the NAS for them.

I'm having a lot of fun playing around with this thing. I got my UPS hooked up to the NAS and it appears to be working, shutting itself down after a test run. I also really like that you can set a schedule for the LEDs on the front of the NAS, because it runs in my bedroom. Really cool that they thought to include that feature.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

BlankSystemDaemon posted:

Yes, but that's not really going to hurt it.
The reason helium is used is that it allows the ride-height of the head to be lower than the standard 0.9 nanometers, which you can't do with a normal air mix, as the size of the molecules interferes with the ride-height.
Isn't it still reliant on the correct amount of gas to be there to maintain correct ride height, i.e. aerodynamics performing as expected?

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Isn't it still reliant on the correct amount of gas to be there to maintain correct ride height, i.e. aerodynamics performing as expected?
I'm not a fluid dynamics expert, but I don't think so?

The primary reason, as I said, has to do with the size of the molecules that air consists of.

Rescue Toaster
Mar 13, 2003
I guess eventually the pressure of the helium inside will drop to the partial pressure of the helium outside (which is really low) and the inside will be basically a vacuum. If you pass the point where there's not enough gas to keep the heads far enough off the surface, well, then that's all she wrote.

On the one hand, I have some old drives that I can plug in and they'll still work more than 20 years later, which is cool... on the other hand, those are just things I have sitting around; it's not like I'm counting on the data being there. If anything, I need to get better about destroying drives before they get too old to be reliable - something to keep in mind with these helium ones, I guess.

VelociBacon
Dec 8, 2009

Since there's already discussion of fluid dynamics and partial pressures, you guys might find it interesting that we use helium as a blended gas (with oxygen) in respiratory critical care in order to reduce the resistance in airways that are compromised in a variety of ways (up in the throat or down in the lungs or in between). We can do that with or without intubating and can even give a hypoxic 18% O2 82% helium mixture if the situation requires that slippery of a gas.

I want to say that if helium were escaping from a drive, it would have to be replaced by room air. Unless it was being actively pumped out, you can't really get a naturally occurring vacuum from passive, equilibrium-type gas movement, right?

YerDa Zabam
Aug 13, 2016



There is a SMART value for the helium level. It's mentioned here, along with lifetime data.

https://www.backblaze.com/blog/helium-filled-hard-drive-failure-rates/
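
If you want to check a drive yourself, it should show up in the normal attribute dump - on the WD/HGST helium models it's usually attribute 22, Helium_Level, which starts out at 100:

code:
smartctl -A /dev/sdX | grep -i helium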

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Bob Morales posted:

You know how helium makes air go through your vocal cords faster, and gives you a higher pitched voice?

A tiny microphone in the drive can tell if the frequency has decreased from the sound of the drive motor spinning, and calculates how much helium is left

I totally made this up btw

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Bob Morales posted:

I totally made this up btw

:lol: you bastard

Teabag Dome Scandal
Mar 19, 2002


WD Red Plus are still decent, right? Newegg has 6TB on sale for ~$100 and I'm tired of waiting for the 8TB Easystores

Cold on a Cob
Feb 6, 2006

i've seen so much, i'm going blind
and i'm brain dead virtually

College Slice

Teabag Dome Scandal posted:

WD Red Plus are still decent, right? Newegg has 6TB on sale for ~$100 and I'm tired of waiting for the 8TB Easystores

WD Red Plus 6TB are CMR, if that's what you're worried about.

Teabag Dome Scandal
Mar 19, 2002


Cold on a Cob posted:

WD Red Plus 6TB are CMR, if that's what you're worried about.

I haven't bought new drives in quite a while so I don't know what I should or should not be worried about.

edit: to clarify a little more, I am assuming that WD Reds have not taken a nose dive in quality or reliability and there is no reason not to buy them if I am happy with the size and price

Teabag Dome Scandal fucked around with this message at 21:50 on Dec 3, 2021

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.

Teabag Dome Scandal posted:

I haven't bought new drives in quite a while so I don't know what I should or should not be worried about.

edit: to clarify a little more, I am assuming that WD Reds have not taken a nose dive in quality or reliability and there is no reason not to buy them if I am happy with the size and price

Reds are good, CMR is good, and $100 is a great price for 6TB. No reason not to pull the trigger.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Oh wow, I have RDMA to my NAS working via this NVMe-oF stuff using the Starwind initiator. With Microsoft's own driver at that. I've been focusing on Q1T1 random 4K reads from the NAS, since I figure that represents best when a game is reading randomly small resources. According to Diskmark.

- Using 1GBit Intel NICs I get 9-10MB/s from the ZFS ARC (i.e. reads from memory).
- Using 40GBit ConnectX3 I get 37MB/s from the ZFS ARC.
- Using RDMA on the ConnectX3 I get 90MB/s from the ZFS ARC.
- I get 63MB/s from my Samsung 970 EVO instead (after letting it cool down after a full Diskmark run).

I guess a triple disk mirror for multiple actuators and an L2ARC are a good plan. Too bad this thing is old enough to still feature DDR3, because I should probably increase the memory for caching.
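
If it helps anyone picture it, that layout is just the following (device names obviously hypothetical):

code:
zpool create tank mirror /dev/sda /dev/sdb /dev/sdc   # three-way mirror = three sets of read heads
zpool add tank cache /dev/nvme0n1                     # L2ARC on the NVMe drive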

RDMA to my undusted NAS: [DiskMark screenshot]
The 970 EVO: [DiskMark screenshot]
BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Oh wow, I have RDMA to my NAS working via this NVMe-oF stuff using the Starwind initiator. With Microsoft's own driver at that. I've been focusing on Q1T1 random 4K reads from the NAS, since I figure that represents best when a game is reading randomly small resources. According to Diskmark.

- Using 1GBit Intel NICs I get 9-10MB/s from the ZFS ARC (i.e. reads from memory).
- Using 40GBit ConnectX3 I get 37MB/s from the ZFS ARC.
- Using RDMA on the ConnectX3 I get 90MB/s from the ZFS ARC.
- I get 63MB/s from my Samsung 970 EVO instead (after letting it cool down after a full Diskmark run).

I guess a triple disk mirror for multiple actuators and an L2ARC are a good plan. Too bad this thing is old enough to still feature DDR3, because I should probably increase the memory for caching.

RDMA to my undusted NAS: [DiskMark screenshot]
The 970 EVO: [DiskMark screenshot]

The talk that mav@ gave at the FreeBSD Developer Summit touches on this:
https://www.youtube.com/watch?v=jeKLvleCQ9w&t=378s

Slide 18 of the presentation PDF is one you might find of particular interest:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Interesting. That actually led me to check up on the ZFS version on the Ubuntu install, and it's still at zfs-0.8.3, so I'm probably missing out on tons of stuff. Am gonna update to latest OpenZFS 2.1.1 and see if that changes anything.
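
(Quick ways to check what's actually loaded - both of these should exist on 0.8 and newer:)

code:
zfs version
cat /sys/module/zfs/version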

Also, it seems like that stupid E3C226D2I mainboard in my NAS is limited to 16GB due to chipset bullshit, in combination with only having two RAM slots. :(

BlankSystemDaemon
Mar 13, 2009



Most of the changes mav@ has done are to FreeBSD, iirc.

Crunchy Black
Oct 24, 2017

by Athanatos

Bob Morales posted:

I totally made this up btw

Truly masterful. :golfclap:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

BlankSystemDaemon posted:

Most of the changes mav@ has done are to FreeBSD, iirc.
While it would probably be interesting to run FreeNAS to get a decent administration interface instead of doing everything in the terminal, FreeBSD doesn't appear to have an NVMe-oF target driver. I'm sticking with NVMe-oF, because for iSCSI there isn't an RDMA-capable initiator for Windows.

--edit:
After researching it some more, they're moving TrueNAS to Linux, what?

--edit:
I guess TrueNAS Scale it is. I get the Free-/TrueNAS UI and can do the NVMe-oF stuff in the shell.

Combat Pretzel fucked around with this message at 13:50 on Dec 4, 2021

CopperHound
Feb 14, 2012

Combat Pretzel posted:

--edit:
I guess TrueNAS Scale it is. I get the Free-/TrueNAS UI and can do the NVMe-oF stuff in the shell.
Share a trip report. I can't decide if I want to run the Linux version or the BSD version with a VM to do other stuff.

Rescue Toaster
Mar 13, 2003
I'm still running... I guess it's called XigmaNAS now, which was NAS4Free? I run nothing except Samba and the UPS client on it, so it doesn't matter. But notification is a mess, since it's tied only to email. I'd much rather set up a push notification system like Pushover.

I'd also like to go to something linux-based just because I'm so much more familiar with it than BSD. Many years ago I did my own linux setup using mdadm raid and the setup wasn't too bad, but I switched for ZFS.


Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

CopperHound posted:

Share a trip report. I can't decide if I want to run the Linux version or the BSD version with a VM to do other stuff.
Gonna take a while, still fretting over datasheets for new disks. So far I've only installed it in a VM, and it looks like a duck, quacks like a duck, etc. If you drop to the shell, it's obviously Linux.

TrueNAS Scale seems to do everything TrueNAS Core does. If it comes out of beta and proves popular, I'm willing to bet it's a nail in the coffin of the BSD variant. It even appears to do ZFS boot.

  • Reply