BlankSystemDaemon
Mar 13, 2009



Methylethylaldehyde posted:

Sync writes. Sync writes will dumpster your throughput no matter how huge your array is, compared to Sync=off datasets.

The little optane SSDs as a write cache can make a huge difference, but god knows why it was misbehaving that badly. At least a reboot fixed it.
That's not RAID-level independent, and I have no idea what the performance characteristics of BTRFS are on the kind of gear found in a NAS, but since it's a Synology it probably means Samba serving SMB, which is asynchronous.
In addition to that, OP mentioned they're used as read caches.

If a reboot changed something, it only masked it.
Rebooting is never a solution to a problem in production, tracing is.
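
If anyone wants to rule sync writes in or out on their own pool, a minimal sketch (the dataset name is a placeholder):

code:
# check the current sync policy on the dataset
zfs get sync tank/dataset
# temporarily disable sync writes to compare throughput - unsafe for data
# that has to survive a power loss, so set it back once you've measured
zfs set sync=disabled tank/dataset
zfs set sync=standard tank/dataset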


Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



Oh good, copying 3TB over USB on my DS112j is going to take 5 days. I can't pause or cancel the job, or disable scheduled tasks that might interfere with it, because the copy apparently puts such a strain on the device that those apps stop responding. It doesn't help that the drive I'm copying to is some portable thing that likely contains a bottom-of-the-barrel SMR drive. I guess I should start looking to replace the whole kit and caboodle with something less anemic. Just bite the bullet and throw a bunch of money at it. Eugh.

Fozzy The Bear
Dec 11, 1999

Nothing much, watching the game, drinking a bud
I see the OP is 9 years old. I'm looking for a pre-built NAS server with at least 4 bays; I'm going to install TrueNAS on it. It will be my Plex server, so a CPU with Quick Sync would be nice (no Atom CPUs). I'd like it to look professional, "wife approved". I don't have a rack, so a smallish black box would be perfect.

Budget of $1000 or less before adding hard drives.

Rap Game Goku
Apr 2, 2008

Word to your moms, I came to drop spirit bombs


Case question. I built a new system a few months ago, and my old system has now been relegated to NAS/Plex server duty. However, it's still in the S340 case I built it in, which only has two 3.5" bays. I'm at the point where I'm going to need more than that, and thought I'd ask for recommendations for a full ATX case with a lot of internal drive space.

I'd prefer as small a footprint as possible, but I recognize I'm asking for two different things there.

Klyith
Aug 3, 2007

GBS Pledge Week

Flipperwaldt posted:

It doesn't help that the drive I'm copying to is some portable thing that likely contains a bottom-of-the-barrel SMR drive.

The SMR part doesn't make a difference for "write a TB of data all at once" -- you're writing a pure sequential pattern, which means the current write goes on top of the previous shingle track and the drive is basically 100% as fast as CMR. This is why, outside of the NAS space, you don't see that many people all :argh: about SMR drives. The normal write patterns of an HD for a single-user desktop are small random writes (handled by a landing cache) or big continuous writes (unaffected by SMR).

SMR doesn't get along with things like RAID 0 or 5, or a block-based FS like ZFS, because they divide data into smaller pieces. Those chunks may not align with the SMR zones, or may produce much less sequential write patterns. It's not that the drives are inherently slow and terrible; it's a bad interaction.




Dunno what's up with your bad transfer rate though: even a crummy 2.5" drive should be faster than that if it's modern enough to have 3TB capacity. I'd look at the enclosure or maybe the USB cable.
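
If you want to see the difference for yourself, a rough sketch with fio (point it at a scratch file on the SMR drive, and treat the paths and sizes as examples):

code:
# big sequential writes - drive-managed SMR should roughly keep pace with CMR
fio --name=seq --filename=/mnt/smr/testfile --size=8G --rw=write --bs=1M --direct=1
# small random writes - this is where SMR falls off a cliff once the
# CMR landing cache fills up
fio --name=rand --filename=/mnt/smr/testfile --size=8G --rw=randwrite --bs=4k --direct=1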

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



Klyith posted:

The SMR part doesn't make a difference for "write a TB of data all at once" -- you're writing a pure sequential pattern, which means the current write goes on top of the previous shingle track and the drive is basically 100% as fast as CMR. This is why, outside of the NAS space, you don't see that many people all :argh: about SMR drives. The normal write patterns of an HD for a single-user desktop are small random writes (handled by a landing cache) or big continuous writes (unaffected by SMR).

SMR doesn't get along with things like RAID 0 or 5, or a block-based FS like ZFS, because they divide data into smaller pieces. Those chunks may not align with the SMR zones, or may produce much less sequential write patterns. It's not that the drives are inherently slow and terrible; it's a bad interaction.




Dunno what's up with your bad transfer rate though: even a crummy 2.5" drive should be faster than that if it's modern enough to have 3TB capacity. I'd look at the enclosure or maybe the USB cable.
Oh, I thought some buffer was involved that could fill up, after which speeds dropped. I'm probably confusing it with something else. Or maybe it is that, but it doesn't apply to sequential writes like you're saying.

The crummy 2.5" drive is brand new, but otherwise untested. I don't doubt it'd manage better speeds if the port on the server weren't USB 2.0 and the CPU weren't a 1GHz ARM thing with a downright miserable 128MB of RAM.

I managed to disable the scheduled tasks though, after several attempts, so I can wait this out, technically. I'm probably better off raw dog unplugging it, formatting it again, and copying the files from the server over the network. I'll have to do the math on that.
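
Sketch of that math, for reference (the rates are guesses, not measurements):

code:
# 3TB at the ~7MB/s this copy is managing over USB 2.0:
echo $((3000000 / 7 / 3600)) hours     # ~119h, i.e. the 5 days quoted
# 3TB over gigabit SMB, assuming this anemic ARM box manages ~30MB/s:
echo $((3000000 / 30 / 3600)) hours    # ~27h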

BlankSystemDaemon
Mar 13, 2009



SMR can be used for RAID, and even for ZFS if you're careful about how you use the drives - but it's worth noting that the thing that makes consumer SMR drives lovely is that they're drive-managed, not host-managed.

There's an entire SCSI peripheral device type that was added to the specification to handle zoned storage, and there's no reason to believe it'd have any of the malignancies that drive-managed SMR has.
I've only heard rumours about the hyperscalers using the kind of host-managed high-capacity drives that we were initially promised, so it's likely not something any of us will touch.

Still, one way to use SMR drives if you've accidentally bought them and can't return them (because of shucking) is to write ZFS snapshot streams to them. They're entirely sequential, even when you restore from them.
As I've also mentioned before, that will eventually also be useful for recovering individual parts of a dataset that get URE'd to pieces during resilvers, thanks to the corrective receive feature, assuming it lands in OpenZFS 3.0.
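
A minimal sketch of that use (pool, snapshot, and mountpoint names are made up):

code:
# dump a recursive snapshot stream to a file on the SMR drive
zfs snapshot -r tank@backup-2022-05
zfs send -R tank@backup-2022-05 > /mnt/smr/tank-backup-2022-05.zfs
# restoring reads the stream back just as sequentially
zfs receive -F tank/restored < /mnt/smr/tank-backup-2022-05.zfs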

Flipperwaldt posted:

Oh, I thought some buffer was involved that could fill up, after which speeds dropped. I'm probably confusing it with something else. Or maybe it is that, but it doesn't apply to sequential writes like you're saying.

The crummy 2.5" drive is brand new, but otherwise untested. I don't doubt it'd manage better speeds if the port on the server weren't USB 2.0 and the CPU weren't a 1GHz ARM thing with a downright miserable 128MB of RAM.

I managed to disable the scheduled tasks though, after several attempts, so I can wait this out, technically. I'm probably better off raw dog unplugging it, formatting it again, and copying the files from the server over the network. I'll have to do the math on that.
Sounds like you're confusing it with the hybrid harddisk+SSD drives that got sold for a while.
There's a very good reason you don't see those anymore: the amount of solid state was always too small to be used properly, because the rotating rust couldn't have kept up if the manufacturers had made the solid-state storage big enough.

You got the worst parts of SSDs and rotating rust and none of the benefits, basically.

BlankSystemDaemon fucked around with this message at 19:13 on Apr 29, 2022

Stanley Tucheetos
May 15, 2012

I'm looking for a new UPS to run my Plex server while I move my older one to my main PC. Is there any noticeable difference between the CyberPower and APC 1500VA models?

Motronic
Nov 6, 2009

Stanley Tucheetos posted:

I'm looking for a new UPS to run my Plex server while I move my older one to my main PC. Is there any noticeable difference between the CyberPower and APC 1500VA models?

Other than the price, no. The CyberPower units seem to be perfectly adequate for home use.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

Fozzy The Bear posted:

I see the OP is 9 years old. I'm looking for a pre-built NAS server with at least 4 bays; I'm going to install TrueNAS on it. It will be my Plex server, so a CPU with Quick Sync would be nice (no Atom CPUs). I'd like it to look professional, "wife approved". I don't have a rack, so a smallish black box would be perfect.

Budget of $1000 or less before adding hard drives.

The HPE Gen10+ sounds right up your alley, minus the lack of Quick Sync, but it has a PCIe slot where you can add a cheap ancient GPU for hardware decode (I have a P400 in mine that was practically free).

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Rap Game Goku posted:

Case question. I built a new system a few months ago, and my old system has now been relegated to NAS/Plex server duty. However, it's still in the S340 case I built it in, which only has two 3.5" bays. I'm at the point where I'm going to need more than that, and thought I'd ask for recommendations for a full ATX case with a lot of internal drive space.

I'd prefer as small a footprint as possible, but I recognize I'm asking for two different things there.

I really like my Node 804 for 8 drives + 2 SSDs. Perhaps that one or the Node 304 would suit your needs.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Stanley Tucheetos posted:

I'm looking for a new UPS to run my Plex server while I move my older one to my main PC. Is there any noticeable difference between the CyberPower and APC 1500VA models?
Get the PFCLCD model of the CyberPower ones, because they do true sine wave output. PSUs with active PFC like that better.

BlankSystemDaemon
Mar 13, 2009



e.pilot posted:

The HPE Gen10+ sounds right up your alley, minus the lack of Quick Sync, but it has a PCIe slot where you can add a cheap ancient GPU for hardware decode (I have a P400 in mine that was practically free).
The HP Gen10+ with the Pentium G5420 does have an iGPU with Quick Sync, and it supports ECC memory. It has the same number of threads as the Xeon E-2224, but two of them come from SMT.

Do also note that the server has Intel Server Platform Services, which actively prevents using Quick Sync even if you get a Xeon E-2124G, which does have an iGPU - though I don't know if this also applies to the Pentium G5420.

e.pilot
Nov 20, 2011

sometimes maybe good
sometimes maybe shit

BlankSystemDaemon posted:

The HP Gen10+ with the Pentium G5420 does have an iGPU with Quick Sync, and it supports ECC memory. It has the same number of threads as the Xeon E-2224, but two of them come from SMT.

Do also note that the server has Intel Server Platform Services, which actively prevents using Quick Sync even if you get a Xeon E-2124G, which does have an iGPU - though I don't know if this also applies to the Pentium G5420.

Quick Sync cannot be enabled on it, which is annoying.

BlankSystemDaemon
Mar 13, 2009



e.pilot posted:

Quick Sync cannot be enabled on it, which is annoying.
Ah, the SPS disables the iGPU on the Pentium G5420 too? That's disappointing and annoying, yeah.

If you're using a FreeBSD-based NAS product, and can find an AMD GPU with H.265 decoding (i.e. VCN 1.0 and newer), that's still a good way to go, as it'll work with graphics/drm-kmod (though I can't say anything about how easy it is to get working with TrueNAS, since it's a NAS distribution that's not meant to do things like that).
Something like an XFX Speedster QICK210 Radeon RX 6400 4GB or a PULSE AMD Radeon RX 6400 should be a decent choice, since it looks like it'll fit in a half-height daughterboard slot.

BlankSystemDaemon fucked around with this message at 18:51 on Apr 30, 2022

Klyith
Aug 3, 2007

GBS Pledge Week

BlankSystemDaemon posted:

If you're using a FreeBSD-based NAS product, and can find an AMD GPU with H.265 decoding (i.e. VCN 1.0 and newer), that's still a good way to go, as it'll work with graphics/drm-kmod (though I can't say anything about how easy it is to get working with TrueNAS, since it's a NAS distribution that's not meant to do things like that).

If hardware encode is important because the use is a Plex server, keep in mind that Plex still doesn't support AMD on non-Windows OSes. Not without DIY setup effort anyways.

BlankSystemDaemon posted:

Something like an XFX Speedster QICK210 Radeon RX 6400 4GB or a PULSE AMD Radeon RX 6400 should be a decent choice, since it looks like it'll fit in a half-height daughterboard slot.

Also, the Radeon 6400 and 6500 XT have a different video engine from the rest of the 6000 series: video decode only. There's no hardware encoder for any format.

BlankSystemDaemon
Mar 13, 2009



Klyith posted:

If hardware encode is important because the use is a Plex server, keep in mind that Plex still doesn't support AMD on non-Windows OSes. Not without DIY setup effort anyways.

Also, the Radeon 6400 and 6500 XT have a different video engine from the rest of the 6000 series: video decode only. There's no hardware encoder for any format.
You can't even pipe things in Plex?
With tvheadend, there's always the option of doing pipe:///usr/local/bin/ffmpeg -hwaccel vaapi/vdpau et cetera ad nauseam.
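
For example, something along these lines as a mux source (the source URL, device node, and bitrate are made up):

code:
pipe:///usr/local/bin/ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 \
  -i http://source.example/stream -vf 'format=nv12,hwupload' \
  -c:v h264_vaapi -b:v 3M -c:a aac -f mpegts pipe:1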

Also, welp.

Hadlock
Nov 9, 2004

Finally plugged my Synology NAS in after a cross-country move and ~5 months

Everything working great, pushed another 16GB of photos to it, syncing with Glacier, no issues :slick:

I love my toaster-simple, dead-reliable NAS

Thanks again to whoever found my power supply on Newegg

Shumagorath
Jun 6, 2001
https://documents.westerndigital.co...ed-plus-hdd.pdf

Does anyone know why the 10TB Red Plus drives are a good 10 dBA louder than the rest of the lineup? I was eyeing those, but now I'll definitely go up to 12TB or get an extra drive at 8TB.

BlankSystemDaemon
Mar 13, 2009



Shumagorath posted:

https://documents.westerndigital.co...ed-plus-hdd.pdf

Does anyone know why the 10TB Red Plus drives are a good 10 dBA louder than the rest of the lineup? I was eyeing those, but now I'll definitely go up to 12TB or get an extra drive at 8TB.
At a guess, it could be some interaction between the bit density of the platters and the number of platters required to reach a given capacity.
As an example, it might be that the 12-18TB disks use fewer than the maximum number of platters at a much higher bit density than the 1-8TB disks, and the 10TB disks just happen to sit at a bit density where they need the maximum number of platters.

Simone Poodoin
Jun 26, 2003

Che storia figata, ragazzo!



I'm looking for a NAS device. I've done the PC-with-Unraid thing, but it just uses too much power, so I want to part it out and replace it.

I almost impulse-bought this one, because being able to mount it on my network rack is very attractive to me: https://www.qnap.com/en/product/ts-431xeu/specs/hardware

Budget: around $700 (I already have drives)

Requirements

- Low power usage
- At least 4 SATA 3.5in bays
- At least one bay for SSD cache (2.5in SATA or m.2)
- Support for running containers (I want to run my Pi-hole there)

Nice to have
- Short depth rackmount form factor (I realize that probably only the qnap linked above fits this one)
- Security camera integration of some kind

Any suggestions? I know the QNAP OS has both things I want, but they have a million different devices and it's unclear to me which ones actually support Container Station. Also, I don't need transcoding or Plex or anything like that, and I don't expect to ever use it.

Thanks!

Simone Poodoin fucked around with this message at 15:26 on May 3, 2022

fatman1683
Jan 8, 2004
.
Where are people getting used Supermicro chassis these days? I need to upgrade and I'd like to get out of my POS Rosewill and into a CSE-846.

Fozzy The Bear
Dec 11, 1999

Nothing much, watching the game, drinking a bud
This looks good for my Plex server needs, right? It advertises "live hardware transcoding of up to two concurrent 4K video transcoding".
TerraMaster F5-221 NAS 5-Bay
https://www.newegg.com/terra-master-f5-221/p/14P-006A-00014
It shows the Plex server logo but doesn't explicitly say it will run as a server; I can install Plex on it, right?

I wish it were running TrueNAS, but at a quarter of the price of their servers it seems perfect. Anything I should look out for?
I can even add two WD Red Pro 18TB hard drives for less than the price of a diskless TrueNAS Mini, geez!

Thanks Ants
May 21, 2004

#essereFerrari


It’s a six-year-old dual-core Celeron CPU - so it’s slow, but it does have Quick Sync Video, which is presumably how the transcoding feature can function.

I wouldn’t trust the software though, and would see if it’s possible to load something else onto the box.

Thanks Ants fucked around with this message at 00:50 on May 4, 2022

Klyith
Aug 3, 2007

GBS Pledge Week

Fozzy The Bear posted:

This looks good for my Plex server needs, right? It advertises "live hardware transcoding of up to two concurrent 4K video transcoding".
TerraMaster F5-221 NAS 5-Bay
https://www.newegg.com/terra-master-f5-221/p/14P-006A-00014
It shows the Plex server logo but doesn't explicitly say it will run as a server; I can install Plex on it, right?

I wish it were running TrueNAS, but at a quarter of the price of their servers it seems perfect. Anything I should look out for?
I can even add two WD Red Pro 18TB hard drives for less than the price of a diskless TrueNAS Mini, geez!

So you should maybe read independent reviews rather than trust ads. For example:

quote:

Streaming media is a similar story. We've seen some positive results with the Intel Celeron J3355 for Plex, and the F5-221 doesn't disappoint. The RAM is a little on the slow side (and there are only 2GB), so you may encounter some stuttering as you remotely connect to the Plex Media Server, but you can easily stream 4K content from this enclosure, so long as you don't need to transcode.

Who it isn't for
If you need HDMI out
If you want a powerful Plex NAS

Like, it'll get the job done. But for commercial NAS boxes, one of the biggest drivers of the price tag is the number of bays. This one has 5 bays for way less than most others. There's a reason for that: they're competing on price with lots of bays and cheaper guts.

If you're thinking about adding just two drives, you can get a 2-bay NAS with better internals. For example, many competing boxes have Celeron J4xxx CPUs, which have UHD 600 GPUs instead of UHD 500. That's a fairly big step up in video encoding, especially HEVC.

Fozzy The Bear
Dec 11, 1999

Nothing much, watching the game, drinking a bud

Klyith posted:

So you should maybe read independent reviews rather than trust ads. For example:

Like, it'll get the job done. But for commercial NAS boxes, one of the biggest drivers of the price tag is the number of bays. This one has 5 bays for way less than most others. There's a reason for that: they're competing on price with lots of bays and cheaper guts.

If you're thinking about adding just two drives, you can get a 2-bay NAS with better internals. For example, many competing boxes have Celeron J4xxx CPUs, which have UHD 600 GPUs instead of UHD 500. That's a fairly big step up in video encoding, especially HEVC.

Thank you! I'm a bit out of my element when it comes to server-type equipment. I'm looking for a 4-bay minimum, so that I have room for future additions.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer
So, jail migration to a different pool in TrueNAS. Everywhere I look is telling me to:
use 'iocage export jailname' to produce a zip file in the /images folder
activate iocage on the 2nd pool
move the zips over to the pool2 /images folder
do a 'iocage import jailname'
update all the fstabs to point to pool2 instead of pool1
then start them up again.

The issue I'm having is the export command is taking hours to run per jail, and I've got about a dozen to move over, and who knows if the import command would be a similar deal. If this is the 'right' way to do it then fair enough, but is it really not possible to do something like:

shutdown all jails
# iocage activate pool2
# cp /mnt/pool1/iocage/jails/* /mnt/pool2/iocage/jails/.
change all the fstabs so pool1 > pool2
start 'em up

:thunk:

BlankSystemDaemon
Mar 13, 2009



Takes No Damage posted:

So, jail migration to a different pool in TrueNAS. Everywhere I look is telling me to:
use 'iocage export jailname' to produce a zip file in the /images folder
activate iocage on the 2nd pool
move the zips over to the pool2 /images folder
do a 'iocage import jailname'
update all the fstabs to point to pool2 instead of pool1
then start them up again.

The issue I'm having is the export command is taking hours to run per jail, and I've got about a dozen to move over, and who knows if the import command would be a similar deal. If this is the 'right' way to do it then fair enough, but is it really not possible to do something like:

shutdown all jails
# iocage activate pool2
# cp /mnt/pool1/iocage/jails/* /mnt/pool2/iocage/jails/.
change all the fstabs so pool1 > pool2
start 'em up

:thunk:
I know absolutely nothing about TrueNAS and less about iocage, but it seems to me they're going about it very wrong if this is how they're solving it.

When I've migrated jails on ZFS, I've taken a snapshot, sent it over to the new machine via zfs-send and mbuffer, shut down the old jail, taken a new snapshot, sent that incrementally (which is very fast if few records have changed, and almost instant if nothing has), started the new jail, and been happy.

Done this way, it avoids IP collisions, unless you're changing IP spaces, which necessitates lowering the DNS TTL.
It can be done as a single command using plenty of &&, so that it's semi-atomic and won't progress if it encounters an error, and the downtime is measured in seconds.
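
A minimal same-machine sketch of that approach, since both pools are in the same box and mbuffer isn't needed (jail, pool, and snapshot names are placeholders, and iocage's own bookkeeping isn't covered):

code:
# full send while the jail is still running
zfs snapshot -r pool1/iocage/jails/myjail@migrate1
zfs send -R pool1/iocage/jails/myjail@migrate1 | zfs receive -u pool2/iocage/jails/myjail
# stop the jail, snapshot again, and send only what changed since migrate1
iocage stop myjail && \
  zfs snapshot -r pool1/iocage/jails/myjail@migrate2 && \
  zfs send -RI @migrate1 pool1/iocage/jails/myjail@migrate2 | \
  zfs receive -u pool2/iocage/jails/myjail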

Managing plain jails on FreeBSD nowadays is as simple as described on the FreeBSD Wiki Jail/VNET article, and there's even the option of moving jails which require more configuration into /etc/jail.conf.d/name.conf.

BlankSystemDaemon fucked around with this message at 20:15 on May 4, 2022

withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

Takes No Damage posted:

So, jail migration to a different pool in TrueNAS. Everywhere I look is telling me to:
use 'iocage export jailname' to produce a zip file in the /images folder
activate iocage on the 2nd pool
move the zips over to the pool2 /images folder
do a 'iocage import jailname'
update all the fstabs to point to pool2 instead of pool1
then start them up again.

The issue I'm having is the export command is taking hours to run per jail, and I've got about a dozen to move over, and who knows if the import command would be a similar deal. If this is the 'right' way to do it then fair enough, but is it really not possible to do something like:

shutdown all jails
# iocage activate pool2
# cp /mnt/pool1/iocage/jails/* /mnt/pool2/iocage/jails/.
change all the fstabs so pool1 > pool2
start 'em up

:thunk:

BSD's solution is probably better, but I'd just shut down the jail, make a new one, and migrate the data I care about manually. I've deleted and recreated jails enough times for things like Plex that the overhead of reconfiguring them is a lot lower than trying to figure out how to avoid it.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Fozzy The Bear posted:

I see the OP is 9 years old. I'm looking for a pre-built NAS server, with at least 4 bays, going to install TrueNAS on it. It will be my Plex server, so a CPU with quick sync would be nice (no Atom cpus). Would like it to look professional "wife approved". I don't have a rack, so a smallish black box would be perfect.

Budget of $1000 or less before adding hard drives.

this is the one thing that pisses me off about long-lived threads: the OP has always hosed off and gives no shits about creating another one. If a poster tries to make a new one it's always "There's already a thread about NAS. Locking".

Shumagorath
Jun 6, 2001

EVIL Gibson posted:

this is the one thing that pisses me off about long-lived threads: the OP has always hosed off and gives no shits about creating another one. If a poster tries to make a new one it's always "There's already a thread about NAS. Locking".
Games and even GBS deal with this really well by locking the old thread, but SH/SC seems to have way lower volume.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Shumagorath posted:

Games and even GBS deal with this really well by locking the old thread, but SH/SC seems to have way lower volume.

This is probably because those are the main forum traffic draws. I was looking at some other threads and there are some silly rear end old threads, like the iPhone thread, which hasn't been updated since 2016 and whose OP still focuses on the hotness that is the iPhone 7.

Don't even get me started on the Android/iOS app threads, some of which are still hawking dead or malware apps.

Klyith
Aug 3, 2007

GBS Pledge Week

Shumagorath posted:

Games and even GBS deal with this really well by locking the old thread, but SH/SC seems to have way lower volume.

A thread about a videogame is a thread about that game. Topics in SH/SC are much more nebulous. It's way harder to write a good OP for SH/SC threads.

(And GBS threads tend to get locked for drama, not OP relevance.)

EVIL Gibson posted:

If a poster tries to make a new one it's always "There's already a thread about NAS. Locking".

Demonstrably not true: when a poster who's been active in a thread runs it by the people in the existing thread first, people are often happy to move to a new thread and request a mod lock for the existing one. Other threads (pc building, keyboards, etc) manage it. Somebody just has to put in the effort to make an OP that people look at and say "yeah that's better."



TBQH a single super-useful thing for this thread wouldn't even need a new OP, just a mod edit: a prominent arrow for the plex thread at the top. That way people asking about hardware for plex can go to a thread with better answers.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
I've found that SMB on slow disks under ZFS takes a while to sync for some reason.

Sometimes the SMB transfer will slow to nothing and Windows will complain that something went wrong after 5 minutes of no activity. SFTP/FTPS, rsync, and SCP do the same thing, but with the SSH session dying because it detected no activity for a long while. If you set the clients to never time out, that usually fixes it.
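
For the SSH-based ones, that means keepalives; something like this in ~/.ssh/config does it (the host name and values are just examples):

code:
Host nas
    # send a keepalive every 60 seconds, tolerate up to ~2 hours of silence
    ServerAliveInterval 60
    ServerAliveCountMax 120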

Something that never fails and handles long periods of inactivity well is netcat (nc). Installed on Apple/Linux machines by default, it is probably available to you now. For Windows, I use Windows Subsystem for Linux (WSL) with an Ubuntu install.

Here are the commands

code:
Run on Destination First: nc -q 1 -l -p {port} | tar xv
Run on Source: tar cv . | nc -q 1 {dest-ip} {port}
On the destination, open your terminal, go to the directory you want the files transferred into, and run the destination command. It opens a listening port of your choosing, and once the stream starts it pipes the contents through tar for extraction. Pick a port higher than 1024 so you don't need root; I use 1234.

On the source, go to the directory where the files are located, then run the source command using the port you chose. The files in that directory, along with the contents of all subdirectories, are tarred in order and streamed over. For WSL, all the drives of your Windows machine are mounted under /mnt (so /mnt/c/users/{...} or wherever the files are).

As each file completes its journey, it's extracted from the tar stream. There will be the same waits while the syncs happen, but netcat never closes the stream, because netcat won't see EOF until the tar stream has been fully transferred.

-q 1 kills the connection on both sides automatically, so there's no worry about a hanging port.

This is my specific way around my ZFS system's syncing problems.

BlankSystemDaemon
Mar 13, 2009



EVIL Gibson posted:

I've found that SMB on slow disks under ZFS takes a while to sync for some reason.

Sometimes the SMB transfer will slow to nothing and Windows will complain that something went wrong after 5 minutes of no activity. SFTP/FTPS, rsync, and SCP do the same thing, but with the SSH session dying because it detected no activity for a long while. If you set the clients to never time out, that usually fixes it.

Something that never fails and handles long periods of inactivity well is netcat (nc). Installed on Apple/Linux machines by default, it is probably available to you now. For Windows, I use Windows Subsystem for Linux (WSL) with an Ubuntu install.

Here are the commands

code:
Run on Destination First: nc -q 1 -l -p {port} | tar xv
Run on Source: tar cv . | nc -q 1 {dest-ip} {port}
On the destination, open your terminal, go to the directory you want the files transferred into, and run the destination command. It opens a listening port of your choosing, and once the stream starts it pipes the contents through tar for extraction. Pick a port higher than 1024 so you don't need root; I use 1234.

On the source, go to the directory where the files are located, then run the source command using the port you chose. The files in that directory, along with the contents of all subdirectories, are tarred in order and streamed over. For WSL, all the drives of your Windows machine are mounted under /mnt (so /mnt/c/users/{...} or wherever the files are).

As each file completes its journey, it's extracted from the tar stream. There will be the same waits while the syncs happen, but netcat never closes the stream, because netcat won't see EOF until the tar stream has been fully transferred.

-q 1 kills the connection on both sides automatically, so there's no worry about a hanging port.

This is my specific way around my ZFS system's syncing problems.
I hate to be the bearer of bad news, but Samba doesn't really do synchronous I/O unless you tell it to - in which case you've made a configuration error from the point of view of Windows, since basically all I/O in Windows is assumed to be asynchronous.

If netcat isn't affected but ssh is, it's possible the issue is with the netstack and/or the network, if netcat is doing UDP whereas everything else is doing TCP. Alternatively, it could be because netcat doesn't implement any sort of NOOP timeout prevention, but ssh does.
Do you have pcap files showing the behaviour you're describing? That'd really help a lot in root-causing, but you'll need to capture both egress and ingress.
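
Something like this on each end would do it (the interface names, address, and port are assumptions):

code:
# on the client: capture the SMB conversation with the NAS
tcpdump -i eth0 -w client.pcap host 192.168.1.50 and port 445
# on the NAS: capture the same conversation from its side
tcpdump -i igb0 -w nas.pcap port 445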

BlankSystemDaemon fucked around with this message at 23:45 on May 5, 2022

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

BlankSystemDaemon posted:

I hate to be the bearer of bad news, but Samba doesn't really do synchronous I/O unless you tell it to - in which case you've made a configuration error from the point of view of Windows, since basically all I/O in Windows is assumed to be asynchronous.

If netcat isn't affected but ssh is, it's possible the issue is with the netstack and/or the network, if netcat is doing UDP whereas everything else is doing TCP. Alternatively, it could be because netcat doesn't implement any sort of NOOP timeout prevention, but ssh does.
Do you have pcap files showing the behaviour you're describing? That'd really help a lot in root-causing, but you'll need to capture both egress and ingress.
To be clear, the pause is still there with netcat; on the file server I see ZFS hard drive writes at 100%, but the ssh transfer will time out waiting for the syncs to finish, while netcat is left alone. You can really see it in iotop and htop.

And it's funny you bring up the Samba config being misconfigured; I fully uninstalled and reinstalled it and I still get the same symptoms.

I'll try to get some pcaps for you.

fatman1683
Jan 8, 2004
.
ZFS peeps, what's generally considered the disk/array size threshold for stepping up from RAIDZ to RAIDZ2/3? This tool seems to indicate that a 12-disk array of 4TB disks with a 10^15 URE rating would be safe on RAID 5; does the same logic apply to RAIDZ? Are there other considerations besides disk failure during rebuild that would influence the choice of RAIDZ vs a higher level?

BlankSystemDaemon
Mar 13, 2009



EVIL Gibson posted:

To be clear, the pause is still there with netcat; on the file server I see ZFS hard drive writes at 100%, but the ssh transfer will time out waiting for the syncs to finish, while netcat is left alone. You can really see it in iotop and htop.

And it's funny you bring up the Samba config being misconfigured; I fully uninstalled and reinstalled it and I still get the same symptoms.

I'll try to get some pcaps for you.
You should be able to run zpool iostat -lv to get an overview of whether one disk is causing a latency spike during the pause.
If one disk is an outlier, you likely have a dying disk.

I have no idea what iotop or htop report, as I'm not familiar with the tools (in FreeBSD, both are handled via top(1)), but the above should tell you more about per-device breakdowns according to ZFS itself.
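
For instance, refreshing per-device latency every five seconds while the stall is happening (the pool name is a placeholder):

code:
# -l adds latency columns, -v breaks them down per vdev/disk
zpool iostat -lv tank 5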

fatman1683 posted:

ZFS peeps, what's generally considered the disk/array size threshold for stepping up from RAIDZ to RAIDZ2/3? This tool seems to indicate that a 12-disk array of 4TB disks with a 10^15 URE rating would be safe on RAID 5; does the same logic apply to RAIDZ? Are there other considerations besides disk failure during rebuild that would influence the choice of RAIDZ vs a higher level?
I wouldn't trust this calculator, as raidz is not equivalent to RAID 5: RAID 5 will die if a URE happens during a rebuild, whereas raidz will mark the affected file(s) as unrecoverable and keep working.

There's an MTTDL RAID Reliability Calculator over at ServeTheHome which (ought to be called an availability calculator, and) lets you set MTBF, URE, capacity, sector size, disk quantity, number of volumes (ought to be vdevs), and expected rebuild speed.
That should get you a much better answer, and it also includes both raidz2 and raidz3 explicitly.

BlankSystemDaemon fucked around with this message at 10:16 on May 6, 2022

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

BlankSystemDaemon posted:

You should be able to run zpool iostat -lv to get an overview of whether one disk is causing a latency spike during the pause.
If one disk is an outlier, you likely have a dying disk.


I was always using that too. When I had a cache drive, I would watch it fill up while all the other drives in the pool tried to write as much as possible, and then everything would do nothing but write until the cache was clear.


fatman1683
Jan 8, 2004
.

BlankSystemDaemon posted:

I wouldn't trust this calculator, as raidz is not equivalent to RAID 5: RAID 5 will die if a URE happens during a rebuild, whereas raidz will mark the affected file(s) as unrecoverable and keep working.

There's an MTTDL RAID Reliability Calculator over at ServeTheHome which (ought to be called an availability calculator, and) lets you set MTBF, URE, capacity, sector size, disk quantity, number of volumes (ought to be vdevs), and expected rebuild speed.
That should get you a much better answer, and it also includes both raidz2 and raidz3 explicitly.

Thanks! This looks like a much more comprehensive tool. Is there a good method for estimating rebuild speed? I know it's affected by a lot of factors, but is there a 'safe' number for 7.2k SAS disks I can use?

  • Reply