CopperHound posted:I forgot I have a third option that gives me a high chance of putting my backups to the test: Creating a degraded pool! It turns all writes into zeroes, returns zeroes on all reads, and doesn't take up actual diskspace like a truncated file would.

VostokProgram posted:Could you in theory write a script to rebalance a zfs pool by disabling allocations on the more full vdevs and then cp-ing a bunch of files around until the vdevs are mostly balanced?

However, there's always the issue of free/unallocated space: when a vdev is removed, its space instantly disappears. That matters because you might find you can't allocate anything if you run out of space partway through the removal, and you can't check beforehand, since removal needs enough free space on the remaining vdevs just to start. So, what I'm saying is, I'm going to let someone else test it, or at least deliberately avoid a production setup if I'm ever going to test it.
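For anyone wanting to try the vdev-removal experiment without touching real disks, sparse files make a cheap throwaway pool; like the truncated file mentioned above, they report a full apparent size while allocating almost nothing. A minimal sketch (file paths and pool name are arbitrary examples; the commented-out zpool steps require ZFS to be installed):

```python
import os

# Create sparse 1 GiB backing files; they occupy almost no real disk space
paths = [f"/tmp/vdev{i}" for i in (1, 2, 3)]
for p in paths:
    with open(p, "wb") as f:
        f.truncate(1 * 1024**3)  # same effect as `truncate -s 1G`

st = os.stat(paths[0])
print("apparent size:", st.st_size)                # 1073741824
print("blocks actually allocated:", st.st_blocks)  # near zero - it's sparse

# With ZFS installed, these files can back a disposable pool for safely
# exercising device removal (hypothetical pool name):
#   zpool create testpool /tmp/vdev1 /tmp/vdev2 /tmp/vdev3
#   zpool remove testpool /tmp/vdev3
```

Destroying the pool and deleting the files afterwards costs nothing, which is the point.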
|
|
# ? Dec 1, 2021 02:14 |
|
Corin Tucker's Stalker posted:It's been a few days since I got the QNAP TS-230 as my first NAS. Here are a few thoughts from a total newcomer and partial idiot.

As you've found out, it's best to focus on getting the box running first, then play around with apps and other features later. DLNA and similar network features are security risks, but if you do not allow external connections to the NAS, they are unlikely to be an issue.
|
# ? Dec 1, 2021 05:23 |
|
So I got my DS920+ and I put one of my 14TB shucked WD Red drives in it. I've played around with the settings, but no matter what I do I cannot seem to get the file transfer speed to top 80MB/s. I've looked on the Synology forums and have tried all of the different network settings as recommended, to no avail. I should also note that my router is 1000Mbps, the MTU value is 1500 (1000Mbps, full duplex), and both my computer and the NAS are connected via Ethernet to the same router. Also, in Resource Monitor I still have a tremendous amount of network and drive headroom. Transfer speed holds consistent at exactly 80MB/s. Any advice here? It's not a RAID issue, because I only have the one drive at the moment.
|
# ? Dec 1, 2021 16:16 |
First step to figuring out these kinds of issues is to begin root-causing, by simply running tests locally on the Synology; as far as I'm aware, you should still be able to get SSH access to a regular BusyBox Unix-like shell. Once you've proven that your storage can handle it, the next step is to check the Samba configuration, et cetera ad nauseam.

Things that can make a big difference are TCP_NODELAY (which disables Nagle-style batching, so Samba's small writes aren't held back while the stack waits for ACKs; that back-off heuristic works against you when you're trying to use all of your bandwidth), as well as SO_RCVBUF and SO_SNDBUF (which increase the receive and send buffer sizes, ensuring bigger chunks of disk I/O are being worked with).

EDIT: There's also IPTOS_LOWDELAY, which gets mentioned in a lot of tuning guides, but I'm not completely sure it means anything nowadays, as I don't know of any gear that respects the Type of Service field of the IP packet.

Also, I'm just gonna quote a nerd who posted about this previously:

BlankSystemDaemon posted:Even if you're using NFS over UDP (neither SMB nor Samba offers this), Gigabit Ethernet tops out at 125MBps - and since you're probably using TCP, which has an overhead of about 7MBps at Gigabit wirespeed, that should be right around 118MBps.

BlankSystemDaemon fucked around with this message at 17:07 on Dec 1, 2021 |
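For reference, those socket options live in the [global] section of smb.conf. This is a hypothetical sketch only - the buffer sizes are illustrative rather than tuned recommendations, and modern Samba defaults are already reasonable, so measure before and after changing anything:

```ini
[global]
    # Disable Nagle-style batching and mark traffic low-delay;
    # 128 KiB socket buffers are an illustrative starting point
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072

    # Let the kernel hand file data to the socket without a userspace copy
    use sendfile = yes
```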
|
# ? Dec 1, 2021 16:56 |
|
BlankSystemDaemon posted:First step to figuring out these kinds of issues is to begin rootcausing, by simply running tests locally on the Synology; so far as I'm aware, you should still be able to get ssh access to a regular busybox Unix-like shell.

Ok, so these speeds are pretty normal? So if I wanted to improve the speed by any significant amount, I would pretty much need to buy a 10Gbit router and PCIe card, right? Dang... that's pretty rough - I guess I'll have to make the jump eventually. It's going to take 20 hours to move my 8TB of data over to this thing, but I guess I'll just have to let it run its course.
|
# ? Dec 1, 2021 17:52 |
CerealKilla420 posted:Ok so these speeds are pretty normal?

My point is that you need to figure out what the bottleneck is instead of guessing - but the upper limit you're likely to reach is ~118MBps.
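The ~118MBps ceiling quoted from the earlier post falls straight out of Ethernet and TCP framing overhead. A quick sanity check, assuming a standard 1500-byte MTU and the 12-byte TCP timestamp option most stacks enable:

```python
# Estimate SMB-visible goodput on Gigabit Ethernet at MTU 1500
line_rate = 125_000_000   # 1 Gbit/s expressed in bytes per second

mtu = 1500                # IP packet size
eth_overhead = 38         # preamble 8 + header 14 + FCS 4 + interframe gap 12
ip_header = 20
tcp_header = 20 + 12      # base header + timestamp option (assumed enabled)

payload = mtu - ip_header - tcp_header        # 1448 data bytes per frame
efficiency = payload / (mtu + eth_overhead)   # 1448 / 1538, about 0.94
goodput = line_rate * efficiency

print(f"{goodput / 1e6:.1f} MB/s")  # 117.7 MB/s
```

Jumbo frames (MTU 9000) push the efficiency to roughly 99%, or about 123MB/s, which is the main way to claw back that last handful of MB/s without a faster link.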
|
|
# ? Dec 1, 2021 18:04 |
|
I've been thinking about migrating my home NAS setup from mdadm on CentOS 7, which is still supported for a few years but seems like possibly a dead end, to ZFS on Ubuntu Server. Here are the specs:

Supermicro X8SIL-F - Intel 3420 chipset, LGA1156, uATX
Xeon L3426 - 4C/8T, turbo up to 3.2GHz
16GB (4x4) ECC 1333MHz DDR3
HP CN1100E 10Gb NIC
LSI SAS2008 RAID controller in IT mode - the whole RAID is connected to this, other drives to the onboard SATA
1x Crucial 240GB BX200 - OS disk, XFS for now
6x WD Red 10TB SATA (shucked) - currently in mdadm RAID 6 with XFS on top
1x Seagate 3TB NAS drive - standalone XFS, will be used unchanged in new system at least until RAID is restored

Currently my plan is:
0) Set up another old desktop on an Unraid trial with all my other spare HDDs, which with one parity disk should have enough space.
1) Back up the entire contents of the mdadm array to Unraid. The standalone drive will be used unchanged on the new system.
2) Shut down the NAS, take out the system SSD and back it up onto another system (or just use another SSD entirely, if I can find one).
3) Install Ubuntu fresh onto the NAS. I'm thinking Ubuntu Desktop LTS with a minimal install over Server, since I like having a DE available.
4) Set up the six drives as a RAIDZ2.
5) Copy back the contents of the Unraid system.
6) Proceed to set up Samba etc.

I assume Ubuntu doesn't have any problem using XFS drives even if ext4 is more of the default, and while the 11-year-old hardware might not make the most performant ZFS NAS out there, it should be fine for a few Samba users. I see some old statements about wanting 1GB RAM per 1TB storage, but I also thought that was discussed in this thread and isn't current advice anymore. Are there any problems I've missed here?

Eletriarnation fucked around with this message at 18:47 on Dec 1, 2021 |
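One sanity check on step 4: usable space. Raw RAIDZ2 capacity for the six-drive layout is simple arithmetic (this ignores ZFS metadata, allocation padding, and the default slop-space reservation, so expect the real figure to come in somewhat lower):

```python
# Rough usable capacity of an n-drive RAIDZ2 vdev
def raidz2_usable_tb(drives: int, drive_tb: float) -> float:
    parity = 2  # RAIDZ2 spends two drives' worth of space on parity
    return (drives - parity) * drive_tb

print(raidz2_usable_tb(6, 10.0))  # 40.0 TB raw, before metadata overhead
```

The mdadm RAID 6 being replaced has the same two-drive parity cost, so the migration shouldn't lose any headline capacity.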
# ? Dec 1, 2021 18:43 |
ZFS's defaults still work on the 2005-era systems it was built for (1-1.4GHz 64-bit SPARC CPUs with between 4 and 8 cores), so I wouldn't be too worried about performance.
|
|
# ? Dec 1, 2021 19:10 |
|
I've been looking to reactivate my NAS after cleaning the dust off it. One thing that always pissed me off back then was that despite having fancy Mellanox cards, I could never use RDMA for iSCSI, because Microsoft was being a bitch and cockblocked manufacturers in favor of their SMB Direct stuff, which isn't block storage.

I've been testing NVMe-oF against a ZVOL for giggles (only via TCP so far), using the nvmet driver on Linux and the Starwind initiator (since both supposedly do RDMA), and surprisingly enough it seems to work despite not being an actual NVMe device.

Anyhow, this shucked WD drive thing, how much of a lottery is it? I'd certainly like cheap drives for a three-drive mirror for some read IOPS when hitting the disk.
|
# ? Dec 1, 2021 21:00 |
|
I've shucked eight of the 8TB WD externals and I think 6/8 were helium-filled, across 4 different models. One failed in badblocks testing, and one is a cold replacement. No failures in operation after 3 years or so. I'm OK with them not being identical in the long run, because that seems less likely to produce weird cluster failures. I clipped the 3.3V lines on the SATA power cables for my PSU, otherwise the drives wouldn't spin up. They all supported the error-recovery timeout setting you need for RAID (SCT ERC, I believe). The non-helium one definitely runs a few degrees warmer than the others, but not egregiously so.

Rescue Toaster fucked around with this message at 21:49 on Dec 1, 2021 |
# ? Dec 1, 2021 21:47 |
|
Won't the helium ones have a shorter lifespan, since that poo poo escapes through every gasket known to man?
|
# ? Dec 1, 2021 21:52 |
|
Combat Pretzel posted:Won't the helium ones have a shorter lifespan, since that poo poo escapes through every gasket known to man?

I might be wrong about this, but I assume there would need to be a pressure gradient for this to happen.
|
# ? Dec 1, 2021 22:13 |
|
CopperHound posted:I might be wrong about this, but I assume there would need to be a pressure gradient for this to happen.

Since the atmospheric partial pressure of helium is very low, there would be a gradient by definition. Not sure about the measured total pressure, though. Interesting topic! Does seem like a waste of helium in some ways.
|
# ? Dec 1, 2021 22:21 |
|
CopperHound posted:I might be wrong about this, but I assume there would need to be a pressure gradient for this to happen.

Helium atoms are actually so small that they will migrate right through the metal of the drive and leak out over time, even without a pressure gradient. It's not fast, but the drives do lose helium on the order of a decade or two. One of the SMART attributes actually monitors how much helium is left; not quite sure how that works.

Paul MaudDib fucked around with this message at 22:26 on Dec 1, 2021 |
# ? Dec 1, 2021 22:23 |
|
Paul MaudDib posted:helium is actually such a small molecule that it will migrate right through the metal of the drive and leak out over time, even without a pressure gradient. It's not fast but they do lose pressure on the order of a decade or two.

Fascinating. Does that mean the drive will have more of a negative relative pressure over time?
|
# ? Dec 1, 2021 22:32 |
CopperHound posted:Fascinating. Does that mean the drive will have more of negative relative pressure over time?

Helium is used because it allows the ride-height of the head to be lower than the standard 0.9 nanometers, which you can't do with a normal air mix, as the size of the gas molecules interferes with the ride-height.
|
|
# ? Dec 1, 2021 22:54 |
|
Paul MaudDib posted:one of the SMART sensors actually monitors how much helium it has left, not quite sure how that works

You know how helium makes air go through your vocal cords faster, and gives you a higher pitched voice? A tiny microphone in the drive can tell if the frequency has decreased from the sound of the drive motor spinning, and calculates how much helium is left
|
# ? Dec 2, 2021 04:35 |
BlankSystemDaemon posted:That's not really what I was trying to say, no.

This is what my Synology reads over the LAN, so there's definitely some kind of bottleneck happening.
|
|
# ? Dec 2, 2021 06:13 |
Bob Morales posted:You know how helium makes air go through your vocal cords faster, and gives you a higher pitched voice? That is so cool!
|
|
# ? Dec 2, 2021 18:29 |
|
BlankSystemDaemon posted:That's not really what I was trying to say, no.

Ok, that makes sense... After running a couple of local tests, my drive speeds were being throttled by my RAID config (two slow 2TB SMR drives RAID 0'd together, alongside my two 14TB CMR drives). At the high end, this setup was topping out at like 150MB/s, which is about the upper end of what the slower drives can do (which makes sense). I decided to RAID 1 the two slow drives together so I could back up documents and all of my music, and then put my two 14TB CMR drives in SHR for all of my larger video files. Local read/write operations on this faster storage pool top out at about 285MB/s, which is exactly what I expected.

I'm still only getting between 85-90MB/s even when moving files directly to the faster storage pool, which, considering the limitations of my networking setup, is fine. I finally moved my largest chunks of data over, so I should be good to go from here. I just didn't realize that transferring files from my computer to the NAS would be so slow without 10 gigabit networking. I guess now I understand why most people run programs in Docker that automatically download/pull files directly to the NAS for them.

I'm having a lot of fun playing around with this thing. I got my UPS hooked up to the NAS and it appears to be working, shutting itself down after a test run. I also really like how you can set a schedule for the LEDs on the front of the NAS, because my NAS runs in my bedroom. Really cool that they thought to include that feature.
|
# ? Dec 2, 2021 18:39 |
|
BlankSystemDaemon posted:Yes, but that's not really going to hurt it.

Isn't it still reliant on the correct amount of gas to be there to maintain correct ride height, i.e. aerodynamics performing as expected?
|
# ? Dec 2, 2021 19:55 |
Combat Pretzel posted:Isn't it still reliant on the correct amount of gas to be there to maintain correct ride height, i.e. aerodynamics performing as expected?

The primary reason, as I said, has to do with the size of the molecules that air consists of.
|
|
# ? Dec 2, 2021 21:06 |
|
I guess eventually the pressure of the helium inside will drop toward the partial pressure of helium outside (which is really low), and the inside will be basically a vacuum. If it passes the point where there's not enough gas to keep the heads far enough off the surface, well, that's all she wrote.

On the one hand, I have some old drives that I can plug in and that still work more than 20 years later, which is cool... on the other, those are just things I have sitting around, not data I'm counting on being there. If anything, I need to get better about destroying drives before they get too old to be reliable; something to keep in mind with these helium ones, I guess.
|
# ? Dec 2, 2021 23:23 |
|
Since there's already discussion of fluid dynamics and partial pressures, you guys might find it interesting that we use helium as a blended gas (with oxygen) in respiratory critical care in order to reduce the resistance in airways that are compromised in a variety of ways (up in the throat, down in the lungs, or in between). We can do that with or without intubating, and can even give a hypoxic 18% O2 / 82% helium mixture if the situation requires that slippery of a gas.

I want to say that if helium were escaping from a drive, it would have to be replaced by room air. Unless it was being actively pumped out, you can't really get a naturally occurring vacuum from passive equilibrium-type gas movement, right?
|
# ? Dec 2, 2021 23:31 |
|
There is a SMART value for the helium level. It's mentioned here, along with lifetime data. https://www.backblaze.com/blog/helium-filled-hard-drive-failure-rates/
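On the WD/HGST helium models that's SMART attribute 22, reported as Helium_Level. A small parsing sketch against a canned sample of `smartctl -A` output (the sample line is illustrative; on a real drive you'd feed in the live output of `smartctl -A /dev/sdX`):

```python
# Hypothetical excerpt of `smartctl -A` output from a helium-filled drive
sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
 22 Helium_Level            0x0023   100   100   025    Pre-fail  Always       -       100
"""

# Attribute 22 holds the helium level; a raw value of 100 means a full charge
for line in sample.splitlines():
    fields = line.split()
    if fields and fields[0] == "22":
        print(f"{fields[1]}: raw value {fields[-1]}")  # Helium_Level: raw value 100
```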
|
# ? Dec 2, 2021 23:44 |
|
Bob Morales posted:You know how helium makes air go through your vocal cords faster, and gives you a higher pitched voice? I totally made this up btw
|
# ? Dec 3, 2021 01:32 |
Bob Morales posted:I totally made this up btw you bastard
|
|
# ? Dec 3, 2021 01:38 |
|
WD Red Plus are still decent, right? Newegg has the 6TB on sale for ~$100 and I'm tired of waiting for the 8TB Easystores.
|
# ? Dec 3, 2021 21:02 |
|
Teabag Dome Scandal posted:WD Red Plus are still decent, right? Newegg has 6tb on sale for ~100 and Im tired of waiting for the 8tb Easystores WD Red Plus 6TB are CMR, if that's what you're worried about.
|
# ? Dec 3, 2021 21:15 |
|
Cold on a Cob posted:WD Red Plus 6TB are CMR, if that's what you're worried about.

I haven't bought new drives in quite a while, so I don't know what I should or should not be worried about.

edit: To clarify a little more, I am assuming that WD Reds have not taken a nosedive in quality or reliability, and there is no reason not to buy them if I am happy with the size and price.

Teabag Dome Scandal fucked around with this message at 21:50 on Dec 3, 2021 |
# ? Dec 3, 2021 21:44 |
|
Teabag Dome Scandal posted:I haven't bought new drives in quite a while so I don't know what I should or should not be worried about.

Reds are good, CMR is good, and $100 is a great price for 6TB. No reason not to pull the trigger.
|
# ? Dec 3, 2021 22:11 |
|
Oh wow, I have RDMA to my NAS working via this NVMe-oF stuff using the Starwind initiator. With Microsoft's own driver, at that. I've been focusing on Q1T1 random 4K reads from the NAS, since I figure that best represents a game randomly reading small resources. Numbers according to Diskmark:

- Using 1GBit Intel NICs, I get 9-10MB/s from the ZFS ARC (i.e. reads from memory).
- Using 40GBit ConnectX-3, I get 37MB/s from the ZFS ARC.
- Using RDMA on the ConnectX-3, I get 90MB/s from the ZFS ARC.
- I get 63MB/s from my local Samsung 970 EVO instead (after letting it cool down after a full Diskmark run).

I guess a triple-disk mirror for multiple actuators and an L2ARC are a good plan. Too bad this thing is old enough to still feature DDR3, because I should probably increase the memory for caching.

RDMA to my undusted NAS:

The 970 EVO:
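At queue depth 1, those throughput numbers are really latency numbers in disguise: each 4K read has to complete before the next is issued, so MB/s is just 4KiB divided by round-trip time. A quick conversion using the figures above (exact latencies will vary):

```python
# Convert Q1T1 4K read throughput into the implied per-request latency
def q1t1_latency_us(mb_per_s: float, block_kib: int = 4) -> float:
    iops = mb_per_s * 1e6 / (block_kib * 1024)  # requests completed per second
    return 1e6 / iops                           # microseconds per request

for label, mbps in [("1GbE TCP", 9.5), ("40GbE TCP", 37.0), ("40GbE RDMA", 90.0)]:
    print(f"{label}: ~{q1t1_latency_us(mbps):.0f} us per 4K read")
```

Which is why RDMA helps so much here: it strips out most of the per-request software round-trip rather than adding bandwidth.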
|
# ? Dec 3, 2021 22:38 |
Combat Pretzel posted:Oh wow, I have RDMA to my NAS working via this NVMe-oF stuff using the Starwind initiator. With Microsoft's own driver at that. I've been focusing on Q1T1 random 4K reads from the NAS, since I figure that represents best when a game is reading randomly small resources. According to Diskmark.

https://www.youtube.com/watch?v=jeKLvleCQ9w&t=378s

Slide 18 of the presentation PDF is one you might find of particular interest:
|
|
# ? Dec 4, 2021 00:09 |
|
Interesting. That actually led me to check the ZFS version on the Ubuntu install, and it's still at zfs-0.8.3, so I'm probably missing out on tons of stuff. Am gonna update to the latest OpenZFS 2.1.1 and see if that changes anything. Also, it seems that stupid E3C226D2I mainboard in my NAS is limited to 16GB due to chipset bullshit, in combination with only having two RAM slots.
|
# ? Dec 4, 2021 01:44 |
Most of the changes mav@ has done are to FreeBSD, iirc.
|
|
# ? Dec 4, 2021 03:28 |
|
Bob Morales posted:I totally made this up btw Truly masterful.
|
# ? Dec 4, 2021 06:24 |
|
BlankSystemDaemon posted:Most of the changes mav@ has done are to FreeBSD, iirc.

--edit: After researching it some more, they're moving TrueNAS to Linux? What?

--edit: I guess TrueNAS Scale it is. I get the Free-/TrueNAS UI and can still do the NVMe-oF stuff in the shell.

Combat Pretzel fucked around with this message at 13:50 on Dec 4, 2021 |
# ? Dec 4, 2021 13:01 |
|
Combat Pretzel posted:--edit: I guess TrueNAS Scale it is.

Share a trip report. I can't decide if I want to run the Linux version or the BSD version with a VM to do other stuff.
|
# ? Dec 4, 2021 17:03 |
|
I'm still running... I guess it's called XigmaNAS now, the thing that was NAS4Free? I run nothing except Samba and the UPS client on it, so it doesn't matter much. But notification is a mess, since it's tied only to email; I'd much rather set up a push notification system like Pushover. I'd also like to go to something Linux-based, just because I'm so much more familiar with it than BSD. Many years ago I did my own Linux setup using mdadm RAID and the setup wasn't too bad, but I switched for ZFS.
|
# ? Dec 4, 2021 17:28 |
|
CopperHound posted:Share a trip report. I can't decide if I want to run the Linux version or the BSD version with a VM to do other stuff.

TrueNAS Scale seems to do everything TrueNAS Core does. If it comes out of beta and proves popular, I'm willing to bet it's a nail in the coffin of the BSD variant. It even appears to do ZFS boot.
|
# ? Dec 5, 2021 13:14 |