|
I've been running FreeNAS on ESXi for years and don't see a reason not to.
|
# ? Aug 2, 2021 21:13 |
|
phosdex posted:I've been running FreeNAS on ESXi for years and don't see a reason not to. To each their own, but I honestly don't want the headache, especially since ESXi isn't the hypervisor I use anyway.
|
# ? Aug 2, 2021 21:39 |
|
Smashing Link posted:What are the advantages of that approach over TrueNAS? I prefer command line management of many things anyway and never found FreeNAS / NAS4Free particularly useful for any situation other than one project where I needed to present some iSCSI storage.
|
# ? Aug 3, 2021 04:02 |
|
i have no idea how to mount my NAS as a network drive on linux but itd be pretty cool if i could i know i can open an SSH connection to it in the file explorer but id rather not have to do that manually every time
|
# ? Aug 3, 2021 04:12 |
|
hbag posted:i have no idea how to mount my NAS as a network drive on linux but itd be pretty cool if i could Sshfs is a thing (and can use fstab), but NFS is always going to be faster/better if performance matters - though there you can run into locking/permissions issues. SMB is always an option if it's just for storage.
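If you do go the sshfs route, a persistent mount can live in fstab. A sketch of such an entry (host, paths, and key file are all made up, and the x-systemd.automount bit assumes a systemd distro):

```
user@nas:/volume1/data  /mnt/nas  fuse.sshfs  noauto,x-systemd.automount,_netdev,IdentityFile=/home/me/.ssh/id_ed25519,allow_other  0  0
```

That way the file manager just sees a normal directory instead of you opening an SSH location by hand every time.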
|
# ? Aug 3, 2021 04:15 |
|
hbag posted:i have no idea how to mount my NAS as a network drive on linux but itd be pretty cool if i could

What model NAS do you have? It should be as simple as creating an NFS export on your NAS to either your network subnet or a single host, mapping the appropriate NAS user to the share on the NAS, then deciding if you want read/write or read-only access to the share from the client.

Linux side you'd need to install nfs-common or nfs-utils depending on your distro, then for a quick test, create the directory where you want to mount the share and do 'mount x.x.x.x:/full/path/of/share mountpoint'. If this works you're in business.

Word of warning, NFS gets a little funky about the share should the NAS become unavailable, so even rebooting the server might be problematic. There are ways around that, however. You can set retry and timeout values in your fstab which should allow for normal operation if the NAS is down for whatever reason. Something like this:

x.x.x.x:/full/path/to/mount /path/to/mountpoint nfs rw,vers=3,timeo=300,retrans=2,hard,rsize=1048576,wsize=1048576 0 0

Optionally you can switch the 'hard' option for 'soft', but this comes with the risk of data loss if the connection gets interrupted. However, it will not lock up the system. The good thing is once you reestablish an NFS connection things tend to go back to normal without much intervention on the client side.

If you want something that doesn't affect the boot process as much, look at autofs. Mount on demand basically. You change directory into the mount and the system automounts.
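To sketch the autofs option mentioned above (map names and paths here are examples, not anything standard):

```
# /etc/auto.master - hand the /mnt/nas directory over to automount
/mnt/nas  /etc/auto.nas  --timeout=300

# /etc/auto.nas - one line per share; 'media' appears as /mnt/nas/media
media  -rw,hard,vers=3  x.x.x.x:/full/path/of/share
```

With that in place, the share mounts the first time anything touches /mnt/nas/media and unmounts itself again after the timeout, so a dead NAS doesn't hang the boot.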
|
# ? Aug 3, 2021 10:39 |
|
BackBlaze Q2 2021 drive report is out.
|
# ? Aug 3, 2021 16:42 |
|
Saukkis posted:A coworker is running OpenVPN in the DNS port, 53 UDP. Supposedly this often gets him past captive portals on airport wifi and such. Am I right in thinking that you could just set up 2 firewall rules, one that takes 53 and one that takes 443, and have them both forward to the same internal IP/port for the VPN server? That way you can switch between the two and it should be functionally identical?
|
# ? Aug 4, 2021 05:22 |
|
Thanks for posting this. By the way, how the gently caress do you have 11,000 posts and a clean rap sheet?
|
# ? Aug 4, 2021 06:40 |
|
Nam Taf posted:Am I right in thinking that you could just set up 2 firewall rules, one that takes 53 and one that takes 443, and have them both forward to the same internal IP/port for the VPN server? That way you can switch between the two and it should be functionally identical? You could, but generally you want to use UDP rather than TCP for a VPN link. If you direct 53/udp to your server's UDP port, and 443/tcp (and maybe also 53/tcp) to its TCP port, you'll probably have good results.
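As a concrete sketch, on an iptables-based router that might look like the rules below. The internal address 192.168.1.10 and port 1194 are made up, and this assumes the server runs separate UDP and TCP listeners, since a single OpenVPN instance speaks one protocol at a time:

```
# redirect outside 53/udp to the VPN server's UDP listener
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j DNAT --to-destination 192.168.1.10:1194
# redirect outside 443/tcp (and optionally 53/tcp too) to the TCP listener
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:1194
```

Same idea translates to whatever port-forward UI your router exposes.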
|
# ? Aug 4, 2021 12:41 |
|
Axe-man posted:it will be fine as long as your Storage pool and volume is not crashed.

Right now, my Synology DS918+ is only accessible via the web GUI for a few minutes after rebooting; after that it becomes completely unresponsive (terminal access continues to work). The second issue is that my storage pool is marked critical and read-only. I've already tried to repair it, but was not able to remove the 'System Partition Failed' warning from every drive that had it (3 showed it, 1 did not).

I assume that the affected volume is not recoverable from its current state and I have to back up the data and create a new volume in its place. The thing is, since I can't access the Synology GUI I can't run an extended SMART test, only the short one, which all drives passed, so I'm not even certain which drives need replacing. I've described the problems I've faced and what I've tried in more detail here: https://community.synology.com/enu/forum/1/post/145820?page=2&reply=456536

I'm wondering if there is a way to run an extended SMART test on a Synology NAS via the terminal interface, to hopefully identify the drives that need replacing.
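If DSM ships smartmontools' smartctl (I haven't verified this on my unit), then something like the following over SSH might do it - the device names are guesses:

```
# kick off a long (extended) self-test on each member disk
smartctl -t long /dev/sda
smartctl -t long /dev/sdb
# the test runs in the background on the drive itself;
# poll progress and read the results later with:
smartctl -a /dev/sda
```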
|
# ? Aug 4, 2021 13:13 |
|
Mofabio posted:Thanks for posting this. By the way how the gently caress do you have 11,000 posts and a clean rap sheet.

I had a night job in '00/'01ish when it was free to create accounts and got all the shitposting out of my system back then, left entirely for a year-ish, and came back with this account.

Saukkis posted:A coworker is running OpenVPN in the DNS port, 53 UDP. Supposedly this often gets him past captive portals on airport wifi and such.

I run wireguard on UDP 53 for this exact purpose when traveling.

Sheep fucked around with this message at 23:42 on Aug 4, 2021 |
# ? Aug 4, 2021 23:39 |
|
Some Synology devices are under attack from a botnet, so it's a good idea to have some basic security measures enabled: https://old.reddit.com/r/synology/comments/oy8rbr/psa_check_for_weak_credentials_enable_auto_block/
|
# ? Aug 5, 2021 20:22 |
|
For everyone else looking at urgent replacements for hard drives, there's little reason to bother with a 12 TB Easystore when you can get a 12 TB Toshiba from Newegg for the same price ($280 now) with a more reliable, faster drive and a significantly longer warranty. I've had zero of my 6+ Toshiba drives fail, but after tallying up how many of these Easystores have failed in the same chassis over a period of 3 years, I'm not going to bother with Easystores anymore: they keep failing just outside of the warranty period and are thus higher TCO than even a normally 40%+ more expensive Toshiba drive, given my replacement and failure rates. I know that Toshiba's RMA process basically sucks compared to WD's, but I'd rather not need an RMA in the first place than have one of the best RMA processes and get a replacement maybe 50% of the time.
|
# ? Aug 6, 2021 15:13 |
|
hogofwar posted:I'm still doing software raid, just using the HBA in IT mode. I'm really only doing it this way cos someone said to do it (plus more SATA slots can't go wrong), i'm not entirely sure the difference between this and just passing through the drives instead.

I played around with OMV, but in the end I decided to just go plain Debian for the downloads VM, and gave it a 7TB virtual drive. OMV seemed to just provide a pretty web interface to what can be done more easily in the config files. Installed all the programs (indexers, downloaders, media managers, whatever) on it and just exported the folder via SMB and NFS. The speed seems fine, 135+ MB/s over the network.

When I need more space I plan to just add more virtual drives and add them to the VM's LVM. I'll have to see if this will pose problems in the future, but for now it works.
|
# ? Aug 6, 2021 15:27 |
Over 135MBps on 1000BaseT Ethernet?
|
|
# ? Aug 6, 2021 16:18 |
|
BlankSystemDaemon posted:Over 135MBps on 1000BaseT Ethernet? That's what was reported, yes - over scp and over smb. At the end of the day... whatever, it's only piddling amounts of data flowing from the NAS box anyway. I have not done actual measurements (time the transfer of a multi-GB file 10 times and see what the lowest/highest throughput is). It's not slower than expected, let's put it this way. Which means that playing a movie from that share should be fine.
|
# ? Aug 7, 2021 03:56 |
Even if you're using NFS over UDP (neither SMB nor Samba offers this), Gigabit Ethernet tops out at 125MBps - and since you're probably using TCP, which has an overhead of about 7MBps at Gigabit wirespeed, that puts you right around 118MBps. Even with 9k jumbo frames, wirespeed tops out at around 123MBps.
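For anyone wanting to sanity-check their own numbers, the arithmetic here is just this (the ~6% protocol overhead is a ballpark figure, not gospel):

```shell
# Back-of-envelope ceiling for Gigabit Ethernet payload throughput:
# 1000 Mbit/s over 8 bits/byte = 125 MB/s on the wire; Ethernet + IP + TCP
# framing eats roughly 6%, leaving ~118 MB/s for the application.
awk 'BEGIN {
    raw = 1000 / 8            # raw wire speed in MB/s
    usable = raw * 0.94       # minus ~6% protocol overhead
    printf "raw: %.0f MB/s, usable: ~%.0f MB/s\n", raw, usable
}'
```

Anything a tool reports above that ceiling is the tool measuring cache hits, not the wire.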
|
|
# ? Aug 7, 2021 12:31 |
|
Yeah you are not getting more than that. Even with flash caching and pure SSD storage.
|
# ? Aug 7, 2021 16:09 |
|
BlankSystemDaemon posted:Even if you're using NFS over UDP (SMB nor Samba offers this), Gigabit Ethernet tops out at 125MBps - and since you're probably using TCP, which has an overhead of about 7MBps at Gigabit wirespeed, that should be right around 118MBps.

You are certainly correct - that's what the speed should be. Since the reported number (135) is close enough to the theoretical maximum, it's perfect. We're not breaking or setting records here; close enough is what I'm looking for. A reported speed of 10MB/s or 1000MB/s would have been worrying, anything between that is fine.

Edit: I did some small tests. /mnt/downloads is mounted over smb; /mnt/test_nfs is mounted with nfs4, same directory as the samba export. No idea why I chose /dev/urandom over /dev/zero.

dd if=/dev/urandom of=test.dat bs=10M count=1000
1000+0 records in
1000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 46.257 s, 227 MB/s

rsync --progress test.dat /mnt/downloads/
test.dat  10,485,760,000 100%  169.78MB/s  0:00:58 (xfr#1, to-chk=0/1)

pv -pra test.dat > /mnt/downloads/test.dat
[ 163MiB/s] [ 163MiB/s] [====================>] 100%

scp test.dat downloads:~/
test.dat  100%  10GB  46.5MB/s  03:34

pv -pra test.dat > /mnt/test_nfs/test.dat
[81.5MiB/s] [81.5MiB/s] [====================>] 100%

rsync --progress test.dat /mnt/test_nfs/
test.dat  10,485,760,000 100%  83.60MB/s  0:01:59 (xfr#1, to-chk=0/1)

iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.88 port 5001 connected with 192.168.1.2 port 48660 (peer 2.1.3)
[ ID] Interval       Transfer     Bandwidth
[  4] 0.0-10.0 sec   879 MBytes   735 Mbits/sec

iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.88 port 5001 connected with 192.168.1.2 port 37907 (peer 2.1.3)
[ ID] Interval       Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  3] 0.0-10.0 sec   1.25 MBytes  1.05 Mbits/sec  0.041 ms  2147481859/2147482753 (1e+02%)
[  4] local 192.168.1.88 port 5001 connected with 192.168.1.2 port 49468 (peer 2.1.3)
[  4] 0.0-10.0 sec   1.11 GBytes  955 Mbits/sec   0.045 ms  2145858831/2146670829 (1e+02%)

Volguus fucked around with this message at 18:11 on Aug 7, 2021 |
# ? Aug 7, 2021 16:52 |
There's only one thing computers are any good at, and that's various forms of counting - if you're seeing higher than that, whatever you're using to measure is lying to you, or something is fundamentally broken.
|
|
# ? Aug 7, 2021 18:06 |
|
Yeah, samba is definitely lying here. NFS and iperf are saner. We're definitely not breaking any theoretical maximums. With that being said, the speeds are fine.
|
# ? Aug 7, 2021 18:21 |
So my janky NAS setup - a laptop running OMV with two USB drives mounted to it, the second of which had stuff rsync'd to it from the first every day as a quasi RAID 1 - is falling apart. The first drive unmounted itself in the middle of an rsync and now all my docker images are borked, and even SSH into OMV itself is broken, so I can't go tinkering around in there to fix it.

I was hoping to wait for TrueNAS Scale to be production ready and for expandable pools to make it into the OpenZFS mainline, but alas. I went ahead and got a Synology 920+. I'm going to drop some of the six 8TB SMR drives I have sitting around into it. I know SMR is going to have bad performance, but then again my OMV setup was using two USB SMR drives and it was okay as long as I wasn't moving a ton of data at once.

Later on, when hard drive prices have dropped, can I start swapping out the SMR drives for CMR drives in the Synology? I know people will say just get the CMR drives, but I literally have six shuckable drives sitting around that give me free storage, versus $800+ to fill up this unit with proper CMR drives right now. Would a cache NVMe dramatically improve performance on big writes for these SMR drives in the Synology in the meantime?

Edit: does Synology do the whole resilvering thing when you swap out drives? I imagine that would be a nightmare with SMR ones.

Nitrousoxide fucked around with this message at 12:29 on Aug 9, 2021 |
|
# ? Aug 9, 2021 12:24 |
|
It's been a while and this isn't the best deal, but here's a WD Elements 14TB for $260 at B&H: https://www.bhphotovideo.com/c/prod...6bc129f10c70INT According to https://shucks.top/ the best price on 14TB externals ever was $189.99 but that was 254 days ago.
|
# ? Aug 9, 2021 14:08 |
|
Whether SMR is going to be 'acceptable' or absolute trash depends on your use case; it's writes that it's poor at. My use case is backups (largish writes once a week, fine as they happen in the middle of the night), a media centre (all reads, so fine) and torrenting (terrible), so I have an old 500GB CMR drive the torrents first download to before moving as a complete file to the pool with the SMR drives in it.

Adding new drives to a pool with SMR is tediously slow, but you just kinda slot one in, click a button and leave it. So you just live without raid redundancy for a day while it rebuilds. As long as you have a decent 3-2-1 backup system (which you should), that's an acceptable risk.

So while I'd never recommend someone go out and buy SMR drives for a NAS, if you've got them hanging around then the price of 'free' trumps the poor performance in many cases imo.

Mega Comrade fucked around with this message at 14:58 on Aug 9, 2021 |
# ? Aug 9, 2021 14:53 |
It's still usable while it's rebuilding the array?
|
|
# ? Aug 9, 2021 15:17 |
Writes aren't the big issue with DM-SMR drives, insofar as while they're worse than PMR drives, they're not that much worse. The issue with DM-SMR drives is any kind of I/O that isn't strictly sequential (and even then, PMR drives are better at sequential I/O than SMR).

There is, in theory, one use-case where DM-SMR drives would make sense, if DM-SMR was used to make the biggest capacity drives: using the drives as a sequential-access tape drive, where you write the standard I/O stream from zfs send to the character device itself, and then either use zfs receive or zfs corrective receive (once that lands) to restore data. That seems like it'd get you better-than-tape density for the biggest drives, all the while making use of the only workload where DM-SMR has any chance.

If we had HM-SMR, things might look different - but even then it'd require a pretty substantial amount of code added to ZFS, the various device drivers, I/O schedulers (such as in CAM on FreeBSD), and other places, for it to make sense.

BlankSystemDaemon fucked around with this message at 15:24 on Aug 9, 2021 |
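A minimal sketch of that tape-style workflow (untested; the pool, snapshot, and device names are all made up, and the whole stream has to fit on the disk):

```
# take a snapshot and stream it raw onto the DM-SMR drive's character device
zfs snapshot tank/data@backup
zfs send tank/data@backup | dd of=/dev/da2 bs=1M
# ...and to restore later, stream it back off:
dd if=/dev/da2 bs=1M | zfs receive tank/restored
```

The drive only ever sees one long forward-sequential write, which is the single access pattern DM-SMR is actually good at.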
|
# ? Aug 9, 2021 15:22 |
|
BlankSystemDaemon posted:Writes aren't the big issue with DM-SMR drives, in so far as while they're worse than PMR drives they're not that much worse. Here's a question - let's say you do this zfs send/receive once, fully overwriting the drive. Then you go to do it again - won't the drive go crazy trying to rewrite all the shingles, because it still thinks that is useful information?
|
# ? Aug 9, 2021 17:51 |
|
If you're truly treating it like tape, you'd want the drive to ignore anything on it when writing again. Do SMR drives support TRIM?
|
# ? Aug 9, 2021 18:11 |
VostokProgram posted:Here's a question - let's say you do this zfs send/receive once, fully overwriting the drive. Then you go to do it again - won't the drive go crazy trying to rewrite all the shingles, because it still thinks that is useful information?

IOwnCalculus posted:If you're truly treating it like tape, you'd want the drive to ignore anything on it when writing again. Do SMR drives support TRIM?

The entire point of WORM is to write once and read many. If you end up overwriting anything, you're not using it as WORM media.

Incidentally, it's quite apparent when one reads all of the caveats that're outlined in the document why DM-SMR is such a bad idea. My favorites include needing to buffer 1GiB in memory before writing anything to the disk, always issuing a TRIM before doing any writes, not using all of the disk, having the filesystem reserve diskspace at the beginning of the disk for metadata, not doing any kind of write pattern that isn't forward-sequential, never letting the filesystem control when data should be force-written (using the flush command, which is an integral part of shutdown sequences on any Unix-like, as well as an integral part of ZFS knowing the on-disk layout), and finally having to take care of overruns and underruns in software.
|
|
# ? Aug 10, 2021 19:10 |
|
BlankSystemDaemon posted:The entire point of WORM is to write once and read many. If you end up overwriting anything, you're not using it as WORM media. I mean there's WORM, and then there's "mostly" WORM. I can't imagine a use case where you would literally only write to a drive once in its entire life.
|
# ? Aug 10, 2021 20:23 |
VostokProgram posted:I mean there's WORM, and then there's "mostly" WORM. I can't imagine a use case where you would literally only write to a drive once in its entire life.

My point is, that's all DM-SMR is good for - and given that tape comes out to about ~110USD for 12TB, the price difference isn't as big as you'd expect.
|
|
# ? Aug 10, 2021 22:29 |
Whew, yeah, these SMR drives are something. Getting about 50-60 gigs transferred an hour off my old setup. It's going to take about 3-4 days to transfer all the stuff off it at this rate.

Edit: Depending on how well things settle down once I finish the transfer, I may go out and purchase a $50 NVMe drive or two to slot in the bottom, to act as a cache or a download hopper. Obviously it won't help with initial downloads or anything, but if I can direct high-IO stuff like torrents to the download hopper, and let the cache fill up with whatever frequent requests end up getting made by the docker containers, it may be acceptable once initial setup is complete. I'll still have three spare unshucked drives that I can use for drive failures, so barring running out of space, these drives will probably cover me for the better part of a decade if it works out okay.

Nitrousoxide fucked around with this message at 23:07 on Aug 10, 2021 |
|
# ? Aug 10, 2021 22:40 |
|
BlankSystemDaemon posted:My point is, that's all DM-SMR is good for - and given that tape comes out to about ~110USD for 12TB, the price difference isn't as big as you'd expect. If the drives had some factory reset command so you could tell it "everything on this is useless pretend you never wrote anything at all", they would be so so much more useful
|
# ? Aug 10, 2021 23:12 |
|
VostokProgram posted:If the drives had some factory reset command so you could tell it "everything on this is useless pretend you never wrote anything at all", they would be so so much more useful Does an ATA Secure Erase or SCSI Sanitize not do this? Would seem obvious that it should.
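From memory the hdparm incantation for ATA Secure Erase is roughly the below - double-check against the hdparm docs before pointing it at a real disk, and /dev/sdX is a placeholder:

```
# confirm the drive isn't "frozen" and see the estimated erase time
hdparm -I /dev/sdX
# set a temporary security password, then issue the erase
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX
```

Whether a given DM-SMR firmware actually resets its shingle-zone mapping on a secure erase is the open question, though.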
|
# ? Aug 11, 2021 00:05 |
VostokProgram posted:If the drives had some factory reset command so you could tell it "everything on this is useless pretend you never wrote anything at all", they would be so so much more useful
|
|
# ? Aug 11, 2021 09:08 |
|
Anyone here know a good resource for NAS reviews? I'm looking to replace my seemingly broken Synology DS918+ and am wondering if the Qnap TS653D is a good pick, mainly interested in running various Docker containers as well as Plex transcoding.
|
# ? Aug 12, 2021 06:15 |
For a 3-2-1 backup of stuff where the NAS copy of the data IS the production copy, like for a Plex server, what do you do? Do you need an external HDD as big as your RAID array and back up to that in addition to your off-site backup? What if the RAID array is bigger than an external HDD can affordably get? Would you have to build a second NAS just to get that second local backup? Or just go with a 2-1-1 backup instead and accept the bigger risk?
Nitrousoxide fucked around with this message at 17:22 on Aug 12, 2021 |
|
# ? Aug 12, 2021 17:19 |
|
Nitrousoxide posted:For a 3-2-1 backup of stuff where the NAS copy of the data IS the production copy, like for a Plex server, what do you do? Do you need to get an external HDD that's as big as your RAID array and backup to that in addition to your off-site backup? What if the RAID array is bigger than an external HDD could get affordably? Would you have to build a second NAS just to get that second local backup? Or just have to go with a 2-1-1 backup instead and deal with the bigger risk? You only need to 3-2-1 the data that you give a poo poo about, though. My NAS is roughly 80TB of data. The amount I care about, that would be difficult to replace, is ~3TB. For the media, just periodically export a list of filenames to your properly backed up directories so you can fetch it again when you rebuild your server. For example, my NAS backs up the data I care about to a different computer with a 8TB HDD, and also backs up to a good cloud backup.
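The filename-export idea is cheap to automate - a minimal sketch you could run from cron (MEDIA_ROOT and the output path are examples; on a Synology it'd be something like /volume1/media):

```shell
# Dump a sorted list of every file under the media root into a file that
# lives in the properly backed-up directories, so the library can be
# re-fetched after a rebuild instead of being backed up byte-for-byte.
MEDIA_ROOT="${MEDIA_ROOT:-.}"
find "$MEDIA_ROOT" -type f | sort > media-list.txt
wc -l media-list.txt
```

A few KB of filenames in the real backup set stands in for tens of TB of replaceable media.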
|
# ? Aug 12, 2021 17:31 |
|
SolusLunes posted:My NAS is roughly 80TB of data. The amount I care about, that would be difficult to replace, is ~3TB. You had me at first there. I'm just rocking a couple of 3TB Reds mirrored in a lovely old Buffalo LinkStation, but the vast majority is movies & TV. I'd say my essential data might only total inside a TB.
|
# ? Aug 12, 2021 17:39 |