|
Matt Zerella posted:He can run the server on his gaming rig. Sorry, I don't know how I missed that. Yes, that will work. If you find yourself being annoyed by using your gaming computer as a Plex server you can always upgrade that aspect of things while not having to replace your NAS or TV.
|
# ? Jan 10, 2018 20:09 |
|
|
The Gunslinger posted:Any reason I shouldn't go with a Ryzen 1700 for a new custom NAS/VM build? I have some credit on my Newegg account so I can get the processor for like $100. I'm looking at the specs and benchmarks, seems pretty nice. 8 cores, 16 threads, decent clocks, good feature set, etc. I know it's massive overkill but I want it to survive the 4k era if possible and I really want to do more with VMs at home. I also run a Plex server for 8-10 friends/family and it's been choking a bit lately. RAM is expensive as heck right now, which is why for my new VM box I went with a Xeon E5-2650 v2 (which is 8c, 16t like the Ryzen 1700) with 48GB of ECC RAM. It was a used Lenovo S30 workstation. Cost me like $430 shipped.
|
# ? Jan 10, 2018 20:27 |
|
My pair of 2011 vintage Hitachi 2TB drives have started making the occasional ominous clicking noise for no apparent reason, and while SMART's still reporting no problems I figure it's about time to replace them. In my new computer build I've noticed that the drives are usually louder than the fans and in a fairly obnoxious way too, so I'm thinking I'll get a small NAS with a pair of RAID1 4TB drives to keep my bulk storage on and have the computer be SSD only. So... Synology DS218+ seems to be the cool thing at the moment, yes? I don't really have any old hardware sitting around I could build my own from and I don't need that much functionality, I just want to hoard a bunch of music/photos/scanned documents/other junk and maybe back the super important irreplaceable stuff up to an external drive occasionally for extra safety.
|
# ? Jan 10, 2018 23:46 |
TheFluff posted:back the super important irreplaceable stuff up to an external drive occasionally for extra safety. I wouldn't consider that a backup. A proper backup should protect you from theft, fire, ransomware, hardware failures, etc. For the irreplaceable stuff, definitely use one or more cloud backup solutions (in addition to your NAS + external drive). External drives in particular are notoriously unreliable.
|
|
# ? Jan 11, 2018 01:31 |
|
fletcher posted:I wouldn't consider that a backup. A proper backup should protect you from theft, fire, ransomware, hardware failures, etc. For the irreplaceable stuff, definitely use one or more cloud backup solutions (in addition to your NAS + external drive). External drives in particular are notoriously unreliable. I already have cloud backup, I just want to add suspenders to my belt by adding more places to keep the truly irreplaceable stuff.
|
# ? Jan 11, 2018 09:30 |
|
So there's a lot of talk about using NAS-class vs desktop-class drives in a NAS box, and I'd like to cut through some of it. I have a 4-bay Synology 918+ with three 4TB NAS drives in it. I'm upgrading my desktop storage to a 6TB internal drive, and now have a perfectly good spare 4TB WD Black drive. Am I realistically going to have any problems just throwing it into my spare bay in the Synology?
|
# ? Jan 11, 2018 17:26 |
|
fletcher posted:I wouldn't consider that a backup... External drives in particular are notoriously unreliable. Yet we're all notoriously/ironically shucking WD Easy Store External drives to throw into our NAS
|
# ? Jan 11, 2018 18:58 |
|
eightysixed posted:Yet we're all notoriously/ironically shucking WD Easy Store External drives to throw into our NAS To be fair, that's because a specific line was stuffing high-end WD Reds in them instead of the usual poo poo-tier Green rejects or white-label rando drives that usually inhabit externals.
|
# ? Jan 11, 2018 19:10 |
|
Yeah, I get it - I'm with you. But I had to tongue-in-cheek that one
|
# ? Jan 11, 2018 19:31 |
|
Could someone tell me what an ideal layout would be for a personal Windows storage server? I've looked at FreeNAS but I'm not ready to make that leap yet (especially when their bhyve virtualization is still being worked on). My thought has been:
I imagine the biggest issue here (aside from using Windows) would be the pass-through of the disks to the VM. This would make the physical disks and Storage Space unavailable to any other VM or the physical host, correct? Is this a sensible approach for a home lab?
|
# ? Jan 15, 2018 19:51 |
|
Out of curiosity - anyone moving data off their NAS at better than Gig-E? Borderline question between here and the networking thread, but I figure the people here would be more relevant to what kind of performance you can expect from ebay'd gear. Specifically looking at 40gb Infiniband gear because it's so cheap, and since it's linux to linux I'd use NFS/RDMA.
|
# ? Jan 18, 2018 07:49 |
|
Harik posted:Out of curiosity - anyone moving data off their NAS at better than Gig-E? Borderline question between here and the networking thread, but I figure the people here would be more relevant to what kind of performance you can expect from ebay'd gear. Specifically looking at 40gb infiniband gear because it's so cheap, and since the use is linux to linux use NFS/RDMA. Yeah - I use a 40GbE link between my NAS and my gaming PC (pair of Mellanox ConnectX-3); and a 10GbE link between my NAS and my MacBook Pro (using Chelsio T520-CR - has great Mac drivers; there's no cheap used market for 40GbE cards for Mac). All eBay gear. I'm using Ethernet, not Infiniband. If you're just streaming media for consumption, Gigabit is totally sufficient, even for 4k. But if you want to access project files (e.g. video editing, video game assets, etc), run games, run VMs, etc directly from your NAS, 10GbE minimum is much nicer. I run all of my games (and large projects) directly off of the NAS, and it's pretty great overall. My pool consists of 2x raid-z2 of 6x 8TB WD Red drives. FWIW, CrystalDiskMark numbers over 40GbE look like this: [CrystalDiskMark screenshots for SMB and iSCSI omitted] I access most of my stuff via SMB to gain the benefits of having the files directly on zfs. I keep a few things on an iSCSI NTFS volume that don't like being on a network share (certain games in particular). This mostly works well, except that Windows (10) doesn't always cleanly reconnect to the network share, particularly at startup. iSCSI is never a problem. Not sure why. admiraldennis fucked around with this message at 15:14 on Jan 18, 2018 |
# ? Jan 18, 2018 15:09 |
|
admiraldennis posted:Yeah - I use a 40GbE link between my NAS and my gaming PC (pair of Mellanox ConnectX-3); and a 10GbE link between my NAS and my MacBook Pro (using Chelsio T520-CR - has great Mac drivers; there's no cheap used market for 40GbE cards for Mac). All eBay gear. I'm using Ethernet not Infiniband. Agreed there, most of my streaming is over wireless to fire sticks so obviously gig-e is sufficient for that. It's the project work (huge datasets/video editing/etc) that I want to speed up. admiraldennis posted:My pool consists of 2x raid-z2 of 6x 8TB WD Red Drives. FWIW, CrystalDiskMark numbers over 40GbE look like this: Holy hell, those sequential speeds approach my NVMe. Gets killed on random, as expected, but that's some massive bandwidth. Thanks for confirming it's viable.
|
# ? Jan 19, 2018 12:18 |
|
Can anyone recommend a backup solution that allows for scheduled backups and works with a Synology NAS? Right now I'm using Cloud Station Backup to back up some folders in "C:\Users\MyName..." but CS Backup won't let me filter out subfolders that I don't need, and I feel like it also writes to the NAS constantly, which I'm not a big fan of since I imagine the wear and tear can't be great.
|
# ? Jan 19, 2018 15:31 |
|
Incessant Excess posted:Can anyone recommend a backup solution that allows for scheduled backups and works with a Synology NAS? Right now I'm using Cloud Station Backup to backup some folders in "C:\Users\MyName..." but CS Backup won't let me filter out subfolders that I don't need and I feel like it also writes to the NAS constantly which I'm also not a big fan of as I imagine that wear and tear can't be great. I use Duplicati 2 to just back up to a folder on the NAS I've got mapped as a network drive, and it's great. The user interface is kinda clunky, but the actual backing up is really smooth compared to my previous solution (CrashPlan). CrashPlan would re-scan all 500GB of backup sources all the time, it was really slow in general, the interface got quite a bit worse over time, and it was written in Java so it ate a bunch of memory and was generally obnoxious. I've also heard good things about Arq, which looks a bit friendlier. TheFluff fucked around with this message at 16:12 on Jan 19, 2018 |
# ? Jan 19, 2018 16:09 |
|
Incessant Excess posted:Can anyone recommend a backup solution that allows for scheduled backups and works with a Synology NAS? Right now I'm using Cloud Station Backup to backup some folders in "C:\Users\MyName..." but CS Backup won't let me filter out subfolders that I don't need and I feel like it also writes to the NAS constantly which I'm also not a big fan of as I imagine that wear and tear can't be great. Assuming you're on Windows 8 or later, the built-in File History works fine for me. Just make sure to set a quota on the share you point it at, because it'll happily fill your entire NAS if you don't.
|
# ? Jan 19, 2018 16:54 |
|
I feel like File History is a thing that more non-nerds should know about and use. It's just kind of hidden, which is a shame, because it's great.
|
# ? Jan 19, 2018 17:15 |
|
It's OK, it does back up files. It isn't Time Machine. MS should take backups more seriously.
|
# ? Jan 20, 2018 00:25 |
|
Thermopyle posted:I feel like File History is a thing that more non-nerds should know about and use. It's just kind of hidden, which is a shame, because it's great. In the meantime I guess I'll hunt for a white paper on it.
|
# ? Jan 20, 2018 00:34 |
|
It just copies the user profile folders to a second backup hard drive on a schedule. That's about it.
|
# ? Jan 20, 2018 00:35 |
|
It's just automatic backup, but if people knew to turn it on, it'd be easy and a no-brainer.
|
# ? Jan 20, 2018 00:42 |
|
What's the best way to go about cleaning up periodic ZFS snapshots on FreeNAS? I just realized that apparently something like 50% of my used space is due to overly-aggressive snapshots that I set up when I built this 2.5 years ago.
lurksion fucked around with this message at 07:58 on Jan 20, 2018 |
# ? Jan 20, 2018 07:54 |
|
lurksion posted:What's the best way to go about cleaning up periodic ZFS snapshots on FreeNAS? I just realized that apparently something like 50% of my used space is due to overly-aggressive snapshots that I set up when I built this 2.5 years ago. Farm it out to a separate application (google "ZFS snapshot management"), no comment on which you should prefer but this is one of those problems that is solved already and you just need to define your retention rules.
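If you'd rather script it yourself, the logic is tiny: list the snapshots, pick the ones past your cutoff, destroy them. A rough sketch (the `@auto-YYYY-MM-DD` naming is made up; match it to whatever your periodic snapshot task actually produces):

```python
"""Sketch: prune overly-aggressive periodic ZFS snapshots.

Assumes the periodic task names snapshots like pool/ds@auto-YYYY-MM-DD
(a hypothetical scheme -- adjust the parsing to your own).
"""
import subprocess
from datetime import date

def prune_candidates(snapshots, cutoff):
    """Return snapshot names whose embedded date is older than `cutoff`."""
    cutoff_iso = cutoff.isoformat()
    picked = []
    for snap in snapshots:
        _, sep, tag = snap.partition("@auto-")
        # ISO dates compare correctly as plain strings
        if sep and tag < cutoff_iso:
            picked.append(snap)
    return picked

def list_snapshots():
    """All snapshot names on the box, via `zfs list` (run as root)."""
    out = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

# Dry run first -- print, don't destroy:
#   for snap in prune_candidates(list_snapshots(), date(2017, 1, 1)):
#       print(snap)
# then swap in subprocess.run(["zfs", "destroy", snap], check=True)
```

Keep it in print-only mode until the list looks right before wiring up `zfs destroy`.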
|
# ? Jan 20, 2018 08:14 |
|
I'd like to add that the space used is almost entirely about the retention length, not density of snapshots. I haven't used ZFS, but most snapshot systems have very little overhead, it's almost entirely taken up by files you've deleted since then.
|
# ? Jan 20, 2018 17:46 |
|
Thermopyle posted:It's just automatic backup, but if people would just know to turn it on it's easy and a no brainer. It really only protects against hardware failure, though, so it's not going to help with minimally competent cryptolockers, which I assume are the most common source of data loss now that most new machines ship with an SSD. You can't even point it at a network share unless you dig down into the Windows 7 version, which I just tried to do on my work laptop and it told me it couldn't use the network path for the usual cryptic non-reasons. Harik posted:I'd like to add that the space used is almost entirely about the retention length, not density of snapshots. I haven't used ZFS, but most snapshot systems have very little overhead, it's almost entirely taken up by files you've deleted since then. Do you really belong in this thread if you delete stuff? I mean, come on
|
# ? Jan 22, 2018 16:33 |
|
Munkeymon posted:It really only protects against hardware failure, though, so it's not going to help with minimally competent cryptolockers, which I assume are the most common source of data loss now that most new machines ship with an SSD. I'm backing up to a network share with file history right now on Windows 10. For the average consumer I feel like the most common source of data loss is accidentally deleting stuff, but maybe I'm wrong.
|
# ? Jan 22, 2018 17:12 |
|
Munkeymon posted:It really only protects against hardware failure, though, so it's not going to help with minimally competent cryptolockers, which I assume are the most common source of data loss now that most new machines ship with an SSD. Couldn't you just re-format your disk and then copy the files back from the NAS?
|
# ? Jan 22, 2018 17:38 |
|
Incessant Excess posted:Couldn't you just re-format your disk and then copy the files back from the NAS? Well the cryptolocker will likely use your own saved network credentials to also gently caress up your backups. That's pretty common these days, so the use of a network share isn't all that safe.
|
# ? Jan 22, 2018 18:30 |
|
Nulldevice posted:Well the cryptolocker will likely use your own saved network credentials to also gently caress up your backups. That's pretty common these days, so the use of a network share isn't all that safe. Yeesh, I didn't realize that could happen. Anything I can do to protect my NAS from that?
|
# ? Jan 22, 2018 18:36 |
|
Incessant Excess posted:Yeesh, I didn't realize that could happen. Anything I can do to protect my NAS from that? If you're on ZFS, snapshots are read-only to all clients, so you can just roll back to the closest one you took before the event. In other news, I used MediaTidy for the first time on my NAS collection after many years of mediocre management. It installs on Linux (I ran it through a VM and mounted the shares) and uses ffmpeg and a host of other logical criteria to actually go through your music/movies/tv shows and look for structure issues and actual duplicates. I reclaimed over 400GB of storage by deleting a ton of this.avi vs. this-bettercopy.mkv duplicates accumulated over the years (also, Sonarr going nuts grabbing propers or higher res items). Great little tool. insularis fucked around with this message at 18:45 on Jan 22, 2018 |
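On the rollback idea: the only fiddly bit is picking the newest snapshot from before the infection. A sketch (snapshot naming here is hypothetical):

```python
"""Sketch: find the newest ZFS snapshot taken before a cryptolocker hit,
to feed to `zfs rollback -r`. Assumes hypothetical @auto-YYYY-MM-DD names."""

def latest_clean(snapshots, incident_date):
    """Newest snapshot whose date is strictly before `incident_date` (ISO string)."""
    dated = []
    for snap in snapshots:
        _, sep, tag = snap.partition("@auto-")
        if sep and tag < incident_date:
            dated.append((tag, snap))
    # Tuples sort by date tag first, so max() is the newest clean one
    return max(dated)[1] if dated else None

# On the NAS, as root, roughly:
#   snaps = subprocess.run(
#       ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "tank/share"],
#       capture_output=True, text=True, check=True).stdout.splitlines()
#   target = latest_clean(snaps, "2018-01-22")
#   then: zfs rollback -r <target>   (-r discards the newer, encrypted snapshots)
```

Note that `zfs rollback -r` destroys every snapshot newer than the target, so be sure before you run it.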
# ? Jan 22, 2018 18:40 |
|
Thermopyle posted:I'm backing up to a network share with file history right now on Windows 10. Yeah, I tried to add a network path through that and it fails with a uselessly generic error code, but it is a work machine+network so IDK what the deal is with the server on the other end. There's also a lot of digging to find that option. Main point is, anything Windows can write to without demanding a password from the user is probably toast if you get ransomwared, so it's probably only good for hardware failure*. That's what the recycling bin is for? *unless you're backing up to a NAS with regular snapshots it can roll back to but this scenario is way outside of 'average user' territory e: the way I did it because I was hand-rolling my own snapshot management 11 years ago and am too lazy to update it was to give the NAS (solaris/openindiana) access to a share of my backup snapshots on my desktop so it can actively pull them and the desktop doesn't need write access. The script stores them in part of the file tree that's shared read-only to all clients. Never been tested with a cryptolocker, but that's what I came up with when I heard about them. Munkeymon fucked around with this message at 18:50 on Jan 22, 2018 |
# ? Jan 22, 2018 18:42 |
|
Unless snapshots are visible via SMB, CIFS, or NFS I don’t see how it’d be possible for a client to delete them. Hell, given a snapshot is immutable I’d presume the worst thing to happen from a client could be deleting snapshots even if tons of access was granted, but I’ve never seen a UI or CLI command from a client that would let me see, create, or delete all these serverside snapshots. I have different logins to my servers as admin than I do as clients so even if I got infected I’d just lose local data and for the past maybe 24 hours. Hell, I’d be more concerned with cloud filesystems like via Google Drive or iCloud than my local files because I dunno how to recover cloud data effectively from providers.
|
# ? Jan 22, 2018 20:13 |
|
Nulldevice posted:Well the cryptolocker will likely use your own saved network credentials to also gently caress up your backups. That's pretty common these days, so the use of a network share isn't all that safe. Set the network share you're backing up to to be non-writable from your normal credentials, and configure your backups to run under a separate set of credentials.
|
# ? Jan 22, 2018 21:17 |
|
necrobobsledder posted:Unless snapshots are visible via SMB, CIFS, or NFS I don’t see how it’d be possible for a client to delete them. Hell, given a snapshot is immutable I’d presume the worst thing to happen from a client could be deleting snapshots even if tons of access was granted, but I’ve never seen a UI or CLI command from a client that would let me see, create, or delete all these serverside snapshots. I have different logins to my servers as admin than I do as clients so even if I got infected I’d just lose local data and for the past maybe 24 hours. Hell, I’d be more concerned with cloud filesystems like via Google Drive or iCloud than my local files because I dunno how to recover cloud data effectively from providers. The laziness is that to deal with a daily image and associated file additions+deletions 'correctly' without wasting a bunch of space would involve the snapshot configuration mirroring the image creation cadence, so I'd have to gently caress with snapshot scripts if I hosed with the backup config and well This system has outlasted a couple of desktops and I haven't had to mess with snapshot scripts, so I'm OK with it, but this is part of why I'm eying this year to do a clean rebuild of the controlling system with modern tooling and maybe kick the legacy Solaris stuff to the curb. Especially now that I don't mind paying for some software that'll save me a weekend (lol it's never just one weekend) of hot hot manpage action.
|
# ? Jan 22, 2018 23:22 |
|
Farmer Crack-rear end posted:Set the network share you're backing up to to be non-writable from your normal credentials, and configure your backups to run under a separate set of credentials. What I was referring to is that it can access all of your network credentials. If you've saved any network credentials in Windows, it's highly likely a competent cryptolocker will be able to use them. Everything is stored in the same place. You can see this in the credential manager in the control panel; locations and credentials are stored there. The malware will simply mount the share using the stored credentials and wipe out the backups if possible. Things like File History would be easily wiped out. The way I've gotten around this is a little different. I share out my directories to my server and the server mounts the directory using automount and does an rsync diff of the home directory and anything else important (the directory is also read-only as a share) and keeps it on the server. All of my NAS shares are read-only with one exception, which is just scratch space/drop-off location. All of my download work is done directly on the server, using CentOS as a base and various programs to handle the downloads (rtorrent/rutorrent/nzbget); then I log into the server and use custom scripts to manage downloaded content. I have a second server that is used as a backup target which has no shares. Everything is loaded via ssh/rsync nightly with 12 days worth of backups on auto rotation. Also have on-demand mounted external drives. Final backup is Backblaze B2/rclone for catastrophic failures. I put a lot of thought into 'what if', probably to the level of extreme paranoia. However, with all of this I make no assertion that I'm bulletproof; anything can happen. I think I'm pretty well protected as is, but I'm always looking for ways to improve the situation.
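The rotation piece of a pull-style setup like that is simple enough to sketch (paths and hostnames are made up; the 12-copy window matches what's described above):

```python
"""Sketch: rotation for a pull-style rsync backup -- keep the newest N
dated copies, delete the rest. Paths/hostnames in comments are made up."""
import shutil
from pathlib import Path

def prune_rotations(backup_root, keep=12):
    """Delete all but the newest `keep` (>= 1) dated backup directories.

    Dated dirs are assumed to look like 2018-01-23; anything else
    under backup_root is left alone.
    """
    dated = sorted(
        p for p in Path(backup_root).iterdir()
        if p.is_dir() and len(p.name) == 10 and p.name[4] == p.name[7] == "-"
    )
    for old in dated[:-keep]:
        shutil.rmtree(old)
    return [p.name for p in dated[-keep:]]

# Nightly on the backup box, roughly:
#   rsync -a --link-dest=/backups/desktop/latest \
#       desktop:/home/user/ /backups/desktop/2018-01-23/
# then call prune_rotations("/backups/desktop") to enforce the 12-day window.
```

Because the backup server pulls over ssh, the desktop never holds write credentials to the backup tree, which is the whole point.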
|
# ? Jan 23, 2018 12:32 |
|
Or...use crashplan!
|
# ? Jan 23, 2018 17:21 |
|
Thermopyle posted:Or...use crashplan! ...which actively tries to gently caress up NAS installs, or at least makes zero effort to support them. There's not a great solution, sadly.
|
# ? Jan 23, 2018 17:25 |
|
They also killed off their reasonably cheap Home program, so there's basically no reason to use them anymore. Backblaze is the next-best option.
|
# ? Jan 23, 2018 17:29 |
|
DrDork posted:They also killed off their reasonably cheap Home program, so there's basically no reason to use them anymore. Backblaze is the next-best option. I'm trying iDrive on my Synology. Been OK so far. The only annoying thing is that they don't seem to have the latest updates in the Update Center, and you have to do them manually.
|
# ? Jan 23, 2018 17:49 |
|
|
Sorry, I just meant crashplan-esque solutions, not specifically CrashPlan. I tentatively plan on sticking with CrashPlan Small Business because I just back up my NAS now, since my PCs back up to my NAS. This makes CP $10/month, which is actually less than the $149/year I was paying for the old CrashPlan. I might switch over to B2 at some point, but I've been happy-ish with CrashPlan and already have it set up and restore procedures tested with them.
|
# ? Jan 23, 2018 17:50 |