|
I'm using Duplicati 2 on my clients (Windows/etc) pointed at a Minio server (Ubuntu VM running on unRAID) and I replicate that offsite once a month. For my "media" files I just have a shell script that dumps a list of the folders in the shares once in a while; I don't care if I lose that stuff as it's easily replaced later.
|
# ? Jan 23, 2018 18:39 |
|
|
|
I noticed an update notification for NZB Hydra which I run as a docker container, I don't have to do anything there, right? That's something that Docker takes care of automatically?
|
# ? Jan 23, 2018 20:03 |
|
Depends on how your container is configured. The ones I use fall into one of three categories. 1) Restart the container and it updates as part of booting up (plex) 2) Use the web interface of the containerized app to update it (sonarr) 3) Pull the latest image and re-deploy the container.
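For that third category, the update is just a pull and a re-create. A minimal sketch — the image name, container name, port, and volume path here are all placeholders for whatever your actual setup uses:

```shell
# Hypothetical example for category 3: names/paths are placeholders
docker pull linuxserver/hydra        # fetch the newest image

docker stop hydra                    # stop the running container
docker rm hydra                      # remove it (config survives in the bind mount)

# re-create it with the same options you used the first time
docker run -d --name hydra \
  -p 5075:5075 \
  -v /mnt/appdata/hydra:/config \
  linuxserver/hydra
```

Your app data is safe as long as it lives in a volume or bind mount rather than inside the container itself.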
|
# ? Jan 23, 2018 20:46 |
|
So assuming I want a basic file server (like FreeNAS or Linux w/ ZFS or unRAID), is 8GB of RAM enough for 4x 8TB drives? Would doubling to 16GB make a noticeable difference in performance?
|
# ? Jan 24, 2018 06:32 |
|
8 will do you good, but if you can afford the 16 now go for it or just wait and upgrade to 16 later.
|
# ? Jan 24, 2018 15:09 |
|
Thermopyle posted:Sorry, I just meant crashplan-esque solutions, not specifically crashplan. What are you using to backup to NAS? I just added a lot more storage to my NAS and only have 6 months left of CrashPlan, so starting to think about what changes I need to make. I really use crashplan’s restore from any point in time as I run into my own stupidity regularly like, “Oh hey, my .gitconfig was deleted sometime in the last 3 months and i just noticed that my custom aliases are gone!”
|
# ? Jan 24, 2018 15:56 |
|
Hughlander posted:What are you using to backup to NAS? I just added a lot more storage to my NAS and only have 6 months left of CrashPlan, so starting to think about what changes I need to make. I really use crashplan’s restore from any point in time as I run into my own stupidity regularly like, “Oh hey, my .gitconfig was deleted sometime in the last 3 months and i just noticed that my custom aliases are gone!” I just recently switched to Windows File History, but I haven't yet done an in-depth analysis of how well backing that folder up to Crashplan works when it comes to restoring, and I'm not super excited about "nesting" my backups like that. Though, it's nice that File History backups aren't completely opaque data blobs to Crashplan because the File History destination isn't some proprietary thing...it's just a mirror of your folder structure with all your files ever. I'm not exactly sure yet how it handles different versions of the same file in this scheme...it doesn't look like it's got the different versions living side by side with incrementing file names, so I need to look into that more. FWIW, crashplan doesn't even have to get involved in the scenario you described...you just use File History to go back in time.
|
# ? Jan 24, 2018 16:12 |
|
sharkytm posted:...which actively tries to gently caress up NAS installs, or at least makes zero effort to support them. There's not a great solution, sadly. I struggled with CrashPlan on my Synology for a while, but then setup a Crashplan docker container and have had no problems since. It works very well and I'd highly recommend it- let me know if you have any questions.
|
# ? Jan 24, 2018 16:22 |
|
quote:I'm not exactly sure yet how it handles different versions of the same file in this scheme...it doesn't look like it's got the different versions living side by side with incrementing file names, so I need to look into that more. All it does is add another version of a modified file and appends the backup time and date in like UTC or something. You can directly grab a file out and rename it by removing the appended date if you want.
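To illustrate the naming scheme: File History tacks the backup timestamp onto the filename, so recovering the original name is just stripping that suffix. A quick sketch — the exact "(YYYY_MM_DD HH_MM_SS UTC)" pattern is what I've seen; double-check it against your own backup folder:

```shell
# Strip the File History timestamp suffix from a backed-up filename,
# e.g. 'notes (2018_01_24 16_40_00 UTC).txt' -> 'notes.txt'
strip_fh() {
  echo "$1" | sed -E 's/ \([0-9]{4}_[0-9]{2}_[0-9]{2} [0-9]{2}_[0-9]{2}_[0-9]{2} UTC\)//'
}

strip_fh "notes (2018_01_24 16_40_00 UTC).txt"   # notes.txt
strip_fh ".gitconfig (2017_11_02 09_15_30 UTC)"  # .gitconfig
```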
|
# ? Jan 24, 2018 16:40 |
|
So I have redundant FreeNAS machines in my house now. Cause I'm paranoid and dumb. I have a PrimaryFreeNAS box. And BackupFreeNAS box. I set up rsync in the GUI to nightly backup the PrimaryFreeNAS to the BackupFreeNAS. I used this guide: http://thesolving.com/storage/how-to-sync-two-freenas-storage-using-rsync/ Did some quick tests with smaller files. Everything was working great. Files were moved to Backup. And deleted when they were removed from Primary. Didn't pay attention to speed too much. Threw ~1TB at it, set rsync to run at 4am, and went to bed. I noticed that network utilization is ~130Mbit/s. Which is fairly miserable. This is transferring video files that are over 40GB each. So it isn't a small file problem. I notice people complain about rsync speeds, but none seem to complain about it being this bad. If I use CIFS and drag and drop between the two on my Windows box I get a bit over 50MB/s. Which makes sense as the data has to come and go from the Windows machine. You would think a direct rsync between the two machines would be much faster. Both machines can be written to and read from at ~100MB/s from my Windows box. So it isn't a link problem. They exist on the same switch. Any ideas?
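For context, that guide's setup amounts to a nightly `rsync -a --delete` job, which is what gives the "deleted when removed from Primary" behavior. A minimal local sketch of those flags — temp directories stand in for the two boxes; the real task runs over ssh between them:

```shell
# Two throwaway directories stand in for Primary and Backup
src=$(mktemp -d); dst=$(mktemp -d)
echo "movie data" > "$src/video1.mkv"
echo "more data"  > "$src/video2.mkv"

# -a preserves permissions/timestamps, --delete mirrors removals
rsync -a --delete "$src/" "$dst/"

# Delete a file from the "primary" and sync again
rm "$src/video2.mkv"
rsync -a --delete "$src/" "$dst/"
ls "$dst"    # only video1.mkv remains
```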
|
# ? Jan 24, 2018 16:42 |
|
dox posted:I struggled with CrashPlan on my Synology for a while, but then setup a Crashplan docker container and have had no problems since. It works very well and I'd highly recommend it- let me know if you have any questions. I used that one, but it had a problem where every time CrashPlan updated you had to jump through GUI hoops to reset memory usage. I use gfjardim/crashplan, which has a built-in NoVNC server so you just point a web browser at it for the UI, and it never resets the memory.
|
# ? Jan 24, 2018 16:43 |
|
Ziploc posted:So I have redundant FreeNAS machines in my house now. Cause I'm paranoid and dumb. One comment is don't use rsync at all. Use zfs send and zfs receive. It sends snapshots instead. I just used it to move 16TB from one pool to another.
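The zfs-native equivalent of the rsync job is a snapshot plus send/receive, and incremental runs only ship the blocks that changed between snapshots. A rough sketch, borrowing the pool names from the posts above (FreeNAS's Replication Tasks GUI wraps essentially this):

```shell
# Take a point-in-time snapshot on the primary
zfs snapshot PrimaryVolume@nightly-2018-01-24

# Full send of that snapshot to the backup box over ssh
zfs send PrimaryVolume@nightly-2018-01-24 | \
  ssh backupfreenas.local zfs receive BackupVolume/PrimaryVolume

# Later runs send only the delta between two snapshots (-i)
zfs snapshot PrimaryVolume@nightly-2018-01-25
zfs send -i @nightly-2018-01-24 PrimaryVolume@nightly-2018-01-25 | \
  ssh backupfreenas.local zfs receive BackupVolume/PrimaryVolume
```

Because it's block-level and snapshot-based, it doesn't have to walk and compare the whole file tree the way rsync does.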
|
# ? Jan 24, 2018 16:47 |
|
redeyes posted:All it does its add another version of a modified file and appends the backup time and date in like UTC or something. You can directly grab a file out and rename by removing the appended date if you want. Ahh, so that's not too bad when it comes to Crashplan backing up that folder. I mean, obviously (i guess?) it'd be better to have Crashplan versioning instead, but this seems OK for now.
|
# ? Jan 24, 2018 17:18 |
|
Thermopyle posted:Ahh, so that's not too bad when it comes to Crashplan backing up that folder. I mean, obviously (i guess?) it'd be better to have Crashplan versioning instead, but this seems OK for now. It actually should work perfectly and easily with whatever online backup system you want. Just the fact it only adds a few files at a time (based on what you modify) would seem to work great with incremental online stuff.
|
# ? Jan 24, 2018 17:20 |
|
Hmm. Ok. I read some things. Mainly this: http://doc.freenas.org/9.10/storage.html#replication-tasks Sounds like a worthwhile solution. I'll give it a try.
|
# ? Jan 24, 2018 17:24 |
|
redeyes posted:It actually should work perfectly and easily with whatever online backup system you want. Just the fact it only adds a few files at a time (based on what you modify) would seem work great with incremental online stuff. Yeah, definitely. I was just saying that Crashplan already has a file version system that you can filter by date/time, so now this isn't integrated with that.
|
# ? Jan 24, 2018 17:41 |
|
Something I haven't been easily able to google: What happens when a snapshot is created in the middle of a large file being written to the server? Do I just get a snapshot of a half-transferred file?
|
# ? Jan 24, 2018 18:04 |
|
Crossposting from the Intel thread: If you own a Synology DS415+/DS1515+/DS1815+ or some other Intel Atom C2000 based variant of it, please make sure you back up your config. Lots of customers are reporting dead units due to the Intel Atom bug: https://forum.synology.com/enu/viewtopic.php?t=127839 or search twitter. Mine started randomly shutting down almost exactly two years after purchase; it looked like a faulty PSU so I replaced it with a different appliance and now the Synology unit won't power up at all. The expected RMA turnaround time is over three weeks. more info: https://www.servethehome.com/intel-atom-c2000-series-bug-quiet/
|
# ? Jan 24, 2018 18:19 |
|
eames posted:Crossposting from the Intel thread: If you own a Synology DS415+/DS1515+/DS1815+ or some other Intel Atom C2000 based variant of it please make sure you backup your config. God damnit. Thanks for posting this.
|
# ? Jan 24, 2018 18:28 |
|
Hughlander posted:I used that one but it had a problem with every time CrashPlan updates you had to jump through GUI hoops to reset memory usage. I use gfjardim/crashplan which has a built in NoVNC server so you just point a web browser at it for the UI and it never reset the memory. Seconding this. I don't know what dark magic this container uses but it's the only Crashplan solution I've had that doesn't reset the max RAM value every time it updates.
|
# ? Jan 24, 2018 18:47 |
|
Hughlander posted:I used that one but it had a problem with every time CrashPlan updates you had to jump through GUI hoops to reset memory usage. I use gfjardim/crashplan which has a built in NoVNC server so you just point a web browser at it for the UI and it never reset the memory. Well hell, that NoVNC thing is cool. I get tired of setting up an SSH tunnel every time I want to admin Crashplan on my server. Guess I'll be setting that image up...
|
# ? Jan 24, 2018 19:28 |
|
eames posted:Crossposting from the Intel thread: If you own a Synology DS415+/DS1515+/DS1815+ or some other Intel Atom C2000 based variant of it please make sure you backup your config. We also had this failure two weeks ago.
|
# ? Jan 24, 2018 19:31 |
|
Thermopyle posted:Well hell, that NoVNC thing is cool. I get tired of setting up an SSH tunnel everytime I want to admin Crashplan on my server. NoVNC is cool in general. I set up a letsencrypt nginx reverse proxy for everything and made / published a NoVNC container that lets you point it at any machine on the network. Like 6 containers running are X+Firefox+NoVNC to some other container / machine.
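One gotcha with proxying noVNC: it rides on websockets, so the reverse proxy needs the upgrade headers or the connection dies. A hedged nginx fragment — the path, port, and backend address are invented for illustration:

```nginx
location /crashplan/ {
    proxy_pass http://127.0.0.1:8080/;   # wherever the noVNC container listens
    proxy_http_version 1.1;
    # Required for the websocket handshake noVNC uses
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```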
|
# ? Jan 24, 2018 19:43 |
|
I have two RS2416RP+ at work holding some camera footage (bought with budget fluff money). Looking forward to those making GBS threads the bed. Will they do an RMA before they actually die, or do I get to wait to panic?
|
# ? Jan 24, 2018 19:44 |
|
Moey posted:I have two RS2416RP+ at work holding some camera footage (bought with budget fluff money). Looking forward to those making GBS threads the bed. Reading the tail end of that thread linked, it looks like they'll do it proactively.
|
# ? Jan 24, 2018 19:52 |
|
Internet Explorer posted:Reading the tail end of that thread linked, it looks like they'll do it proactively. Yeeehaw. I'll give it a shot and post some results.
|
# ? Jan 24, 2018 19:55 |
|
Perhaps they're just waiting to see how bad the numbers really are before issuing a voluntary recall? There's a board fix that involves soldering a simple resistor to two header pins on the mainboard. My understanding is that this basically slows down or nearly stops the decay of the chip but only works while the CPU isn't damaged yet. Accepting RMAs for working units would allow them to apply that fix and send them out as refurbished replacements. This posting has pictures of the resistor: https://forum.synology.com/enu/viewtopic.php?f=106&t=127839&start=660#p505505 AFAIK all 1517+ shipping now still have the same boards with the same affected CPU stepping (B0) just with the one extra resistor on the board. What a mess, I feel sorry for Synology. It seems like they're not even allowed to talk about it. eames fucked around with this message at 21:18 on Jan 24, 2018 |
# ? Jan 24, 2018 21:14 |
|
Ziploc posted:Something I haven't been easily able to google: My understanding is that yes, this is exactly what happens. Blocks that change after the snapshot are not part of that snapshot, even if they're appended to an existing file.
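You can see this for yourself on any test dataset via the hidden .zfs directory — the snapshot's copy of the file is frozen at whatever had been committed when the snapshot fired. Dataset and file names below are invented:

```shell
# Start a large copy, then snapshot while it's still in flight
cp /mnt/staging/bigfile.mkv /mnt/tank/media/ &
zfs snapshot tank/media@midcopy
wait

# The live file finishes; the snapshot's copy stays partial,
# frozen at however many blocks had landed at snapshot time
ls -l /mnt/tank/media/bigfile.mkv
ls -l /mnt/tank/media/.zfs/snapshot/midcopy/bigfile.mkv
```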
|
# ? Jan 24, 2018 23:11 |
|
So I've got a bunch of old SCSI HDDs hanging around that I wouldn't mind putting to use for home storage (enough to build a decently-sized RAID), but I'm also aware that SCSI is pretty drat old at this point. Is it even worth trying to find something that these drives will work in, or would I be better off trashing them? My initial thought is that I shouldn't even bother because I'm going to be in trouble if I run out of spare drives to swap. At the same time, I'm already planning on building something within the next year or two using current technology and this is more-or-less intended as a stopgap so I can move my media off my desktop/gaming PC drives.
|
# ? Jan 25, 2018 00:05 |
|
Actual SCSI or SAS? What sort of capacities are these disks? I'm 99% sure you should launch them into the trash.
|
# ? Jan 25, 2018 00:06 |
|
I just hammer-smashed some Ultra 320 SCSI 18GB 15k drives. It actually made me sad. Back in the day I wanted some of them.
|
# ? Jan 25, 2018 01:07 |
|
Ziploc posted:So I have redundant FreeNAS machines in my house now. Cause I'm paranoid and dumb. Lol. I'm a dumbass. I expanded the graph to include when it actually started. And the rsync speeds were much more respectable. But it turns out, rsync didn't like that one of my 40GB files was not finished transferring when rsync started. And it's been stuck on the same loving file all loving day. zfs snapshot send/receive it is!
|
# ? Jan 25, 2018 01:50 |
|
If both NASes are on the same LAN, you can increase the speed of your zfs send/recv by piping it through nc instead of ssh, which avoids all the encryption overhead.
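A sketch of the nc variant — port and snapshot names are invented, and note that nc has no auth or encryption, so only do this on a LAN you trust:

```shell
# On the receiving box: listen and pipe straight into zfs receive
nc -l 3333 | zfs receive BackupVolume/PrimaryVolume

# On the sending box: pipe the snapshot stream to that port
zfs send PrimaryVolume@nightly | nc backupfreenas.local 3333
```

With no ssh in the middle, the CPUs don't have to encrypt/decrypt the whole stream, which is usually the bottleneck on low-powered NAS hardware.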
|
# ? Jan 25, 2018 05:39 |
|
redeyes posted:I just hammer-smashed some Ultra 320 SCSI 18GB 15k drives. It actually made me sad. Back in the day I wanted some of them. you didn't diassemble them to get at the toy platters smh
|
# ? Jan 25, 2018 05:48 |
|
Yo when are these fuckin WD Easy Stores gonna go back on sale? Or does anyone have an extra they'd sell me? I need at least 4TB right now but I can wait a bit if it means I can get an 8TB for a decent price. I saw that they just went on sale at Best Buy on the 8th of this month but they're back to regular price now. Anyone know how often they drop them back down? The last drop was last month but that was for the holidays, so I'm hoping it's sooner rather than later.
|
# ? Jan 25, 2018 07:37 |
|
Anime Schoolgirl posted:you didn't diassemble them to get at the toy platters smh And magnets! You can get some crazy powerful magnets out of old server drives
|
# ? Jan 25, 2018 14:47 |
|
Romulux posted:Yo when are these fuckin WD Easy Stores gonna go back on sale? Or does anyone have an extra they'd sell me? I have 2x 6 TB recertified Reds for sale in SAMart. Not easystore rips but cheaper than retail if you're interested...
|
# ? Jan 25, 2018 15:29 |
|
So I think I set up the replication task properly? It's definitely running. So that's nice. I was hoping to have a single level dataset on my destination. In my head, my storage managers would end up like this:

Sender
PrimaryVolume
-PrimaryVolume

Receiver
BackupVolume
-BackupVolume
-PrimaryVolume

Instead the receiver looks like this:

BackupVolume
-BackupVolume
--PrimaryVolume
---PrimaryVolume

Is this... normal? I attempted to not have so many child datasets by setting the remote dataset to "BackupVolume/PrimaryVolume" The documentation doesn't really indicate what the dataset structure should look like when completed. So I really can't tell what the "Remote ZFS Volume/Dataset" setting actually does in the replication task. EDIT: Oh. The FreeNAS 11 documentation shows an example where you don't identify a dataset. Just a volume. Testing that. Ziploc fucked around with this message at 21:20 on Jan 25, 2018 |
# ? Jan 25, 2018 16:10 |
|
One quirk I don't quite understand at the moment. I have two servers with the following hostnames:

primaryfreenas.local
backupfreenas.local

I have primary making a snapshot every night with a 4am to 5am start window. I have primary doing a replication task to backup with a 3am to 6am start window. I seem to be getting these errors periodically. This one came shortly after 3am. "Replication PrimaryVolume -> backupfreenas.local:BackupVolume failed: Failed: ssh: Could not resolve hostname backupfreenas.local: hostname nor servname provided, or not known" They're sitting on the same LAN. Everything goes back to normal like 10 minutes later. And when it comes time to do the replication, which typically happens just after the snapshot is done, it completes successfully. I haven't found much about this while googling. Not sure if this is due to the way I have my start windows set up or what.
|
# ? Jan 27, 2018 10:09 |
|
|
|
Ziploc posted:One quirk I don't quite understand at the moment. To avoid dragging you down into a rabbit hole of DNS, PTR records, and your router to get that fixed, just change the task to go to each other's direct IP address.
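If you'd rather keep the names than raw IPs in the task, pinning them in /etc/hosts on each box gets you the same reliability without touching DNS. The addresses here are made up; substitute your own:

```
# /etc/hosts on each FreeNAS box
192.168.1.10  primaryfreenas.local
192.168.1.11  backupfreenas.local
```

Either way the point is the same: don't let a nightly replication job depend on mDNS/.local resolution being awake at 3am.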
|
# ? Jan 27, 2018 14:33 |