ssb
Feb 16, 2006

WOULD YOU ACCOMPANY ME ON A BRISK WALK? I WOULD LIKE TO SPEAK WITH YOU!!


Thanks to everyone for the help. Here's what I ultimately ended up getting based on price/availability etc.

Mainboard/CPU: Supermicro A2SDi-8C+-HLN4F Motherboard

Memory: Supermicro Certified MEM-DR416L-SL06-ER24 Samsung 16GB DDR4-2400 LP ECC REG

M.2 SSD for OS: Western Digital 250GB WD Blue 3D NAND Internal PC SSD - SATA III 6 Gb/s, M.2 2280, Up to 550 MB/s - WDS250G2B0B (I agree about NVMe being far superior, but it didn't matter for this use case)

Case: Fractal Design Node 304 - Black - Mini Cube Compact Computer Case - Small form factor - Mini ITX – mITX - High Airflow - Modular interior - 3x Fractal Design Silent R2 120mm Fans Included - USB 3.0

PSU: EVGA 550 B5, 80 Plus BRONZE 550W, Fully Modular, EVGA ECO Mode, 5 Year Warranty, Compact 150mm Size, Power Supply 220-B5-0550-V1, as finding a decent 250-300W PSU is actually pretty hard and not all that cost-effective as best I can tell, or they only have like 2 SATA connectors or whatever. This one had 6, which is what I need.

And then I'll just bring over my 6-drive ZFS array and get it going after I redo Debian on the M.2 drive.
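
For anyone following along at home, moving the pool between boxes really is just an export and an import; here's a minimal sketch, assuming the pool is named "tank" (the name is a placeholder):

code:
# On the old box, before pulling the drives:
zpool export tank
# On the new box, once the drives are cabled up:
zpool import tank    # scans the attached disks and brings the pool back online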

Seriously, appreciate the help. Some of this is probably overkill but whatever.


DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

BlankSystemDaemon posted:

Maybe if you're on an inferior grid. A 230V grid means higher efficiency than 110V.

80 Plus Bronze simply isn't about maximum efficiency at all; its only focus is ensuring minimum efficiency - but that doesn't mean it's impossible to get something that's 90%+ efficient at 50% load.

Yeah, it'd be nice if we could get 230V in the US, but...no.

While it's not impossible to get better efficiencies out of a Bronze unit, you're unlikely to get them that high - at that point it's more likely they'd bump the certification up to Silver or Gold and price it accordingly.

Hell, even a 10% efficiency difference for a sub-200W system like that only works out to like $3-4/yr. That's a looooong time to make back spending $40 extra for a more efficient unit.

DrDork fucked around with this message at 20:50 on Dec 9, 2020

H110Hawk
Dec 28, 2006

TraderStav posted:

They're booting off of the exact same USB stick, so all of the settings and programs should be the same. Same card for the JBOD (LSI) too. When I switch machines, I unplug the JBOD, pull out the LSI card and the cache drive (NVMe on a PCI card), and drop them in the other. Plug in the USB and boot - so very few differences.

It's really strange.

Yeah, that's an odd one. I would email them the list of part makes and models with a description of the problem and see what they say.

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
Re 2.5” hddchat:

Look up the Seagate Rosewood if you wanna see some real fun. It has a load-bearing drywall sticker.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

Wild EEPROM posted:

Re 2.5” hddchat:

Look up the Seagate Rosewood if you wanna see some real fun. It has a load-bearing drywall sticker.

Those drives are such pieces of poo poo. Seagate really is the second/bottom tier in drives.

Not Wolverine
Jul 1, 2007
I want a proper web interface on my NAS. I am currently using an old PC with Ubuntu 20.04 installed, and I'm considering switching it to the gold standard Green NAS, but they're now TrueNAS Core. Is moving to TrueNAS Core a good idea, or is there something better out there?

The biggest issue I will run into is that I have ~5TB of data on an MDADM array, and I believe Free/Grey NAS will not import an MDADM array. For what it's worth, I know ZFS is God's file system or whatever, but I am not very familiar with BSD and I don't think ZFS would benefit me much in a home-use setting. What features am I missing out on with my current MDADM + ext4 setup compared to BSD + ZFS?

BlankSystemDaemon
Mar 13, 2009



Not Wolverine posted:

I want a proper web interface on my NAS. I am currently using an old PC with Ubuntu 20.04 installed, and I'm considering switching it to the gold standard Green NAS, but they're now TrueNAS Core. Is moving to TrueNAS Core a good idea, or is there something better out there?

The biggest issue I will run into is that I have ~5TB of data on an MDADM array, and I believe Free/Grey NAS will not import an MDADM array. For what it's worth, I know ZFS is God's file system or whatever, but I am not very familiar with BSD and I don't think ZFS would benefit me much in a home-use setting. What features am I missing out on with my current MDADM + ext4 setup compared to BSD + ZFS?
EXT (whichever version) and ZFS can't really be directly compared, because one is just a filesystem (based, in part, on UFS, which is still actively maintained in FreeBSD by many people, including its original creator, who wrote it in the early 1980s), whereas the other combines volume management (the role MDADM fills on Linux, or GEOM on FreeBSD) with a filesystem, and features like copy-on-write atomicity - meaning no block is ever overwritten in place, and the data on disk moves from one consistent state to the next (i.e. if something ever goes wrong, you can rewind to the last-known-good state). ZFS also keeps checksums for every single record, stored in the parent block in a hash-tree-like structure, which ensures data integrity.
But maybe you don't care about any of that, which is fine, because all it really boils down to is this: ZFS was designed to keep data safe irrespective of what hard drives try to do to it (and they will do a lot of things that no other filesystem can deal with).
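
To make that "rewind" idea concrete, here's a minimal sketch using snapshots ("tank/data" is a placeholder dataset name):

code:
# Take a cheap point-in-time snapshot before doing anything risky:
zfs snapshot tank/data@known-good
# ...poo poo goes wrong...
# Roll the dataset back to exactly the state it had at snapshot time:
zfs rollback tank/data@known-good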

As for 'BSD', it's FreeBSD, which is descended from BSD - and you're absolutely right, it won't work with mdadm.
There are a couple of reasons for this: nobody has written a GEOM module for it, and Linux doesn't have a stable KBI - so FreeBSD's release schedule, combined with a Linux release model that tends to work in bursts, means there would be a potentially big feature-parity gap even if someone had implemented it.

BlankSystemDaemon fucked around with this message at 13:14 on Dec 12, 2020

Not Wolverine
Jul 1, 2007

BlankSystemDaemon posted:

EXT (whichever version) and ZFS can't really be directly compared, because one is just a filesystem (based, in part, on UFS, which is still actively maintained in FreeBSD by many people, including its original creator, who wrote it in the early 1980s), whereas the other combines volume management (the role MDADM fills on Linux, or GEOM on FreeBSD) with a filesystem <snip>
I am aware that ZFS is both RAID and file system; I think that's part of the reason it's so difficult to ask Google about the differences - the search results I have found are simply "shut up, ZFS is different because it does the RAID too".

quote:

and features like copy-on-write atomicity - meaning no block is ever overwritten in place, and the data on disk moves from one consistent state to the next (i.e. if something ever goes wrong, you can rewind to the last-known-good state). ZFS also keeps checksums for every single record, stored in the parent block in a hash-tree-like structure, which ensures data integrity.

But maybe you don't care about any of that, which is fine, because all it really boils down to is this: ZFS was designed to keep data safe irrespective of what hard drives try to do to it (and they will do a lot of things that no other filesystem can deal with). As for 'BSD', it's FreeBSD, which is descended from BSD - and you're absolutely right, it won't work with mdadm.
There are a couple of reasons for this: nobody has written a GEOM module for it, and Linux doesn't have a stable KBI - so FreeBSD's release schedule, combined with a Linux release model that tends to work in bursts, means there would be a potentially big feature-parity gap even if someone had implemented it.
This is the part that I don't know about, and I'm pretty sure I don't care about it either. I am not entirely sure what copy-on-write atomicity means or whether it matters to me. I am using my NAS because all my PCs have tiny SSDs and a local 1TB spinner, so I usually have my data saved in multiple spots and the NAS just helps periodically collect everything into one spot; it's basically a large, overly complex thumb drive. I don't think I care about making sure every write goes through perfectly - if anything, I'd expect data loss through my cables to be a problem, but my ethernet cables have so far been reliable. Similarly, being able to rewind to the last known good state sounds fun, but I don't know exactly what that means, and I don't think it's a feature I would actually use much at home. I assume rewinding the file system is not the same as reverting a document to an older version? It sounds like good protection in case poo poo goes wrong during a file copy, but that has not been a problem for me.

In a real work environment with an IT budget and someone dedicated to making sure the NAS is OK, ZFS sounds awesome, but as a home user I think I prefer the simplicity of MDADM + Ext4 over ZFS. Regardless of my opinion on what I need, though, I think I still need BSD + ZFS, if only because the easy option for setting up a NAS is not Ubuntu + MDADM + Ext4 + SSH + etc, but TrueNAS Core. If anything, my only fear now is that I kinda dislike the idea of an all-in-one solution, simply because if something ever did break (like the OS itself, not the file system), I think it would be more difficult to troubleshoot a broken FreeNAS than a broken Linux server. With my current setup, I know that as long as my disk drives are intact, I can replace the whole operating system with anything else Linux-based and use MDADM to import the array. I could switch my NAS to Ahego Linux tomorrow if I wanted to; I'm not sure BSD + ZFS could do that.

ssb
Feb 16, 2006

WOULD YOU ACCOMPANY ME ON A BRISK WALK? I WOULD LIKE TO SPEAK WITH YOU!!


Honestly, once you get ZFS set up it can basically be forgotten about, and the setup isn't really difficult in the first place. If you can move your data somewhere and then put it back after recreating the array, I can't really come up with any good arguments *not* to use ZFS, especially because most of your arguments are "I don't understand why it's better", which is fair, and "I don't think I'll benefit from it really", which is also fair - but you probably will benefit without realizing it, and it makes your array a bit more portable, since mdadm effectively locks you into Linux.

You are not going to have data loss through your cables; it would retry the packet(s). At worst you'll either lose link or just have pretty lovely speeds.

You don't need to use snapshots or dedup at home unless you feel like it. ZFS is going to make it easier to not lose data in general as it's far more fault tolerant than other solutions you've mentioned.

ZFS also works fine on Linux - I use it on Debian personally.

Really, the only ongoing work you should do "dedicated to making sure the NAS is OK" is to set up a cron job that checks the array for disk failures once a week or so and fires off an email if it detects any (this is easy, and I can just give you a script for it once I get my replacement hardware and can access it again), and maybe a monthly scrub in a cron job as well. You do not ever have to touch it beyond OS security updates, which you should be doing regardless of whatever choice you make.
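
For a rough idea of how small that script is, here's a sketch of the weekly check (assumptions: a pool named "tank" and a working local mail setup - adjust both):

code:
#!/bin/sh
# Weekly ZFS health check: mail the full status if anything looks wrong.
# "zpool status -x" prints "all pools are healthy" when there's nothing to report.
if ! zpool status -x | grep -q 'all pools are healthy'; then
    zpool status | mail -s "ZFS pool problem on $(hostname)" you@example.com
fi

And the matching crontab entries - health check Monday mornings, scrub on the 1st of the month:

code:
0 8 * * 1 /usr/local/bin/zfs-check.sh
0 3 1 * * /sbin/zpool scrub tank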

If you don't want to use ZFS for whatever reason, that's fine, but so far your reasons seem more uninformed about how much effort it takes than anything else.

ssb fucked around with this message at 15:02 on Dec 12, 2020

Not Wolverine
Jul 1, 2007

shortspecialbus posted:

Honestly, once you get ZFS set up it can basically be forgotten about, and the setup isn't really difficult in the first place. If you can move your data somewhere and then put it back after recreating the array, I can't really come up with any good arguments *not* to use ZFS, especially because most of your arguments are "I don't understand why it's better", which is fair, and "I don't think I'll benefit from it really", which is also fair - but you probably will benefit without realizing it, and it makes your array a bit more portable, since mdadm effectively locks you into Linux.

You are not going to have data loss through your cables; it would retry the packet(s). At worst you'll either lose link or just have pretty lovely speeds.

You don't need to use snapshots or dedup at home unless you feel like it. ZFS is going to make it easier to not lose data in general as it's far more fault tolerant than other solutions you've mentioned.

ZFS also works fine on Linux - I use it on Debian personally.

Really, the only ongoing work you should do "dedicated to making sure the NAS is OK" is to set up a cron job that checks the array for disk failures once a week or so and fires off an email if it detects any (this is easy, and I can just give you a script for it once I get my replacement hardware and can access it again), and maybe a monthly scrub in a cron job as well. You do not ever have to touch it beyond OS security updates, which you should be doing regardless of whatever choice you make.

If you don't want to use ZFS for whatever reason, that's fine, but so far your reasons seem more uninformed about how much effort it takes than anything else.
You're absolutely right, my argument is mainly that I am not informed enough about why I need ZFS, but also that I think ZFS is supported better on BSD than on Linux. Until Linus pulls his head out of his rear end Oracle changes the ZFS license (:lol: ) I would prefer to avoid ZFS. My main reason for preferring Linux is simply that there is soo much more variety; with BSD the choices are dedicated NAS distros, or Free, Net, Open BSD, or a Hackintosh NAS.

quote:

Really, the only ongoing work you should do "dedicated to making sure the NAS is OK" is to set up a cron job that checks the array for disk failures once a week or so and fires off an email if it detects any (this is easy, and I can just give you a script for it once I get my replacement hardware and can access it again), and maybe a monthly scrub in a cron job as well. You do not ever have to touch it beyond OS security updates, which you should be doing regardless of whatever choice you make.
I think I could be happy if I just set up a cron script on my Ubuntu NAS, but :effort: it hasn't toasted my data yet, so it's on the back burner; same for security updates. If I could do security updates via a web console, I would do them on a regular basis. Hell, I think I would be totally fine with using a cron script to automatically do security updates weekly - is this possible?

ssb
Feb 16, 2006

WOULD YOU ACCOMPANY ME ON A BRISK WALK? I WOULD LIKE TO SPEAK WITH YOU!!


Not Wolverine posted:

You're absolutely right, my argument is mainly that I am not informed enough about why I need ZFS, but also that I think ZFS is supported better on BSD than on Linux. Until Linus pulls his head out of his rear end Oracle changes the ZFS license (:lol: ) I would prefer to avoid ZFS. My main reason for preferring Linux is simply that there is soo much more variety; with BSD the choices are dedicated NAS distros, or Free, Net, Open BSD, or a Hackintosh NAS.

I've been using ZFS on Debian for like 57 (edit: checked) years now and it works fantastically. I have had zero issues with it, aside from a crapton of Seagate drives failing, and I didn't lose any data from that. Very easy instructions are HERE.

What specifically do you believe won't work properly with it on Linux?

quote:

I think I could be happy if I just set up a cron script on my Ubuntu NAS, but :effort: it hasn't toasted my data yet, so it's on the back burner; same for security updates. If I could do security updates via a web console, I would do them on a regular basis. Hell, I think I would be totally fine with using a cron script to automatically do security updates weekly - is this possible?

You can absolutely set up a cron job to automatically do security upgrades, and I think Ubuntu might have something that does that automatically if you enable it - unattended-upgrades, I think?
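
If it helps, here's a minimal sketch of the unattended-upgrades route on Ubuntu/Debian (package and file names from memory, so double-check them):

code:
sudo apt install unattended-upgrades
# Writes the "enable" config for you after a yes/no prompt:
sudo dpkg-reconfigure -plow unattended-upgrades
# Tweak schedule/origins here if you want; the default is security updates only:
#   /etc/apt/apt.conf.d/20auto-upgrades
#   /etc/apt/apt.conf.d/50unattended-upgrades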

Not Wolverine
Jul 1, 2007

shortspecialbus posted:

What specifically do you believe won't work properly with it on Linux?
I have not yet read your link (I plan to), but my specific fear is that ZFS on Linux might be replaced or moved into the kernel in the future. Even if it's "better", I don't want things to break. It's an irrational fear, but that's the main reason I don't want to use ZFS on Linux right now.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I manage both ZFS and LVM-with-mdraid on Linux at work, and I prefer ZFS both professionally and for personal use. Other professionals may disagree with me, but my point is more that ZFS is really not a big deal at home, as long as you can deal with its propensity to gobble up RAM for performance in some specific scenarios, and with the fact that you may need to actually plan your array expansions a little.

ZFS on Linux is fine to run for most use cases, even professionally, if you feed your array lots of RAM and actually think about your storage IO workload and do some measurements. I was on the “be wary of ZoL for a while” train, and then ZoL passed the number of years that ZFS had been available on OpenIndiana and the other OSS Solaris distributions, and I stopped being obstinate.

Zorak of Michigan
Jun 10, 2006


Now that ZFS on Linux has become OpenZFS, I don't think it's going to just go away; it's an important part of too many projects. It's also the technology I'd outright prefer if I were concerned about making my disk array work even after transplanting it into a new build. ZFS is mature enough today that I'd have no fear whatsoever taking the drives out of my current FreeNAS box, installing them in any current Linux build, and importing the pool.

I admit that I am lightly biased. I used to be a professional Solaris toucher and learned ZFS at work. Whenever I have to touch mdadm or lvm, I feel a deep longing for the simplicity and power of ZFS.

IOwnCalculus
Apr 2, 2003





shortspecialbus posted:

You don't need to use ... dedup at home unless you feel like it.

I would bump this from "don't need to" to "DO NOT ENABLE UNDER ANY CIRCUMSTANCE", because holy gently caress, you will need a lot of RAM to make a deduplicated pool not perform like poo poo. I've got to imagine dedup is only worth it in very specific workloads where more/larger drives simply aren't possible but RAM/cache SSDs are. For the home user, there's always bigger drives.

Other than that, ZFS is worth it over mdraid because mdraid will poo poo your entire array over a single read error during a rebuild. ZFS won't, and in most cases it will be able to tell you exactly which files are corrupted and need to be restored from the backups you have.
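
The "which exact files" bit is literal, for what it's worth - after a scrub, the status output lists the damaged paths ("tank" is a placeholder pool name):

code:
zpool scrub tank       # read and verify every block in the pool
zpool status -v tank   # -v prints the paths of files with unrecoverable errors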

Speaking of ZFS and resilvering: is there any way to tell ZFS to do multiple drive replacements simultaneously? I'm doing some drive replacements in four-drive raidz vdevs, and even if I stick the new drives in immediately and chain the commands:

code:
zpool replace tank olddrive1 newdrive1 && zpool replace tank olddrive2 newdrive2 && zpool replace tank olddrive3 newdrive3...
It seems to queue the second and subsequent resilvers to run after the resilver on the first disk has completed. It's not the end of the world, but when a full resilver takes ~36 hours, it'd still be nice to get this done sooner. Using ZoL 0.8.5 on Ubuntu 18.04.

Yaoi Gagarin
Feb 20, 2014

Not Wolverine posted:

I have not yet read your link (I plan to), but my specific fear is that ZFS on Linux might be replaced or moved into the kernel in the future. Even if it's "better", I don't want things to break. It's an irrational fear, but that's the main reason I don't want to use ZFS on Linux right now.

You know the on-disk format won't change, right? If the module is replaced or put into the kernel tree, the new one will still import your pool just fine.

ssb
Feb 16, 2006

WOULD YOU ACCOMPANY ME ON A BRISK WALK? I WOULD LIKE TO SPEAK WITH YOU!!


I use LVM professionally (not mdadm though; we do hardware RAID on things that aren't VMs) as well as for the non-ZFS portion of the server, and it's excellent.

I was going to write a long effort post on your concerns and all, but other posters have hit on some of it pretty well, so I'll leave it at "your concerns are wholly unfounded".

RE: dedup - that's a valid point, probably not ideal for most home setups in general.

BlankSystemDaemon
Mar 13, 2009



Not Wolverine posted:

with BSD the choices are dedicated NAS distros, or Free, Net, Open BSD, or a Hackintosh NAS.
I think you might've missed the distinction I was trying to make. It isn't various flavours of BSD.
FreeBSD, OpenBSD, NetBSD, and DragonFlyBSD for that matter, are all completely separate OSes - they're not just different versions of the same kernel shipped with different libraries and userlands, like Linux distributions are.
Yes, they have common heritage (the eponymous BSD, and through that, the original UNIX), and they all try to implement the POSIX standard - but unlike Linux distributions, they are completely different in scope.

Also, macOS isn't a BSD. While it's true that Darwin, the opensource part of macOS that includes the XNU kernel derived from Mach, implements some BSD code (the VFS, process model, netstack, and some command-line utilities), that code is there because Apple wanted macOS to be a certified UNIX according to the Single UNIX Specification.
The XNU kernel, all of the libraries in the OS (libc, libc++, Metal, CoreAudio, CoreVideo, et cetera ad nauseam), the compiler (LLVM fronted with clang), every single one of the drivers, and every part of the user experience that 99.95% of the userbase interacts with - all of that is entirely done by Apple (or subcontracted, as is the case with LLVM, OpenBSM, MAC, and the audit framework - although, to make everything more complicated, all of those also exist in FreeBSD, and Robert Watson, who wrote OpenBSM, MAC, and the audit framework, made them for FreeBSD while being paid to work on them by Apple).

As for NAS stuff, FreeNAS/TrueNAS Core is based on FreeBSD, and is really more of an appliance OS.


Not Wolverine posted:

I have not yet read your link (I plan to), but my specific fear is that ZFS on Linux might be replaced or moved into the kernel in the future. Even if it's "better", I don't want things to break. It's an irrational fear, but that's the main reason I don't want to use ZFS on Linux right now.
OpenZFS, the project that is now a unification of the Linux kernel module and the FreeBSD implementation, may end up in the kernel (though I find this extremely unlikely, given how much of a zealot Linus Torvalds can be), but if it does, there's no reason to expect it'll be incompatible with everything else.
The hope is that OpenZFS will eventually also support macOS, NetBSD (which has a fork of the old FreeBSD implementation, and should therefore be easy to move to the new one), Illumos, and Windows.


Zorak of Michigan posted:

Now that ZFS on Linux has become OpenZFS, I don't think it's going to just go away; it's an important part of too many projects. It's also the technology I'd outright prefer if I were concerned about making my disk array work even after transplanting it into a new build. ZFS is mature enough today that I'd have no fear whatsoever taking the drives out of my current FreeNAS box, installing them in any current Linux build, and importing the pool.

I admit that I am lightly biased. I used to be a professional Solaris toucher and learned ZFS at work. Whenever I have to touch mdadm or lvm, I feel a deep longing for the simplicity and power of ZFS.
Considering that LLNL, who run massive HPC workloads whose data is stored on OpenZFS, get funding from the US Department of Energy, including for implementing things like draid - as well as the many, many other big companies who store their data on it - no, it probably isn't going away, ever.


VostokProgram posted:

You know the on-disk format won't change, right? If the module is replaced or put into the kernel tree, the new one will still import your pool just fine.
The on-disk format does change periodically, but rarely, and it's gated behind feature flags like everything else - for example, if you enable OpenZFS per-dataset encryption, you'll be unable to import that pool on anything that doesn't implement that feature flag. The same, I believe, is true for draid, and it'll probably also be the case for raidz expansion.
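
If you want to know ahead of time whether another system can import a pool, the flags are easy to inspect; a quick sketch ("tank" is a placeholder):

code:
# Show every feature flag and its state (disabled/enabled/active) on the pool:
zpool get all tank | grep 'feature@'
# Broadly speaking, a flag only blocks imports elsewhere once it's "active",
# not merely "enabled" - see the zpool-features man page for per-flag details.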


shortspecialbus posted:

RE: dedup - that's a valid point, probably not ideal for most home setups in general.
Well, dedup is in a weird place right now, because as it stands there's very little reason for anyone to actually try to use it. Matt Ahrens has commented on it a few times, and at one point even said that he had some thoughts and bar-napkin sketches for how to do a proper deduplication implementation - but as it stands, there appears to be nobody working on it.
Maybe once he's done with raidz expansion?

IOwnCalculus
Apr 2, 2003





BlankSystemDaemon posted:

Well, dedup is in a weird place right now, because as it stands there's very little reason for anyone to actually try to use it. Matt Ahrens has commented on it a few times, and at one point even said that he had some thoughts and bar-napkin sketches for how to do a proper deduplication implementation - but as it stands, there appears to be nobody working on it.
Maybe once he's done with raidz expansion?

Yeah. Drives are cheap, drive enclosures are cheap-ish; RAM and fast SSDs aren't. You'd need one hell of a weird corner case for dedup to make more sense, at least the way ZFS does it.

Ruggan
Feb 20, 2007
WHAT THAT SMELL LIKE?!


Just got a Synology and I'm getting started with all this backup stuff. Looking for advice on remote backup options.

First of all, am I right in assuming I should avoid a cloud-sync type setup, because that isn't a backup (i.e. deletions and corruptions get replicated)? Or am I OK to do something like this and set it up to sync on a weekly frequency or whatever? To be honest, I'm more worried about accidental destruction of the drives (failure/fire/theft/etc) than I am about something like an encryption scam, so perhaps this is my best option.

Second, is there a recommended provider to use for this? I am planning for a capacity of 2TB, which looks like it ends up somewhere in the ballpark of $100/yr.

Warbird
May 23, 2012

America's Favorite Dumbass

IIRC Backblaze is popular as an offsite, but you have to do some dumb stuff, like using the desktop client to back up NAS shares mounted on your desktop machine, so you don't have to shell out for the business tier.

H110Hawk
Dec 28, 2006

Ruggan posted:

Just got a Synology and I'm getting started with all this backup stuff. Looking for advice on remote backup options.

First of all, am I right in assuming I should avoid a cloud-sync type setup, because that isn't a backup (i.e. deletions and corruptions get replicated)? Or am I OK to do something like this and set it up to sync on a weekly frequency or whatever? To be honest, I'm more worried about accidental destruction of the drives (failure/fire/theft/etc) than I am about something like an encryption scam, so perhaps this is my best option.

Second, is there a recommended provider to use for this? I am planning for a capacity of 2TB, which looks like it ends up somewhere in the ballpark of $100/yr.

B2 integrates nicely and supports versioning, which helps with accidental deletion and cryptolockering. It does not help with corruption without a huge amount of work on your part.

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord
Synology also has a first-party app that does cloud sync of backups; it used to be Cloud Drive Sync, but I think they redid it and it's just Cloud Drive now. I just started investigating it and plan to get it going soon.

From what I can tell, Backblaze B2 and Wasabi both have similar pricing structures, but Wasabi doesn't charge for egress traffic. The fine print on that is that if you download more from them in a month than the total data you've got stored with them, then you're "not a good fit" - meaning if you have 100TB stored with them in total but you download more than 100TB in a single month, that's a no-no. That seems fine to me versus having to pay for transfer when I need to restore a VM/PC/etc. They also assume a minimum 3-month lifecycle for items stored in their buckets, which again shouldn't be a big deal for an offsite backup repository - it just depends on your planned retention.

Not Wolverine
Jul 1, 2007
I am almost convinced to take the plunge and install FreeNAS; however, I'm concerned about memory. Is ECC required for ZFS? My NAS has a GA-770T-USB3 motherboard and a Phenom II X4 840 with 12GB of whatever non-ECC DDR3 I could find lying around. The manual for my motherboard states that it supports ECC RAM if you install an ECC-capable CPU, but can I even physically fit ECC DDR3 in this motherboard? I thought ECC RAM was always keyed differently. Even if I did find some unicorn DDR3 ECC keyed as non-ECC, are there any AM3 CPUs in the Phenom/Phenom II line that support ECC? If ECC is required, I think that will be convincing enough for me to keep using Linux with MDraid and EXT4.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Not Wolverine posted:

I am almost convinced to take the plunge and install FreeNAS; however, I'm concerned about memory. Is ECC required for ZFS? My NAS has a GA-770T-USB3 motherboard and a Phenom II X4 840 with 12GB of whatever non-ECC DDR3 I could find lying around. The manual for my motherboard states that it supports ECC RAM if you install an ECC-capable CPU, but can I even physically fit ECC DDR3 in this motherboard? I thought ECC RAM was always keyed differently. Even if I did find some unicorn DDR3 ECC keyed as non-ECC, are there any AM3 CPUs in the Phenom/Phenom II line that support ECC? If ECC is required, I think that will be convincing enough for me to keep using Linux with MDraid and EXT4.

ECC is optional. Definitely a nice to have but optional, and shouldn’t really be a deciding factor. Backups are more important if you’re concerned about being able to recover from the rare flipped bit.

spincube
Jan 31, 2006

I spent :10bux: so I could say that I finally figured out what this god damned cube is doing. Get well Lowtax.
Grimey Drawer

Ruggan posted:

Just got a Synology and I'm getting started with all this backup stuff. Looking for advice on remote backup options.

First of all, am I right in assuming I should avoid a cloud-sync type setup, because that isn't a backup (i.e. deletions and corruptions get replicated)? Or am I OK to do something like this and set it up to sync on a weekly frequency or whatever? To be honest, I'm more worried about accidental destruction of the drives (failure/fire/theft/etc) than I am about something like an encryption scam, so perhaps this is my best option.

Second, is there a recommended provider to use for this? I am planning for a capacity of 2TB, which looks like it ends up somewhere in the ballpark of $100/yr.

Synology Hyper Backup is their turnkey backup application; it'll make versioned, compressed backups of anything on your NAS, either to its local storage (so you can arrange your own offsite backups if you like) or directly to the cloud - it plugs straight into Google Drive, OneDrive, Amazon S3, and a few other services. You can restore files straight to the NAS from within the Hyper Backup app, or, if a meteor falls on your house or something, you'll need to retrieve the entire backup set and use a desktop client to restore your stuff.

Synology Cloud Sync is a slightly simpler app that does exactly what it says on the tin: it syncs stuff between your NAS and cloud services. You can get as fiddly as you like with scheduling, two-way or one-way sync, etc., but it won't do versioning or compression by itself.

[e] Me personally, I use Synology Drive to mirror My Documents etc. between my PC and my NAS, then I use Hyper Backup to back up from the NAS to Google Drive at 1am nightly. Your needs might be different though :)

spincube fucked around with this message at 15:47 on Dec 13, 2020

Not Wolverine
Jul 1, 2007

Matt Zerella posted:

ECC is optional. Definitely a nice to have but optional, and shouldn’t really be a deciding factor. Backups are more important if you’re concerned about being able to recover from the rare flipped bit.

I still think it's cute that you guys think I have a budget large enough for backing up my NAS. I know I'm living on the edge, and I fully expect this post to be quoted when I inevitably come rage-posting about losing all my data. I actually plan to implement backups someday, if I ever finish re-encoding all my video files to a reasonable size; 5TB of porn is a little excessive even by my standards.

That said, I am a little concerned because my system has 12GB of RAM, and I believe ZFS needs about 1GB per TB, so if I upgraded to 4x 4TB drives I would be at the limit for my RAM. I can upgrade to 16GB, but I don't want to spend the money unless I have to. Or is this a case of diminishing returns - like, if I put in 4x 18TB drives, would I not actually need 54GB of RAM? I am still curious to know whether this motherboard (GA-770T-USB3) can actually somehow support DDR3 ECC RAM.

ssb
Feb 16, 2006

WOULD YOU ACCOMPANY ME ON A BRISK WALK? I WOULD LIKE TO SPEAK WITH YOU!!


It does not need that much RAM. I ran a 21TB RAIDz2 on 4GB of RAM and an old i5-750 until the hardware died, and had no performance issues with normal home usage.

That said, are you hitting your NAS crazy hard with nonstop transfers from multiple endpoints at a time? What is your actual use profile?

Edit: checked, it actually had 4GB in that old thing. Also, you don't really need ECC for normal home use, but it's nice to have. I've never had it in a ZFS NAS until my new build, the parts for which should show up next week.

ssb fucked around with this message at 16:13 on Dec 13, 2020

Less Fat Luke
May 23, 2003

Exciting Lemon
ZFS doesn't need 1GB of RAM per TB; that might be recommended for deduplication, but in normal usage it seems happy with 8GB or higher. I've run >100TB pools on 8GB and 16GB of memory (non-ECC) with no issues.

freeasinbeer
Mar 26, 2015

by Fluffdaddy
Also chiming in on the RAM requirements not being that high for most datasets.

As for ECC, it’s devolved into so much confusion because, as far as I can tell, there were a few influential posters on the FreeNAS forums who were assholes about it, because it’s something they ran at work. However, I don’t think any of them were ZFS devs, so it came across as BOFH FUD.

As far as I understand it, ECC on ZFS is a nice-to-have if you’re paranoid about bit flips. What’s often lost in the argument, though, is that ZFS is already way more resistant to bit flips than something like EXT4, by design. So, as a terrible analogy, it’s like someone saying you’re gonna die if you don’t have a roll cage in your car, overlooking the crumple zones and side airbags. For most people that’s still a massive step up over a land yacht that maybe didn’t have seatbelts.

Edit: https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

A link that specifically addresses/debunks the ECC issue. CyberJock is the guy who screamed it at everyone on the FreeNAS forums.

freeasinbeer fucked around with this message at 16:36 on Dec 13, 2020

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Not Wolverine posted:

I still think it's cute that you guys think I have a budget large enough for backing up my NAS. I know I'm living on the edge, and I fully expect this post to be quoted when I inevitably come rage-posting about losing all my data. I actually plan to implement backups someday, if I ever finish re-encoding all my video files to a reasonable size; 5TB of porn is a little excessive even by my standards.

That said, I am a little concerned because my system has 12GB of RAM, and I believe ZFS needs about 1GB per TB, so if I upgraded to 4x 4TB drives I would be at the limit for my RAM. I can upgrade to 16GB, but I don't want to spend the money unless I have to. Or is this a case of diminishing returns - like, if I put in 4x 18TB drives, would I not actually need 54GB of RAM? I am still curious to know whether this motherboard (GA-770T-USB3) can actually somehow support DDR3 ECC RAM.

You’re really overthinking things here. What I meant was that the money you’d spend on trying to find some weird ECC RAM is better spent on a service like Backblaze to back your poo poo up offsite.

ECC is a nice-to-have; it’s in no way required. Others have answered your RAM question.

Not Wolverine
Jul 1, 2007

shortspecialbus posted:

It does not need that much RAM. I ran a 21TB RAIDz2 on 4GB of RAM and an old i5-750 until the hardware died, and had no performance issues with normal home usage.

That said, are you hitting your NAS crazy hard with nonstop transfers from multiple endpoints at a time? What is your actual use profile?

Edit: checked, it actually had 4GB in that old thing. Also, you don't really need ECC for normal home use, but it's nice to have. I've never had it in a ZFS NAS until my new build, the parts for which should show up next week.
My actual use profile is minimal: just me and a couple of PCs occasionally shoveling over video files every once in a while, "Linux ISOs", etc. I'm over-estimating mainly because I just don't know any better. But I have no plans to enable dedup, so I think I'm fine as far as RAM goes.

freeasinbeer posted:

As far as I understand it, ECC on ZFS is a nice-to-have if you’re paranoid about bit flips. What’s often lost in the argument, though, is that ZFS is already way more resistant to bit flips than something like EXT4, by design. So, as a terrible analogy, it’s like someone saying you’re gonna die if you don’t have a roll cage in your car, overlooking the crumple zones and side airbags. For most people that’s still a massive step up over a land yacht that maybe didn’t have seatbelts.
What I read this morning on Google claimed that ZFS actually needs ECC more, because a RAM error while calculating hashes is the one way to hose your data on ZFS, and that my data would therefore be safer on EXT4 without ECC... but the consensus here seems to agree with what you're saying: that I should go ahead and switch to ZFS.

freeasinbeer
Mar 26, 2015

by Fluffdaddy

Not Wolverine posted:

My actual use profile is minimal: just me and a couple of PCs occasionally shoveling over video files every once in a while, "Linux ISOs", etc. I'm over-estimating mainly because I just don't know any better. But I have no plans to enable dedup, so I think I'm fine as far as RAM goes.

What I read this morning on Google claimed that ZFS actually needs ECC more, because a RAM error while calculating hashes is the one way to hose your data on ZFS, and that my data would therefore be safer on EXT4 without ECC... but the consensus here seems to agree with what you're saying: that I should go ahead and switch to ZFS.

I’m double-posting this, but that was FUD spread by some rear end in a top hat: https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I've been doing some testing over the past week or two with a TrueNAS 12.0 VM (HBA passthrough) with a 14x 4TB RAID-Z and 16GB of non-ECC RAM. So far so good - it can easily saturate a gigabit connection.

H110Hawk
Dec 28, 2006

H2SO4 posted:

From what I can tell, Backblaze B2 and Wasabi both have similar pricing structures, but Wasabi doesn't charge for egress traffic. The fine print on that is that if you download more from them in a month than the total data you've got stored with them, then you're "not a good fit" - meaning if you have 100TB stored with them in total but you download more than 100TB in a single month, that's a no-no.

Wasabi will let you pay for bandwidth, just like B2 - at least, their reps tell me that on the commercial side. You just need to have an estimate up front, and it helps if you're hooked up to AWS us-east-1. For a consumer, though, downloading from Wasabi or B2 should be a once-or-never thing, fingers crossed.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

freeasinbeer posted:

few influential posters on the FreeNAS forums who were assholes about it, because it’s something they ran at work. However, I don’t think any of them were ZFS devs, so it came across as BOFH FUD.

I see you've read everything that CyberJock has EVER posted. Good god, he's annoying and pedantic. He knows what he's doing, but if you don't set things up the way HE insists, he'll just blame that for the problem.

BlankSystemDaemon
Mar 13, 2009



Less Fat Luke posted:

ZFS doesn't need 1GB of RAM per TB; that might be recommended for deduplication, but in normal usage it seems happy with 8GB or higher. I've run >100TB pools on 8GB and 16GB of memory (non-ECC) with no issues.
1GB per 1TB is a general recommendation for production workloads where it's expected that a big working set will be kept in memory to speed things up.
The recommendation for deduplication is 5GB per 1TB of storage, but a more useful way of thinking about it is "number of blocks on disk times 330" (or 70, depending on whether you're using Illumos-derived ZFS or OpenZFS), and then divide that by 1024 a couple of times until you get a number that makes sense.
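
To put numbers on that, here's the back-of-the-envelope version in bash (every input here is an assumption: a 10TB pool, 128KiB average records, and the 330-byte per-entry figure from above):

code:
pool_bytes=$((10 * 10**12))    # assumed 10TB pool
avg_record=$((128 * 1024))     # assumed average block size
blocks=$((pool_bytes / avg_record))
echo "~$((blocks * 330 / 1024**3)) GiB of RAM just for the dedup table"
# prints ~23 GiB, which is why nobody runs dedup at home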

Less Fat Luke
May 23, 2003

Exciting Lemon
Well, two things: first, we're talking about home usage, and second, "citation needed". Recommended by whom? The ZFS on Linux team or docs? Because at this point, ECC and RAM sizing seem to be very much tribal knowledge at best, or an old wives' tale at worst.

ZFS is very flexible, and you can adapt it in various ways with SLOGs, L2ARC, and so on to make it fit your workload. I don't think any blanket "X per Y" rule makes sense for the filesystem anymore.
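
And bolting those on after the fact is a one-liner each; a sketch with placeholder device names:

code:
# Add a mirrored SLOG (separate intent log) for sync-write-heavy workloads:
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
# Add an L2ARC cache device to extend the read cache beyond RAM:
zpool add tank cache /dev/nvme2n1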

Coxswain Balls
Jun 4, 2001

FreeNAS/TrueNAS is still running solid for me with 12GB of ECC on a 4x 3TB RAIDZ2 array. I tossed an SSD in there a few months ago to run a VM for game servers, and maybe once in a while I have to reboot to free up some RAM to start it up, but it works great. One of the drives has been giving me errors during the monthly long SMART test, but I've been too lazy/poor to get around to replacing it. It's been like that for a year or two with no obvious problems, so I've been putting it off until I can afford (or get a deal on) a whole new array with larger drives. I've got an offsite backup of the important stuff, and I try not to be a data hoarder, so it hasn't become a pressing issue yet.


IOwnCalculus
Apr 2, 2003





Less Fat Luke posted:

ZFS is very flexible, and you can adapt it in various ways with SLOGs, L2ARC, and so on to make it fit your workload. I don't think any blanket "X per Y" rule makes sense for the filesystem anymore.

I don't even have anything but spinning disks in my ZFS pool, and I'm well under 1GB/TB with no performance impacts when I'm not replacing an entire vdev at a time. I might throw some more RAM at my system eventually, but it's not ZFS driving that.
