DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Knifegrab posted:

Any suggestions?

(To clarify I mean cloud backup)

Crashplan is well liked and a long-standing favorite. SpiderOak apparently has some draw if you're super worried about security. Amazon Glacier can potentially be cheaper, but they have an odd price structure, and last I dabbled with it, all their interfaces and the usability of their apps were a bit poo poo.


Knifegrab
Jul 30, 2014

Gadzooks! I'm terrified of this little child who is going to stab me with a knife. I must wrest the knife away from his control and therefore gain the upperhand.

DrDork posted:

Crashplan is well liked and a long-standing favorite. SpiderOak apparently has some draw if you're super worried about security. Amazon Glacier can potentially be cheaper, but they have an odd price structure, and last I dabbled with it, all their interfaces and the usability of their apps were a bit poo poo.

Crashplan does seem almost too good to be true. They do encrypt the data, which is nice; I don't have that much I'm super concerned about, but encryption is a nice little bonus. Also, unlimited storage is a huge, huge plus. Is there any reason I should be wary? Someone recommended Carbonite or Carbon Copy to me; are those not that well received?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I don't know, I ditched FreeNAS a few days ago for Fedora 26. Sure, some hands-on with vim for configuration files, but it went rather fast. That said, it currently only does Samba and iSCSI duties, and doesn't have a fancy front-end. So I'd still rather run my own stuff.

hifi
Jul 25, 2012

Combat Pretzel posted:

I don't know, I ditched FreeNAS a few days ago for Fedora 26. Sure, some hands-on with vim for configuration files, but it went rather fast. That said, it currently only does Samba and iSCSI duties, and doesn't have a fancy front-end. So I'd still rather run my own stuff.

You can try Cockpit out: go to yourserver:9090 in a browser. There's a VM plugin that I didn't have installed by default, too, as well as Docker support.

Twlight
Feb 18, 2005

I brag about getting free drinks from my boss to make myself feel superior
Fun Shoe

hifi posted:

The original post guy said he wanted a linux system on top of his nas as well as monitoring via cacti

Cacti is antiquated for sure, but I don't feel like setting anything else up, and I know I can get graphs doing that without much trouble.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

hifi posted:

You can try Cockpit out: go to yourserver:9090 in a browser. There's a VM plugin that I didn't have installed by default, too, as well as Docker support.
I know of Cockpit. Didn't know there's plugins for it. Nice.

evol262
Nov 30, 2010
#!/usr/bin/perl

Combat Pretzel posted:

I know of Cockpit. Didn't know there's plugins for it. Nice.

The libvirt plugin is very much a work in progress, but there are subpackages/plugins for docker, kubernetes, libvirt, selinux, entitlements, ovirt, and some remote filesystems (the ones storaged supports)

Most of these are part of the actual cockpit project, others are not

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Knifegrab posted:

Crashplan does seem almost too good to be true. They do encrypt the data, which is nice; I don't have that much I'm super concerned about, but encryption is a nice little bonus. Also, unlimited storage is a huge, huge plus. Is there any reason I should be wary? Someone recommended Carbonite or Carbon Copy to me; are those not that well received?

Crashplan is quite nice. The biggest detraction is that the app is java-based for some reason, but it works fine. Upload speeds can be mediocre (I usually get in the 10-20Mbps range), but that's true for most of the services.

I don't know of any particular reason to avoid Carbonite or Carbon Copy. But given that I've never had anyone come back with a bad story about CrashPlan, and that it's usually one of the cheaper options if you've got more than a few hundred GB to back up, I've never seen any reason to use anyone else.

EL BROMANCE
Jun 10, 2006

COWABUNGA DUDES!
🥷🐢😬

They always used to do very good sales at Thanksgiving too. The software was my only bugbear; it was an absolute nightmare getting it off my Mac for some reason. It might be better now.

PraxxisParadoX
Jan 24, 2004
bittah.com
Pillbug

Combat Pretzel posted:

I don't know, I ditched FreeNAS a few days ago for Fedora 26. Sure, some hands-on with vim for configuration files, but it went rather fast. That said, it currently only does Samba and iSCSI duties, and doesn't have a fancy front-end. So I'd still rather run my own stuff.

This is me over the coming weekend. Kinda excited to go full Docker via https://linuxserver.io (I work with an early Docker employee, and have been giving him poo poo for all of the things Docker doesn't do that my FreeNAS jails do do)

redeyes
Sep 14, 2002

by Fluffdaddy
Say you had 6x4TB HDs. Would you use RAID of some type? ZFS? I've been using RAID1 but that is obviously wasting too much space.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

redeyes posted:

Say you had 6x4TB HDs. Would you use RAID of some type? ZFS? I've been using RAID1 but that is obviously wasting too much space.

raidz2 is what I'd do.

Internet Explorer
Jun 1, 2005





RAID1 on 6 drives seems crazy.

And yeah, raidz2 / RAID6 / SHR2. Something with 2 parity drives. 1 parity drive isn't enough for drives that large.
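To put rough numbers on the layouts being recommended, here's a back-of-the-envelope comparison of usable space for 6x4TB (a sketch only; real ZFS overhead, padding, and reserved space shave off a bit more, and this ignores TB/TiB differences):

```python
# Back-of-the-envelope usable capacity for 6 x 4 TB drives under the
# layouts discussed above. Ignores filesystem overhead and TB/TiB games.
DRIVES = 6
SIZE_TB = 4

def usable(layout: str) -> int:
    """Usable terabytes for a given layout of DRIVES disks."""
    if layout == "mirror":   # RAID1-style: half the disks hold copies
        return (DRIVES // 2) * SIZE_TB
    if layout == "raidz1":   # single parity (RAID5-ish): lose 1 disk
        return (DRIVES - 1) * SIZE_TB
    if layout == "raidz2":   # double parity (RAID6-ish): lose 2 disks
        return (DRIVES - 2) * SIZE_TB
    raise ValueError(layout)

for layout in ("mirror", "raidz1", "raidz2"):
    print(f"{layout:7s} -> {usable(layout)} TB usable of {DRIVES * SIZE_TB} TB raw")
```

So moving from mirrors to raidz2 here recovers 4TB of usable space while still tolerating any two drive failures.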

jawbroken
Aug 13, 2007

messmate king
1 parity drive is fine for drives that large.

Nulldevice
Jun 17, 2006
Toilet Rascal

jawbroken posted:

1 parity drive is fine for drives that large.

:stonklol:

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

jawbroken posted:

1 parity drive is fine for drives that large.

You forgot the /s at the end of that statement.

jawbroken
Aug 13, 2007

messmate king
If you have ZFS pools and you ever scrub them then you know those URE-spec-based doomsday articles are obviously incorrect and you'll also hopefully know that a single URE wouldn't stop a RAIDZ rebuild anyway. And that you'll often have the ability to replace a drive when it starts failing checksum on scrub, while the failing-but-not-failed drive is still present, giving you even more margin since such an error would have to occur in the same “place”. And you would hopefully understand a lot of other obvious things as well that I won't enumerate about other forms of failure that aren't drive stat related. If you want to be really dogmatic about some poorly estimated failure probabilities then I don't mind, anyway.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
The counter-argument to the above is that if you've got enough stuff to justify 6+ large disks, losing it all would presumably be a giant loving hassle. Also, drive failures are frequently not independent, random occurrences--particularly if you got all the drives in one batch, there's a possibility of them all suffering from a similar defect (or all suffering the same hot and poorly ventilated case, or vibrations, or whatever) and dying at around the same time, rather than the purely random distribution you'd expect. This can substantially increase the possibility of multiple drive failures in a short period of time, at which point dropping an extra $120 for a second parity drive starts to sound a lot like a reasonable fee to pay for better insurance against having to re-constitute your 24TB porn array from the backups you probably aren't keeping.

Especially since, by the time you've already paid for your NAS box and a half-dozen drives, one more doesn't even really move the final bill of sale by much. It's the same argument about ECC vs non-ECC: sure, non-ECC probably will be fine, and if all you're doing is saving your favorite Stoya clips or whatever and you've already got all the hardware from an old build, it's probably not worth upgrading. But if you're pricing out a new system, "saving" $50 off a $800+ project when it opens you up to potential data loss seems like a case of penny-wise and pound-foolish.
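The single-vs-double parity gap can be sketched with a simple binomial model. This assumes independent failures, which per the post above is the optimistic case (correlated batch failures only make single parity look worse), and the per-drive failure probability is a made-up illustrative number:

```python
from math import comb

# Chance the array survives a rebuild window, assuming each remaining
# drive independently fails with probability p during that window.
# Independence is optimistic: same-batch/same-case failures correlate.
def survival(n_remaining: int, parity_left: int, p: float) -> float:
    """P(at most parity_left of n_remaining drives fail)."""
    return sum(
        comb(n_remaining, k) * p**k * (1 - p) ** (n_remaining - k)
        for k in range(parity_left + 1)
    )

p = 0.05  # hypothetical per-drive failure chance during a long rebuild
# 6-drive array, one drive already dead and rebuilding from the other 5:
print(f"single parity: {survival(5, 0, p):.3f}")  # any further failure kills it
print(f"double parity: {survival(5, 1, p):.3f}")  # one more failure tolerated
```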

Internet Explorer
Jun 1, 2005

I mean, it's a home NAS and you can do anything you want with it. Run it in RAID-0 for all I care. Presumably you have all the data backed up anyways and parity drives just mean uptime. But if someone asks for advice, the correct answer for drives of that size is RAID-6 or RAID-10 (or their equivalents, obviously.)

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
6TB drives, 1 parity drive, no ECC, no backups because idgaf if I lose my Linux isos.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Matt Zerella posted:

6TB drives, 1 parity drive, no ECC, no backups because idgaf if I lose my Linux isos.

Then why have even one parity drive!?!

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Why the hell are the Mellanox drivers so kneecapped in Windows? Doesn't do SRP anymore, also doesn't do connected mode. That sweet 65520 bytes MTU :(

--edit:
Holy gently caress, the simple upgrade to QDR Infiniband bumped my single thread 4K random IO iSCSI throughput to near what my local SSD does. And I'm not even using RDMA (yet).

Combat Pretzel fucked around with this message at 15:03 on Jul 19, 2017

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.

Thermopyle posted:

Then why have even one parity drive!?!

I'm running my home storage with no parity with this logic. I'm using StableBit's DrivePool, though, so if a drive fails the entire thing doesn't go down, just what was on that disk.

This is made a lot more practical by how good a deal CrashPlan is; if I weren't backing up all of my data off-site for practically nothing, I would probably do RAID 6 and only back up important/personal things to the cloud.

jawbroken
Aug 13, 2007

messmate king
I'm not sure those are counterarguments because they seemingly don't refute anything I said (my argument was that the estimates for probabilities are clearly empirically incorrect, not that the probabilities are independent), and don't take into account various possible cost overheads and requirements. You don't need to argue that they're more likely to fail simultaneously, because the “death of RAID” math already says that they are almost sure to fail. I think people should make reasoned tradeoffs between cost, capacity and covered failure scenarios, but you can continue to think there's one “correct” answer, whatever that could possibly mean. I'm happy for you to think of me as the luckiest person in the world.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Thermopyle posted:

Then why have even one parity drive!?!

Eh, it's nice to have. The bulk of my time was spent figuring out how to properly set up my dockers for radarr/sonarr/deluge/usenet. I can always add a second if I want, but its working.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

It's been a long time since I read any of those articles, but my feeling is that the ones I read did not claim a drive was "almost sure to fail"...only that it was much more likely. Am I mis-remembering?

I mean, I guess it could just depend on which articles we're talking about.

Of course, none of that matters, what actually matters is the actual increased risk, not whatever some article claims.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Matt Zerella posted:

The bulk of my time was spent figuring out how to properly set up my dockers for radarr/sonarr/deluge/usenet.

Yes, setting up ZFS or any competing solution is not a hard thing to do. The work in creating/maintaining a home-built NAS does not lie in creating the NAS itself, but in adding all the extra features that (usually/often) make you want to build your own in the first place.

Droo
Jun 25, 2003

Thermopyle posted:

It's been a long time since I read any of those articles, but my feeling is that the ones I read did not claim a drive was "almost sure to fail"...only that it was much more likely. Am I mis-remembering?

There was an article/online calculator that everyone cited for many years that showed an array of like 4 4TB drives* in raid 5 had a huge chance of failing (>50%) every year. They came up with this based on the published bad sector rate of hard drives (whatever statistic is like 1 in 10^13, 1 in 10^14 etc), distributed evenly over a drive over time, and calculated it from that somehow.

All of the calculators that I can find now are way less ridiculous. For example, https://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/ shows a 12 drive 8TB raid 5 array has less than a 2% chance of failing in 1 year. I can guarantee the old calculator everyone used to cite would have shown a 90%+ failure rate for that setup. I can't find that old one anymore, maybe the author finally felt ridiculous enough to take it down.

* I can't remember exact numbers, but the chance of failure it showed was seriously massive

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Droo posted:

There was an article/online calculator that everyone cited for many years that showed an array of like 4 4TB drives* in raid 5 had a huge chance of failing (>50%) every year. They came up with this based on the published bad sector rate of hard drives (whatever statistic is like 1 in 10^13, 1 in 10^14 etc), distributed evenly over a drive over time, and calculated it from that somehow.

All of the calculators that I can find now are way less ridiculous. For example, https://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/ shows a 12 drive 8TB raid 5 array has less than a 2% chance of failing in 1 year. I can guarantee the old calculator everyone used to cite would have shown a 90%+ failure rate for that setup. I can't find that old one anymore, maybe the author finally felt ridiculous enough to take it down.

* I can't remember exact numbers, but the chance of failure it showed was seriously massive

That calc would assume that an URE on rebuild would fault the array out, like older, more stupid raid controllers would actually do. So it calculated the chance of a URE as a function of bytes read, with the chance of failure approaching 1 as data read approached infinity, based on the URE rate. So you ended up with a huge chance of getting turbo boned on rebuild when you had zero parity drives and drive 3 faulted with a read error.
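That doomsday model fits in a few lines. This is a sketch of what those old calculators computed, not an endorsement: it treats the spec-sheet URE rate as a literal independent per-bit coin flip and assumes one URE faults the whole array, which are exactly the assumptions being disputed in this thread:

```python
# The old "death of RAID" model: P(at least one URE while reading the
# surviving drives during a rebuild), treating the spec-sheet URE rate
# as an independent per-bit coin flip and assuming a single URE faults
# the whole array.
URE_RATE = 1e-14  # errors per bit read (typical consumer spec: 1 in 10^14)

def p_rebuild_failure(surviving_drives: int, drive_tb: float) -> float:
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
    return 1 - (1 - URE_RATE) ** bits_read

# 4 x 4 TB in RAID5: one drive dead, read the other three in full.
print(f"{p_rebuild_failure(3, 4):.1%}")
```

With those assumptions a full 4x4TB RAID5 rebuild comes out to a roughly 60% chance of "failure", which is where the scary >50%-per-year headlines came from.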

EL BROMANCE
Jun 10, 2006

COWABUNGA DUDES!
🥷🐢😬

Matt Zerella posted:

6TB drives, 1 parity drive, no ECC, no backups because idgaf if I lose my Linux isos.

This is kinda what I want to do in a QNAP box with around a $1,000 or so budget. 18TB online, 6TB parity. All the talk of '6TB volumes are too big for RAID5' has me a little worried, but I've never seen any articles using hard data. Does such a thing exist for failure rates when restoring from parity in this kind of array?

Like you, it's data I'd like to keep because it's a hassle to sort it all out again when replacing a drive, but it's not crucial enough to my life that spending even more on hardware (i.e. something that holds more than 4 bays with just as little CJing, more than 4 drives to create more parity) seems like a good investment.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Droo posted:

There was an article/online calculator that everyone cited for many years that showed an array of like 4 4TB drives* in raid 5 had a huge chance of failing (>50%) every year. They came up with this based on the published bad sector rate of hard drives (whatever statistic is like 1 in 10^13, 1 in 10^14 etc), distributed evenly over a drive over time, and calculated it from that somehow.

I think their "calculation" was just the chance of suffering at least 1 URE somewhere in the array during a rebuild (assuming a completely filled array), and assuming that even a single URE would bring the entire array down. Which, while true for some versions of basic RAID, is not true for numerous more advanced RAID-like setups available these days. It's also been shown that most HDDs are actually more reliable than their published numbers, so that's a factor, as well.

eames
May 9, 2009

THF13 posted:

I'm running my home storage with no parity with this logic. I'm using StableBit's DrivePool, though, so if a drive fails the entire thing doesn't go down, just what was on that disk.

This is made a lot more practical by how good a deal CrashPlan is; if I weren't backing up all of my data off-site for practically nothing, I would probably do RAID 6 and only back up important/personal things to the cloud.

I have a similar setup with one parity drive + n data drives. If one drive fails the parity will reconstruct it, if a second one fails during that process then only the data on the failed drives is lost. Healthy drives remain intact and JBOD readable.
All the really important data, which isn't a lot, is rsynced across all drives once a day (and of course backed up) and the probability of 5 drives dying at the same time is fairly low. Neat side effect is that you don't have to spin up all drives to read from the array.
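The single-parity reconstruction described above works by XOR. A toy demo, with four pretend "drives" as made-up byte strings:

```python
from functools import reduce

# Toy demo of the single-parity scheme described above: the parity
# "drive" stores the XOR of the data drives, so any one lost drive can
# be rebuilt from the survivors. Drive contents here are made-up bytes.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

drives = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # hypothetical data drives
parity = reduce(xor_bytes, drives)              # what the parity drive stores

# Drive 2 dies; rebuild it from the survivors plus parity.
survivors = [d for i, d in enumerate(drives) if i != 2]
rebuilt = reduce(xor_bytes, survivors, parity)
print(rebuilt)  # b'CCCC'
```

Losing two drives at once breaks this, of course, which is the "only the data on the failed drives is lost" caveat: the surviving data drives stay readable as plain JBOD, but the second failed drive can't be reconstructed.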

IOwnCalculus
Apr 2, 2003

ZFS has saved my rear end on a URE during rebuild before.

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.

eames posted:

I have a similar setup with one parity drive + n data drives. If one drive fails the parity will reconstruct it, if a second one fails during that process then only the data on the failed drives is lost. Healthy drives remain intact and JBOD readable.
All the really important data, which isn't a lot, is rsynced across all drives once a day (and of course backed up) and the probability of 5 drives dying at the same time is fairly low. Neat side effect is that you don't have to spin up all drives to read from the array.

One extra thing I should have mentioned in my original post, for anyone using or planning to use CrashPlan to deal with a drive failure: CrashPlan doesn't have an option to restore only missing files, so it can be difficult to figure out which files were stored on the failed drive and actually need to be restored.

I like DrivePool a lot, but I was thinking about trying FlexRAID soon so I could get 1 parity disk and mostly avoid that issue.
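Pinpointing which files lived on a dead disk can be scripted if you keep a manifest of backed-up paths around. A minimal sketch; the manifest (one relative path per line, exported before a failure) is an assumption, not something CrashPlan provides:

```python
import os

# Minimal sketch of working around the "restore only missing files"
# gap described above: diff a saved manifest of backed-up paths against
# what's still on disk. The manifest format (one path per line) and its
# existence are assumptions -- you'd need to export one before a failure.
def missing_files(manifest_paths, root="/"):
    """Return manifest entries that no longer exist under root."""
    return [p for p in manifest_paths
            if not os.path.exists(os.path.join(root, p))]

# Example with an in-memory manifest:
manifest = ["tv/show1/ep1.mkv", "tv/show1/ep2.mkv"]
print(missing_files(manifest, root="/mnt/pool"))  # hypothetical mount point
```

The resulting list is exactly what you'd feed into a selective restore, instead of re-downloading the whole backup.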

Internet Explorer
Jun 1, 2005

jawbroken posted:

I'm not sure those are counterarguments because they seemingly don't refute anything I said (my argument was that the estimates for probabilities are clearly empirically incorrect, not that the probabilities are independent), and don't take into account various possible cost overheads and requirements. You don't need to argue that they're more likely to fail simultaneously, because the “death of RAID” math already says that they are almost sure to fail. I think people should make reasoned tradeoffs between cost, capacity and covered failure scenarios, but you can continue to think there's one “correct” answer, whatever that could possibly mean. I'm happy for you to think of me as the luckiest person in the world.

what

Greatest Living Man
Jul 22, 2005

ask President Obama
I'm in a little bit of a pickle. I have Sonarr running on a jail in FreeNAS 9.10 with mirrored drives. I was trying to make it recognize when my torrent program finished a download of a season pack so it could import it but keep seeding. Of course, after changing a setting, I got the import to work, but everything else in my TV subfolder was deleted. Fun. I now have something like 1.5 TB of data just floating around in my NAS, unrecognizable by the filesystem. I realized I had set up snapshots incorrectly, only keeping track of jails. My Data (everything) snapshot I think is from after I lost the TV folder. It refers to 5+ TB, and is now 2+ TB in size after I moved some stuff around, making separate datasets so this won't happen again. I'm assuming that I'm screwed on the data aspect of it---even if it was still "on" the filesystem it was probably overwritten with the 2+ TB of snapshot data. So, I now have daily recursive snapshots of my main dataset which has sub-datasets for each major data containing folder as well as the jails. Is there something else I'm missing or is this how it should be set up? I'm aware some people put their jails on a separate main dataset as well.

hampig
Feb 11, 2004
...curioser and curioser...
I've got a 5-year-old all-purpose gaming PC that I want to retire into the living room as a Plex server but still be able to play games on. It's got 5 disks in it ranging from 500GB to 3TB; three of them are throwing SMART errors that I think I need to pay attention to, so I want to replace them and get some kind of drive pool with redundancy going. Right now I'm leaning towards a Storage Spaces pool with a parity space for media and a striped simple space for games. The main advantages I see are that it's going to cost me nothing (beyond the drives I'm replacing anyway), which is money towards a new gaming PC/PS4, and that I can continue to run Windows 10 for Steam/games/Plex/Kodi.

Looking for any recommendations as to why this would be reasonable, or terrible, or whether there's something else much better. DrivePool + SnapRAID gets a lot of talk, but I don't have strong feelings about using Microsoft products; I guess it gives me more flexibility in the case of one drive failure? My experience with storage other than 'plug in another drive' is pretty much zero. Any opinions appreciated :)

EssOEss
Oct 23, 2006
128-bit approved
As long as all you need is pooled storage with some redundancy for when a disk goes rotten, you might be happy with Storage Spaces. It is what I use and I have not run into any issues so far - it works and integrates seamlessly into Windows without any app being the wiser.

I was also recommended Drivepool in this thread but I decided against it - the technical complexity seemed comparatively greater and as it is basically a one-man company that is barely surviving, I do not have a high confidence in their product being both actively maintained and free of compatibility issues.

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

Why the hell are the Mellanox drivers so kneecapped in Windows? Doesn't do SRP anymore, also doesn't do connected mode. That sweet 65520 bytes MTU :(

--edit:
Holy gently caress, the simple upgrade to QDR Infiniband bumped my single thread 4K random IO iSCSI throughput to near what my local SSD does. And I'm not even using RDMA (yet).

That is good to hear, so I take it you finally have your cable? ;)

Just for fun, try turning off sync on your zpool and be amazed. Do put it back on afterwards though.

Mr Shiny Pants fucked around with this message at 12:37 on Jul 20, 2017


Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Hmmm, nice.

Interestingly, Samba had the biggest bump in read performance. For that specific metric, on iSCSI it went from 14MB/s to 31MB/s (12MB/s before on FreeNAS), on Samba it went from 12MB/s to 50MB/s. Hope they'll get cracking on SMB Direct in the very near future. That said, this is performance from ARC to the workstation, since Diskmark creates a file filled with zeroes to test on. That was the idea anyway, to maximize performance reading from ARC and L2ARC.

Also, I want to move the NAS eventually to a further place. FDR transceivers and MPO fiber is $$$ :[
