DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Hadlock posted:

The price spread between the i3 and i5 is about $60; it really depends on what you're going to use it for and how much ECC matters to you. If you know you'll never want more than a file server, then the i3 absolutely makes sense. However, it's not much more effort to put a hypervisor on the thing (at no cost) and run prod FreeNAS + test FreeNAS + whatever hobby system, and now you can take advantage of all that raw computing power down the road rather than being locked in to a single-purpose PC with a 1990s-era mindset. The roughly 50% performance bump, VT-d, and future-proofness make sense (to me) if you're a hobbyist, but if you just need a bare-bones file server and don't mind being locked in to a single-purpose machine, saving $60 and going with the i3 is probably a better option.
If you're going to use it for ZFS, you should probably be using ECC, full stop. Which means if you want to pinch pennies, you end up with an i3 with ECC support but no VT-d (which honestly is the correct choice for 90% of home users anyhow). If you have visions of ESXi or the like and want to virtualize and need VT-d, I see zero reason to get an i5: if you get a <$300 one, you can get VT-d but again lose ECC--not a great option if you still want to throw important data on it. If you get one of those $300 unicorn ones that have both, great, but now you've paid an extra $100 or so for literally no reason. You can get an E3-1220v3 Xeon for $200 from NewEgg, and the cheapest i5-4xxx of any sort is $190. So it's not like you'd really be saving money going the i5 route even on the super-low-end.

Home user who only wants a NAS? i3 + ECC + Intel NIC
Power-user or someone who wants to experiment with virtualization? Xeon 12X0 + ECC + Supermicro w/IPMI (because IPMI is awesome)
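
If you haven't used IPMI before: it's a little always-on management controller on the board, so you can watch sensors, power cycle, and get a console without ever plugging in a monitor. Rough sketch with ipmitool from any other box on the LAN (the BMC IP and username here are placeholders):

code:
# all of these talk to the board's BMC over the network
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN sensor list      # temps, fans, voltages
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN chassis power cycle
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN sol activate     # serial-over-LAN console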

UndyingShadow posted:

Wouldn't you need separate SATA controller cards for this? One for each FreeNAS VM?
Unless you want to live dangerously, you need to be giving ZFS direct pass-through access to the disks, which, yes, will necessitate a separate controller if you want to play with multiple VMs. Happily you can still get M1015s for $100 most of the time, and they work wonderfully.
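
For anyone actually setting that up: once the HBA is passed through, the FreeNAS guest sees the disks as if it owned the controller, and you build the pool on the raw devices. Something like this from the FreeNAS shell (device names are examples, adjust to what you actually have):

code:
camcontrol devlist                               # disks behind the passed-through M1015 show up as da*
zpool create tank raidz2 da1 da2 da3 da4 da5 da6
zpool status tank                                # sanity-check the vdev layout before loading it up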

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Does ZFS on FreeNAS even have direct access to the hardware, since it's funneled through GELI, if you create/add to the pool via the UI? I actually went to the command line and created a full disk pool manually, instead of that encrypted pool in a partition.
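
For reference, the difference looks roughly like this (from memory, so treat it as a sketch; device and key paths are examples):

code:
# what the UI builds for an encrypted pool, more or less: GPT partition, GELI on top, pool on the .eli device
gpart create -s gpt da1
gpart add -t freebsd-zfs -a 4k da1
geli init -s 4096 -K /data/geli.key /dev/da1p1
geli attach -k /data/geli.key /dev/da1p1
zpool create tank /dev/da1p1.eli

# what I did instead: hand ZFS the whole raw disk
zpool create tank /dev/da1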

Mr Shiny Pants
Nov 12, 2012
The Xeon 1230v3 would be my go-to CPU if I were to rebuild my server.

Too bad the Microserver still runs swimmingly.

evol262
Nov 30, 2010
#!/usr/bin/perl

DrDork posted:

If you're going to use it for ZFS, you should probably be using ECC, full stop.
We've had this out, and it's just not as concrete as this. Especially for home use.

DrDork posted:

Which means if you want to pinch pennies, you end up with an i3 with ECC support but no VT-d (which honestly is the correct choice for 90% of home users anyhow).

If you have visions of ESXi or the like and want to virtualize and need VT-d
VT-d is absolutely not required. VT-x is, but that's a gimme even on Atoms now.

DrDork posted:

I see zero reason to get an i5: if you get a <$300 one, you can get VT-d but again lose ECC--not a great option if you still want to throw important data on it. If you get one of those $300 unicorn ones that have both, great, but now you've paid an extra $100 or so for literally no reason. You can get a E3-1220v3 Xeon for $200 from NewEgg, and the cheapest i5-4xxx of any sort is $190. So it's not like you'd really be saving money going the i5 route even on the super-low-end.
*if you need VT-d

And I guess this is my whole thing. I want to be clear about the fact that you only need VT-d if you want to run your NAS inside VMware (or KVM or Xen or whatever) and you want to directly pass through a controller (probably for ZFS).

If you also feel like you "need" ECC, then an E3 Xeon is the right choice.

I realize that ZFS and ZFS on ESXi with a passed-through m1015 is a common setup, but everything you're saying only makes sense in that context.
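
If you're not sure what your CPU actually has, it's a two-second check from a Linux host; the DMAR/IOMMU lines only show up if VT-d exists and is enabled in the BIOS:

code:
grep -c -E 'vmx|svm' /proc/cpuinfo       # nonzero = VT-x/AMD-V, which is all you need to run VMs
dmesg | grep -i -e dmar -e iommu         # VT-d/IOMMU, only needed for device passthrough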

DrDork posted:

Power-user or someone who wants to experiment with virtualization? Xeon 12X0 + ECC + Supermicro w/IPMI (because IPMI is awesome)

"Power user" is a misnomer in almost all cases, unless you're using it to mean "knows enough to be dangerous, not enough to be competent or knowledgeable".

Xeon+ECC+Supermicro is much more expensive than a MicroServer, NUC, or anything else. You don't need to buy workstation kit to have a functional/good setup. OptiPlex 9020s are the devkits used by most OpenStack developers. Or NUCs.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

evol262 posted:

If you also feel like you "need" ECC, then an E3 Xeon is the right choice.
Given how long ZFS keeps your data cached, if there isn't much IO flushing it out of there, it'd be wise to use ECC. The L1 ARC isn't checksummed and is presumed to be always correct. I guess it doesn't matter if it's just serving pirated/ripped videos. But if you're using it to store your projects, pictures and whatnot... If data stays in the ARC long enough to get a bit flip and is then served to your app, which then stores it again, the error will slip through and hit the disk. There's no point using ZFS if you only go halfway. Might as well pick some solution based on mdadm, which gives you more flexibility when you want to dick around with array geometry (I mean, the inability to extend an existing RAIDZ array by a single disk is a common complaint).
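
You can actually watch how much is sitting in the ARC on FreeBSD/FreeNAS, something like this (sysctl names from memory, so treat it as a sketch):

code:
sysctl kstat.zfs.misc.arcstats.size      # bytes currently cached in the ARC
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
sysctl vfs.zfs.arc_max                   # cap the ARC if you want less data parked in RAM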

--edit:
Also, lately I've been wondering why disks don't have a hot standby mode, where the spindle rotates at 1/2 or 1/4 of full speed to save power but can get back up to speed in a fraction of the time.

Combat Pretzel fucked around with this message at 22:20 on Oct 6, 2014

evol262
Nov 30, 2010
#!/usr/bin/perl

Combat Pretzel posted:

Given how long ZFS keeps your data cached, if there isn't much IO flushing it out of there, it'd be wise to use ECC. The L1 ARC isn't checksummed and is presumed to be always correct. I guess it doesn't matter if it's just serving pirated/ripped videos. But if you're using it to store your projects, pictures and whatnot... If data stays in the ARC long enough to get a bit flip and is then served to your app, which then stores it again, the error will slip through and hit the disk. There's no point using ZFS if you only go halfway. Might as well pick some solution based on mdadm, which gives you more flexibility when you want to dick around with array geometry (I mean, the inability to extend an existing RAIDZ array by a single disk is a common complaint).
I use it for data stores of various types (iSCSI and NFS, mostly), backups, object storage, and :files:, of course.

The data just doesn't back up these paranoid bit rot theories. It can happen. It can also happen to Windows and Linux and SQL Server's memory cache. ZFS isn't good about fixing it, and it can get progressively worse if you set your fileserver next to an active radiation source. The odds of it happening naturally are pretty slim for an extra $200, and the risk is not at all limited to ZFS.

With this logic you should use ECC everywhere, all the time.

sleepy gary
Jan 11, 2006

evol262 posted:

I use it for data stores of various types (iSCSI and NFS, mostly), backups, object storage, and :files:, of course.

The data just doesn't back up these paranoid bit rot theories. It can happen. It can also happen to Windows and Linux and SQL Server's memory cache. ZFS isn't good about fixing it, and it can get progressively worse if you set your fileserver next to an active radiation source. The odds of it happening naturally are pretty slim for an extra $200, and the risk is not at all limited to ZFS.

With this logic you should use ECC everywhere, all the time.

Your comparisons are meaningless because the point is that if you're using ZFS in the first place, you probably care about your data more than someone with a single-disk Windows laptop. If you're going to the trouble of setting up a ZFS array, why not go the final 10% of the way and get ECC so you can take full advantage of the magic of ZFS? It's cheap insurance even if it does cost $200.

That said, I won't begrudge you your choice to run non-ECC, especially if you have looked at some data and decided the added risk is not worth the cost of mitigation in your case.

edit: If I could I WOULD use ECC everywhere, all the time. Because why the heck not?

evol262
Nov 30, 2010
#!/usr/bin/perl

DNova posted:

Your comparisons are meaningless because the point is that if you're using ZFS in the first place, you probably care about your data more than someone with a single-disk Windows laptop. If you're going to the trouble of setting up a ZFS array, why not go the final 10% of the way and get ECC so you can take full advantage of the magic of ZFS? It's cheap insurance even if it does cost $200.

That said, I won't begrudge you your choice to run non-ECC, especially if you have looked at some data and decided the added risk is not worth the cost of mitigation in your case.

edit: If I could I WOULD use ECC everywhere, all the time. Because why the heck not?

The point is kind of that you use a NAS and back up because your data is important. You use ZFS because it's tested, performs well, management is easy, and it's not a split hodgepodge like mdraid/lvm or whatever's cool on Windows in 2014. If you used it because your data is important (more important than other filesystems that don't "require" ECC), you made the wrong choice, and you should be using a distributed replicated store like Lustre or Swift. And/or have backups on another device. That you keep a copy of offsite.

But you're not, because they don't have the easy, approachable, "can do it cheap on a microserver" advantages of ZFS.

Complexity is an issue. And cost somewhat. All things being equal, I'd use ECC. But it's not equal. It's 20% or more of the base hardware cost. And I can eat that.

It's not that I "didn't use ECC for my case". I used ECC. But given the incredibly marginal odds of anything really bad happening to your data without putting it in Marie Curie's lab, do we need to tell people they "need" ECC?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

evol262 posted:

The data just doesn't back up these paranoid bit rot theories. It can happen. It can also happen to Windows and Linux and SQL Server's memory cache.
Google had a study that said it happens. So did IBM. gently caress knows where I can find them.

The point of ECC is to correct them, so that it actually does not happen to Windows, Linux, and SQL Server's cache. Or to outright crash the box to prevent faulty data from being stored or processed downstream (it does that on two or more faulty bits).

evol262 posted:

ZFS isn't good about fixing it, and it can get progressively worse if you set your fileserver next to an active radiation source. The odds of it happening naturally are pretty slim for an extra $200, and the risk is not at all limited to ZFS.
How isn't ZFS good about it? The whole point of the filesystem is to prevent bitrot. Literally everything on disk is checksummed, to boot. File block checksums are stored upstream in the metadata, and metadata node checksums are stored in the parent nodes. Said metadata is even stored with three copies, spread across vdevs if there are multiple. And the metadata tree is only valid once the new uberblock makes it to the disk, which gets written last, so that the system doesn't get hosed over mid-write in a power outage. COW be hailed.

If there's redundancy, errors on disk can be fixed easily. However, if data that's in memory gets hosed over, be it by cosmic rays or crappy memory cells, you're poo poo out of luck. Electronics can go bad after a while of use, too. If it happens to be the memory modules that usually just host the cache, you won't notice it for a while.

I mean, hell, I've copied my video files between disks so goddamn many times that macroblock errors became noticeable on some of them. Given that all the buses are error-corrected, guess what might be responsible.
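
If you want to see all this for yourself, zdb will happily dump it (read-only, but read the man page before pointing it at a pool you care about):

code:
zdb -u tank                    # the active uberblock: txg, timestamp, checksum
zdb -C tank                    # cached pool config, including the vdev tree
zfs get checksum,copies tank   # per-dataset checksum algorithm and extra data copies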

evol262 posted:

With this logic you should use ECC everywhere, all the time.
A long while ago, before everything had to become cheap, computers always used it. The first thing I had to do after getting my new 486SX was to get new memory modules, because one of them was faulty and kept throwing parity errors.

sleepy gary
Jan 11, 2006

evol262 posted:

The point is kind of that you use a NAS and back up because your data is important. You use ZFS because it's tested, performs well, management is easy, and it's not a split hodgepodge like mdraid/lvm or whatever's cool on Windows in 2014. If you used it because your data is important (more important than other filesystems that don't "require" ECC), you made the wrong choice, and you should be using a distributed replicated store like Lustre or Swift. And/or have backups on another device. That you keep a copy of offsite.

But you're not, because they don't have the easy, approachable, "can do it cheap on a microserver" advantages of ZFS.

Complexity is an issue. And cost somewhat. All things being equal, I'd use ECC. But it's not equal. It's 20% or more of the base hardware cost. And I can eat that.

It's not that I "didn't use ECC for my case". I used ECC. But given the incredibly marginal odds of anything really bad happening to your data without putting it in Marie Curie's lab, do we need to tell people they "need" ECC?

If you use a NAS and your data is already corrupted because you cheaped out on hardware and got unlucky about a bitflip event, then all your backups are hosed too.

It's not common but it is also not as uncommon as you might think.

I like how I frame it in my posts when people ask: that they should use ECC. I would never say it is required, because it isn't.


edit: you seem to be hung up on the idea that you need to put your server in a lab full of unshielded gamma sources in order to ever have a bit flipped in DRAM. I recommend you take a trip to your city's science museum and watch their cloud chamber for a while.

sleepy gary fucked around with this message at 23:22 on Oct 6, 2014

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

evol262 posted:

The data just doesn't back up these paranoid bit rot theories. It can happen. It can also happen to Windows and Linux and SQL Server's memory cache. ZFS isn't good about fixing it, and it can get progressively worse if you set your fileserver next to an active radiation source. The odds of it happening naturally are pretty slim for an extra $200, and the risk is not at all limited to ZFS.

As many above me have said, if you're already looking into something like a NAS or especially ZFS, the assumption is that you give a poo poo about the integrity of your data to a greater extent than the average user, who would just have thrown another 4TB drive into their computer and called it good. To say that you should also have offsite and other backup plans for critical data is absolutely correct--ECC or no, RAID(Z) Is Not A Backup and you are 100% right. But you're pretty much wrong about the rest of it; ZFS is a great system that (assuming you don't subvert it by doing strange poo poo like virtualizing its drives) provides better data-integrity protection than many other file systems. It's silly to say that Joe User with 5TB of poo poo he'd like to keep pretty safe is wrong for picking ZFS over setting up a Lustre cluster or whatever, so he should just say gently caress it and live dangerously. That'd be like saying your mom is wrong for wanting a van with a 5-star crash-test rating for the kids, because if she was serious about safety she'd buy a tank, but that's expensive so she should just toss them on the back of a motorcycle.

It's also erroneous to say that it "costs $200 more." You should already be buying some sort of server-type motherboard for the Intel NIC and various other reasons, so that's not a factor. Even if for some reason you're considering a consumer board, decent ones are still ~$100 (+$20 for the NIC to replace the inevitable Realtek onboard), and you can get an X10 of various flavors for ~$150. Solid 1333 8GB RAM kits are ~$75 right now, while similarly specced ECC RAM is ~$100. So the only way you pay $200 more is if you're buying 64GB of RAM, in which case best of luck to you. The reality is that for most "normal" home servers you're talking ~$50 more.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
According to my Intel BIOS, I've had something like 30 bit flips in the past 3 years or so on my workstation, most corrected and some uncorrectable. I don't live in Richland, WA or Denver, CO; I'm like 5 miles from the Amazon AWS datacenter in Ashburn. I do some actual work on my machines, and the extra costs of keeping PAR files lying around (both management and capital) are kind of a lame way to protect everything constantly. I partly got ECC as an experiment, to find out whether I need it rather than learning in hindsight that I should have had it and having no idea what messed up. I don't know of any insurance policy that tells you every time it kept you out of an accident or just a plain annoyance, but there's possibly some value in that for end users.
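
On a Linux box with ECC, the kernel's EDAC layer exposes the same counters my BIOS shows, so you don't even need to reboot to check. Roughly:

code:
# corrected (ce) and uncorrected (ue) error counts per memory controller
grep . /sys/devices/system/edac/mc/mc*/ce_count
grep . /sys/devices/system/edac/mc/mc*/ue_count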

I don't compare development lab environments to production ones, aside from configuration management, because development is meant to be a bit cheap and less reliable. You wouldn't want an in-memory database to have many errors in prod, but during dev on distributed systems I actually like to see some errors once in a while, to test the reliability of the availability layer I'm trying to poke holes in. Google has a ton of redundancy in their memory systems and storage and boasts an architecture built around cheap, maximally cost-effective units of work, so I don't think they'd pick ECC RAM for no good reason either. Same goes for Facebook.


I'm a developer, but when I put my ops hat on I have enough things going wrong that I just don't want to have to care about whether hardware is reliable or not. It's kind of a bitch if it isn't, just like at home. It's possible for bad RDIMMs to keep running, but they'll bluescreen / purplescreen the box instead of randomly causing a bunch of odd checksum problems that show up in monitoring as recalculates or retransmits. I use ECC at home because I'd rather pay a little more and be able to glance at my BIOS when things randomly segfault, and find out that a stick is going bad when I did zero to my system. There are other benefits beyond the reliability / safety arguments.

My MacBook Pro uses standard SO-DIMMs, but I don't use my laptop as a single source of truth for years of my data, nor is it on 24/7 in a kind of warm enclosure. My HTPC doesn't write anything of meaningful value either; it's mostly for reading.

I might argue that a number of problems end users experience with flaky tech products come down to RAM just sucking, the product moving on, and the pain showing up elsewhere, instead of flat out telling the BIOS that there's a RAM problem and making it possible to recover. I could use that feedback from defective units, but devs are so removed from ops these days that they're expected to keep pumping out more inane features rather than go through existing problems with a fine-toothed comb.

evol262
Nov 30, 2010
#!/usr/bin/perl

DNova posted:

edit: you seem to be hung up on the idea that you need to put your server in a lab full of unshielded gamma sources in order to ever have a bit flipped in DRAM. I recommend you take a trip to your city's science museum and watch their cloud chamber for a while.

And I suggest you read the study. Nobody's denying that there's background radiation, but the likelihood of it hitting a memory cell is low.

The whole argument here is that ECC is good. Nobody is saying it isn't. But it's an added cost for an extremely minor risk. Datacenters (and orgs with more risk and income than the average person reading this thread) pick ECC. But the assertion that home users need ECC (and only for one thing -- it's almost always "ZFS needs ECC", nobody says it about UnRAID or whatever) is just cargo cult.

DrDork posted:

As many above me have said, if you're already looking into something like a NAS or especially ZFS, the assumption is that you give a poo poo about the integrity of your data to a greater extent than the average user, who would just have thrown another 4TB drive into their computer and called it good. To say that you should also have offsite and other backup plans for critical data is absolutely correct--ECC or no, RAID(Z) Is Not A Backup and you are 100% right. But you're pretty much wrong about the rest of it; ZFS is a great system that (assuming you don't subvert it by doing strange poo poo like virtualizing its drives) provides better data-integrity protection than many other file systems. It's silly to say that Joe User with 5TB of poo poo he'd like to keep pretty safe is wrong for picking ZFS over setting up a Lustre cluster or whatever, so he should just say gently caress it and live dangerously. That'd be like saying your mom is wrong for wanting a van with a 5-star crash-test rating for the kids, because if she was serious about safety she'd buy a tank, but that's expensive so she should just toss them on the back of a motorcycle.
I'm a fan of the slippery slope and false equivalence, but no. The assertion was that people who use ZFS care more. Mine was that people who use NASes care more. But again, nobody tells people using mdraid or UnRAID they need ECC. And I know ZFS doesn't have a hard fsck, but do you see the inconsistency?

You're telling the mother buying the van that she should get someone to add a roll cage and rally harnesses just in case. Using a NAS is already the 5-star rating.

DrDork posted:

It's also erroneous to say that it "costs $200 more." You should already be buying some sort of server-type motherboard for the Intel NIC and various other reasons, so that's not a factor.

I honestly just think you're in love with datacenter/workstation gear. Why not buy $20 NICs?

DNova posted:

If you use a NAS and your data is already corrupted because you cheaped out on hardware and got unlucky about a bitflip event, then all your backups are hosed too.

You should be creating a checksummed incremental backup at the same time you put it on the NAS in the first place. Copying directly off means it could be hosed. A good backup strategy creates multiple copies from known-good data right from the start.
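
With ZFS that part is nearly free, since an incremental send verifies block checksums as it reads them. Sketch, with hypothetical host and dataset names, untested:

code:
zfs snapshot tank/data@2014-10-07
zfs send -i tank/data@2014-10-06 tank/data@2014-10-07 | ssh backuphost zfs receive backup/data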

Combat Pretzel posted:

Google had a study that said it happens. So did IBM. gently caress knows where I can find them.
The study I linked was Google saying that 92% of their systems had never experienced an event.

Combat Pretzel posted:


The point of ECC is to correct them, so that it actually does not happen to Windows, Linux, and SQL Server's cache. Or to outright crash the box to prevent faulty data from being stored or processed downstream (it does that on two or more faulty bits).
I know what ECC does. My point (as above) is that people don't say you need ECC for fileserver builds or home labs on technologies other than ZFS.

Combat Pretzel posted:

How isn't ZFS good about it? The whole point of the filesystem is to prevent bitrot. Literally everything on disk is checksummed, to boot. File block checksums are stored upstream in the metadata, and metadata node checksums are stored in the parent nodes. Said metadata is even stored with three copies, spread across vdevs if there are multiple. And the metadata tree is only valid once the new uberblock makes it to the disk, which gets written last, so that the system doesn't get hosed over mid-write in a power outage. COW be hailed.
The whole point is to minimize the administrative costs of fscking and worrying about managing your filesystem. And there are checksums all over the place. And other filesystems suffer the same potential problems. I'm not saying they're better, just that ZFS isn't any better at fixing errors that happen in memory.

So, riddle me this: why do you "need" ECC with ZFS but not UnRAID? That's the whole question.

Combat Pretzel posted:

A long while ago, before everything had to become cheap, computers always used it. The first thing I had to do after getting my new 486SX was to get new memory modules, because one of them was faulty and kept throwing parity errors.
Optional parity memory was already a thing with 486es. It's less about everything getting cheap and more about technology advancing past the point of likely errors in wire-wrapped cells and slow buses with (relatively) giant transistors.

E: off to Turkey. I'll catch up on this when I have data service and phonepostin' time

evol262 fucked around with this message at 08:21 on Oct 7, 2014

sleepy gary
Jan 11, 2006

evol262 posted:

And I suggest you read the study. Nobody's denying that there's background radiation, but the likelihood of it hitting a memory cell is low.

You wrote a ton of poo poo and I'm way too tired to read all of it, but the chances of a cosmic ray hitting a memory cell in your DRAM are 100%. The chance that any given hit causes a bit flip is very low. However, there's lots of time for lots of cosmic rays to hit lots of memory cells, and those small chances add up until eventually you might win the lottery.
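
Back-of-the-envelope: even reading that "92% of systems never saw an event" figure generously, as a per-year clean rate, it compounds:

code:
echo 'scale=4; 0.92^3' | bc -l    # ~.7786 -- run three years and roughly 1 box in 5 has seen an event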

Have fun in Turkey. Get some ice cream from a street vendor. It has a unique consistency and is very good.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

evol262 posted:

I honestly just think you're in love with datacenter/workstation gear. Why not buy $20 NICs?
If you've already got a motherboard you're trying to use, then sure, by all means pick up an Intel NIC. But if you're asking me what you should buy for a new system, I'm going to tell you to get a decent server board that's built with an eye on stability and reliability and is only a few bucks more than your generic one built for cost efficiency + a NIC.

evol262 posted:

So, riddle me this: why do you "need" ECC with ZFS but not UnRAID? That's the whole question.
Any time anyone drops in here and says "Hey guys I want to put together a purpose-built server to safely store all my stuff and I'd like to use X for the OS/FS, what should I get?" most of us will recommend ECC. A lot of people toying with UnRAID and the like are doing so on re-purposed excess gear, so they've already made their choice about ECC.

I think you keep missing that part: ZFS and the like will run on non-ECC, and there's a decent chance you'll never have an issue. If you've got a spare machine laying around and want to use it for such, sure, we'll try to help you out with it. If you're making a purpose-made, built-from-scratch machine to house all your poo poo you don't want to lose, though, it seems pretty penny-wise and pound-foolish to skimp like that, regardless of whatever filesystem you opt for.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

DrDork posted:

If you're making a purpose-made, built-from-scratch machine to house all your poo poo you don't want to lose, though, it seems pretty penny-wise and pound-foolish to skimp like that, regardless of whatever filesystem you opt for.

This is the part that makes the most sense to me. If you're already talking about dropping a significant amount of money on hardware ($1000+, even more for a good number of people in this thread), spending the extra 10-20% to make sure you've got an extra layer of protection for your data seems very worthwhile. You've got on-disk checksums, copy-on-write, and the ZIL to give you some safety; may as well tack on error correction at the RAM level too.

Even if you're not using ZFS, that little bit of extra spending can go a long way for your data, especially if you're dealing with critical stuff like long-term financial records or even just family photos. Sure, you should be using an off-site backup for those things. But what's to say your backup is intact if the data was already corrupted while you were transferring it TO the backup? ECC is just a little bit more of a safety net.
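
And the nice part is ZFS will actually tell you when that net catches something. A periodic scrub plus a glance at the counters (sketch):

code:
zpool scrub tank           # re-read and verify every block against its checksum
zpool status -v tank       # the READ/WRITE/CKSUM columns show anything found and repaired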

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I don't think the "may as well spend the extra 10 or 20%" is that a great of an argument. You have to balance that against the likelihood of needing the protection and the amount of damage that failure costs.

There are tons of things you can spend an extra 20% on in life to get something that is objectively better/safer/faster, but you don't.

Personally, I don't use ECC, but that's because my home servers are always built from the hardware I previously used for my desktop PC. My current server is using an i5-750 on a P55 chipset. Also, a failure isn't the end of the world for the data I put on there. I can re-obtain all the data easily enough. In my case, it's not an extra couple hundred dollars for ECC, it's a whole new system I'd have to buy to get ECC.

I'd certainly consider ECC if I were ever to buy hardware specifically for the server.

Rather than trying to use fallible heuristics like "they're using a NAS with ZFS, and people doing this must use ECC because the calculus of ECC cost vs. likelihood of bit flips vs. the value of their data always favors using ECC," we'd be better off just telling people why they may want to use ECC.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
You double-speak yourself there. Most of us here have been pretty up front that ECC isn't an absolute 100% hard requirement, but it is what we recommend when people ask about what new, purpose-built hardware they should buy, vice people interested in just repurposing old bits.

I mean, if you follow your logic, we should also remind everyone that they can get cheaper hard drives by shucking random externals, and chances are that things will probably be fine. But no one ever mentions that as a counter to us recommending WD Reds.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

DrDork posted:

You double-speak yourself there. Most of us here have been pretty up front that ECC isn't an absolute 100% hard requirement, but it is what we recommend when people ask about what new, purpose-built hardware they should buy, vice people interested in just repurposing old bits.

Except this started out with someone doing basically that:



DrDork posted:

I mean, if you follow your logic, we should also remind everyone that they can get cheaper hard drives by shucking random externals, and chances are that things will probably be fine. But no one ever mentions that as a counter to us recommending WD Reds.

That's because the cost/benefit is a lot different in that case.

IOwnCalculus
Apr 2, 2003





Comparing the incremental cost to upgrade to ECC versus the incremental cost to grab Reds instead of Greens would make more sense if the Reds were SAS drives instead of SATA.

And if the M1015 wasn't so gloriously cheap on eBay.

BlankSystemDaemon
Mar 13, 2009



ECC might not be mandatory, depending on what supposedly scientific data you choose to depend on, whether your data is important to you, whether you're using old gear you happen to have lying around, whether you're building something new and ECC is or isn't going to break the bank for your budgeted home server, or whether ECC happens to carry no premium over regular memory at the moment you choose to buy.
I don't think I can get much more vague than that. Hope this helps.

BlankSystemDaemon fucked around with this message at 22:42 on Oct 7, 2014

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

IOwnCalculus posted:

Comparing the incremental cost to upgrade to ECC versus the incremental cost to grab Reds instead of Greens would make more sense if the Reds were SAS drives instead of SATA.

And if the M1015 wasn't so gloriously cheap on eBay.
ECC RAM costs about $25 more per 8GB. So it's almost exactly the same price difference as between 4TB Greens and Reds (currently $20). And most people aren't going to buy 4+ sets of RAM, either.

Hadlock
Nov 9, 2004

ECC is only required if you're caching the data in RAM before writing it to disk, right? If you're not caching (direct write to a mirrored array) then there's no benefit, or is there?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
In theory, if all you wanted was direct I/O to an add-in hardware RAID controller, you would minimize the impact of potential RAM errors (though not eliminate it, since I doubt there's any reasonable way of getting the data from the disk to the NIC and out without passing it through system RAM at least temporarily). You'd also tank your performance, though, which would seem solidly in the category of cutting off your nose to spite your face.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
In the case of ZFS, it'll cache your data indefinitely if it doesn't get flushed out by newer IO, so if you want guaranteed correctness, ECC would be recommended too. Writes get cached for 30 seconds by default, although FreeNAS has this tweaked down to five seconds. Not sure about other distros.
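
The write-cache interval is just a sysctl on FreeBSD, if you want to check or tune what your box is actually doing (the value is seconds between transaction group flushes):

code:
sysctl vfs.zfs.txg.timeout       # 30 on stock FreeBSD of this era, 5 on FreeNAS
sysctl vfs.zfs.txg.timeout=5     # flush dirty data to disk more often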

Brain Issues
Dec 16, 2004

lol
I'm currently using an old ATX full size motherboard in an old case for a NAS server and I'm looking for a new case to swap the parts into. Searching around I'm only finding mATX and ITX cases that were designed for NAS.

I want a full size ATX case with at least 6 spots for 3.5" drives with the ability to hot swap drives. Is anyone here aware of such a case?

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Brain Issues posted:

I'm currently using an old ATX full size motherboard in an old case for a NAS server and I'm looking for a new case to swap the parts into. Searching around I'm only finding mATX and ITX cases that were designed for NAS.

I want a full size ATX case with at least 6 spots for 3.5" drives with the ability to hot swap drives. Is anyone here aware of such a case?

Buy a full-size case with six 5.25" bays and two 5-in-3 hot-swap cages (each converts three 5.25" bays into five 3.5" hot-swap slots).

Mthrboard
Aug 24, 2002
Grimey Drawer

Don Lapre posted:

Buy a full-size case with six 5.25" bays and two 5-in-3 hot-swap cages (each converts three 5.25" bays into five 3.5" hot-swap slots).

Or a case with at least four 5.25" bays and a pair of 3-in-2 hot-swaps like the IcyDock MB153SP. I have one of those in my case. It works great, but there is one small potential issue: it uses SATA power connectors, and the way they're mounted on the cage makes them very difficult to plug in if the wires come off the connector perpendicular to the plug, like they do on most power supplies. If you have connectors with the wires coming straight out the back (or can get a couple of Molex-to-SATA adapters like that), you'll be fine.

SamDabbers
May 26, 2003



Brain Issues posted:

I'm currently using an old ATX full size motherboard in an old case for a NAS server and I'm looking for a new case to swap the parts into. Searching around I'm only finding mATX and ITX cases that were designed for NAS.

I want a full size ATX case with at least 6 spots for 3.5" drives with the ability to hot swap drives. Is anyone here aware of such a case?

For anyone looking to build a new NAS with similar requirements, take a look at the Lenovo TS440. It comes with four 3.5" hot swap bays, and can be upgraded to eight. You'll also have to buy drive sleds, because the ones included are just dummy placeholders.

I haven't found as nice a hot swap tower chassis that supports a full ATX motherboard, and the CPU alone is going for over $200 right now, so it seems like a decent deal even having to buy the stupid sleds if you're building a new server.

Bank
Feb 20, 2004
This is probably a dumb question, but if I buy a 2 bay NAS, do I have to set it up in RAID or can I just use them separately? (i.e., have one hold media and the other hold a backup of my desktop machine)

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

Bank posted:

This is probably a dumb question, but if I buy a 2 bay NAS, do I have to set it up in RAID or can I just use them separately? (i.e., have one hold media and the other hold a backup of my desktop machine)

Sure, it's called a JBOD (just a bunch of disks) configuration. It's not redundant, so decide what level of risk you want to assume.

The Gunslinger
Jul 24, 2004

Do not forget the face of your father.
Fun Shoe
Are there any decent options for prebuilt NAS with transcoding (Plex)? I need something fairly dummy proof for a family member and I want to spend little to no time helping them with it after setup. I was looking at the Synology DS-xxx Play line but they don't seem to support Plex.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

The Gunslinger posted:

Are there any decent options for prebuilt NAS with transcoding (Plex)? I need something fairly dummy proof for a family member and I want to spend little to no time helping them with it after setup. I was looking at the Synology DS-xxx Play line but they don't seem to support Plex.

The DS415+ is the best option, since Synology's Intel-based models support Plex.

Brain Issues
Dec 16, 2004

lol

SamDabbers posted:

For anyone looking to build a new NAS with similar requirements, take a look at the Lenovo TS440. It comes with four 3.5" hot swap bays, and can be upgraded to eight. You'll also have to buy drive sleds, because the ones included are just dummy placeholders.

I haven't found as nice a hot swap tower chassis that supports a full ATX motherboard, and the CPU alone is going for over $200 right now, so it seems like a decent deal even having to buy the stupid sleds if you're building a new server.

Wow, that is incredible value. It comes with a motherboard and power supply too? What about graphics? Xeon doesn't have integrated graphics, does it?

IOwnCalculus
Apr 2, 2003





Brain Issues posted:

Wow, that is incredible value. It comes with a motherboard and power supply too? What about graphics? Xeon doesn't have integrated graphics, does it?

Xeons ending in "5" actually do have on-chip graphics, ones that end in "0" do not. Supermicro typically just throws a dirt cheap onboard graphcs chip onto their motherboards.

With that said, for an E3 V3 and at least some RAM? That's a loving steal (especially the used one from Amazon itself) and would be drat tempting to me if I hadn't already redone my build for ECC-on-the-cheap.

DrDork posted:

ECC RAM costs about $25 more per 8GB. So it's almost exactly the same price difference as between 4TB Greens and Reds (currently $20). And most people aren't going to buy 4+ sets of RAM, either.

RAM stick to RAM stick, sure. Again, we run into the situation where people choose to reuse older commodity hardware / their out-of-date gaming PCs to make building a NAS feasible. You can plug a WD Red into any box that supports SATA; ECC RAM would require you to sell your old CPU, motherboard, and RAM to buy a CPU / motherboard that do support ECC.

SamDabbers
May 26, 2003



Brain Issues posted:

Wow, that is incredible value. It comes with a motherboard and power supply too? What about graphics? Xeon doesn't have integrated graphics, does it?

Yep, that includes a motherboard with a C226 chipset, a 450W power supply, one 4GB stick of ECC DDR3-1600, and the CPU's integrated graphics. It also has Intel AMT for remote management.

My storage server is a little over 4 years old, and I'm waffling between building a new TS440 and letting it ride one more year. It's a decent chunk of change to drop, but very tempting for what you get.

IuniusBrutus
Jul 24, 2010

So, the last couple pages have been nearly indecipherable.

Anyways. I need a backup solution for two computers. It doesn't need to do anything else. Some sort of redundancy would be nice, but every vital file I have is also on a cloud storage service, so not needed. Also wouldn't mind user-replaceable drives, but I could be convinced that they aren't needed.

I can get a WD MyCloud for cheap-ish. Would that work? Or would I be better off buying something a bit more powerful?

IuniusBrutus fucked around with this message at 06:22 on Oct 10, 2014

Rat Supremacy
Jul 15, 2007

The custom title is an image and/or line of text that appears below your name in the forums
Would a NAS work for running Plex at home? Basically I have a big power-hungry desktop that's on its last legs, and I want to transition to something set up purely to store/download media (it needs to run 24/7, so it should be as low-power as possible) and connect to my TV via HDMI. I don't want to break the bank too hard, and the data isn't totally irreplaceable (obviously it would be nice to avoid it being destroyed).

Important:
- Able to play HD content *flawlessly*
- As low power as possible (one of those fancy new ARM SOCs, perhaps?)
- Cheapish
- 5+TB
- Low effort

Less important
- Redundancy/security.
- Colossal space (10TB+)

Not important at all
- Linux beginner friendly
- Enterprise level deployment or anything business like that.

It would also be nice if it could run Netflix, as the Chromecast is a wee bit iffy, but that's a nice-to-have (does Netflix even work on Linux?)

Rat Supremacy fucked around with this message at 14:56 on Oct 10, 2014

IOwnCalculus
Apr 2, 2003





IuniusBrutus posted:

So, the last couple pages have been nearly indecipherable.

Anyways. I need a backup solution for two computers. It doesn't need to do anything else. Some sort of redundancy would be nice, but every vital file I have is also on a cloud storage service, so not needed. Also wouldn't mind user-replaceable drives, but I could be convinced that they aren't needed.

I can get a WD MyCloud for cheap-ish. Would that work? Or would I be better off buying something a bit more powerful?

How much storage do you actually need?

Also, ECC-chat: the i3 version of that ThinkServer with 4GB ECC is now only $200 at Newegg according to Slickdeals, after a rebate and a coupon code.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

IOwnCalculus posted:

Also, ECC-chat: the i3 version of that ThinkServer with 4GB ECC is now only $200 at Newegg according to Slickdeals, after a rebate and a coupon code.

Note that this version seems to only come with three 3.5" bays, instead of the four (upgradeable to eight) of the TS440.
