PerrineClostermann
Dec 15, 2012

by FactsAreUseless
Are WD Reds still good to go for 4TB drives?


qutius
Apr 2, 2003
NO PARTIES

Toshimo posted:

I got an e-mail ad from NewEgg for the QNAP TS-431+ @ $265, which seemed pretty good, but it's only got 1 review which is 1 egg. Anyone familiar with the product line have any input?

http://www.newegg.com/Product/Product.aspx?Item=N82E16822107243&ignorebbr=1

There seem to be a few more decent reviews on Amazon if you haven't checked there yet. Not the greatest place for reviews sometimes, but usually there are a couple of reviewers who seem to know what they're talking about, at least.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

PerrineClostermann posted:

Are WD Reds still good to go for 4TB drives?

Yes. The Seagate NAS drives also appear to be a good choice, though there's less established history for them.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

RoadCrewWorker posted:

If i just want to hook up a variety of existing, non-uniform disks (more would be better but 4 is fine) to my network, are dedicated multi-bay enclosures even the right place to look or is a custom built fanless pc the only way to go? Is there some obvious alternative ready-made solution that i just haven't stumbled on yet?
Your current configuration sounds like a JBOD (Just a Bunch of Disks) setup, which cheap NASes will support as long as your drives are formatted with something the appliance can read. Since you're likely coming from Windows or macOS, that probably means no, unless you specifically formatted your drives as FAT32 or perhaps exFAT. Even then, most NAS appliances don't support FAT32 or exFAT (I heard of one once but I'm not sure what happened to it; probably gone because of licensing issues with Microsoft).

However, your current setup sounds extremely risky: your budget should be proportional to how much you care about the data, or you'll have to brace yourself for the day you lose some or all of it. I'm not sure how much total data you have (it sounds like 8 TB or so?), but I'd be saving up for at least a 4-bay NAS with at least 6 TB drives in a mirrored array. Otherwise, you're looking at spending even more to use 4 TB drives and at least 6 drive bays to achieve some form of RAID6 or RAIDZ2 (2 drives of redundancy for 4 drives' worth of available capacity). Given that 6-bay NASes start around $600 for mediocre ones and quickly climb above $900, most people with more than 8 TB of data either start building their own servers or have the budget not to care about a matter of $1k for the convenience of not having to build yet another server.
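To put rough numbers on the redundancy math above (a quick illustrative sketch; `usable_tb` is a made-up helper, and real-world usable space will be lower after filesystem and formatting overhead):

```python
def usable_tb(drives, size_tb, redundancy_drives):
    """Rough usable capacity of a mirror/parity array:
    raw capacity minus the drives' worth of redundancy."""
    return (drives - redundancy_drives) * size_tb

# 4x 6TB in mirrored pairs (2 drives of redundancy): 12 TB usable
print(usable_tb(4, 6, 2))
# 6x 4TB in RAID6/RAIDZ2 (2 parity drives, 4 data drives): 16 TB usable
print(usable_tb(6, 4, 2))
```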

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

RoadCrewWorker posted:

Hello experts, i scanned the OP (though unsurprisingly it was last edited in 2012 and most links are dead or outdated) and the last page and did a bit of own research but i'm still uncertain so i thought i'd just ask if i'm even looking for the right thing.

Basically i had an old pc serving files in my home LAN (via http server, network drive share etc) from a mixture of existing 1,2,3 or 4 TB drives that i'm looking to replace with something more economical that can be set up and administrated remotely (the pc, not the drives). Data is mostly noncritical stuff like compressed drive backup images or a history of database backup dumps that are infrequently written and even more rarely read, so transfer speed is not a factor at all. The few non-redundant parts that matter are backed up specifically off-site anyway, so internal redundancy or a lost harddrive barely matters. I was looking at entry level 4 bay NAS stuff from QNAP and Synology, but those apparently all (re)format any existing drives even for non-raid setups, and i'd rather avoid the required dump/restore for all existing data for absolutely no benefit.

If i just want to hook up a variety of existing, non-uniform disks (more would be better but 4 is fine) to my network, are dedicated multi-bay enclosures even the right place to look or is a custom built fanless pc the only way to go? Is there some obvious alternative ready-made solution that i just haven't stumbled on yet?

If you want to take advantage of multiple sized disks, but still have disk redundancy, you should look at Unraid or SnapRaid.

RoadCrewWorker
Nov 19, 2007

camels aren't so great
Thanks for the suggestions, i'll look into that!

phosdex
Dec 16, 2005

Posting this here in case anyone else runs into this. I back up my MacBook to my FreeNAS Time Machine share. I've got over 2 years of Time Machine history; it says it's about 180GB. Anyway, I now get the message, "Time Machine completed a verification of your backups. To improve reliability, Time Machine must create a new backup for you". Apparently this can happen if a backup occurs during a scrub. I'm going through the instructions here: http://www.garth.org/archives/2011,08,27,169,fix-time-machine-sparsebundle-nas-based-backup-errors.html

edit: those instructions worked for at least the first backup. The fsck took around 5 hours for me but it's backing up properly now. Will see what happens later.

phosdex fucked around with this message at 03:02 on Nov 8, 2016

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I want to play with a ZFS system as a homelab for picking up skills at my job. Either FreeBSD or Illumos or something similar.

Any recommendations for a SATA card with at least 4 ports? I will probably be following the outlines of the "DIY NAS 2016" build with an 8-bay mITX case. I do have a workstation pull SAS card on hand - reads as "LSI Logic / Symbios Logic SAS1064ET". Any use?

What are my options in terms of going faster than a gigabit link? I ran into some 10GbE adapter pulls that were reasonable (like $100 IIRC? It was a while back), and I've seen a few onboards that weren't terribly expensive either, but the switches still seem totally unreasonable.

Could I just plug the application server into the NAS and get a 10GbE crossover link?

Would it be any cheaper to try to scrape up some used InfiniBand hardware? Again, if it's the switches that are prohibitively expensive, could I do a crossover there too? I think InfiniBand allows it. This would be really fun to play with for programming too - I'd love to get back into MPI.

Could I gang up multiple 1GbE NICs on something like the Avoton boards? Would this make any difference going into a consumer-grade switch or would I also need something faster there too?

Paul MaudDib fucked around with this message at 02:50 on Nov 8, 2016

IOwnCalculus
Apr 2, 2003





The switches will be prohibitively expensive. Not much reason for 10G. And a switch that does proper LACP for NIC teaming is still pricey.

The 1064 controller will work fine in IT mode, but doesn't support drives over 2TB (or 1.5, one or the other).

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
To be honest I'm actually thinking I might just make space for a rack and do a regular homelab. Most of the gear is rack-format anyway.

The more I'm thinking about it, InfiniBand sounds like the way to go if I am going to pay for more than 1GbE (which I'm probably not going to do anytime soon). Even old SDRx4 stuff is still 8 Gbps and the latency will still be nice for MPI.

Paul MaudDib fucked around with this message at 03:38 on Nov 8, 2016
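For the curious, the 8 Gbps figure quoted above for old SDR x4 InfiniBand is just the lane math (a back-of-the-envelope check: 2.5 Gbps signaling per lane, with 8b/10b encoding leaving 80% for data):

```python
# InfiniBand SDR: 2.5 Gbps signaling per lane; 8b/10b encoding leaves 80% for data
lanes = 4
signal_gbps_per_lane = 2.5
encoding_efficiency = 8 / 10

data_rate_gbps = lanes * signal_gbps_per_lane * encoding_efficiency
print(data_rate_gbps)  # 8.0 Gbps usable on an SDR x4 link
```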

cycleback
Dec 3, 2004
The secret to creativity is knowing how to hide your sources

Melp posted:

  • HDD Fans: Noctua NF-F12 iPPC 3000 PWM -- The SC846 comes with 3 80mm HDD fans which are absurdly loud. Fortunately, the fan wall is removable and 3 120mm fans fit perfectly in its place. I zip-tied the 120mm fans together and used zip-tie mounts to secure them to the chassis. I started with Noctua NF-F12 1500 RPM fans, but some of the drives were getting a bit hot under heavy load, so I switched to their 3000 RPM model. I also discovered that air was flowing from the CPU side of the fan wall back over the top of the fans rather than coming through the HDD trays, so I cut a ~3/4" strip of wood to block the space between the top of the fans and the chassis lid. With the wood strip in place, HDD temps dropped like 10 C. Pics of the fan wall install process (still showing 1500 RPM fans): http://imgur.com/a/SCaWu

Thanks for the super detailed post. Do you think the wooden blocking strip would still be necessary if you used 140 mm fans for the fan wall? Would 3 x 140 mm fans fit horizontally?

Thanks for the movie too. How would you compare the noise level after modifications to a quiet tower?

Did you make custom cables to power the drives or did the cables come with the Supermicro PSUs?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Most people that can use 10GbE at home are probably going to use it exclusively between maybe 2 or 3 different machines, so a switch may be entirely unnecessary if you do direct cable connections via SFP+. I've seen some NAS appliances come with SFP+ connectors and I think some of the Supermicro Xeon D boards have them as well, which may be a necessity if you're trying to add in an LSI HBA on a mini ITX board. I think there's a mini-ITX extended board that has 10+ SATA ports or a SAS HBA out there that's like $700+ with all that, which is what I'd need if I went crazy and wanted to build a storage + GPU compute server on a single machine.

Paul MaudDib posted:

The more I'm thinking about it, InfiniBand sounds like the way to go if I am going to pay for more than 1GbE (which I'm probably not going to do anytime soon). Even old SDRx4 stuff is still 8 Gbps and the latency will still be nice for MPI.
Infiniband is a good option for storage traffic, and its support in Linux-y OSes is reasonable as well. With 10GbE adoption far outpacing Infiniband in most datacenters, there's been a bit of a fire sale on that equipment, which means you can snag Infiniband adapters and switches for rather low cost on eBay now.

The hassle with Infiniband is mostly that you'll have to add a card to each device, and likely a switch, on top of your normal Ethernet connections, which isn't so bad in a home lab with less than 10 nodes but can get really awkward at higher scale. Me, I just shoved everything into VMs and used VMXNET3 drivers to get single-host 10GbE and to be able to separate my applications from the storage server better than if I were to use containers.

If your workload can actually gain throughput from LACP / NIC teaming / EtherChannel or whatnot, there are some cheaper HP switches for under $300 that support it, but unless you're running some star-topology, high-throughput service (maybe a Ceph cluster?) it's unlikely that you'll see much improvement.

Mr Shiny Pants
Nov 12, 2012

Paul MaudDib posted:

To be honest I'm actually thinking I might just make space for a rack and do a regular homelab. Most of the gear is rack-format anyway.

The more I'm thinking about it, InfiniBand sounds like the way to go if I am going to pay for more than 1GbE (which I'm probably not going to do anytime soon). Even old SDRx4 stuff is still 8 Gbps and the latency will still be nice for MPI.

Be wary though: all the RDMA stuff on Linux has some rough edges. Solaris is better, but I also had problems with the RDMA NFS server; it would completely hang the machine on OpenIndiana.

For Windows if you want the highest speed you need something like SRP, but the drivers for this are not signed so it can be a bit of a pain to install on 2008 / 2012R2.

The best/most stable configuration I had running was Windows 2012R2 with OpenIndiana as a SRP target.

Mind you this was all using the older 10GB Mellanox stuff.

Oh and Solaris based OSs need a SDR Card with memory onboard for the tavor driver to work.

Boz0r
Sep 7, 2006
The Rocketship in action.
I want a NAS to work as a Plex server, or something similar. It has to stream in at least 1080p, with good quality audio. 2 HDD bays minimum. What should I get?

BlankSystemDaemon
Mar 13, 2009



Boz0r posted:

I want a NAS to work as a Plex server, or something similar. It has to stream in at least 1080p, with good quality audio. 2 HDD bays minimum. What should I get?
Synology's Play line (currently the DS216play and DS416play) works fine for this - although I'm not sure what you mean by "good quality audio", since that depends on what you do with the bitstreamed digital audio after it reaches the TV (i.e. whether you send it through the HDMI-ARC port to a receiver, at which point it gets converted to analogue).

MagusDraco
Nov 11, 2011

even speedwagon was trolled
The play line won't be able to transcode through Plex if he needs to transcode.

BlankSystemDaemon
Mar 13, 2009



havenwaters posted:

The play line won't be able to transcode through Plex if he needs to transcode.
I thought that was the entire point of installing Plex on a Synology Play line NAS, since Plex lists as a requirement for Synology NASes that they be Intel-based, and at least the 416play features an N3060, which supports Intel Quick Sync Video and on-die transcoding.

BlankSystemDaemon fucked around with this message at 15:46 on Nov 11, 2016

MagusDraco
Nov 11, 2011

even speedwagon was trolled

D. Ebdrup posted:

I thought that was the entire point of installing Plex on a Synology Play line NAS, since Plex lists as a requirement for Synology NASes that they be Intel-based, and at least the 416play features an N3060, which supports Intel Quick Sync Video and on-die transcoding.

Plex (for now) does not support hardware transcoding on most platforms so Intel quick sync won't help.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

D. Ebdrup posted:

I thought that was the entire point of installing Plex on a Synology Play line NAS, since Plex lists as a requirement for Synology NASes that they be Intel-based, and at least the 416play features an N3060, which supports Intel Quick Sync Video and on-die transcoding.

Plex doesn't use Quick Sync, as noted. Plex competitor Emby does use Quick Sync to transcode, and can thus serve a bunch of transcoded video streams from an Atom.

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh
About how loud is a Synology under normal circumstances? The most convenient place to put it is in my living room, but if it's disruptively loud, I'll have to figure something else out.

e: If it matters, I'm looking at the DS1515.

Avenging Dentist fucked around with this message at 06:07 on Nov 12, 2016

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Avenging Dentist posted:

About how loud is a Synology under normal circumstances? The most convenient place to put it is in my living room, but if it's disruptively loud, I'll have to figure something else out.

e: If it matters, I'm looking at the DS1515.

I don't have the Synology, but I do have a CineRaid CR-H458 4-bay NAS. There are two factors in the noise: the drives and the fan.

I have trouble believing there will be any significant difference in drive noise between two given N-bay units. It mostly boils down to 5400 RPM drives vs 7200 RPM. Something like the Toshiba X300 is going to be a lot louder than WD Reds.

Fans can vary between units but I would imagine they're pretty comparable overall.

Our CineRaid unit is literally right next to the couch (that's where the fileserver ended up) and you can't hear it or the fileserver (Athlon 5350) during a TV show. We have four WD Reds in it.

Maybe if you have a super quiet house it would be an issue but for me it's just lost in the ambient noise.

You can also think about moving it into another room or something. Powerline Ethernet isn't super fast in terms of bandwidth but it's low latency and doesn't drop packets on most circuits.

Paul MaudDib fucked around with this message at 07:17 on Nov 12, 2016

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh
In a pinch I could put it by my desktop but that would be slightly inconvenient. It's good to know it's pretty quiet though. My apartment is kinda noisy anyway.

signalnoise
Mar 7, 2008

i was told my old av was distracting
Got a variety of questions here

My situation: I have a windows-based NAS/Plex server already but I'm thinking about throwing it some hand-me-down poo poo from my current desktop. I have a variety of 3.5" platter drives to throw at it also. My desktop is full ATX, but I am having difficulty finding NAS chassiseses that fit ATX. I don't know poo poo about RAID and I'm not sure if I care to use it, especially since I don't have a big set of the same drive. The current NAS/Plex server is also our HTPC but really it's just there to run website-based streaming that isn't allowed over set top boxes by our cable company. Apparently xfinity is a bunch of bullshit when it comes to those. I have an Alienware Alpha I could put in the HTPC's place, and move the NAS into a closet, which I would like to do.

So questions I guess are...

Is there a good option for an ATX NAS case? Should I bother with RAID, or just throw more drives in the box? Would it be a better use of my money to buy a mATX or mITX motherboard and all the poo poo to go with it to be able to use a smaller and more fully featured case, or would buying a standalone NAS enclosure be a better use of my money? How can I protect my data using drives of different sizes?

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
If you have the drives or money for an array of identical drives, go RAID. I put FreeNAS on my old Core 2 Duo box with six 2TB drives, put them into RAIDZ2, and never looked back. I used this case at the recommendation of a dude who runs a NAS blog.

http://www.amazon.com/gp/product/B0...CKTVINVBZWC25DY

http://blog.brianmoses.net/2015/05/diy-nas-econonas-2015.html

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

signalnoise posted:

Got a variety of questions here

My situation: I have a windows-based NAS/Plex server already but I'm thinking about throwing it some hand-me-down poo poo from my current desktop. I have a variety of 3.5" platter drives to throw at it also. My desktop is full ATX, but I am having difficulty finding NAS chassiseses that fit ATX. I don't know poo poo about RAID and I'm not sure if I care to use it, especially since I don't have a big set of the same drive. The current NAS/Plex server is also our HTPC but really it's just there to run website-based streaming that isn't allowed over set top boxes by our cable company. Apparently xfinity is a bunch of bullshit when it comes to those. I have an Alienware Alpha I could put in the HTPC's place, and move the NAS into a closet, which I would like to do.

So questions I guess are...

Is there a good option for an ATX NAS case? Should I bother with RAID, or just throw more drives in the box? Would it be a better use of my money to buy a mATX or mITX motherboard and all the poo poo to go with it to be able to use a smaller and more fully featured case, or would buying a standalone NAS enclosure be a better use of my money? How can I protect my data using drives of different sizes?

Silverstone just released an ATX version of their NAS case, the CS380. This gives you 8 hotswap drive cages.

True RAID only makes sense if you have identical storage drives. If you have a bunch of mismatched drives, I suggest DrivePool by StableBit. It will take all of your random drives and present them to Windows as just one big drive. You can then tell it what duplication level you want (2x, 3x, etc.) and it will take care of figuring out which physical drive to put your data on. You can even set different levels per folder if you want. I use it at home and it does a great job. If you go this route, one bit of advice: mount your data drives to a folder rather than an actual drive letter, so that in Windows you only see the pooled drive that DrivePool creates and not all of the other random drives.
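As a rough model of the capacity trade-off duplication makes on mismatched drives (a sketch of the general idea only; `pooled_usable_tb` is an illustrative name, not StableBit's actual placement algorithm):

```python
def pooled_usable_tb(drive_sizes_tb, duplication=2):
    """Approximate usable space of a pool with 2-way file duplication.

    Each file is stored on `duplication` different drives, so usable space
    is roughly total/duplication. For duplication=2 it's also capped at
    total - largest, since the two copies of a file can't share a drive.
    """
    total = sum(drive_sizes_tb)
    largest = max(drive_sizes_tb)
    return min(total / duplication, total - largest)

# Four mismatched drives with 2x duplication: 5.0 TB usable out of 10 TB raw
print(pooled_usable_tb([1, 2, 3, 4]))
```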

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Krailor posted:

Silverstone just released an ATX version of their NAS case, the CS380. This gives you 8 hotswap drive cages.



The two 5.25" bays look like such a waste. WTF are they even there for?

They should have given it like 12 swaps

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

If you can find it anywhere this is the case I've had for years: http://www.newegg.com/Product/Product.aspx?Item=N82E16811160008

With two cages that fit four 3.5" drives apiece into the six 5.25" bays, you can easily put 16 drives in it. I've also got another 4x 3.5" drive bay mounted up above the power supply, for a total of 20 3.5" drives in it.

I don't know if there's anything better out there right now for as cheap as you can sometimes find this on ebay for, but anyway it's cheap and huge...

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
Yeah, I already use a DS380; it just seems silly they didn't either do like 12 external 3.5" bays, or do 3 5.25" bays you can put a drive mount into.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
A case with 12 3.5" drive bays, hotswap or not, would be my perfect case right now. I can slap my disc drive in an external enclosure, and stick SSD drives wherever. But 12 3.5" and two 5.25" bays would fill my needs perfectly for the next several years. I've got 9 3.5" drives, two SSDs, and a CD/DVD recorder in my current case, but airflow sucks and cable management is nonexistent.

Something like the Fractal Design Define R5 would be great, with just 4 more 3.5" bays.

some kinda jackal
Feb 25, 2003

 
 
Was gifted an eight-bay rackmount QNAP unit. What's the consensus on QNAP's OS? Should I try to throw XPEnology on there? It's really just a glorified PC running Linux either way, I guess.

I'm going to have to try to resistor down the power supply fans too. They start out nice and quiet but really ramp up. To be expected from a unit meant for a rack. I could theoretically try to replace the PSU with a quieter 2U PSU.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer
I have a Synology DS1515+ for work, and 2 of 5 new WD Red 3TBs just popped their first bad sectors within 24h. Everything was bought about 3 months ago. Configuration is SHR-2 (basically RAID6, I believe). Doing extended checks on all the HDDs. Can I send them back to Amazon for replacements with just 1 bad sector, or do things need to get worse?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Mr Shiny Pants posted:

Be wary though, all the RDMA stuff on Linux has some funky stuff. Solaris is better but I also had some funky stuff with the RDMA NFS server. It would completely hang the machine on OpenIndiana.

For Windows if you want the highest speed you need something like SRP, but the drivers for this are not signed so it can be a bit of a pain to install on 2008 / 2012R2.

The best/most stable configuration I had running was Windows 2012R2 with OpenIndiana as a SRP target.

Mind you this was all using the older 10GB Mellanox stuff.

Oh and Solaris based OSs need a SDR Card with memory onboard for the tavor driver to work.

Let me ask this differently: what am I best off going with? Is there any Infiniband adapter brand that is reasonably stable under Linux/FreeBSD/OpenSolaris?

Mr Shiny Pants
Nov 12, 2012

Paul MaudDib posted:

Let me ask this differently: what am I best off going with? Is there any Infiniband adapter brand that is reasonably stable under Linux/FreeBSD/OpenSolaris?

Well, it's not the cards that aren't stable; it's the software running on top of them that I had problems with. Getting the cards to work is not a problem once you know which commands to use. It's keeping the software (RDMA NFS server, iSER, etc.) stable for a couple of days that's hard.

If you don't mind spending some money I would go for the newer ConnectX cards instead of the older ones and just try it.

I have no experience running FreeBSD.

emocrat
Feb 28, 2007
Sidewalk Technology
OK, looking for some advice on a path forward here. I'm gonna go through my use case/desires and then my already owned available hardware and some options, looking for any and all opinions on this.

I am in need of a NAS/home server. My initial need, and the primary driver of this, is Plex. I am sitting at about 8TB of media, and it's just on a couple of random HDDs in my Windows machine. I expect my storage needs to continue to increase over time at a steady pace. So, right now I just have files on a disk with no parity of any type; a disk failure will just wipe what's on that disk. I am basically maxed out on my current storage, and I want a real solution rather than just grabbing another HDD and forgetting about it.

I am also interested in a certain amount of application support, specifically I would prefer to run PMS on the NAS itself, as well as potentially some other apps (ownCloud maybe, I dunno yet really). One thing to note is that my movies are straight disc rips to .MKV, so in the case of Blu-ray discs they can be huge. I serve to a variety of devices, so I expect to transcode fairly often, and I want to make sure I have the horsepower to do at least 2 1080p streams at the same time.

Initially I looked at some commercially sold NAS devices like Synology etc, but the cost vs capability seems wrong, so I have been looking into DIY. I am comfortable working with hardware and have always built my own computers, however my experience with OS's is 100% windows. I am definitely willing to learn on that front, but understand where I am starting at.

I have, just sitting there, able to be used right now, the following: an i5 2500 CPU with corresponding H67M motherboard (6 SATA ports) and 8GB of RAM, plus a decent sized case and power supply. I have a 128GB SSD as a system disk, 2 3TB WD Reds, a 1TB HDD, and a few other smaller HDDs that probably don't matter because they're getting too old to consider.

So, things I am considering and my current understanding:

FreeNAS. Seems like a super solid platform that is probably the best I can get in terms of data integrity. I would have to buy basically 100% new hardware, due to the need for ECC. It would also require some planning on the HDDs, as they need to be matched sizes. I am a little worried about my ability to properly configure it, as it seems there are a ton of things to set up and I have no experience at all with FreeBSD or ZFS.

XPenology. Not very familiar with it, just hear it referenced a lot. I don't know what the hardware requirements would be, and I am a little put off by what I see as a lack of central documentation. Seems like I would mostly be looking through forum posts for guides.

UnRaid. Just learned about this recently, and it seems very appealing. Does not require ECC, and allows for mismatched HDD sizes and easy future expandability. Seems like I could probably run it on my current hardware decently well, so despite having to pay for the software, it would likely be fairly low cost for me. I have read it's not very efficient though, so I am a bit worried about performance, particularly for transcoding. Being locked into single parity is not ideal, but to be honest, it is likely how I would configure other systems as well. I have read that it is extremely user friendly and simple to get running. While I do not have a current need, the ability to run KVM --> whatever on it seems pretty dope.

I am not really a tinkerer, so I am most likely to get something running well and pretty much leave it alone for decently long periods of time.

So, suggestions? Right this minute UnRaid seems like a really good idea for me, but maybe it has drawbacks I am not aware of. Are there other platforms I should consider? I am willing to spend some cash, although the prospect of an entire new systems for ECC + set of HDD's seems a bit much. Thanks.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

emocrat posted:

I have, just sitting there, able to be used right now, the following: an i5 2500 CPU with corresponding H67M motherboard (6 SATA ports) and 8GB of RAM, plus a decent sized case and power supply. I have a 128GB SSD as a system disk, 2 3TB WD Reds, a 1TB HDD, and a few other smaller HDDs that probably don't matter because they're getting too old to consider.

So, things I am considering and my current understanding:

FreeNAS. Seems like a super solid platform that is probably the best I can get in terms of data integrity. I would have to buy basically 100% new hardware, due to the need for ECC. It would also require some planning on the HDDs, as they need to be matched sizes. I am a little worried about my ability to properly configure it, as it seems there are a ton of things to set up and I have no experience at all with FreeBSD or ZFS.
So, suggestions? Right this minute UnRaid seems like a really good idea for me, but maybe it has drawbacks I am not aware of. Are there other platforms I should consider? I am willing to spend some cash, although the prospect of an entire new systems for ECC + set of HDD's seems a bit much. Thanks.

Only being able to add drives in the form of a complete new vdev is the biggest downside of FreeNAS. Even though the community circlejerks over ECC, you sure don't need it. FreeNAS without ECC isn't a time-bomb like alarmist grognards will tell you; it's really just comparable to any other OS running without ECC.

If you've got the budget to add a whole batch of drives at once and don't need to grow over time, I'd recommend FreeNAS over the other options you're considering. Below is some info about FreeNAS on non-ECC RAM.

http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

emocrat
Feb 28, 2007
Sidewalk Technology

Twerk from Home posted:

Only being able to add drives in the form of a complete new vdev is the biggest downside of FreeNAS. Even though the community circlejerks over ECC, you sure don't need it. FreeNAS without ECC isn't a time-bomb like alarmist grognards will tell you; it's really just comparable to any other OS running without ECC.

If you've got the budget to add a whole batch of drives at once and don't need to grow over time, I'd recommend FreeNAS over the other options you're considering. Below is some info about FreeNAS on non-ECC RAM.

http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

Good read, thanks. Any opinions on coming to FreeNAS with no Linux/BSD experience?

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
I'd be tempted to say keep adding 3TB WD Reds to the ones you've already got. The downside is that you've only got 5 SATA ports to work with (once you've used one of the six for the boot drive). In RAID-5 that would give you about 11TB, which doesn't bode well for future data expansion. Someone will probably be able to recommend a good PCI card to add extra SATA ports.

EDIT:

emocrat posted:

Good read, thanks. Any opinions on coming to FreeNAS with no Linux/BSD experience?

I tried it a while ago. FreeNAS is easy: if you're comfortable with the idea of controlling the machine from another machine via a web interface, that's about all the BSD experience you need.

apropos man fucked around with this message at 17:32 on Nov 14, 2016

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

emocrat posted:

OK, looking for some advice on a path forward here. I'm gonna go through my use case/desires and then my already owned available hardware and some options, looking for any and all opinions on this.

So, suggestions? Right this minute UnRaid seems like a really good idea for me, but maybe it has drawbacks I am not aware of. Are there other platforms I should consider? I am willing to spend some cash, although the prospect of an entire new systems for ECC + set of HDD's seems a bit much. Thanks.

Check out SnapRaid, it is very similar in function to Unraid, but can run on top of Windows, and supports multiple disk parity (I'm unsure where Unraid currently stands on this)

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
I'm starting to approach the point of wanting to expand my storage on FreeNAS and have been debating how to go about it. Currently, I'm out of physical capacity for drives in my NAS, with 8 4TB drives in the case in a RAID-Z2. My motherboard is one of the nice SuperMicro ones with a SAS controller built in and a boatload of SATA ports fanning out from it, so I'm using only on-board ports right now. I do have two PCI-E slots available (16x physical wired for 8x, and 8x physical wired for 4x), so there's room for an HBA no problem. I'm really just torn on what to do.

  • Swap each 4TB drive with an 8TB drive, one at a time, and let the array resilver fully each time.
  • Buy/build an external array with the 8TB drives, daisy chained back to the NAS, with a power controller to power it up when the NAS powers on
  • Build a second NAS and just cross-mount between them

Obviously, all of these options come with their various positives and negatives.

Option 1 will leave me with less storage in total, which I could handle but might be annoyed with in the longer-term. It also is extremely time consuming, and runs a risk of data loss. But it also would require less power, and generate less heat, and wouldn't take up as much physical space in my already small home.

Options 2 and 3 both have the space issue. They should, in theory, both generate similar amounts of heat, though 3 should have a bit more power draw. They'd also both give me the full capacity of another ~48TB (I know it's less than that) usable, which would be extremely nice. Option 2 obviously wouldn't require passing traffic across the network between them, but I don't honestly feel like that's a major concern. Option 3 is notably more costly, and arguably a pain in the rear end since I'd have to manage configs for two separate NAS devices.

Basically, I know all of these options have been used in practice and none of them are perfect, but there's also not a ton of clear-cut recommendations for situations like this when using ZFS at home, so I'm hoping somebody's got suggestions they would want to throw out for this.
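
For what it's worth, the capacity math on those three options works out like this (a rough sketch that ignores ZFS metadata overhead and the TB/TiB conversion):

```python
# RAID-Z2 spends two drives' worth of capacity on parity.
def raidz2_usable_tb(drives: int, size_tb: float) -> float:
    return (drives - 2) * size_tb

current   = raidz2_usable_tb(8, 4.0)             # existing 8x4TB array: 24.0 TB
option_1  = raidz2_usable_tb(8, 8.0)             # in-place swap to 8x8TB: 48.0 TB
option_23 = current + raidz2_usable_tb(8, 8.0)   # keep both arrays: 72.0 TB
print(current, option_1, option_23)
```

So option 1 nets 24TB less total than options 2 or 3, which is the trade-off described above.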

G-Prime fucked around with this message at 20:12 on Nov 14, 2016


BlankSystemDaemon
Mar 13, 2009



Twerk from Home posted:

Only being able to add drives in the form of a complete new vdev is the biggest downside of FreeNAS. Even though the community circlejerks over ECC, you sure don't need it. FreeNAS without ECC isn't a time-bomb like alarmist grognards will tell you; it's really no different from running any other OS without ECC.

If you've got the budget to add a whole batch of drives at once and don't need to grow over time, I'd recommend FreeNAS over the other options you're considering. Below is some info about FreeNAS on non-ECC RAM.

http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
ZFS corruption without ECC is worse than a time-bomb, because you can't hear it ticking, and you won't know when it goes off. It's also not right to call it comparable to any other system, because no other OS uses memory as heavily for caching as ZFS does - and given how that caching works, ZFS' complete inability to detect in-memory corruption without ECC, and Google's numbers on DRAM bit flips, you're more likely to end up with corrupted data.
Couple that with the fact that including ECC in a new system isn't nearly as prohibitively expensive as people claim (last time I spec'd out a system with and without ECC, the price difference ended up being $53 on PCPartPicker, and DRAM prices have dropped since), and the only real reason for not running ECC is that you're repurposing old hardware that simply doesn't support it - which seems to be the situation in this case.

All that being said, ECC or the lack of it is no excuse for not having a proper backup.

BlankSystemDaemon fucked around with this message at 20:10 on Nov 14, 2016
