Loving Africa Chaps
Dec 3, 2007


We had not left it yet, but when I would wake in the night, I would lie, listening, homesick for it already.

Krakkles posted:

This is probably a dumb question, but I've got an N40L running FreeNAS, and I feel like the speeds I see from it are ... not great. Every time I access it, there's a long delay (I assume while the drive(s) spin up?) upon initial access. Then, if I try to transcode something on my PC (I used to use ... I think TVersity, something like that? - I stopped when it didn't work), performance is very poor - constant skipping, etc.

The PC runs modern games at high settings, so I don't think that's a hardware issue.

Is there something I can do on the FreeNAS config or my network to make it work better?

Relevant details that I can think of:

FreeNAS and PC are plugged into a Linksys EA2700 router. The NAS is an N40L with (I think) 4GB of RAM, running 4 WD Green drives (2TB each) in RAIDZ2.

Thanks, guys!

I thought Greens were a huge no-no in any sort of RAID array?


Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Loving Africa Chaps posted:

I thought Greens were a huge no-no in any sort of RAID array?

The main issue was with using them on hardware RAID cards: the Greens don't do TLER, so when they hit a bad sector they can spend ages on their own internal error recovery, and the RAID card takes that silence as a dead drive and reports the array broken, whereas RAID-specific drives give up after a few seconds and let the controller deal with it, so everything ends up fine. This is not an issue in pure software RAID, or other RAID-type setups. There was also a WD tool for capping that recovery time on the Green drives.

phosdex
Dec 16, 2005

writes to thin-provisioned virtual drives are pretty much always going to be slower than writes to thick ones

Touchfuzzy
Dec 5, 2010
Okay, so I took a look at some DIY NAS solutions, and I'm kinda liking the whole unRaid getup. For simply being a separate storage computer, the system requirements are pretty small, and the ability to just pop in extra drives as I need them or upgrade to higher-capacity drives in the future without much hassle sounds super nice -- much nicer than storing crap on disks in boxes or filling my actual workstation with more and more drives.

CPU: Intel Core i3-4150 3.5GHz Dual-Core Processor ($109.99 @ SuperBiiz)
Motherboard: ASRock C226WS ATX LGA1150 Motherboard ($183.99 @ SuperBiiz)
Memory: Mushkin Proline 16GB (2 x 8GB) DDR3-1333 Memory ($79.99 @ Newegg)
Storage: PNY CS2211 240GB 2.5" Solid State Drive ($69.99 @ Amazon)
Storage: Western Digital Red 5TB 3.5" 5400RPM Internal Hard Drive ($194.99 @ Amazon)
Storage: Western Digital Red 5TB 3.5" 5400RPM Internal Hard Drive ($194.99 @ Amazon)
Storage: Western Digital Red 5TB 3.5" 5400RPM Internal Hard Drive ($194.99 @ Amazon)
Power Supply: EVGA SuperNOVA P2 750W 80+ Platinum Certified Fully-Modular ATX Power Supply ($115.36 @ Amazon)
Other: unRaid USB Key + Plus Registration ($110.00)
Total: $1254.29

Looking at it a second time, it looks like a giant chunk of change, but I expect to use this for a long time. The motherboard supports ECC/Unbuffered memory, and has 10 SATA ports. I don't plan on going over that, so I'll be reusing an old Fractal Design case that has very nearly that many hard drive bays. I picked the power supply because it's got exactly 10 SATA power cords and has a 10 year warranty, as well as a strong single rail, which I read is preferred for these kinds of things.

The only things I'm a bit iffy about are the CPU and SSD. unRaid mentions something about using a dedicated drive for caching, and that it will improve speeds. I'm all for faster speeds as long as they're tangible, but if they're not, I don't feel bad removing it. Also, the CPU is basically the weakest thing I could find (since I don't plan on using many, if any, modules), aside from looking at AMD offerings, and I haven't bought AMD in a long time so I don't know any equivalent offerings.

To anyone who's built their own, does it look okay?

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Touchfuzzy posted:

Okay, so I took a look at some DIY NAS solutions, and I'm kinda liking the whole unRaid getup. For simply being a separate storage computer, the system requirements are pretty small, and the ability to just pop in extra drives as I need them or upgrade to higher-capacity drives in the future without much hassle sounds super nice -- much nicer than storing crap on disks in boxes or filling my actual workstation with more and more drives.

CPU: Intel Core i3-4150 3.5GHz Dual-Core Processor ($109.99 @ SuperBiiz)
Motherboard: ASRock C226WS ATX LGA1150 Motherboard ($183.99 @ SuperBiiz)
Memory: Mushkin Proline 16GB (2 x 8GB) DDR3-1333 Memory ($79.99 @ Newegg)
Storage: PNY CS2211 240GB 2.5" Solid State Drive ($69.99 @ Amazon)
Storage: Western Digital Red 5TB 3.5" 5400RPM Internal Hard Drive ($194.99 @ Amazon)
Storage: Western Digital Red 5TB 3.5" 5400RPM Internal Hard Drive ($194.99 @ Amazon)
Storage: Western Digital Red 5TB 3.5" 5400RPM Internal Hard Drive ($194.99 @ Amazon)
Power Supply: EVGA SuperNOVA P2 750W 80+ Platinum Certified Fully-Modular ATX Power Supply ($115.36 @ Amazon)
Other: unRaid USB Key + Plus Registration ($110.00)
Total: $1254.29

Looking at it a second time, it looks like a giant chunk of change, but I expect to use this for a long time. The motherboard supports ECC/Unbuffered memory, and has 10 SATA ports. I don't plan on going over that, so I'll be reusing an old Fractal Design case that has very nearly that many hard drive bays. I picked the power supply because it's got exactly 10 SATA power cords and has a 10 year warranty, as well as a strong single rail, which I read is preferred for these kinds of things.

The only things I'm a bit iffy about are the CPU and SSD. unRaid mentions something about using a dedicated drive for caching, and that it will improve speeds. I'm all for faster speeds as long as they're tangible, but if they're not, I don't feel bad removing it. Also, the CPU is basically the weakest thing I could find (since I don't plan on using many, if any, modules), aside from looking at AMD offerings, and I haven't bought AMD in a long time so I don't know any equivalent offerings.

To anyone who's built their own, does it look okay?

What is the SSD for? You can boot Unraid from a USB drive; in fact, I think the licensing thing kind of depends on it. Is the SSD for the cache drive? The cache drive works by you copying everything to that drive first (so no parity calc), and then it does the actual copy into the array every night (configurable). It will make it look like the files are in the right place, though, when looking at it from the network. However, if the cache drive dies, you'll lose that data, since it's not protected. Also, an SSD is probably overkill, since your network speed will cap out far sooner than the SSD write speed. I'd replace it with another 5TB drive; that way, you can use the cache drive as a warm spare if anything should fail.

Also, Unraid does not need 16GB of RAM; it's not ZFS. I ran mine on 4GB for years.
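If it helps to picture it, the mover is conceptually nothing more than a scheduled job that walks the cache and copies things onto the parity-protected array, then drops the cached copy once the write has landed. A very rough Python sketch of the idea (made-up paths, and definitely not Unraid's actual mover script):

code:

import shutil
from pathlib import Path

CACHE = Path("/mnt/cache")   # hypothetical cache mount
ARRAY = Path("/mnt/array")   # hypothetical parity-protected array mount

def nightly_mover():
    """Copy everything off the cache onto the array, then drop the cached copy."""
    for src in CACHE.rglob("*"):
        if not src.is_file():
            continue
        dest = ARRAY / src.relative_to(CACHE)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)                        # parity gets updated on this write
        if dest.stat().st_size == src.stat().st_size:  # crude sanity check before deleting
            src.unlink()

if __name__ == "__main__":
    nightly_mover()   # in real life this gets kicked off by a scheduler, e.g. nightly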

Touchfuzzy
Dec 5, 2010

Skandranon posted:

What is the SSD for? You can boot Unraid from a USB drive; in fact, I think the licensing thing kind of depends on it. Is the SSD for the cache drive? The cache drive works by you copying everything to that drive first (so no parity calc), and then it does the actual copy into the array every night (configurable). It will make it look like the files are in the right place, though, when looking at it from the network. However, if the cache drive dies, you'll lose that data, since it's not protected. Also, an SSD is probably overkill, since your network speed will cap out far sooner than the SSD write speed. I'd replace it with another 5TB drive; that way, you can use the cache drive as a warm spare if anything should fail.

Also, Unraid does not need 16GB of RAM; it's not ZFS. I ran mine on 4GB for years.

Yeah, it was supposed to be for caching, but if it's not really worth it, then sweet, out it goes. Also, I only went with the 16 GB of RAM because anything cheaper was only marginally cheaper (compared to the overall price). I could throw in a single 8 GB stick for only $40, as long as unRaid doesn't benefit from dual channel all that much.

So I can use one of the 5TB drives as a cache, then in the event of a failure, I can just tell unRaid to use that cache drive as the new replacement? If so, that's neat!

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Touchfuzzy posted:

Yeah, it was supposed to be for caching, but if it's not really worth it, then sweet, out it goes. Also, I only went with the 16 GB of RAM because anything cheaper was only marginally cheaper (compared to the overall price). I could throw in a single 8 GB stick for only $40, as long as unRaid doesn't benefit from dual channel all that much.

So I can use one of the 5TB drives as a cache, then in the event of a failure, I can just tell unRaid to use that cache drive as the new replacement? If so, that's neat!

You can actually tell Unraid to automatically start rebuilding on the cache drive if you want, though I think you run some risk of losing your cached data if it hasn't been copied over.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Touchfuzzy posted:

Okay, so I took a look at some DIY NAS solutions, and I'm kinda liking the whole unRaid getup. For simply being a separate storage computer, the system requirements are pretty small, and the ability to just pop in extra drives as I need them or upgrade to higher-capacity drives in the future without much hassle sounds super nice -- much nicer than storing crap on disks in boxes or filling my actual workstation with more and more drives.

CPU: Intel Core i3-4150 3.5GHz Dual-Core Processor ($109.99 @ SuperBiiz)
Motherboard: ASRock C226WS ATX LGA1150 Motherboard ($183.99 @ SuperBiiz)
Memory: Mushkin Proline 16GB (2 x 8GB) DDR3-1333 Memory ($79.99 @ Newegg)
Storage: PNY CS2211 240GB 2.5" Solid State Drive ($69.99 @ Amazon)
Storage: Western Digital Red 5TB 3.5" 5400RPM Internal Hard Drive ($194.99 @ Amazon)
Storage: Western Digital Red 5TB 3.5" 5400RPM Internal Hard Drive ($194.99 @ Amazon)
Storage: Western Digital Red 5TB 3.5" 5400RPM Internal Hard Drive ($194.99 @ Amazon)
Power Supply: EVGA SuperNOVA P2 750W 80+ Platinum Certified Fully-Modular ATX Power Supply ($115.36 @ Amazon)
Other: unRaid USB Key + Plus Registration ($110.00)
Total: $1254.29

Looking at it a second time, it looks like a giant chunk of change, but I expect to use this for a long time. The motherboard supports ECC/Unbuffered memory, and has 10 SATA ports. I don't plan on going over that, so I'll be reusing an old Fractal Design case that has very nearly that many hard drive bays. I picked the power supply because it's got exactly 10 SATA power cords and has a 10 year warranty, as well as a strong single rail, which I read is preferred for these kinds of things.

The only things I'm a bit iffy about are the CPU and SSD. unRaid mentions something about using a dedicated drive for caching, and that it will improve speeds. I'm all for faster speeds as long as they're tangible, but if they're not, I don't feel bad removing it. Also, the CPU is basically the weakest thing I could find (since I don't plan on using many, if any, modules), aside from looking at AMD offerings, and I haven't bought AMD in a long time so I don't know any equivalent offerings.

To anyone who's built their own, does it look okay?

PSU is pretty silly but will work fine.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Don Lapre posted:

PSU is pretty silly but will work fine.

It's not bad... room to expand, and he can feel good about saving the environment over the lifetime of it. Also, you don't want to skimp on the PSU for something that runs 24/7. Probably don't need the fully modular part though; hopefully you aren't dicking around with this thing often. I'd go for a Seasonic that's not modular. A lot of the modular power supplies cheap out on the internals to fit the modular part within a certain budget.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
Well yeah, room to expand, but my 5-drive NAS with an i3 pulls all of 90W under load from the wall, on a 450W Bronze PSU.

phosdex
Dec 16, 2005

those bronze, silver, gold, plat ratings are measured at set load points (20/50/100% of the PSU's rating). Down at the tiny loads a NAS idles at, efficiency falls off and the tiers aren't really any different.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
One of the better things you can do for efficiency with lower-power NASes is to go with as low a wattage rating on the PSU as will keep it at around 40% load. Even with a platinum-rated 600W PSU, you won't do as well pulling 60W as you would with a 150W PSU, and even if you got close, the cost of the expensive PSU would exceed the power savings. Where a nice PSU matters is in not shorting out all your crap, and that's not a problem for 90%+ of PSUs on the market today. And if you're crazy about reliability, you should be using a rack with redundant PSUs anyway.
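Back-of-the-envelope numbers, if you want to see why (everything below is a made-up but plausible figure):

code:

# Does a fancier PSU pay for itself on a low-draw NAS? Illustrative numbers only.
dc_load_w  = 60          # what the NAS actually pulls on the DC side
eff_cheap  = 0.82        # mediocre efficiency at this light load
eff_fancy  = 0.92        # platinum-ish efficiency at the same load
price_kwh  = 0.12        # USD per kWh
hours_year = 24 * 365

wall_cheap = dc_load_w / eff_cheap       # watts drawn at the wall
wall_fancy = dc_load_w / eff_fancy
saved_kwh  = (wall_cheap - wall_fancy) * hours_year / 1000
print(f"saved per year: {saved_kwh:.0f} kWh (~${saved_kwh * price_kwh:.2f})")
# roughly 70 kWh / ~$8 a year, so a $60+ price premium takes years to claw back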

simcole
Sep 13, 2003
PATHETIC STALKER
I need help picking a NAS OS. I have an HTPC currently with one 1TB HD running Windows 10, btsync, and Plex. I currently use Plex a lot, and at Christmas I also use a Windows program to animate my exterior lights to music. That being said, I ordered three 3TB WD Red drives today, and I'm ready to plunge into full-blown NAS world. I'm kind of leaning toward FreeNAS or unRaid, but I'm not sure how the Plex support is on either, or whether I should consider another alternative. Also, how do I configure some kind of boot loader if I don't go with a Windows OS, so that 1TB drive can still run my light show? Thoughts and guidance are appreciated.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

simcole posted:

I need help picking a NAS OS. I have an HTPC currently with one 1TB HD running Windows 10, btsync, and Plex. I currently use Plex a lot, and at Christmas I also use a Windows program to animate my exterior lights to music. That being said, I ordered three 3TB WD Red drives today, and I'm ready to plunge into full-blown NAS world. I'm kind of leaning toward FreeNAS or unRaid, but I'm not sure how the Plex support is on either, or whether I should consider another alternative. Also, how do I configure some kind of boot loader if I don't go with a Windows OS, so that 1TB drive can still run my light show? Thoughts and guidance are appreciated.

Are you planning on putting these drives into the same box as your Windows 10 machine? If so, I would keep Windows 10, and do either a) Software RAID5 or b) Create a SnapRaid array. SnapRaid is a lot like Unraid, except it is free, and can run on top of Windows. This way you can keep your current Windows machine running without a hiccup.

rizzo1001
Jan 3, 2001

Touchfuzzy posted:

Motherboard: ASRock C226WS ATX LGA1150 Motherboard ($183.99 @ SuperBiiz)

Does that board not have IPMI or AMT? I tried to look but it might be worth double checking.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The ASRock LGA1150 C226 board I have definitely has IPMI - I've been using it for over a couple of years now. Newer Supermicro boards requiring an extra software license to use IPMI is a huge annoyance, so I'd be careful with those boards. ASRock doesn't seem to have problems like that, though.

insularis
Sep 21, 2002

Donated $20. Get well, Lowtax.
Fun Shoe

necrobobsledder posted:

The ASRock LGA1150 C226 board I have definitely has IPMI - I've been using it for over a couple of years now. Newer Supermicro boards requiring an extra software license to use IPMI is a huge annoyance, so I'd be careful with those boards. ASRock doesn't seem to have problems like that, though.

Are you sure that's SuperMicro and not some other vendor? I've deployed 6 new-model boards from them this year, from 1151 Xeon E3 to dual E5s, and IPMI has worked the same as ever.

The only licensing I'm aware of is for updating the BIOS directly through IPMI without a CPU installed.

simcole
Sep 13, 2003
PATHETIC STALKER

Skandranon posted:

Are you planning on putting these drives into the same box as your Windows 10 machine? If so, I would keep Windows 10, and do either a) Software RAID5 or b) Create a SnapRaid array. SnapRaid is a lot like Unraid, except it is free, and can run on top of Windows. This way you can keep your current Windows machine running without a hiccup.

How's the SnapRAID performance hit on top of Windows? StableBit DrivePool looks similar. I'm so new I don't know the pros/cons of those two.

More or less this HTPC is doing nothing but transcoding, and I don't think I need to switch from Windows, but if I do, I still need to dual boot somehow for my light show.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

simcole posted:

How's the SnapRAID performance hit on top of Windows? StableBit DrivePool looks similar. I'm so new I don't know the pros/cons of those two.

More or less this HTPC is doing nothing but transcoding, and I don't think I need to switch from Windows, but if I do, I still need to dual boot somehow for my light show.

It's basically the same as reading from the drives individually. Your write performance is the same, and your parity is written by a scheduled batch job. If you want to do some pooling to create what looks like a single drive, you'll need something else; I forget what I'm using at the moment, but it's not a hard thing to set up. I'm not that familiar with StableBit's offerings, but they cost money and are more proprietary. I think the main benefit there is just a more polished UI for managing things.
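The pooling part is conceptually just a union view over the individual data drives - something in the spirit of this sketch (hypothetical drive letters, and real pooling tools obviously do far more than this, like placing new files and handling writes):

code:

from pathlib import Path

# Hypothetical individual data drives that the pool presents as one tree
DRIVES = [Path("D:/"), Path("E:/"), Path("F:/")]

def pooled_listing(subfolder=""):
    """Show the union of a folder across all data drives, the way a pooler would."""
    seen = {}
    for drive in DRIVES:
        folder = drive / subfolder
        if folder.is_dir():
            for entry in folder.iterdir():
                seen.setdefault(entry.name, entry)   # first drive wins on name clashes
    return sorted(seen.values(), key=lambda p: p.name)

for item in pooled_listing("Movies"):
    print(item)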

rizzo1001
Jan 3, 2001

necrobobsledder posted:

The ASRock LGA1150 C226 board I have definitely has IPMI - I've been using it for over a couple of years now. Newer Supermicro boards requiring an extra software license to use IPMI is a huge annoyance, so I'd be careful with those boards. ASRock doesn't seem to have problems like that, though.

Which board do you have, the E3C226D2I? Anyway I don't see anything in the manual regarding IPMI for that particular board whereas the D2I has a separate IPMI config .pdf online. Maybe it's an undocumented feature?

Photex
Apr 6, 2009




Skandranon posted:

I prefer mid-full tower cases, so you have a lot more room to upgrade if you should so desire. Been using an Antec 1200 for over 5 years as my main storage appliance, which easily fits 12 drives with plenty of room for cooling. Could push it to 16 with some 3to4 bay drive converters, or even 20 if using a sideways 3-5. I expect to be using this case for another 5 years at least.

I don't really see myself adding more than 4 drives for this build; I put it together to compete against a Synology 4-bay, and I think I did a pretty good job of that.

http://pcpartpicker.com/p/VfstXL

anyone else have any other comments before I pull the trigger on this stuff?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

insularis posted:

Are you sure that's SuperMicro and not some other vendor? I've deployed 6 new-model boards from them this year, from 1151 Xeon E3 to dual E5s, and IPMI has worked the same as ever.

The only licensing I'm aware of is for updating the BIOS directly through IPMI without a CPU installed.
See the reviews for this board for the sole reference I've seen: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182964 - it may be FUD, but I'd check up carefully on anything to make sure everything's as expected before dropping $800+ on a motherboard/CPU combo anyway.

rizzo1001 posted:

Which board do you have, the E3C226D2I? Anyway I don't see anything in the manual regarding IPMI for that particular board whereas the D2I has a separate IPMI config .pdf online. Maybe it's an undocumented feature?
Not sure how you got the impression that it's missing from any source worth considering. Evidently I have a C224 board, but that's not a big difference anyway. It's the E3C224D2I, and it says IPMI right there on the ASRock page: http://www.asrockrack.com/general/productdetail.asp?Model=E3C224D2I#Specifications The E3C226D2I page has it listed just the same: http://www.asrockrack.com/general/productdetail.asp?Model=E3C226D2I#Specifications

phosdex
Dec 16, 2005

that supermicro review is about bios upgrading via ipmi like insularis mentioned. IPMI works, you just can't update bios through it without a key.

rizzo1001
Jan 3, 2001

necrobobsledder posted:


Not sure how you got the impression that it's missing from any source worth considering. Evidently I have a C224 board but that's not a big deal difference anyway. It's the E3C224D2I, says IPMI right there on the ASRock page http://www.asrockrack.com/general/productdetail.asp?Model=E3C224D2I#Specifications The E3C226D2I page has it listed just the same http://www.asrockrack.com/general/productdetail.asp?Model=E3C226D2I#Specifications

Sorry yeah my post wasn't very clear, I was referencing the C226WS board that was picked out above.

http://www.asrockrack.com/general/productdetail.asp?Model=C226%20WS#Specifications

It looks like the WS board is missing IPMI according to the ASRock page, whereas they do in fact list the feature for the E3C226D2I and E3C224D2I.

Furism
Feb 21, 2006

Live long and headbang
I have an older Synology 4-slot NAS that's been acting up recently. It takes about 20-30 presses of the power button to turn it on, and I don't take this as a good sign. It's 4 or 5 years old, so not under warranty anymore. I was thinking of replacing it before something more important breaks (like the CF card or whatever they store the OS on - that happened to me in the past, on a Synology as well).

I was thinking of getting something with an x86 processor because, as far as I know, software RAID is faster on those than on ARM? I don't need any fancy features (web UI, BitTorrent, DLNA, ...), just SSH (for rsync) and CIFS/NFS. Is there any brand or model that's basically "simple but sturdy"?

Alternatively I could go for a DIY solution, but those seemed quite expensive. I need 4x 3.5" slots and 1x 2.5" (for the OS, though I guess I could boot from a USB drive... not sure how reliable those would be in the long term), if possible on a very small motherboard and case, with very low power consumption as well. Mini-ITX motherboards are incredibly expensive, apparently. Note that I live in Europe, so Newegg and websites like that don't work for me.

So what would be the best solution for my problem?

Nulldevice
Jun 17, 2006
Toilet Rascal

Furism posted:

I have an older Synology 4-slot NAS that's been acting up recently. It takes about 20-30 presses of the power button to turn it on, and I don't take this as a good sign. It's 4 or 5 years old, so not under warranty anymore. I was thinking of replacing it before something more important breaks (like the CF card or whatever they store the OS on - that happened to me in the past, on a Synology as well).

I was thinking of getting something with an x86 processor because, as far as I know, software RAID is faster on those than on ARM? I don't need any fancy features (web UI, BitTorrent, DLNA, ...), just SSH (for rsync) and CIFS/NFS. Is there any brand or model that's basically "simple but sturdy"?

Alternatively I could go for a DIY solution, but those seemed quite expensive. I need 4x 3.5" slots and 1x 2.5" (for the OS, though I guess I could boot from a USB drive... not sure how reliable those would be in the long term), if possible on a very small motherboard and case, with very low power consumption as well. Mini-ITX motherboards are incredibly expensive, apparently. Note that I live in Europe, so Newegg and websites like that don't work for me.

So what would be the best solution for my problem?

QNAP might not be so bad for what you're looking for. I've got one at work that I use for various purposes. It's similar to the Synology but doesn't cost as much. I've got a four-bay unit. It does support SSH login and has an rsync client/daemon. I think I paid about $220 for it on sale. It does have an ARM processor in it, but it really performs very well: I'm currently running a RAID 6 array on it, there's no real latency, and I get near wire speed with it. This is a TS-431+. As always, YMMV, but I think they're worth looking into.

Yaoi Gagarin
Feb 20, 2014

Got a question about freenas. I've read a few articles that suggest that using striped mirrors is actually safer in zfs than raidz2, because the rebuild time is a lot shorter. Like if you lose a drive in raidz2 the whole array is going to be worked over but if you lose a drive in a striped mirror it's only the partner drive. Looking for the goon opinion on this. Would it make sense to start off with a 4tb mirror pool and slowly expand with more mirrors as needed, or should I save up for a full 6 drive raidz2?

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Furism posted:

I have an older Synology 4-slot NAS that's been acting up recently. It takes about 20-30 presses of the power button to turn it on, and I don't take this as a good sign. It's 4 or 5 years old, so not under warranty anymore. I was thinking of replacing it before something more important breaks (like the CF card or whatever they store the OS on - that happened to me in the past, on a Synology as well).

I was thinking of getting something with an x86 processor because, as far as I know, software RAID is faster on those than on ARM? I don't need any fancy features (web UI, BitTorrent, DLNA, ...), just SSH (for rsync) and CIFS/NFS. Is there any brand or model that's basically "simple but sturdy"?

Alternatively I could go for a DIY solution, but those seemed quite expensive. I need 4x 3.5" slots and 1x 2.5" (for the OS, though I guess I could boot from a USB drive... not sure how reliable those would be in the long term), if possible on a very small motherboard and case, with very low power consumption as well. Mini-ITX motherboards are incredibly expensive, apparently. Note that I live in Europe, so Newegg and websites like that don't work for me.

So what would be the best solution for my problem?

You can do xpenology and migrate your array over.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

VostokProgram posted:

Got a question about freenas. I've read a few articles that suggest that using striped mirrors is actually safer in zfs than raidz2, because the rebuild time is a lot shorter. Like if you lose a drive in raidz2 the whole array is going to be worked over but if you lose a drive in a striped mirror it's only the partner drive. Looking for the goon opinion on this. Would it make sense to start off with a 4tb mirror pool and slowly expand with more mirrors as needed, or should I save up for a full 6 drive raidz2?

The big thing is surviving any two drives failing or one drive failing with data errors on another. That means each mirror needs three drives. It gets a lot more expensive to do things with mirrors but it will rebuild a lot faster. It depends on what tradeoffs you're willing to make.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

VostokProgram posted:

Got a question about freenas. I've read a few articles that suggest that using striped mirrors is actually safer in zfs than raidz2, because the rebuild time is a lot shorter. Like if you lose a drive in raidz2 the whole array is going to be worked over but if you lose a drive in a striped mirror it's only the partner drive. Looking for the goon opinion on this. Would it make sense to start off with a 4tb mirror pool and slowly expand with more mirrors as needed, or should I save up for a full 6 drive raidz2?

Yes, striped mirrors are inherently safer. You're also getting less than 50% of the total capacity of the drives you're running. At 6 drives (the minimum for striped mirrors, because you need two drives plus a third for parity, and then a second set to match), you're losing 4 drives to redundancy (your parity drive on one side of the mirror, plus the entire other side). You can tolerate the loss of one entire side of the mirror without issue, plus one drive of the other side. Whereas a z2 is losing 2 drives to it. Less fault tolerance, more capacity. It's a matter of priorities. And if you care about read IOPS, which would lean toward the striped mirrors as well, because they should be a fair bit faster.

Edit: I apparently was half asleep when I wrote this and it's probably safer to disregard what I said and listen to other folks in the thread. Leaving the original text intact though.

G-Prime fucked around with this message at 13:31 on Apr 17, 2016

BlankSystemDaemon
Mar 13, 2009



G-Prime posted:

Yes, striped mirrors are inherently safer.
This is demonstrably untrue.
Assuming 4 disks, raidz2 will protect against data-loss from any two disks failing whereas a striped mirror can lose data if two specific drives fail.
Additionally, while striped mirrors do rebuild faster, that's a moot point if you're not running hotspare(s) and have to go buy a disk, and then also have to shut down the machine, because hotplug without extra capacitors and a backplane isn't something you want to be gambling on when you've got a degraded array.
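You can sanity-check the failure part by just enumerating every two-disk failure for four disks (quick sketch, assuming a plain 2x2 striped mirror versus a 4-disk raidz2):

code:

from itertools import combinations

DISKS = ["A1", "A2", "B1", "B2"]   # A*/B* are the two pairs in a 2x2 striped mirror

def mirror_survives(failed):
    # the pool survives as long as each mirror pair still has one working member
    return not ({"A1", "A2"} <= set(failed) or {"B1", "B2"} <= set(failed))

def raidz2_survives(failed):
    # a raidz2 vdev tolerates any two failures
    return len(failed) <= 2

for failed in combinations(DISKS, 2):
    print(failed,
          "mirror ok" if mirror_survives(failed) else "mirror DEAD",
          "| raidz2 ok" if raidz2_survives(failed) else "| raidz2 DEAD")
# 2 of the 6 possible two-disk failures kill the striped mirror; raidz2 survives all 6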

As for IOPS, in that respect you're much better off on ZFS if you stuff the machine full of memory to increase ARC size, and - if you need extra scratch/work space, adding additional L2ARC via a SSD.

BlankSystemDaemon fucked around with this message at 11:11 on Apr 15, 2016

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
When dealing with cold data in ZFS, you're better off with (striped) mirrors, because with RAIDZ, for every block, all non-parity disks need to be touched. Whereas with mirrors, ZFS is free to run multiple requests in quasi-parallel.

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

When dealing with cold data in ZFS, you're better off with (striped) mirrors, because with RAIDZ, for every block, all non-parity disks need to be touched. Whereas with mirrors, ZFS is free to run multiple requests in quasi-parallel.
Better off in what respect? IOPS or read/write? L2ARC can be added to both RAIDzN and (striped) mirror(s) and is much faster.

Yaoi Gagarin
Feb 20, 2014

G-Prime posted:

Yes, striped mirrors are inherently safer. You're also getting less than 50% of the total capacity of the drives you're running. At 6 drives (the minimum for striped mirrors, because you need two drives plus a third for parity, and then a second set to match), you're losing 4 drives to redundancy (your parity drive on one side of the mirror, plus the entire other side). You can tolerate the loss of one entire side of the mirror without issue, plus one drive of the other side. Whereas a z2 is losing 2 drives to it. Less fault tolerance, more capacity. It's a matter of priorities. And if you care about read IOPS, which would lean toward the striped mirrors as well, because they should be a fair bit faster.

I think we have a different understanding of striped mirrors? Sounds like you're suggesting two raidz1 vdevs in a single pool?

My plan was to start with a single-vdev pool. The vdev would be two drives in mirror configuration. Then if I needed to add more space later I would just add more vdevs. That's theoretically less safe than a raidz2 vdev. But supposedly the idea is that with raidz2 the rebuild process could take out two more drives, since it takes so long. Whereas with mirrors the rebuild is a simple copy. Does that make sense? Am I overestimating how dangerous raidz2 rebuilds are?

Also I don't care about IOPS at all, as long as the array can support watching videos.

E: here is the article I'm basing this on.
http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

I'd appreciate insight re: whether that article is full of poo poo or not

Yaoi Gagarin fucked around with this message at 20:01 on Apr 15, 2016

BlankSystemDaemon
Mar 13, 2009



From a quick read through the article, his point seems to be that if all you care about is high storage efficiency with very limited redundancy while keeping initial setup and future expansion costs low, you can use mirrored vdevs.
That's a tall set of assumptions, including some you can't even begin to make guesses about; as an example, I didn't know when I built my pool that I'd be using it to store what I'm currently storing on it - and had I known, I'd probably have opted for more redundancy.

So yes, the solution makes sense with the above provisos taken into account - but do you really care about your data that little?

BlankSystemDaemon fucked around with this message at 20:33 on Apr 15, 2016

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Mirrors are inherently faster in that an I/O request can be issued to multiple physical storage groups and the one with more ideal head positioning on disks will be able to serve the request sooner. A similar action can happen in a write scenario depending upon consistency guarantees by the file system (transaction may commit but the write may not be replicated across all mirrors at the time). Given how slow most I/O is compared to CPU (even with these new SSDs capable of 4 GBps+ sustained transfer), a lot of I/O schedulers will be able to determine which mirror will be able to make the request faster instead of amplifying reads (or writes) and schedule requests reasonably well for reads.

Most database folks prefer some form of mirroring to reduce disk access latency, sustain deeper queues, and thus improve IOPS. The problem with mirrors is that you lose total available capacity, but the bonus of extra reliability is always welcome. Mirrors are favored for high-IOPS systems that also have stringent availability requirements of their own. I wouldn't bother with mirrors on, say, a Kafka or HDFS node in AWS (Kafka has its own algorithms designed around sequential writes that I think may make mirrored RAID less critical for throughput). But every production Postgres or MySQL server I've seen that was worth anything was using some mirrored form of disk redundancy, and that's where I start before even talking about file systems.

Striped vdevs will get you greater capacity, and ZFS will roughly round-robin writes across all devices. It is imperative, from both an availability and a performance perspective, that your vdevs are balanced in latency. As a horrific classic example, if you create a RAID0 of 5 SSDs and a single hard drive, you will be screwed by the single hard drive on almost every read and write. ZFS mirror selection is substantially better than a naive RAID0, with block reads preferred by the lower-latency device on average, although this wasn't true several years ago. And as queuing theory shows, if you spread the same load across multiple queues whose processing times differ by perhaps 50%, the queue at the slowest one doesn't end up just 50% longer than at the fastest - it can end up several times as long.
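Rough illustration using the textbook M/M/1 result for the average number of requests in the system, L = rho / (1 - rho). The numbers are purely illustrative, and real disks obviously aren't literal M/M/1 queues:

code:

# Why a slower device's queue blows up non-linearly (toy M/M/1 model).
arrival_rate = 0.6            # identical request rate fed to each device
mu_fast      = 1.0            # fast device: 1 request per unit time
mu_slow      = 1.0 / 1.5      # slow device: takes 50% longer per request

for name, mu in [("fast", mu_fast), ("slow", mu_slow)]:
    rho = arrival_rate / mu               # utilisation
    L = rho / (1 - rho)                   # average number of requests in the system
    print(f"{name}: utilisation {rho:.2f}, average queue length {L:.1f}")
# fast: utilisation 0.60, queue ~1.5 -- slow: utilisation 0.90, queue ~9.0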

diremonk
Jun 17, 2008

I currently have a Western Digital EX4 NAS serving files to an older Windows system (Intel i7 920, 12 gigs ram). It is primarily a Plex and Windows Media Center server, although WMC is just used to record OTA broadcasts. I'll probably setup a way for my parents and sister to backup files to it since they currently do no backups.

I'm thinking about going a couple different directions for an upgrade and I need someone to idiot check me so I don't waste money. My budget is about $1000-1400 if that makes a difference

Option 1 - Purchasing a QNAP TS-453A NAS and throwing in either 4 4TB or 4 3TB drives in it.
I'm semi leaning towards this because the power demand for it will be a lot lower than the tower I'm currently using. But going this way I lose some of the flexibility of having an actual PC do the heavy lifting. Also since I share out my library with my family, I have had 5 streams going out at the same time (1 local, 4 remote) so I'm not sure if the processor will handle that amount of transcoding.

Option 2 - Purchasing a Intel NUC (i7 model)
Buy as close to a top of the line NUC and have that take over the processing side of things but I would probably have to purchase another NAS or external enclosure so I don't run out of room anytime soon. Wouldn't install Windows on this, probably some flavor of linux.

Option 3 - Build a new PC
I'd kind of like to do this, but since I've never built a pc from the ground up I'm a bit nervous. And when you add in hardware Raid cards, i7 v. Xeon, etc I start to get a headache. But doing this would more than likely give the most bang for the dollar? Plus I'd like to start playing around with virtualization

Option 4 - Continue with the existing setup, only swap over to a different OS like FreeNAS or unRaid
This would be a cheaper option, just throwing more drives into the tower and setting up a new OS. I think the motherboard only has 5 SATA-2 ports on it, so I'd need a hardware RAID card? It also only has 3 3.5-inch drive bays and 3 5.25-inch bays. I guess I could rip everything out and put it all in a new case with more room.

Any advice on the direction I should move towards?

g0del
Jan 9, 2001



Fun Shoe

VostokProgram posted:

I think we have a different understanding of striped mirrors? Sounds like you're suggesting two raidz1 vdevs in a single pool?

My plan was to start with a single-vdev pool. The vdev would be two drives in mirror configuration. Then if I needed to add more space later I would just add more vdevs. That's theoretically less safe than a raidz2 vdev. But supposedly the idea is that with raidz2 the rebuild process could take out two more drives, since it takes so long. Whereas with mirrors the rebuild is a simple copy. Does that make sense? Am I overestimating how dangerous raidz2 rebuilds are?

Also I don't care about IOPS at all, as long as the array can support watching videos.

E: here is the article I'm basing this on.
http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

I'd appreciate insight re: whether that article is full of poo poo or not
I wouldn't say it's full of poo poo, but it's definitely not talking to you. His arguments basically come down to "you can totally afford to buy 2 hard drives for every one you use" and "you'll need mirrored drives to wring all the IOPS possible out of your array, especially when resilvering". The second definitely doesn't apply to you, and most home users can't afford to throw money at redundancy the way businesses do, so the first probably doesn't apply either.

As for the danger - I've done dozens of resilvers on RAIDZ1* and RAIDZ2 vdevs, both at home and at work, and I've never lost a pool during one. People talk about the dangers of getting a URE while rebuilding a regular RAID5 array and somehow think that applies to ZFS. It doesn't. If you did get a URE during the rebuild of a RAIDZ array, ZFS would simply mark that particular file as unreadable; it wouldn't immediately kill the whole pool.

Your pool will technically be slower with a failed drive and during a resilver, but for home use that won't be noticeable. And as for IOPS, unless you spent way too much money on 10Gb networking throughout your house, your network will be the bottleneck when watching movies, not pool performance.
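In round numbers (all ballpark figures, obviously):

code:

# Why gigabit is the bottleneck for home streaming, in round numbers.
gigabit_mbyte_s    = 1000 / 8 * 0.93   # ~116 MB/s after typical protocol overhead
single_red_mbyte_s = 120               # ballpark sequential rate of one 5400rpm Red
remux_mbit_s       = 60                # a heavy 1080p Blu-ray remux, give or take

print(f"wire:     ~{gigabit_mbyte_s:.0f} MB/s")
print(f"one disk: ~{single_red_mbyte_s} MB/s (and a degraded pool still reads from several)")
print(f"streams a gigabit link can carry: ~{1000 * 0.93 / remux_mbit_s:.0f}")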

Oh, and his assertion that "gnarly" parity calculations are what kill your performance during a rebuild is just dumb.

Just stress test your drives first to weed out any bad ones and then set them up in RAIDZ2 in a machine with plenty of RAM.


* It was put in place before I started working there, I would have designed things differently if I'd been around when it was set up.

OldSenileGuy
Mar 13, 2001
I don't know a lot on the topic of RAIDs, and I've done a little research and I think I already know the answer to my question, but I just wanted to make sure I haven't missed anything:

I recently bought this RAID secondhand:

http://www.sansdigital.com/towerraid-plus/tr5mp.html

The RAID was sold to me with 5 drives (4x 2TB and 1x 3TB), but I only got the enclosure; I didn't get the eSATA card that it apparently can come bundled with. Since I didn't get this card, does that mean I can only use this RAID as RAID0, meaning that the 5 drives within are read as one 11TB drive, but if one of them fails I lose the whole thing?

I was planning on making it into a RAID5 array by replacing the 3TB drive with a 2TB drive, but I didn't realize I needed this card. From what I can tell, without it I'm stuck with RAID0.

I'm using it with OSX if it matters.
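For what it's worth, my rough capacity math so far (assuming the usual behaviour where a RAID5 only uses each member up to the smallest drive's size):

code:

# Rough usable capacity for that enclosure (TB, decimal, ignoring overhead)
drives = [2, 2, 2, 2, 3]

raid0 = sum(drives)                              # striping: everything is usable
raid5 = (len(drives) - 1) * min(drives)          # RAID5: smallest drive sets the stripe
print(f"RAID0: {raid0} TB   RAID5: {raid5} TB")  # 11 TB vs 8 TB
# note: swapping the 3TB for a 2TB wouldn't change the RAID5 number at all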


Thanks Ants
May 21, 2004

#essereFerrari


It's just an enclosure so the RAID levels available to you are going to be determined by whatever RAID controller you connect it to.

Say for instance you connected it to your Mac using this:

http://www.amazon.com/dp/B00TYF2AFA

Each individual disk would show up in OS X and you would have the option to build them into a software RAID using Disk Utility.
