Jamy
May 8, 2007

by I Ozma Myself

PopeOnARope posted:

That's it, I think I'm fed up with using consumer drives inside RAID arrays.

I've had a ST341000340AS drop out of my RAID array twice in the last two weeks, thank gently caress that's not my boot array.

You always get what you pay for. I've had my 8 WD RE2 drives running nearly 24/7 for the past 6 months and except for the 2-3 drives that came DOA or had some shipping damage they've been flying smooth.

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

Jamy posted:

You always get what you pay for. I've had my 8 WD RE2 drives running nearly 24/7 for the past 6 months and except for the 2-3 drives that came DOA or had some shipping damage they've been flying smooth.

And I do really appreciate the difference in standards, I just can't stomach it when the RAID drives are TWICE the loving price.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Jamy posted:

You always get what you pay for. I've had my 8 WD RE2 drives running nearly 24/7 for the past 6 months and except for the 2-3 drives that came DOA or had some shipping damage they've been flying smooth.
And I've had 4 GP drives running in a raidz that gets a good amount of activity for 16 months, and other than the one drive that died right after I got it and was replaced w/ advanced exchange, I've had no issues.

No matter the vendor or model, someone has had a bad experience and many others have had good ones. Other than the 1.5TB Seagate fiasco, a (consumer) drive is pretty much a drive.

Wanderer89
Oct 12, 2009
Hello again all, it's time to upgrade my raidz opensolaris from 4x1tb to 6x1tb, and that also means it's outgrown its original Antec Sonata II case...

I've been looking through newegg for a suitable replacement but it looks like the 10x5.25 bay rosewill I had picked out months ago has been discontinued, and not finding anything interesting in the mid-tower category.

However, I've seen a few 4U rackmounts for under $80 that seem nice, but this will be the first rackmount I've had at home, so what am I getting myself into? At some point a rack may be plausible, but it needs to serve double duty as an HTPC for a little while first. Can I just leave it on a well-ventilated floor or stand? Coffee table or something?

Any suggestions for cheap cases? Need at least 7x3.5, + 3x5.25 or more 3.5 for future expansion...

Scuttle_SE
Jun 2, 2005
I like mammaries
Pillbug
So, I've managed to get my hands on a Dell Perc 6/i card, and I was thinking I'd build a raid5-array on it. I was thinking getting four of either the WD Green 1.5TB drives or the Samsung Ecogreen 1.5TB drives.

Now... After some googling and reading various threads I have found two things.

* Running a RAID5-array on consumer disks is hard, due to TLER loving things up.

* Running a RAID5-array on consumer disks works just fine.

What's the real verdict? Will TLER gently caress me, or does it run just fine? Will the randomness of the spin-speed on the WD Green disks cause trouble?

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I have a similar question. I'm planning a machine to use as a freeNAS box, running a 5 drive Raidz array as a start. Is there any issue with using the Green drives like this? I know running them in a "real" raid setup can/does cause issues, but how do they work in a zfs pool?

Jamy
May 8, 2007

by I Ozma Myself

PopeOnARope posted:

And I do really appreciate the difference in standards, I just can't stomach it when the RAID drives are TWICE the loving price.

I can't argue with that. It is a bit ridiculous.

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

Scuttle_SE posted:

So, I've managed to get my hands on a Dell Perc 6/i card, and I was thinking I'd build a raid5-array on it. I was thinking getting four of either the WD Green 1.5TB drives or the Samsung Ecogreen 1.5TB drives.

Now... After some googling and reading various threads I have found two things.

* Running a RAID5-array on consumer disks is hard, due to TLER loving things up.

* Running a RAID5-array on consumer disks works just fine.

What's the real verdict? Will TLER gently caress me, or does it run just fine? Will the randomness of the spin-speed on the WD Green disks cause trouble?

I'm currently running a RAID-5 Array on 4x WD and 1x Seagate drives - the WDs have TLER enabled, and since then, have stopped dropping out of the array.

The Seagate doesn't. I've had to do two 50-hour rebuilds this week.

So uh yeah. Use WD drives, turn TLER on, and head parking off, and hope it doesn't gently caress up?

To clarify - the seagate usually drops from the array when the system comes back from sleep mode, and doesn't spin the drive up in time.
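
For reference, the TLER knob the thread keeps mentioning is the drive's SCT ERC setting, and on drives that still expose it, newer smartmontools can read and set it. A sketch (the device path is a placeholder, and plenty of post-2009 WD consumer drives simply refuse the command):

```shell
# SCT ERC (WD's "TLER") caps how long the drive retries a bad sector
# internally, so the RAID controller sees an error instead of a long
# stall and doesn't kick the disk. Run as root on a real device:
#   smartctl -l scterc /dev/sda          # show current read/write ERC
#   smartctl -l scterc,70,70 /dev/sda    # set both, in tenths of a second
# i.e. a short cap instead of the multi-minute consumer default:
echo "$(( 70 / 10 )) second ERC limit"
```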

Scuttle_SE
Jun 2, 2005
I like mammaries
Pillbug

PopeOnARope posted:

So uh yeah. Use WD drives, turn TLER on, and head parking off, and hope it doesn't gently caress up?

Can I mess with TLER on newer WD drives? I seem to recall WD locking that down or something...

Edit: Found this on the Wikipedia

code:
Note: Western Digital (1.5TB Green Power) WD15EADS-00P8B0 (Nov 2009) drives do not support TLER. 
WD15EADS-00S2B0 (Feb 2010) models do support TLER.
So...I just have to make sure to get a drive manufactured after feb 2010?

Scuttle_SE fucked around with this message at 20:48 on Jun 13, 2010
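
One way to check which revision you actually received, since the TLER cutoff follows the model suffix and not the retail box (a sketch; the device path and model string are examples):

```shell
# Run as root against a real device to see the full model number:
#   smartctl -i /dev/sda | grep -E 'Device Model|Firmware Version'
# The suffix after the dash is the revision that matters:
MODEL="WDC WD15EADS-00S2B0"     # example identify string
echo "revision: ${MODEL##*-}"   # strips everything up to the last dash
```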

movax
Aug 30, 2008

So I've pretty much filled my existing RAID-Z2 array (8x1.5TB)...I want to add another 8 drives to the zpool as another vdev (pretty sure you can do that), am I going to kill performance by using 8x2TB as opposed to matching the existing drives (7200rpm Seagate 1.5s)?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Scuttle_SE posted:

Can I mess with TLER on newer WD drives? I seem to recall WD locking that down or something...
WDXXEADS drives have them disabled after those dates (you must get EACS or older EADS drives, most of which have been taken out of supply lines by now). Regardless, if you're going to be using consumer drives in a regular RAID array, you MUST NOT USE WDXXEARS DRIVES. They have WDTLER and WDIDLE disabled and you will get problems with no possible fix but to get new drives. Seagate and Samsung are worth looking into. Given Seagate's wonky problems with reliability (bursts of bad drives seem to happen given the sort of reviews I'm reading, although on the aggregate unlikely to be much worse than the others), I'd go with Samsung drives now for cheap, reliable home mass storage.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

movax posted:

So I've pretty much filled my existing RAID-Z2 array (8x1.5TB)...I want to add another 8 drives to the zpool as another vdev (pretty sure you can do that), am I going to kill performance by using 8x2TB as opposed to matching the existing drives (7200rpm Seagate 1.5s)?

Nope. It'll write most of the new data to the second vdev anyway, by virtue of the first one being full.
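
The step itself is a one-liner; a sketch assuming a pool named "tank" and placeholder Solaris device names:

```shell
# Grow a pool by adding a second raidz2 top-level vdev. ZFS stripes
# across vdevs and biases new writes toward the emptier one, which is
# why the mismatched drive sizes barely matter here.
# (Run against a real pool; names are placeholders.)
#   zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
#                         c2t4d0 c2t5d0 c2t6d0 c2t7d0
#   zpool status tank    # should now list both raidz2 vdevs
# Usable space afterwards (raidz2 keeps N-2 data disks per vdev):
echo "$(( 6 * 15 / 10 + 6 * 2 )) TB usable total"
```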

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

necrobobsledder posted:

WDXXEADS drives have them disabled after those dates (you must get EACS or older EADS drives, most of which have been taken out of supply lines by now). Regardless, if you're going to be using consumer drives in a regular RAID array, you MUST NOT USE WDXXEARS DRIVES. They have WDTLER and WDIDLE disabled and you will get problems with no possible fix but to get new drives. Seagate and Samsung are worth looking into. Given Seagate's wonky problems with reliability (bursts of bad drives seem to happen given the sort of reviews I'm reading, although on the aggregate unlikely to be much worse than the others), I'd go with Samsung drives now for cheap, reliable home mass storage.

Why is this so loving confusing. loving Western Digital. How are Hitachi in terms of reliability and warranty (e.g. do they offer advanced replacement)

Star War Sex Parrot
Oct 2, 2003

PopeOnARope posted:

Why is this so loving confusing. loving Western Digital. How are Hitachi in terms of reliability and warranty (e.g. do they offer advanced replacement)
Because they don't want you using consumer drives in RAID configurations. They want you to pay more for the RE drives.

movax
Aug 30, 2008

FISHMANPET posted:

Nope. It'll right most of the new data to the second vdev anway, by virtue of the first one being full.

Gotcha. Now to wait for bank account stabilization so I can buy 8 2TB drives.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

PopeOnARope posted:

Why is this so loving confusing. loving Western Digital. How are Hitachi in terms of reliability and warranty (e.g. do they offer advanced replacement)
Hitachis are on par with Samsung although they seem to be worse (by like 1W) in terms of power consumption and heat. To their credit, they're the ones that came out with THE first 1TB drives, so it's not like they have no R&D muscle. It's just that being a technological innovator doesn't actually mean you make better products.

I decided to stop giving a crap about the drives and to just use ZFS. I'm still migrating over the drives I put into my Thecus NAS that I bought as a literal omg, I need it now emergency (my company laptop broke and I needed to use my fileserver as my workstation).

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl
I've heard anecdotally that Hitachi consumer drives seem to do pretty well in RAID environments even without a TLER-alike option available to modify.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

necrobobsledder posted:

Hitachis are on par with Samsung although they seem to be worse (by like 1W) in terms of power consumption and heat. To their credit, they're the ones that came out with THE first 1TB drives, so it's not like they have no R&D muscle. It's just that being a technological innovator doesn't actually mean you make better products.

I decided to stop giving a crap about the drives and to just use ZFS. I'm still migrating over the drives I put into my Thecus NAS that I bought as a literal omg, I need it now emergency (my company laptop broke and I needed to use my fileserver as my workstation).

Yeah, ZFS is nice like that because it's so stupidly robust when it comes to the bizarre kinds of errors that crop up sometimes with consumer level stuff.

md10md
Dec 11, 2006
I was messing around with my ZFS mirrored pool this morning and decided to throw in 2x750GB HDs and add them to the existing 2x2TB HD mirrored pool. So I went from this:
code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Media       1.17T   660G      2     22   296K   304K
  mirror    1.17T   660G      2     22   296K   303K
    ad4         -      -      1     19   149K   304K
    ad6         -      -      1     19   149K   304K
----------  -----  -----  -----  -----  -----  -----   
To this:
code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Media       1.17T  1.32T      2     22   296K   304K
  mirror    1.17T   660G      2     22   296K   303K
    ad4         -      -      1     19   149K   304K
    ad6         -      -      1     19   149K   304K
  mirror     373M   696G      0     11  41.8K  68.2K
    ad0         -      -      0     11  21.0K  68.7K
    ad2         -      -      0     11  21.0K  68.7K
----------  -----  -----  -----  -----  -----  ----- 
It worked great but now I just realized that I'm not sure how to take it back! I thought I might be able to just remove the second mirror but the filesystem has already started adding files to ad0 and ad2. So, is there anyway to revert this to the old setup while still transferring the data on ad0 and ad2 to ad4 and ad6?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

md10md posted:

I was messing around with my ZFS mirrored pool this morning and decided to throw in 2x750GB HDs and add them to the existing 2x2TB HD mirrored pool. So I went from this:
code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Media       1.17T   660G      2     22   296K   304K
  mirror    1.17T   660G      2     22   296K   303K
    ad4         -      -      1     19   149K   304K
    ad6         -      -      1     19   149K   304K
----------  -----  -----  -----  -----  -----  -----   
To this:
code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Media       1.17T  1.32T      2     22   296K   304K
  mirror    1.17T   660G      2     22   296K   303K
    ad4         -      -      1     19   149K   304K
    ad6         -      -      1     19   149K   304K
  mirror     373M   696G      0     11  41.8K  68.2K
    ad0         -      -      0     11  21.0K  68.7K
    ad2         -      -      0     11  21.0K  68.7K
----------  -----  -----  -----  -----  -----  ----- 
It worked great but now I just realized that I'm not sure how to take it back! I thought I might be able to just remove the second mirror but the filesystem has already started adding files to ad0 and ad2. So, is there anyway to revert this to the old setup while still transferring the data on ad0 and ad2 to ad4 and ad6?

Nope, you're stuck.
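
To spell out why (a sketch using md10md's device names; 2010-era ZFS has no top-level vdev removal):

```shell
# "zpool add" creates a NEW top-level vdev, which can never be
# evacuated or removed, hence "you're stuck":
#   zpool add Media mirror ad0 ad2     # what was run: a second mirror vdev
# "zpool attach" instead adds a leg to an EXISTING vdev, and that IS
# reversible:
#   zpool attach Media ad4 ad0         # ad0 becomes a third mirror leg
#   zpool detach Media ad0             # and can be pulled back out later
# Rough rule: attach grows redundancy, add grows capacity.
```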

md10md
Dec 11, 2006

FISHMANPET posted:

Nope, you're stuck.
Haha, that figures. It's not the end of the world since I planned on migrating to a RAID-Z in a few months anyway. That will teach me not to make changes on a live system instead of a VM.
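
The migration path doesn't need vdev removal anyway; a sketch of the usual copy-out route (new pool and device names are placeholders):

```shell
# Replicate the whole old pool into a freshly created raidz pool via a
# recursive snapshot, then retire the old one. Verify the copy before
# destroying anything.
#   zfs snapshot -r Media@migrate
#   zpool create tank raidz ad8 ad10 ad12 ad14
#   zfs send -R Media@migrate | zfs recv -d tank
#   zpool destroy Media
```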

NeuralSpark
Apr 16, 2004

Just wired a 30A 125V twist-lock to a 15A wall outlet pigtail for my SmartUPS 3000XL I got from a dumpster. With the new set of batteries I put in, I should get an hour or so from my server, core switch, PoE WAP, and AT&T U-verse gateway. Need those batteries for your RAID5s.

dietcokefiend
Apr 28, 2004
HEY ILL HAV 2 TXT U L8TR I JUST DROVE IN 2 A DAYCARE AND SCRATCHED MY RAZR
Has anyone used the Amazon S3 service to backup their home NAS? I was looking at how cheap the storage was and curious if it was worth it for the little guy.

aborn
Jun 2, 2001

1, 2, woop! woop!

necrobobsledder posted:

MUST NOT USE WDXXEARS DRIVES

Ah, gently caress. How long do I have before my Synology Raid craps out? It's been working fine for almost two months now.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Almost all SOHO NASes actually use md RAID on Linux, which is software RAID. The RAID problems with TLER and such matter only for hardware RAID platforms.

Star War Sex Parrot
Oct 2, 2003

augustob posted:

Ah, gently caress. How long do I have before my Synology Raid craps out? It's been working fine for almost two months now.
Do most of these NASes even play nice with 4k-sector drives like the EARS? I'd be terrified of horrendous performance due to unaligned sectors.
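
For what it's worth, the alignment worry is checkable: a partition on a 4K (Advanced Format) drive is aligned when its start sector is a multiple of 8. A sketch (2048 and 63 are the common modern and XP-era defaults):

```shell
# 8 x 512-byte logical sectors = one 4096-byte physical sector, so a
# start sector divisible by 8 is aligned. Read the start sector with
# e.g. "fdisk -lu /dev/sda" on a real device.
check_alignment() {
    if [ $(( $1 % 8 )) -eq 0 ]; then echo aligned; else echo misaligned; fi
}
check_alignment 2048   # modern partitioners start here: aligned
check_alignment 63     # old CHS default: every 4K write straddles sectors
```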

Horse Clocks
Dec 14, 2004


I've read back 10 pages but I've probably missed something, and I'm a little confused with all the TLER, IDLE, EARS, EADS, mdadm, zfs stuff. Could somebody please clarify?

* You want to turn off IDLE/TLER when using WD Green disks in RAID (why?)
* The new 2TB EARS drives can't turn on/off TLER/IDLE
* Neither TLER/IDLE matter if you're using software RAID. i.e. mdadm/zfs
* RAID-Z > RAID-6 > RAID-5


I've currently got a ~3 year old 5x1TB RAID-5 mdadm array on WD Green disks but want to bump it up to 8TB, preferably with all new disks considering their age. What's the suggested path to choose? WD or Samsung disks? FreeBSD+RAID-Z, Ubuntu+RAID-6?

Horse Clocks fucked around with this message at 09:47 on Jun 15, 2010
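
Either path in that last question is a single create command; a sketch with placeholder device names (and the same 8TB target either way):

```shell
# Linux route: md RAID-6 across six 2TB disks
#   mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
# FreeBSD/OpenSolaris route: raidz2, the double-parity equivalent
#   zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# Both spend two disks on parity, so usable space is (N-2) disks:
echo "$(( (6 - 2) * 2 )) TB usable from six 2TB disks"
```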

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!
/\
You want TLER ON. It basically forces the drive to recover from an error in less time, meaning that if it fucks up, the RAID controller doesn't drop it because it stops responding for too long.

You want idle off. For some asinine reason, these drives wait something like two seconds before parking heads. Now, while parking heads may be all well and good in theory, all it means is that the head armature has to go from parked to active FAR more often than it should, again possibly incurring that delay. Now - the WD green drives are designed for about 300k park / unpark cycles. I have two drives in my array that are about two years old with 180k parks each - people have seen worse.

strwrsxprt posted:

Because they don't want you using consumer drives in RAID configurations. They want you to pay more for the RE drives.

What a clusterfuck.

So basically if at some point I decide to do this right, my options are limited to

WD RE ($300 a piece)
Samsung (I'm worried because of their firmware issues)
Hitachi (I don't know yet?)

Really though, it generally seems like a bait and switch on the market. They get heralded for "bringing huge drives to the masses for cheap!" but what you really get are poo poo drives.

PopeOnARope fucked around with this message at 08:31 on Jun 15, 2010
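
The park counter quoted above is visible in SMART, and the idle timer itself can be read or disabled on drives that allow it; a sketch (idle3ctl is from the third-party idle3-tools package, and device paths are placeholders):

```shell
# Watch the wear: every park/unpark bumps SMART attribute 193.
#   smartctl -A /dev/sda | grep Load_Cycle_Count    # run as root
# Raise or disable the seconds-long idle timer on drives that honor it:
#   idle3ctl -g /dev/sda    # read current idle3 timer
#   idle3ctl -d /dev/sda    # disable parking (takes effect after a
#                           # full power cycle)
# Back-of-envelope life check: 180k parks in 24 months against a 300k
# rating leaves roughly
echo "$(( (300 - 180) * 24 / 180 )) months at the same rate"
```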

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Don't get the WD-RE, get the WD Black. Same drive mechanics and I think also electronics. Ability to change TLER is the unknown here.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

xlevus posted:

* RAID-Z > RAID-6 > RAID-5

RAIDZ > RAID5 and RAIDZ2 > RAID6, but I'm not sure where I'd fall on RAIDZ vs RAID6

KennyG
Oct 22, 2002
Here to blow my own horn.
It depends on the setup. RAID-Z is not a classic "RAID" architecture in the sense of Raid-0, 1, 5, 1+0, 6 etc. It's a (clever) file system that is software raid aware. You can't run NTFS on RAID-Z. This isn't a big deal in 99% of the cases, but it is a distinction that I think needs to be made. You could, in theory, run a RAID-Z of RAID-6 arrays.

Saying Raid Z > Raid 5 is an apples and oranges comparison. Raid 5 on proper hardware can be just as fault tolerant and significantly faster. Raid-Z on the other hand can cope with consumer-level drives better because of its higher level of abstraction.

It's completely situational. RAID-Z leaves you with fewer OS options at the benefit of having a wider (and less expensive) selection of hardware, RAID-5/6 will generally perform better and gives you more software options at the cost of hardware selection (and cost).

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

KennyG posted:

It depends on the setup. RAID-Z is not a classic "RAID" architecture in the sense of Raid-0, 1, 5, 1+0, 6 etc. It's a (clever) file system that is software raid aware. You can't run NTFS on RAID-Z. This isn't a big deal in 99% of the cases, but it is a distinction that I think needs to be made. You could, in theory, run a RAID-Z of RAID-6 arrays.

Saying Raid Z > Raid 5 is an apples and oranges comparison. Raid 5 on proper hardware can be just as fault tolerant and significantly faster. Raid-Z on the other hand can cope with consumer-level drives better because of its higher level of abstraction.

It's completely situational. RAID-Z leaves you with fewer OS options at the benefit of having a wider (and less expensive) selection of hardware, RAID-5/6 will generally perform better and gives you more software options at the cost of hardware selection (and cost).

Data integrity? ZFS makes sure your data is correct, RAID just blindly copies bits around without knowing if they're good or bad. Another fact to consider.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

KennyG posted:

It depends on the setup. RAID-Z is not a classic "RAID" architecture in the sense of Raid-0, 1, 5, 1+0, 6 etc. It's a (clever) file system that is software raid aware. You can't run NTFS on RAID-Z. This isn't a big deal in 99% of the cases, but it is a distinction that I think needs to be made. You could, in theory, run a RAID-Z of RAID-6 arrays.

Saying Raid Z > Raid 5 is an apples and oranges comparison. Raid 5 on proper hardware can be just as fault tolerant and significantly faster. Raid-Z on the other hand can cope with consumer-level drives better because of its higher level of abstraction.

It's completely situational. RAID-Z leaves you with fewer OS options at the benefit of having a wider (and less expensive) selection of hardware, RAID-5/6 will generally perform better and gives you more software options at the cost of hardware selection (and cost).

Actually, speed is a non-issue these days. You're offloading into hardware what a single core and half a gig of ram can do without any issue on modern hardware. Given most modern file servers have 4 cores and 8-16 gigs, it really doesn't make a difference. It all comes down to interoperability, fault tolerance, and featureset at that point.


xlevus posted:

After reading back 10 pages but I've probably missed something and I'm a little confused with all the TLER, IDLE, EARS, EADS, mdadm, zfs stuff. Could somebody please clarify?

* You want to turn off IDLE/TLER when using WD Green disks in RAID (why?)
* The new 2TB EARS drives can't turn on/off TLER/IDLE
* Neither TLER/IDLE matter if you're using software RAID. i.e. mdadm/zfs
* RAID-Z > RAID-6 > RAID-5


I've currently got a ~3 year old 5x1TB RAID-5 mdadm array on WD Green disks but want to bump it up to 8TB, preferably with all new disks considering their age. What's the suggested path to choose? WD or Samsung disks? FreeBSD+RAID-Z, Ubuntu+RAID-6?


I use RAIDZ2 on an opensolaris installation. All 8 of the disks are new 4k format WD1500EARS drives, Intel SAS HBAs, and an AMD 4 core processor. It also runs a bunch of VMs through VirtualBox, which makes life a shitload easier when it comes to using the server box for things that linux/solaris blow rear end at, like multimedia streaming, uTorrent, and poo poo that only comes on windows flavored boxes.

Once you get used to the wonky way solaris wants you to do things, it's really not that bad at all. Plus it's stable like a tank, I have yet to have any major issue that wasn't caused by a dying boot HDD or a dumbass command.

The best part is if your system catches fire, as long as you can rescue 6 out of 8 of the drives, you can plug them into ANY computer, run the liveCD, 'zpool import -f' and your data is right back where you left it. No flakey raid cards, no dropping $500-600 on hardware RAID, you get to spend that on a Norco 4220 instead!

Methylethylaldehyde fucked around with this message at 19:04 on Jun 15, 2010
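
That disaster-recovery step is worth spelling out; a sketch (the pool name is assumed):

```shell
# Move the surviving disks to any machine with ZFS support, then:
#   zpool import            # scan attached disks for importable pools
#   zpool import -f tank    # -f overrides the "in use by another
#                           # system" guard after a crash or fire
# raidz2 rebuilds from any N-2 of its disks, hence "6 out of 8":
echo "need $(( 8 - 2 )) of 8 drives"
```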

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Just a note, but Newegg has 2TB Samsung drives for $110 each. I was waiting for them to hit about this pricepoint before expanding out. I should be set for a while with this next order, wheee.

The black drives get you better warranty and electronics over the regular ol' variety of consumer drives, but sometimes the extra warranty hardly matters if you keep upgrading drives every few years anyway. 3 years ago 1TB drives cost a bit more than what 2 TB drives cost now. Given the pace is keeping decent stride, 4TB for $100 will be likely in 2013 while SSDs will have gone down significantly in price.

KennyG posted:

It's completely situational. RAID-Z leaves you with fewer OS options...
And this is part of why I'm putting VMs on my OpenSolaris box. Lets me run whatever OSes I need on the machine with local filesystem access speeds. It's not like I'm going to play games on the fileserver, right? SMB sucks balls (and iSCSI over ethernet scares me with my grade of equipment), so I'd rather put as many services as possible local to the file server anyway.

On top of the other benefits mentioned, ZFS offers de-dupe options, which can be a huge cost (and even performance) saver. Its performance gains come mostly from the ARC (adaptive replacement cache), which means you need a bit more RAM to get good performance than you would with straight hardware RAID.

RAIDZ also reduces the critical need for ECC RAM on your fileserver, further reducing hardware costs. With 10TB+ of data, even a 1.0E-15 error rate in memory will get you some bad writes once in a while on a regular hardware RAID.
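
One caveat on dedup worth attaching here: the dedup table has to stay cached to be fast, and a common rule of thumb (an estimate, not a guarantee) is a few hundred bytes of RAM per unique block:

```shell
# Rough dedup-table (DDT) memory estimate, assuming ~320 bytes of RAM
# per unique block and the default 128K recordsize. Real usage varies
# with pool contents.
DATA_BYTES=$(( 10 * 1000 * 1000 * 1000 * 1000 ))   # 10 TB of data
BLOCKS=$(( DATA_BYTES / 131072 ))
echo "$(( BLOCKS * 320 / 1024 / 1024 / 1024 )) GiB of RAM for the DDT"
# Enabling it is per-dataset (pool name assumed):
#   zfs set dedup=on tank
```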

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

necrobobsledder posted:

And this is part of why I'm putting VMs on my OpenSolaris box. Lets me run whatever OSes I need on the machine with local filesystem access speeds. It's not like I'm going to play games on the fileserver, right? SMB sucks balls (and iSCSI over ethernet scares me with my grade of equipment), so I'd rather put as many services as possible local to the file server anyway.

Purely anecdotal evidence, but virtualization networking has been a huge pain in my rear end on build 134. First I tried using Xen, and the network would occasionally just freeze. Now I'm using Virtual Box, and having the same problem.

Also, wondering what you mean by "SMB sucks balls"

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

FISHMANPET posted:

Purely anecdotal evidence, but virtualization networking has been a huge pain in my rear end on build 134. First I tried using Xen, and the network would occasionally just freeze. Now I'm using Virtual Box, and having the same problem.

Also, wondering what you mean by "SMB sucks balls"

SMB is kinda annoying sometimes with opensolaris. The kernel mode CIFS driver I've been using hasn't had any real issues, but I'm also not exactly hammering the hell out of it either.

The networking really doesn't like unusual NICs in your system. I used the e1000 Intel Ethernet card, and it works fine in the fancy bridging mode that gives each VM its own un-NATed IP.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Be careful about the NICs you use and the driver you have. OpenSolaris drivers for lots of common consumer NICs are pretty bad. There's alternative NIC drivers out there (Realtek ones come to mind) that will help with stability. OpenSolaris is about where Linux was in 2003 in terms of support for consumer devices IMO, so it's Intel NICs and server-grade hardware or expect problems.

FISHMANPET posted:

Also, wondering what you mean by "SMB sucks balls"
The protocol is terribly inefficient and incurs an incredible amount of overhead and latency per request, not to mention the anemic ACL support making it difficult to create isomorphic permissions with POSIX ACLs. Furthermore, it makes mounting SMB filesystems to use for network apps horrible (have you tried iTunes libraries on an SMB share? Slow as hell, even with gigabit ethernet and jumbo frames w/ direct connections) while it's normally not a big deal with NFS. I'm pretty impatient about my network filesystems, even at home. It pisses me off that I've moved over my 1.5TB iTunes library to the network and now it takes forever to even edit metadata because iTunes does so many back-and-forth reads / writes (try adding a large video file to iTunes from an SMB mount. Try it from local. Weep).

Bridged NATs are better typically for VMs on workstations mostly because you don't have to fire up a DHCP client on the virtual ethernet devices. Also host to guest filesystem sharing on Virtualbox fires up a lightweight NetBIOS daemon along with some DNS-level override for the VBOXSVR share and other stuff to work with Windows last I saw.

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

necrobobsledder posted:

Just a note, but Newegg has 2TB Samsung drives for $110 each. I was waiting for them to hit about this pricepoint before expanding out. I should be set for a while with this next order, wheee.

The black drives get you better warranty and electronics over the regular ol' variety of consumer drives, but sometimes the extra warranty hardly matters if you keep upgrading drives every few years anyway. 3 years ago 1TB drives cost a bit more than what 2 TB drives cost now. Given the pace is keeping decent stride, 4TB for $100 will be likely in 2013 while SSDs will have gone down significantly in price.
And this is part of why I'm putting VMs on my OpenSolaris box. Lets me run whatever OSes I need on the machine with local filesystem access speeds. It's not like I'm going to play games on the fileserver, right? SMB sucks balls (and iSCSI over ethernet scares me with my grade of equipment), so I'd rather put as many services as possible local to the file server anyway.

On top of the other benefits mentioned, ZFS offers de-dupe options, which can be a huge cost (and even performance) saver. Its performance gains come mostly from the ARC (adaptive replacement cache), which means you need a bit more RAM to get good performance than you would with straight hardware RAID.

RAIDZ also reduces the critical need for ECC RAM on your fileserver, further reducing hardware costs. With 10TB+ of data, even a 1.0E-15 error rate in memory will get you some bad writes once in a while on a regular hardware RAID.

It's not so much the drive as the cost of the infrastructure, though.

I would venture that the simplest route is mdadm or ZFS, but then you need to consider housing, powering, and controlling the drives. Properly building a multi-TB array these days would easily cost into the thousands.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

necrobobsledder posted:

RAIDZ also reduces the critical need for ECC RAM on your fileserver, further reducing hardware costs. With 10TB+ of data, even a 1.0E-15 error rate in memory will get you some bad writes once in a while on a regular hardware RAID.
You are absolutely incorrect. ZFS will write whatever it is told to, so if what is in RAM is bad, ZFS will still write it, checksum it, and write the checksum that tells it the data is good.

You should use ECC ram if your data is actually important.

PopeOnARope posted:

I would venture that the simplest route is mdadm or ZFS, but then you need to consider housing, powering, and controlling the drives. I'd venture to say that to properly build a multi-tb array these days would easily cost into the thousands.
I could build a *CHEAP* 8TB usable, 12TB raw raidz2 for under $1000 today. $1500 and it would be blazing fast w/ an intel or sandforce SSD for the ZIL.

adorai fucked around with this message at 23:56 on Jun 15, 2010
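
A sketch of the build being described: 8x1.5TB is 12TB raw, and raidz2 leaves 9 decimal TB, roughly the 8TB (TiB) quoted. All device names are hypothetical and "tank" is an assumed pool name:

```shell
# 8 x 1.5TB raidz2 with an SSD as a separate ZIL (log) device:
#   zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
#                            c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
#                            log c1t0d0
# raidz2 keeps N-2 disks of data (sizes here in tenths of a TB):
echo "$(( (8 - 2) * 15 / 10 )) TB usable of $(( 8 * 15 / 10 )) TB raw"
```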

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

necrobobsledder posted:

Be careful about the NICs you use and the driver you have. OpenSolaris drivers for lots of common consumer NICs are pretty bad. There's alternative NIC drivers out there (Realtek ones come to mind) that will help with stability. OpenSolaris is about where Linux was in 2003 in terms of support for consumer devices IMO, so it's Intel NICs and server-grade hardware or expect problems.
The protocol is terribly inefficient and incurs an incredible amount of overhead and latency per request, not to mention the anemic ACL support making it difficult to create isomorphic permissions with POSIX ACLs. Furthermore, it makes mounting SMB filesystems to use for network apps horrible (have you tried iTunes libraries on an SMB share? Slow as hell, even with gigabit ethernet and jumbo frames w/ direct connections) while it's normally not a big deal with NFS. I'm pretty impatient about my network filesystems, even at home. It pisses me off that I've moved over my 1.5TB iTunes library to the network and now it takes forever to even edit metadata because iTunes does so many back-and-forth reads / writes (try adding a large video file to iTunes from an SMB mount. Try it from local. Weep).

Bridged NATs are better typically for VMs on workstations mostly because you don't have to fire up a DHCP client on the virtual ethernet devices. Also host to guest filesystem sharing on Virtualbox fires up a lightweight NetBIOS daemon along with some DNS-level override for the VBOXSVR share and other stuff to work with Windows last I saw.

I've got an Intel NIC. I've got a second, and considering throwing that in and wiring it directly to the VM.

I'm using Samba (not the built-in CIFS stuff) and I have my 120 GB iTunes library shared via SMB. v:shobon:v
