Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Weinertron posted:

^^^^^
Yes, ZFS is for you. ZFS is loving amazing. This box that I talk about below was built out of random hardware lying around, and is using both the onboard NIC and an add-in card.

My friend brought over his Opensolaris box to let me dump some data I had backed up for him to it, and I'm seeing transfer speed slow down as time passes. Furthermore, I seem to have overloaded it by copying too many things at once and I lost the network share for a second. I ssh'ed into it, and everything looks fine, but transfer speed keeps dropping from the initial 60MB/s writes I was seeing all the way down to 20MB/s. Is everything OK as long as zpool status returns 0 errors?

I don't know much about ZFS; how full should the volume be allowed to get? It's on 4x1TB drives, so it has about 2.67TB of logical space. Of this, about 800GB is available right now.

The speed drop might just be a side effect of your host computer's disks. Most drives will transfer 60MB/sec pretty easily if it's large files on the outer sectors, but as the files fragment, as you move smaller files, or as you move toward the inner tracks, the drives will slow down quite a bit. Internal disk-to-disk transfers on my computer go anywhere from 150MB/sec (SSD to RAID array) all the way down to ~15MB/sec (slow-assed disk to RAID array).
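If you want to rule ZFS out, watch the pool while a transfer is running. A minimal sketch, assuming the pool is named "tank" (swap in whatever yours is actually called):

  # per-vdev read/write throughput, refreshed every 5 seconds
  zpool iostat -v tank 5
  # sanity-check pool health at the same time
  zpool status tank

If the per-disk numbers sag together while zpool status stays clean, the bottleneck is almost certainly the source disk, not the array.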

Vinlaen
Feb 19, 2008

Can you expand a RAID-Z or RAID-Z2 array?

I'm pretty sure I'm going to go the hardware route (Dell Perc 6/i + RAID-6) but I've thought about ZFS and RAID-Z2 in the past...

eames
May 9, 2009

Vinlaen posted:

Can you expand a RAID-Z or RAID-Z2 array?

No, unless you gradually swap every single existing HD for a larger one and resilver in between. But I doubt that is what you want. :)
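For reference, the swap-and-resilver dance looks roughly like this (a sketch; the device names are invented, and newer builds also want the autoexpand property set before the extra space shows up):

  # repeat for every disk in the vdev, one at a time
  zpool replace tank c1t2d0 c1t6d0
  zpool status tank        # wait for the resilver to finish before the next swap
  # after the last disk has been replaced:
  zpool set autoexpand=on tank

The pool only grows once every member disk has been upgraded.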

Wompa164
Jul 19, 2001

Don't write ghouls.
After learning about it in this thread (and subsequently hearing about it nonstop), I just wrote a paper on ZFS for my Digital Forensics class.

http://www.opensolaris.org/os/community/zfs/docs/zfs_last.pdf

This is a really good, (fairly) simple read from Sun that highlights a lot of ZFS' strengths and why it's awesome. To be completely honest, I had never heard of ZFS before I found this thread, and after finishing my research/paper, I want the entire world to run on ZFS.

Vinlaen
Feb 19, 2008

Darn. The fact that you can't expand pools/arrays in ZFS is a deal breaker for me as I like to upgrade my storage every few months or every year, etc. :(

devilmouse
Mar 26, 2004

It's just like real life.

Vinlaen posted:

Darn. The fact that you can't expand pools/arrays in ZFS is a deal breaker for me as I like to upgrade my storage every few months or every year, etc. :(

You can expand a ZFS POOL with an arbitrary number of disks, but if you're talking about drobo/unraid-like functionality where you can just add a disk to an existing ZFS DEVICE (well, unless you're going from non-mirrored to mirrored or striped), then no, you're out of luck.

You can, however, set up another array inside the same pool. So if you had a 4-disk device with parity, you could add an entire second 4-disk array to the same pool. Yeah, it's a waste of disks, but it's an option.

Adding a single disk at a time isn't in the cards for the foreseeable future. Here's an article that talks more about it from the Sun guys: http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z
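To make the pool-vs-device distinction concrete, here's roughly what adding a second array to a pool looks like (a sketch with made-up device names):

  # existing pool: one 4-disk raidz vdev
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
  # later on: bolt an entire second 4-disk raidz vdev onto the same pool
  zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

From then on ZFS stripes new writes across both vdevs.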

Suniikaa
Jul 4, 2004

Johnny Walker Wisdom
A constant theme throughout the thread is that ZFS is the second coming and it is glorious, but what are the disadvantages? Expanding the pool/array was covered; what other problems/shortcomings are there when using ZFS/RAID-Z?

devilmouse
Mar 26, 2004

It's just like real life.

Suniikaa posted:

A constant theme throughout the thread is that ZFS is the second coming and it is glorious, but what are the disadvantages?

After thinking about it, outside of the expansion issue, the biggest downside is that it's only supported under Solaris and FreeBSD. There's a FUSE port too, I guess, but last I looked it was still kind of meh. It's not hard to use per se, but getting your head around the pool management takes some getting used to as well.
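For what it's worth, the day-to-day commands are pretty tame once the pool/filesystem split clicks. A minimal sketch (pool and filesystem names invented):

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0   # pool out of raw disks
  zfs create tank/media                          # filesystem inside the pool
  zfs set compression=on tank/media              # properties are per-filesystem
  zpool status                                   # health check

It's the layering, not the syntax, that takes the getting used to.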

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Expansion is coming

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

adorai posted:

Expansion is coming
No it isn't.

Some enabling functionality is coming at some point, once they deem it stable. Deduplication and vdev removal are up first, both of which depend on it. Then encryption. Then maybe block rebalancing. And then, maybe a long way down the road, RAID-Z expansion.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Methylethylaldehyde posted:

The speed drop might just be a side effect of your host computer's disks. Most drives will transfer 60MB/sec pretty easily if it's large files on the outer sectors, but as the files fragment, as you move smaller files, or as you move toward the inner tracks, the drives will slow down quite a bit. Internal disk-to-disk transfers on my computer go anywhere from 150MB/sec (SSD to RAID array) all the way down to ~15MB/sec (slow-assed disk to RAID array).

All of the disks involved were Seagate 7200.12 1TB drives, including the drive I was transferring from. I have two in my machine and the NAS has 4. They've been really fast except for this one time, so I'm hoping it was just an isolated problem. These drives have been fast as all hell for sustained file transfers; it's impressive how quick a 2-platter 1TB drive is.

poopgiggle
Feb 7, 2006

it isn't easy being a cross dominate shooter.


Wompa164 posted:

After learning about it in this thread (and subsequently hearing about it nonstop), I just wrote a paper on ZFS for my Digital Forensics class.

http://www.opensolaris.org/os/community/zfs/docs/zfs_last.pdf

This is a really good, (fairly) simple read from Sun that highlights a lot of ZFS' strengths and why it's awesome. To be completely honest, I had never heard of ZFS before I found this thread, and after finishing my research/paper, I want the entire world to run on ZFS.

Has there been any progress on ZFS forensics? Last I checked, basic deleted file recovery was as far as anyone had gotten and forensic RAID-Z reconstruction was a pipe dream.

network.guy
Jun 20, 2004

I'm quite out of the loop and I'd appreciate it if someone could help me:

If I get a NAS like a QNAP TS-239 Pro can I access it from multiple PCs as though it's just another folder on the PC? If so, is it possible to set it up so that one can read and write to the NAS, but not delete (or only as an admin)?

Lobbyist
Aug 2, 2002
Now THAT'S Comedy!
What about ZFS performance compared to Linux software RAID? Are there any comparisons out there?

GerbilNut
Dec 30, 2004
Does anyone have any experience with the Iomega StorCenter Pro ix4-100 NAS? I'm considering picking up a 2 TB unit to replace our aging HP StorageWorks array that has incredibly expensive replacement hard drives.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
I'm rolling my own NAS to hold most of my media and I was wondering if any goons had a recommendation for a decent case. I'm looking for something small that could hold four hard drives. I'll be running the OS off a USB drive, but I figured I'd plan the actual setup around the case first. CD drive bays aren't really that important, and ideally I'd like the thing to look decent since it will be sitting in either my living room or near my entertainment center.

edit: nevermind, I actually read the OP!

edit2: Well, actually, if anyone had a recommendation for a smaller form factor tower, I'm all ears... I can't find anything in my price range that is halfway decent.

Gyshall fucked around with this message at 17:25 on Oct 8, 2009

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

network.guy posted:

If I get a NAS like a QNAP TS-239 Pro can I access it from multiple PCs as though it's just another folder on the PC? If so, is it possible to set it up so that one can read and write to the NAS, but not delete (or only as an admin)?
Yeah, just set up SMB or NFS ACLs / user accounts.
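A minimal Samba share along those lines might look like this (a sketch; the path and user names are invented, and note that a true write-but-no-delete setup generally needs filesystem ACLs on top, since plain SMB share permissions don't separate the two):

  [media]
     path = /tank/media
     valid users = alice bob admin
     read only = yes
     write list = admin

That gets you everyone-reads, admin-writes; finer-grained delete restrictions are an ACL job.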

eames
May 9, 2009

http://www.tuxradar.com/content/debian-gives-freebsd-some-love

quote:

the upcoming release of Debian, codenamed Squeeze, will be available in a juicy new FreeBSD flavour alongside the regular Linux version.

apt-get install zfs :psyboom:

roadhead
Dec 25, 2001



Just got these in my hot little mitts within the last hour. Had to rush back to work before USPS got there, though; I need more bits and pieces from monoprice to get power to the fans and drives.

So a 9 drive raidz1 pool with a hot-spare, or a 10 drive raidz2 pool? Same effective space either way, but reading the ZFS manual just now the max "recommended" size of a vdev is 9 disks.

roadhead fucked around with this message at 20:59 on Oct 13, 2009

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

roadhead posted:

So a 9 drive raidz1 pool with a hot-spare, or a 10 drive raidz2 pool? Same effective space either way, but reading the ZFS manual just now the max "recommended" size of a vdev is 9 disks.
I would go 7+2 with a hotspare, but if that is too much wasted space I would do 8+2.
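In zpool terms the 7+2-plus-hotspare layout is a one-liner (a sketch, device names invented):

  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 spare da9

That's a 9-disk raidz2 vdev (7 data + 2 parity) with the tenth drive standing by as a hot spare.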

Wanderer89
Oct 12, 2009

roadhead posted:

So a 9 drive raidz1 pool with a hot-spare, or a 10 drive raidz2 pool? Same effective space either way, but reading the ZFS manual just now the max "recommended" size of a vdev is 9 disks.

Disclaimer: not a real ZFS/Solaris expert... but I built an OpenSolaris-based raidz setup last month out of 4x 1TB drives (two mirrored IDE drives as boot; nice call on the CF adapter, I have been looking at those...)

Anyway, from all the reading I was doing, even industrial applications were making their zpools out of sets of only 7-drive raidz1 or 8-drive raidz2. I think your best bet is either the 9-drive raidz1 with a hot spare, or else creating two separate raidz arrays and attaching them to the same zpool. I would probably try the latter.

By the way, I'm Weinertron's friend with the 4x 7200.12 drive setup, and I think he was just running into limitations of his local disk or else the massive number of files in the transfer, as I have been able to stably saturate a gigabit connection reading from the pool on a regular basis, writing to a single local drive. I will be putting a second NIC into the system soon for duplexing...

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

roadhead posted:



Just got these in my hot little mitts with-in the last hour. Had to rush back to work before USPS got there though, I need more bits and pieces from monoprice to get power to the fans+drives.

So a 9 drive raidz1 pool with a hot-spare, or a 10 drive raidz2 pool? Same effective space either way, but reading the ZFS manual just now the max "recommended" size of a vdev is 9 disks.

Given the failure stats for RAID5 posted earlier in the thread, you want to use RAID6/raidz2. The other problem with 8+ drives is that you end up with a huge chance of a non-recoverable error during a rebuild, even with 2 parity drives.

Honestly, I'd take the space hit and make it 2 RAIDZ2 arrays. 9TB of useful space, and it's about as fault tolerant as you're ever going to get without taping it and hiding it in Iron Mountain. The lost capacity won't really cause many problems when you can just add in another vdev made of 2 TB drives 6 months from now when they're $100 each.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Methylethylaldehyde posted:

The other problem with 8+ drives is that you end up with a huge chance of a non-recoverable error during a rebuild, even with 2 parity drives.
With regular scrubbing, I'm sure he'll be ok. Plus, since this is almost certainly going to be filled with MP3s and xvids, who gives a poo poo about a URE? ZFS will keep rebuilding. Your argument re: expandability is decent, though; two 4+1 arrays will be easier to expand down the road with another 4+1.
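Regular scrubbing can literally just be a cron entry (a sketch; adjust the pool name and schedule to taste):

  # weekly scrub, Sundays at 3am
  0 3 * * 0 /sbin/zpool scrub tank

Weekly or biweekly is plenty for a home box.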

roadhead
Dec 25, 2001

adorai posted:

With regular scrubbing, I'm sure he'll be ok. Plus, since this is almost certainly going to be filled with MP3s and xvids, who gives a poo poo about a URE? ZFS will keep rebuilding. Your argument re: expandability is decent, though; two 4+1 arrays will be easier to expand down the road with another 4+1.

I bought a case with twice the drive bays I wanted to make expansion easier in the future :)

I went with a single RaidZ2 - after formatting and everything it's over 10 TB, almost 11 TB usable.

Vinlaen
Feb 19, 2008

I'm just about ready to setup my Dell Perc 6/i card with RAID-6, but I figured I'd ask this anyways...

Assuming you're running a decent system (CPU+RAM), what kind of performance difference is there between a Perc 6/i and a ZFS RAID-Z2?

riichiee
Jul 5, 2007
I'm currently deciding on whether to run a separate file server/torrentbox or not.

My main concern is power usage.

I was hoping to set up some kind of server on my old AMD 3800+ (maybe FreeNAS, although I do want to run torrents as well; not 100% sure whether it supports that or not) that basically goes to sleep when it's not in use.

Then, when the file server is accessed, the machine wakes up, serves the files, stays on for another 30 minutes (or whatever) then goes back to sleep.

Is this possible?

Does anyone run anything similar to this? And, is it actually worth the hassle?

roadhead
Dec 25, 2001

Vinlaen posted:

I'm just about ready to setup my Dell Perc 6/i card with RAID-6, but I figured I'd ask this anyways...

Assuming you're running a decent system (CPU+RAM), what kind of performance difference is there between a Perc 6/i and a ZFS RAID-Z2?

I've got the Phenom II 705e and 4 gigs of DDR2-800. The highest read bandwidth I've seen reported from "zpool iostat 3" is 300MB/sec, and that is during a "zpool scrub" of the array.

Writes are obviously slower, but I have trouble getting them to peak and catching it :) I can't push files to the box fast enough via Samba to get anywhere near stressing the array; I need some sort of synthetic HD benchmark for FreeBSD :)
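For a crude synthetic test, dd will do the job (a sketch; note that with compression enabled /dev/zero gives comically inflated numbers, and the test file should be well past RAM size so caching doesn't skew the read pass):

  # sequential write test, 16GB of zeroes
  dd if=/dev/zero of=/tank/testfile bs=1M count=16384
  # sequential read test of the same file
  dd if=/tank/testfile of=/dev/null bs=1M
  rm /tank/testfile

bonnie++ and iozone are also in the FreeBSD ports tree if you want something fancier.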

KennyG
Oct 22, 2002
Here to blow my own horn.

Methylethylaldehyde posted:

Given the failure stats for RAID5 posted earlier in the thread, you want to use RAID6/raidz2. The other problem with 8+ drives is that you end up with a huge chance of a non-recoverable error during a rebuild, even with 2 parity drives.

With regular maintenance (re: scrubbing & replacing failed drives fairly quickly) this isn't really that big of an issue.

See: http://blog.kj.stillabower.net/?p=93

specifically, the graph in that post of failure probability over time.

In 10 years, you are very likely to have replaced the drat thing altogether. Read the blog post for the methodology, but basically it accounts for drive aging too, something most back-of-the-napkin calculations do not. I would feel extremely comfortable with a RAID6 setup as large as 16 drives, and for MP3s and XviDs I'd still be comfortable as large as 20 drives or more.
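For a back-of-the-napkin sanity check on the URE side of this: consumer drives are typically specced at one unrecoverable read error per 10^14 bits. Rebuilding a 10x1.5TB raidz2 means reading the 9 surviving drives, roughly 9 x 1.5TB = 13.5TB, which is about 1.1 x 10^14 bits, so naively you'd expect about one URE per full rebuild. That's exactly why single parity gets scary at this size, and why the second parity drive plus scrubbing (which catches errors while redundancy is still intact) changes the picture.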

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

riichiee posted:

Then, when the file server is accessed, the machine wakes up, serves the files, stays on for another 30 minutes (or whatever) then goes back to sleep.

Is this possible?
Looks like you're looking for Wake-on-LAN. You'll need a NIC and BIOS that support it.
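Once that's enabled, waking the box from another machine is a one-liner (a sketch; the MAC address is made up, and the tool ships as wakeonlan or etherwake depending on the platform):

  wakeonlan 00:1a:2b:3c:4d:5e

The go-back-to-sleep-after-30-minutes half is the harder part; you'd need something watching network/disk activity that triggers the OS suspend when things go quiet.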

roadhead
Dec 25, 2001

KennyG posted:

With regular maintenance (re: scrubbing & replacing failed drives fairly quickly) this isn't really that big of an issue.

See: http://blog.kj.stillabower.net/?p=93

specifically, the graph in that post of failure probability over time.

In 10 years, you are very likely to have replaced the drat thing altogether. Read the blog post for the methodology, but basically it accounts for drive aging too, something most back-of-the-napkin calculations do not. I would feel extremely comfortable with a RAID6 setup as large as 16 drives, and for MP3s and XviDs I'd still be comfortable as large as 20 drives or more.

Two of my drives have a few megabytes of data to be recovered every time I scrub, and it's always the same two drives. Generally I've transferred anywhere from 100 to 500 gigs of files to the array between scrubs, so statistically speaking it's really not all that much data, and it IS repaired, but I'm wondering if I should be placing the order for 2 hot spares sooner rather than later :/

KennyG
Oct 22, 2002
Here to blow my own horn.
What I would likely do is scrub, replace one drive, rebuild, replace the second drive, rebuild, and if you don't have hot spares, re-enable one (or both) of the old drives as your hot spares. If it's consistently the same two drives, just remove them from the equation.
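Before pulling them, it's worth checking whether SMART agrees those two drives are actually failing (a sketch; the device name is invented, and smartmontools is in every ports/package tree):

  smartctl -a /dev/ad4 | egrep 'Reallocated|Pending|Uncorrect'

Climbing Reallocated_Sector_Ct or Current_Pending_Sector counts on the same two disks every scrub would settle it.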

Scuttle_SE
Jun 2, 2005
I like mammaries
Pillbug
I'm currently eyeing the Qnap TS-410, it looks great on paper, but I haven't really been able to find any benchmarks or similar...

I was thinking of using it as an iSCSI target for my ESXi home lab, so it would be nice if it could push at least 50-60MB/s when reading...

So...has anyone had any hands-on experience with this thing?

PlasticSpoon
Apr 2, 2004
I currently have three 1TB Western Digital Caviar Green drives and am probably going to get another 3 or 4. Can I get the Caviar Black model, or will that throw off the RAID they will all be on?

Jonny 290
May 5, 2005



[ASK] me about OS/2 Warp
That will be fine. You won't get the speed bonus from the Black drives, but they won't slow the setup down or anything.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Gyshall posted:

I'm rolling my own NAS to hold most of my media and I was wondering if any goons had a recommendation for a decent case. I'm looking for something small that could hold four hard drives. I'll be running the OS off a USB drive, but I figured I'd plan the actual setup around the case first. CD drive bays aren't really that important, and ideally I'd like the thing to look decent since it will be sitting in either my living room or near my entertainment center.

edit: nevermind, I actually read the OP!

edit2: Well, actually, if anyone had a recommendation for a smaller form factor tower, I'm all ears... I can't find anything in my price range that is halfway decent.

My setup is a bit hackish, but I'm pretty proud of it:
http://www.newegg.com/Product/Product.aspx?Item=N82E16811144140
http://www.newegg.com/Product/Product.aspx?Item=N82E16817121404
My motherboard has 4 SATA ports, and I got a 2-port SATA controller. I Dremelled out everything down to the handle, creating enough room for the 5-in-3 drive cage. I had to get a 1U server power supply to fit in there (only 250W, if I recall correctly). It hums along nicely, and it's small enough that I can take it to LAN parties. It's also really loving heavy, because it has six drives.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
You guys with the WD Green drives, do you have constant IO pounding the array, or if not, did you adjust the idle timer that parks the heads?
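For anyone unsure whether their Greens are affected, the parking shows up as a rapidly climbing Load_Cycle_Count in SMART (a sketch; the device name is invented):

  # heads parking constantly = this counter climbs by the thousands
  smartctl -A /dev/sda | grep Load_Cycle_Count

The timer itself can be changed with WD's wdidle3 DOS utility; the drives are only rated for around 300k load cycles, so it's worth a look.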

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Combat Pretzel posted:

You guys with the WD Green drives, do you have constant IO pounding the array, or if not, did you adjust the idle timer that parks the heads?
I have 4 of them in a 4-drive raidz1 with ~8 ESXi VMs running, as well as general CIFS for my house. I made zero changes to them, and am so far happy.

roadhead
Dec 25, 2001

adorai posted:

I have 4 of them in a 4-drive raidz1 with ~8 ESXi VMs running, as well as general CIFS for my house. I made zero changes to them, and am so far happy.

10 here in a Raid-Z2 and also running everything on the defaults.

I don't "constantly" pound the array though. Its either sustained writes for a particular period, or very light reads over the network (either DLNA or SMB)

roadhead fucked around with this message at 14:19 on Oct 19, 2009

gregday
May 23, 2003

zfs.macosforge.org posted:

The ZFS project has been discontinued. The mailing list and repository will also be removed shortly.

Welp, ZFS on OS X is never gonna loving happen.

Phatty2x4
Dec 11, 2002
Masseous Gaseous Produceous

Vinlaen posted:

What's the general consensus on storing non-critical data like movies and TV shows?

I'm torn between no RAID, RAID 5, and RAID 6.

I'm only talking about movies that I've ripped (so I have the physical media as my backup), TV shows, and application/game ISOs (which again, I have the physical media for).

This is data I can afford to lose but of course I'd prefer that I don't.

If I don't go with any RAID and a HD dies, then I lose everything on that drive.

If I go with RAID 5 and a HD dies, I risk losing the whole array during the rebuild (e.g. a second drive failing).

If I go with RAID 6 I will lose a LOT of space but am relatively safe. I'm not certain that I need this kind of safety, but I'm not sure.

The array size I'm talking about is about 5x 1TB drives, and my non-critical media is currently taking up about 2.5 TB of space.

Have you looked at other options such as disParity (http://www.vilett.com/disParity/forum/) or FlexRAID (http://www.openegg.org/FlexRAID.curi)?
