|
Weinertron posted:^^^^^ The speed drop might just be a side effect of your host computer's disks. Most drives will transfer 60 MB/sec pretty easily if it's large files on the outer sectors, but as the files fragment, as you move smaller files, or as you move toward the inner tracks, the drives will slow down quite a bit. I know that internal disk-to-disk transfers of files on my computer will go anywhere from 150 MB/sec (SSD to RAID array) all the way down to ~15 MB/sec (slow-assed disk to RAID array).
|
# ? Oct 5, 2009 06:57 |
|
Can you expand a RAID-Z or RAID-Z2 array? I'm pretty sure I'm going to go the hardware route (Dell Perc 6/i + RAID-6) but I've thought about ZFS and RAID-Z2 in the past...
|
# ? Oct 5, 2009 11:35 |
|
Vinlaen posted:Can you expand a RAID-Z or RAID-Z2 array? No, unless you gradually swap every single existing HD for a larger one, resilvering in between. But I doubt that's what you want.
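For the record, the swap-everything route looks something like this (pool name "tank" and all device names are placeholders, so adjust for your system; this is a sketch, not gospel):

```shell
# Grow a raidz vdev by replacing each disk with a bigger one, one at a
# time, waiting for the resilver to finish in between.
zpool replace tank c1t1d0 c2t1d0   # old disk -> new, larger disk
zpool status tank                  # wait until the resilver completes
# ...repeat for every remaining disk in the vdev...
# Only once ALL disks are swapped does the extra capacity appear
# (on newer builds you may also need: zpool set autoexpand=on tank)
```

Tedious, and you're buying a full set of drives to gain any space at all.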
|
# ? Oct 5, 2009 13:06 |
|
After learning about it in this thread (and subsequently hearing about it nonstop), I just wrote a paper on ZFS for my Digital Forensics class. http://www.opensolaris.org/os/community/zfs/docs/zfs_last.pdf This is a really good, (fairly) simple read from Sun that highlights a lot of ZFS's strengths and why it's awesome. To be completely honest, I had never heard of ZFS before I found this thread, and after finishing my research/paper, I want the entire world to run on ZFS.
|
# ? Oct 6, 2009 08:57 |
|
Darn. The fact that you can't expand pools/arrays in ZFS is a deal breaker for me as I like to upgrade my storage every few months or every year, etc.
|
# ? Oct 6, 2009 11:50 |
|
Vinlaen posted:Darn. The fact that you can't expand pools/arrays in ZFS is a deal breaker for me as I like to upgrade my storage every few months or every year, etc. You can expand a ZFS POOL with an arbitrary number of disks, but if you're talking about Drobo/unRAID-like functionality where you can just add a disk to an existing VDEV (well, unless you're going from non-mirrored to mirrored or striped), then no, you're out of luck. You can, however, set up another array inside the same pool. So if you had a 4-disk vdev with parity, you could add an entire second 4-disk array to the same pool. Yeah, it's a waste of disks, but it's an option. Adding a single disk at a time isn't in the cards for the foreseeable future. Here's an article from the Sun guys that talks more about it: http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z
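For anyone curious, adding that second array really is a one-liner (pool name and device names here are made up):

```shell
# Grow the pool by adding a whole second raidz vdev alongside the first.
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
# The pool now stripes across both raidz vdevs. You still can't add a
# single disk to the existing vdev, but new writes spread across both.
```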
|
# ? Oct 6, 2009 12:03 |
|
A constant theme throughout the thread is that ZFS is the second coming and it is glorious, but what are the disadvantages? Expanding the pool/array was covered; what other problems/shortcomings are there to using ZFS/RAID-Z?
|
# ? Oct 6, 2009 12:19 |
|
Suniikaa posted:A constant theme throughout the thread is that ZFS is the second coming and it is glorious, but what are the disadvantages? After thinking about it, outside of the expansion issue, the biggest downside is that it's only supported under Solaris and FreeBSD. There's a FUSE port for Linux too, I guess, but last I looked it was still kind of meh. It's not hard to use per se, but getting your head around the pool management takes some getting used to.
|
# ? Oct 6, 2009 12:38 |
|
Expansion is coming
|
# ? Oct 6, 2009 12:42 |
|
adorai posted:Expansion is coming Some of the enabling functionality is coming at some point, once they deem it stable. Deduplication and vdev removal are up first, both of which depend on it. Then encryption, then maybe block rebalancing, and then, maybe a long way down the road, RAID-Z expansion.
|
# ? Oct 6, 2009 14:25 |
|
Methylethylaldehyde posted:The speed drop might just be a side effect of your host computer's disks. Most drives will transfer 60 MB/sec pretty easily if it's large files on the outer sectors, but as the files fragment, as you move smaller files, or as you move toward the inner tracks, the drives will slow down quite a bit. I know that internal disk-to-disk transfers of files on my computer will go anywhere from 150 MB/sec (SSD to RAID array) all the way down to ~15 MB/sec (slow-assed disk to RAID array). All of the disks involved were Seagate 7200.12 1TB drives, including the drive I was transferring from. I have two in my machine and the NAS has four. They've been really fast except for this one time, so I'm hoping it was just an isolated problem. These drives have been fast as all hell for sustained file transfers; it's impressive how quick a 2-platter TB drive is.
|
# ? Oct 7, 2009 23:31 |
|
Wompa164 posted:After learning about it in this thread (and subsequently hearing about it nonstop), I just wrote a paper on ZFS for my Digital Forensics class. Has there been any progress on ZFS forensics? Last I checked, basic deleted file recovery was as far as anyone had gotten and forensic RAID-Z reconstruction was a pipe dream.
|
# ? Oct 7, 2009 23:38 |
|
I'm quite out of the loop and I'd appreciate it if someone could help me: If I get a NAS like a QNAP TS-239 Pro can I access it from multiple PCs as though it's just another folder on the PC? If so, is it possible to set it up so that one can read and write to the NAS, but not delete (or only as an admin)?
|
# ? Oct 8, 2009 01:54 |
|
What about ZFS performance compared to Linux software RAID? Are there any comparisons out there?
|
# ? Oct 8, 2009 15:54 |
|
Does anyone have any experience with the Iomega StorCenter Pro ix4-100 NAS? I'm considering picking up a 2 TB unit to replace our aging HP StorageWorks array that has incredibly expensive replacement hard drives.
|
# ? Oct 8, 2009 16:00 |
|
I'm rolling my own NAS to hold most of my media and I was wondering if any goons had a recommendation for a decent case. I'm looking for something small that could hold four hard drives. I'll be running the OS off a USB drive, but I figured I'd plan the actual setup around the case first. CD drive bays aren't really that important, and ideally I'd like the thing to look decent since it will be sitting in either my living room or near my entertainment center. edit: nevermind, I actually read the OP! edit2: Well, actually, if anyone had a recommendation for a smaller form factor tower, I'm all ears... I can't find anything in my price range that is halfway decent. Gyshall fucked around with this message at 17:25 on Oct 8, 2009 |
# ? Oct 8, 2009 17:03 |
|
network.guy posted:If I get a NAS like a QNAP TS-239 Pro can I access it from multiple PCs as though it's just another folder on the PC? If so, is it possible to set it up so that one can read and write to the NAS, but not delete (or only as an admin)?
|
# ? Oct 8, 2009 17:41 |
|
http://www.tuxradar.com/content/debian-gives-freebsd-some-love

quote:the upcoming release of Debian, codenamed Squeeze, will be available in a juicy new FreeBSD flavour alongside the regular Linux version.

apt-get install zfs
|
# ? Oct 8, 2009 20:26 |
|
Just got these in my hot little mitts within the last hour. Had to rush back to work before USPS got there, though. I need more bits and pieces from Monoprice to get power to the fans and drives. So: a 9-drive raidz1 pool with a hot spare, or a 10-drive raidz2 pool? Same effective space either way, but reading the ZFS manual just now, the max "recommended" size of a vdev is 9 disks. roadhead fucked around with this message at 20:59 on Oct 13, 2009 |
# ? Oct 13, 2009 20:17 |
|
roadhead posted:So a 9 drive raidz1 pool with a hot-spare, or a 10 drive raidz2 pool? Same effective space either way, but reading the ZFS manual just now the max "recommended" size of a vdev is 9 disks.
|
# ? Oct 13, 2009 23:10 |
|
roadhead posted:So a 9 drive raidz1 pool with a hot-spare, or a 10 drive raidz2 pool? Same effective space either way, but reading the ZFS manual just now the max "recommended" size of a vdev is 9 disks. Disclaimer: not a real ZFS/Solaris expert... but I built an OpenSolaris-based raidz setup last month out of 4x 1TB drives (two mirrored IDE drives as boot; nice on the CF adapter, have been looking at those...). Anyway, from all the reading I did, even industrial applications were making their zpools out of sets of only 7-drive raidz1 or 8-drive raidz2. I think your best bet is either the 9-drive raidz1 with a hot spare, or else creating two separate raidz arrays and attaching them to the same zpool. I would probably try the latter. By the way, I'm the friend of Weinertron's with the 4x 7200.12 drive setup, and I think he was just hitting limitations of his local disk, or else a massive number of small files during the transfer, as I've been able to stably saturate a gigabit connection reading from the pool on a regular basis, writing to a single local drive. I'll be putting a second NIC into the system soon for duplexing...
|
# ? Oct 13, 2009 23:20 |
|
roadhead posted:
Given the failure stats for RAID5 earlier in the thread, you want to use RAID6/raidz2. The other problem with 8+ drives is that you end up with a huge chance of a non-recoverable error even with 2 parity drives. Honestly, I'd take the space hit and make it 2 RAIDZ2 arrays. 9TB of useful space, and it's about as fault-tolerant as you're ever going to get without taping it and hiding it in Iron Mountain. The lost capacity won't really cause many problems when you can just add in another vdev made of 2TB drives six months from now when they're $100 each.
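If you want to sanity-check the "huge chance" claim yourself, here's a rough back-of-the-envelope calculation. It assumes the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 1e14 bits read; real drives vary, and ZFS can often shrug off a URE it can reconstruct, so treat the number as illustrative:

```python
# Back-of-the-envelope odds of hitting at least one unrecoverable read
# error (URE) while rebuilding an array, assuming the commonly quoted
# consumer spec of 1 URE per 1e14 bits read (illustrative only).

URE_RATE = 1e-14  # probability of a URE per bit read

def rebuild_ure_probability(n_surviving_drives, drive_tb):
    """Chance of one or more UREs while reading every surviving drive
    end-to-end during a rebuild."""
    bits_read = n_surviving_drives * drive_tb * 1e12 * 8
    return 1 - (1 - URE_RATE) ** bits_read

# e.g. a 9-disk raidz1 of 1.5 TB drives after one failure: 8 survivors
print(f"{rebuild_ure_probability(8, 1.5):.0%}")  # roughly 62%
```

Which is why single-parity with big modern drives makes people nervous, and why raidz2 is the usual recommendation at this scale.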
|
# ? Oct 14, 2009 00:11 |
|
Methylethylaldehyde posted:The other problem is with 8+ drives is that you end up with a huge chance of a non-recoverable error even with 2 parity drives. With regular scrubbing, I'm sure he'll be ok. Plus, since this is almost certainly going to be filled with MP3s and xvids, who gives a poo poo about a URE? ZFS will keep rebuilding. Although your argument re: expandability is decent, 2 4+1 arrays will be easier to expand down the road with another 4+1.
|
# ? Oct 14, 2009 00:42 |
|
adorai posted:With regular scrubbing, I'm sure he'll be ok. Plus, since this is almost certainly going to be filled with MP3s and xvids, who gives a poo poo about a URE? ZFS will keep rebuilding. Although your argument re: expandability is decent, 2 4+1 arrays will be easier to expand down the road with another 4+1. I bought a case with twice the drive bays I wanted, to make expansion easier in the future. I went with a single RaidZ2 - after formatting and everything it's over 10 TB, almost 11 TB usable.
|
# ? Oct 14, 2009 12:19 |
|
I'm just about ready to set up my Dell Perc 6/i card with RAID-6, but I figured I'd ask this anyway... Assuming you're running a decent system (CPU+RAM), what kind of performance difference is there between a Perc 6/i and ZFS RAID-Z2?
|
# ? Oct 14, 2009 17:38 |
|
I'm currently deciding whether to run a separate file server/torrent box. My main concern is power usage. I was hoping to set up some kind of server on my old AMD 3800+ (maybe FreeNAS, although I want to run torrents as well, and I'm not 100% sure it supports that) that basically goes to sleep when it's not in use. Then, when the file server is accessed, the machine wakes up, serves the files, stays on for another 30 minutes (or whatever), then goes back to sleep. Is this possible? Does anyone run anything similar? And is it actually worth the hassle?
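It's doable in principle with Wake-on-LAN plus an idle-suspend script, though the "wake on access" part usually ends up being "wake it manually before you need it". A rough Linux-flavored sketch (interface name and MAC address are placeholders, and tool names vary by distro):

```shell
# On the server: enable Wake-on-LAN on the NIC (magic packet mode)
ethtool -s eth0 wol g
# ...then suspend after ~30 idle minutes via a cron/systemd check that
# looks at network traffic or open SMB sessions before calling
# "systemctl suspend" (that logic is up to you).

# From a client: wake the server before mounting the share
wakeonlan 00:11:22:33:44:55   # the server NIC's MAC address
```

Whether it's worth the hassle depends on your idle power draw; an old AMD 3800+ box idling 24/7 adds up.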
|
# ? Oct 15, 2009 22:48 |
|
Vinlaen posted:I'm just about ready to setup my Dell Perc 6/i card with RAID-6, but I figured I'd ask this anyways... I've got the Phenom II 705e and 4 gigs of DDR2-800. The highest read bandwidth I've seen reported by "zpool iostat 3" is 300 MB/sec, and that was during a "zpool scrub" of the array. Writes are obviously slower, but I have trouble getting them to peak and catching it; I can't push files to the box fast enough via Samba to get anywhere near stressing the array. I need some sort of synthetic HD benchmark for FreeBSD.
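In the absence of a proper benchmark tool, a crude dd sequential-write test works in a pinch. The path here is a stand-in (point it at the pool, e.g. /tank/ddtest.bin), and for honest numbers the file should be bigger than RAM so the ZFS ARC can't soak up the whole write:

```shell
# Crude sequential-write smoke test with dd (in the base system).
# 64 MB only -- bump "count" well past your RAM size for real numbers,
# and write to the pool rather than /tmp.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1048576 count=64
# dd reports bytes/sec when it finishes; clean up afterwards with:
# rm /tmp/ddtest.bin
```

Compressible zeroes also flatter any compression-enabled dataset, so take the result as a ballpark, not a benchmark.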
|
# ? Oct 16, 2009 14:49 |
|
Methylethylaldehyde posted:Given the failure stats for RAID5 given earlier in the thread, you want to use RAID6/raidz2. The other problem is with 8+ drives is that you end up with a huge chance of a non-recoverable error even with 2 parity drives. With regular maintenance (re: scrubbing and replacing failed drives fairly quickly) this isn't really that big of an issue. See: http://blog.kj.stillabower.net/?p=93 - specifically, the graph there. In 10 years, you are very likely to have replaced the drat thing altogether. Read the blog post for the methodology, but basically it accounts for drive aging too, something most back-of-the-napkin calculations do not. I would feel extremely comfortable with a RAID6 setup as large as 16 drives, and for MP3s and XviDs I'd still be comfortable at 20 or more.
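A similar back-of-the-napkin number for the triple-failure case: with raidz2, data loss from whole-drive failures needs a third drive to die before the resilver finishes. The sketch below assumes independent failures at a flat annual failure rate, so it deliberately ignores the aging effects the blog post models; the 5% AFR and 24-hour resilver are made-up inputs:

```python
# Crude triple-failure estimate for a 10-disk raidz2: one drive has
# died; what are the odds that two or more of the nine survivors also
# die before the resilver finishes? Flat AFR, independent failures --
# none of the aging effects the blog post accounts for.

from math import comb

def p_extra_failures(n_survivors, k_needed, afr, resilver_hours):
    # per-drive chance of dying inside the resilver window
    p = 1 - (1 - afr) ** (resilver_hours / (24 * 365))
    # binomial tail: k_needed or more of the survivors die in the window
    return sum(comb(n_survivors, k) * p**k * (1 - p)**(n_survivors - k)
               for k in range(k_needed, n_survivors + 1))

# 9 survivors, 2 more deaths needed, 5% AFR, 24-hour resilver
print(p_extra_failures(9, 2, 0.05, 24))  # well under one in a million
```

Which lines up with the graph's conclusion: with double parity and prompt replacement, whole-drive failure is not the thing that kills you.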
|
# ? Oct 16, 2009 15:02 |
|
riichiee posted:Then, when the file server is accessed, the machine wakes up, serves the files, stays on for another 30 minutes (or whatever) then goes back to sleep.
|
# ? Oct 16, 2009 15:04 |
|
KennyG posted:With regular maintenance (re:scrubbing & replacing failed drives fairly quickly) this isn't really that big of an issue. Two of my drives have a few megabytes of data to be recovered every time I scrub, and it's always the same two drives. Generally I've transferred anywhere from 100 to 500 gigs of files to the array between scrubs, so statistically speaking it's really not all that much data, and it IS repaired, but I'm wondering if I should be placing the order for 2 hot spares sooner rather than later :/
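For anyone following along, the scrub-and-watch routine looks like this (pool name "tank" is a placeholder):

```shell
# Kick off a scrub and see which disks are accumulating errors.
zpool scrub tank
zpool status -v tank   # per-device READ/WRITE/CKSUM columns; a disk
                       # that keeps racking up CKSUM errors is suspect
zpool clear tank       # reset the counters after swapping/testing a disk,
                       # so you can tell new errors from old ones
```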
|
# ? Oct 16, 2009 15:20 |
|
What I would likely do is scrub, replace one drive, rebuild, replace the second drive, rebuild, and if you don't have hot spares, re-enable one (or both) of the old drives as my hot spares. If it's consistently the same two drives, just remove them from the equation.
|
# ? Oct 16, 2009 17:11 |
|
I'm currently eyeing the QNAP TS-410. It looks great on paper, but I haven't really been able to find any benchmarks or similar... I was thinking of using it as an iSCSI target for my ESXi home lab, so it would be nice if it could push at least 50-60 MB/s when reading... So... has anyone had any hands-on experience with this thing?
|
# ? Oct 16, 2009 17:58 |
|
I currently have three 1TB Western Digital Caviar Green drives and am probably going to get another 3 or 4. Can I get the Caviar Black model, or will that throw off the RAID they will all be on?
|
# ? Oct 17, 2009 04:43 |
|
That will be fine. You won't get the speed bonus from the Black drives, but it won't slow the setup down or anything.
|
# ? Oct 17, 2009 06:49 |
|
Gyshall posted:I'm rolling my own NAS to hold most of my media and I was wondering if any goons had a recommendation for a decent case. I'm looking for something small that could hold four hard drives. I'll be running the OS off a USB drive, but I figured I'd plan the actual setup around the case first. CD drive bays aren't really that important, and ideally I'd like the thing to look decent since it will be sitting in either my living room or near my entertainment center. My setup is a bit hackish, but I'm pretty proud of it: http://www.newegg.com/Product/Product.aspx?Item=N82E16811144140 http://www.newegg.com/Product/Product.aspx?Item=N82E16817121404 My motherboard has 4 SATA ports, so I got a 2-port SATA controller. I dremelled out everything down to the handle, creating enough room for the 5-in-3. I had to get a 1U server power supply to fit in there (only 250 W, if I recall correctly). It hums along nicely, and it's small enough that I can take it to LAN parties. It's also really loving heavy, because it has six drives.
|
# ? Oct 18, 2009 07:26 |
|
You guys with the WD Green drives, do you have constant IO pounding the array, or if not, did you adjust the idle timer that parks the heads?
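For reference, the timer gets adjusted with WD's DOS utility wdidle3, run from a DOS boot disk; flags below are as documented by WD, but double-check against your drive model before flashing anything:

```shell
# WD's DOS utility for the Green drives' head-park timer
# (factory default is ~8 seconds, which racks up Load_Cycle_Count
# fast under light-but-constant NAS IO).
WDIDLE3 /R      # report the current timer
WDIDLE3 /S300   # set it to 300 seconds
WDIDLE3 /D      # or disable parking entirely
```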
|
# ? Oct 18, 2009 23:12 |
|
Combat Pretzel posted:You guys with the WD Green drives, do you have constant IO pounding the array, or if not, did you adjust the idle timer that parks the heads? I have 4 of them in a 4 drive raidz1 with ~8 ESXi VMs running as well as general CIFS for my house. I made zero changes to them, and am so far happy.
|
# ? Oct 19, 2009 00:16 |
|
adorai posted:i have 4 of them in a 4 drive raidz1 with ~8 ESXi VMs running as well as general CIFS for my house. I made zero changes to them, and am so far happy. 10 here in a RAID-Z2, also running everything on the defaults. I don't "constantly" pound the array, though; it's either sustained writes for a particular period, or very light reads over the network (either DLNA or SMB). roadhead fucked around with this message at 14:19 on Oct 19, 2009 |
# ? Oct 19, 2009 14:16 |
|
zfs.macosforge.org posted:The ZFS project has been discontinued. The mailing list and repository will also be removed shortly. Welp, ZFS on OS X is never gonna loving happen.
|
# ? Oct 25, 2009 02:57 |
|
Vinlaen posted:What's the general consensus on storing non-critical data like movies and TV shows? Have you looked at other options such as disParity (http://www.vilett.com/disParity/forum/) or FlexRAID (http://www.openegg.org/FlexRAID.curi)?
|
# ? Oct 25, 2009 06:29 |