|
Captain Tram posted:I might not be exactly sure what I'm looking for. I'm trying to expand my total storage space while at the same time giving myself some level of fault tolerance. I plan to connect the drives to my network, but it seems like the cheapest way to do that is to use a DAS and plug it into my existing server, rather than build/buy a whole new server/NAS. This plug would have to be USB2, because that's pretty much all I have available to me in the mini, unless I wanted to invest in FW800. Ok, so your server is a Mac mini? If that's the case, then yeah, you need either a DAS or a NAS of some sort. I don't think a DAS is necessarily going to be cheaper than building a cheap NAS, which will give you more flexibility in the long run anyway. I'll leave it up to someone else to recommend parts, but if it were me, I'd build a $200-400 PC, put some flavor of Linux on it, and be done with it. Optionally, OpenIndiana or whatever the best OS for ZFS is nowadays.
|
# ? Jul 26, 2011 17:55 |
|
|
jeeves posted:According to FreeNAS's reporting, all 4GB are being used constantly, with no spiking or anything. Just a constant 4GB, over many hours, with 1GB put in reserve by the system I think. That 4GB in use could mean anything. Actually dig down and figure out where the memory is being used. If the memory use is from a leak or something, buying more RAM will only delay the slowdown a little longer. LamoTheKid posted:Are BitTorrent downloads directly to a ZFS pool a bad idea? Would I be better served using my 30GB SSD as a ZIL/L2ARC disk? Or should I just not worry about this? Assuming your BitTorrent client doesn't download sequentially (and it really shouldn't be), its access pattern will be hard for a copy-on-write filesystem to deal with. If you're only downloading/seeding at regular broadband speeds (i.e. much slower than your disk speeds) then you shouldn't worry about it too much. Just don't let that disk get too full.
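If you do end up trying the 30GB SSD as a separate log and/or read cache, the zpool commands are simple; everything below (pool name, partition paths) is a placeholder sketch, not this poster's actual setup:

```shell
# Device paths and pool name are examples -- adjust to your system.
# A small partition is plenty for a SLOG; the rest can be L2ARC.

zpool add tank log /dev/disk/by-id/ata-EXAMPLE_SSD-part1    # dedicated ZIL (SLOG)
zpool add tank cache /dev/disk/by-id/ata-EXAMPLE_SSD-part2  # L2ARC read cache

zpool status tank   # confirm the new log/cache vdevs show up
```

Worth noting that a SLOG only helps synchronous writes, so for ordinary BitTorrent traffic the L2ARC half is the part more likely to matter.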
|
# ? Jul 26, 2011 19:46 |
|
Hello all! I am looking at a backup/centralized storage solution for our small office (5ish people). What I need from the system is:
- RAID 1
- Differential syncing once laptops hit the network
- 2TB would be quite adequate

People don't need to be able to work on files stored on the NAS, only grab them and work on them on their own computers. What I've been looking at is the Seagate BlackArmor and the LG Super Multi N2A2 NAS, based on some reviews I've seen. My main question, though, is on the software side: I've looked at SyncBack and it appears that might work, but how is the stuff that comes bundled with these types of units? And if it's good enough, which manufacturer is the best? Any guidance is much appreciated!
|
# ? Jul 26, 2011 20:47 |
|
Background: Ubuntu 11.04 64-bit in a Norco 4220 case, 2x 64GB Kingston SSDs for boot (in software RAID-1), 6x 1.5TB WD and 6x 2TB Samsung F4s.

Got my replacement 2TB Samsung F4 drive in. RAID 5 recovery took about 6-7 hours (unmounted). It seems like with these drives it IS possible to set CCTL (their version of WD's TLER), although it is a volatile setting and loses its value when the system is turned off (but not hot rebooted?). Setting this value does NOT work on the drives if they are in an array, so I first had to deactivate the array (after deactivating the LVM volume on top of it), and then the command worked. So, I added a script to /etc/init.d/ to run 'smartctl -l scterc,70,70 <device>' and ran update-rc.d with a priority of 2; that set the value for all 6 of the Samsung drives (using /dev/disk/by-id/), so hopefully it will set those values upon boot, before mdadm assembles the array (I checked, and the init.d script for mdadm has a priority of 3). The only problem is the drives don't seem to support the smartctl command to GET the ERC value, so I'm just going to have to hope the init script works.

Also, kinda stoked that hot-swapping the drives on the Norco case works fine. When the 2TB drive died I immediately shut the system down and ejected all the drives, and had it back up a few days later (with the drives still not plugged in). After the replacement drive came in, I was able to plug all 6 drives back in, and mdadm assembled the degraded array with 5 of 6 devices automatically.

Next up: Finish transferring all the files from the 6x 1.5TB array to the new array, then recreate the 6x 1.5TB array (so I can use 512k chunk size instead of 128k), then add it to the LVM volume. Thought I'd share in case it helps anyone else down the road.
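For anyone wanting to replicate the boot-time ERC setup, a minimal init script in this spirit might look like the following; the script name and device-ID glob are illustrative, not atomjack's exact script:

```shell
#!/bin/sh
# /etc/init.d/set-erc -- set SCT ERC (CCTL) to 7 seconds on each data
# drive before mdadm (priority 3) assembles the array.

case "$1" in
  start)
    # Glob is an example; match your actual drives under /dev/disk/by-id/
    for disk in /dev/disk/by-id/ata-SAMSUNG_HD204UI_*; do
      smartctl -l scterc,70,70 "$disk"
    done
    ;;
esac
exit 0
```

Registered with something like `update-rc.d set-erc start 02 S .` so it runs at priority 2, ahead of mdadm's 3. Since these drives apparently won't report the ERC value back, there's no easy way to verify it took effect after boot.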
|
# ? Jul 27, 2011 19:27 |
|
atomjack posted:Got my replacement 2TB Samsung F4 drive in. Raid 5 recovery took about 6-7 hours (unmounted). It seems like with these drives it IS possible to set cctl (their version of WD's TLER), although it is a volatile setting and loses it's value when the system is turned off (but not hot rebooted?). Setting this value does NOT work on the drives if they are in an array, so I first had to deactivate it (after deactivating the LVM array on top of it), and then the command worked. So, I added a script to /etc/init.d/ to run 'smartctl -l scterc,70,70 <device>' and ran update-rc.d with a priority of 2, that set the value for all 6 of the Samsung drives (using /dev/disk/by-id/), so hopefully it will set those values upon boot, before mdadm assembles the array (I checked and the init.d script for mdadm has a priority of 3). Hmm. I've been running several F4's in a mdadm RAID5 on Ubuntu 11.04 without bothering with this and haven't ever had a problem. Now I'm wondering if I should bother. When I originally got them, I thought I'd get around to researching if it was necessary, and if so, how to do it...but then I got sidetracked and never did it.
|
# ? Jul 27, 2011 19:54 |
|
Thermopyle posted:Hmm. I've been running several F4's in a mdadm RAID5 on Ubuntu 11.04 without bothering with this and haven't ever had a problem. Now I'm wondering if I should bother. When I originally got them, I thought I'd get around to researching if it was necessary, and if so, how to do it...but then I got sidetracked and never did it.
|
# ? Jul 27, 2011 20:05 |
|
atomjack posted:It's possible it's not needed, but only two days after I got the drives and put them in RAID5, one of the drives popped out of the array. Ran smartctl on it which reported bad sectors. So, it could just be that that drive was bad, or maybe it was bad AND the cctl kicked in and dropped it out of the array too soon. I didn't want to take any chances. I'll check back in in like a week and report on the status of the array. Well, I'll give you my experience: I've never had a drive get popped from an array, and I've got 3 F4's, two of which have been in there since right about the time the drives came out. I've also got 6 2TB F3's in the server, and still haven't seen a problem. I'm guessing the cctl thing isn't needed.
|
# ? Jul 27, 2011 21:39 |
|
I think md is pretty forgiving on drive error times. Due to the embarrassingly ghetto cabling/controller setup I have mine living on right now, I pop DMA errors in dmesg every once in a while, but they never drop the drive from the array altogether. The worry of bit rot, plus the hilariously bad way this is cabled right now, is part of why I really want to build a MicroServer + ZFS box to eventually replace it.
|
# ? Jul 27, 2011 22:47 |
|
Can someone explain to me why most network attached drives are so slow to write to? That seems to be one of the biggest complaints about NAS drives. What's the bottleneck?
|
# ? Jul 29, 2011 00:48 |
|
Corbet posted:Can someone explain to me why most network attached drives are so slow to write to? That seems to be one of the biggest complaints about NAS drives. Either slow non-gigabit networks or poor performance from software raids and such.
|
# ? Jul 29, 2011 00:59 |
|
jeeves posted:Either slow non-gigabit networks or poor performance from software raids and such. Or even bad cabling. My wireless performance went from 5MB/s to 11MB/s after I rewired my network, and wired went from 78MB/s to 110MB/s. All from replacing one cable to the media server, really.
|
# ? Jul 29, 2011 04:18 |
|
Corbet posted:Can someone explain to me why most network attached drives are so slow to write to? That seems to be one of the biggest complaints about NAS drives. Poor RAID/pseudo-RAID performance (CPU's fault sometimes in software RAID, check CPU usage on the server when you're writing a ton of data to it), or crappy Ethernet performance usually.
|
# ? Jul 29, 2011 04:26 |
|
Basically there are a lot more possible speed bottlenecks with a NAS than when you just attach a second drive to your computer via a SATA cable. That's why people complain: most are used to the transfer speeds of the latter and unaware of the limitations of the former.
|
# ? Jul 29, 2011 05:03 |
|
Corbet posted:Can someone explain to me why most network attached drives are so slow to write to? That seems to be one of the biggest complaints about NAS drives. The cheap NAS units you can buy at retail stores are usually powered by a low power ARM SoC. If you want a high performance NAS then it might be more cost effective to make one yourself.
|
# ? Jul 29, 2011 05:06 |
|
PopeOnARope posted:Or even bad cabling. Was this a cable that was poorly crimped or possibly just bent too many times where it was dropping packets? You say your increase came from when you replaced the cable to the media server. Was that increase only to the media server, or was that from a different device on the network to something other than the media server?
|
# ? Jul 29, 2011 06:05 |
|
Moey posted:Was this a cable that was poorly crimped or possibly just bent too many times where it was dropping packets? The cable was a commercial one with absolutely no noticeable kinks along its length. The increase was basically in communication between my laptop and the server, both over wired and wireless (the wired line for my laptop stayed the same). I haven't bothered to test FTP to my PS3 yet. I might later. It used to top out at around 11MB/s (wired).
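One way to separate cable/NIC trouble from disk trouble is to measure raw TCP throughput with a tool like iperf, which leaves the disks out of the picture entirely; the hostname below is a placeholder:

```shell
# On the media server:
iperf -s

# On the laptop (report in MB/s to match file-copy numbers):
iperf -c mediaserver -f M

# A clean gigabit link should show on the order of 110 MB/s; much less
# suggests cabling, a port negotiating down to 100Mbit, or driver
# problems rather than the disks or RAID.
```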
|
# ? Jul 29, 2011 10:31 |
|
I am also getting slowdowns now that my ZFS pool is getting full. Getting a lot of CPU spikes that stall writes; reads are going full gigabit. ZFS would probably need a defrag and vdev rebalance tool to maintain performance when close to full.
|
# ? Jul 29, 2011 13:51 |
|
conntrack posted:I am also getting slowdowns now that my zfs pool is getting full. Apparently ZFS only likes being at less than 80% full. To be honest, the performance of my NAS with ZFS has been pretty subpar compared to what I'm used to from my recent contract job, which had me working with/editing files via network shares on a shittier server than my home ProLiant, but with Windows software RAID-5. Even after adding another 4GB of RAM to max it out at 8GB, editing/sorting files over the network is much slower. If I didn't already have ~5.5TB of data and nowhere to copy it to, I'd probably just go with Win2008+RAID5 on the thing. Oh well.
|
# ? Jul 30, 2011 09:22 |
|
jeeves posted:Apparently ZFS only likes being at less than 80% full. Did you tweak TCP settings and Samba?
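For reference, the commonly suggested Samba tweaks of that era looked something like the fragment below; these are starting points to benchmark against your own workload, not guaranteed improvements:

```ini
; /etc/samba/smb.conf -- [global] throughput tweaks often suggested
[global]
    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    use sendfile = yes
    read raw = yes
    write raw = yes
```

Measure a large file copy before and after each change; some of these help on one setup and hurt on another.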
|
# ? Jul 30, 2011 20:15 |
|
I'm in the process of setting up a Windows Server 2008 box primarily for fileserving at home. I have in my workstation a Highpoint RocketRAID 2320 8x SATA RAID controller that has been working fine for the past couple of years, with a 3x 1TB RAID0 and a 3x 500GB RAID5. I am planning to move this RAID controller to the server I'm setting up, but I'm unsure which RAID level would be best suited for general-purpose file serving. I've read up on it and have had suggestions saying RAID1+0, but I don't know if I'm willing to sacrifice the cost/storage for the security. I mean, it's just files, right? The server will be streaming content to various devices in the house, and won't see too much writing, maybe 400-500GB a month, so write performance isn't really a limiting factor. These are the pros/cons I have considered; are there any more you can think of?

RAID5:
+ Capacity
+ Price
- Redundancy
- Read/Write performance

RAID1+0:
+ Redundancy
+ Read/Write performance
- Capacity
- Price

I will also be running a RAID1 for files that require some reliability, so that's not an issue.
|
# ? Jul 30, 2011 20:29 |
|
Trapdoor posted:These are the pros/cons I have considered, are there any more you can think of? You're not quite right there.

RAID5 (3 disks):
* 2 disks' capacity
* Tolerates 1 disk failure
* Stripes data across disks with parity for ~2 striped disks' worth of performance
* Costs 3 disks
* Slow to rebuild
* RAID 5 write hole, if you don't have a UPS

RAID1+0 (4 disks):
* 2 disks' capacity
* Tolerates 1 disk failure for sure, maybe 2 if it's set up as a stripe across two mirrors
* Stripes data with no parity for 2 striped disks' worth of performance
* Costs 4 disks
* Rebuilds more quickly
* No write hole

RAID5 (4 disks):
* 3 disks' capacity
* Tolerates 1 disk failure
* Stripes data across disks with parity for ~3 striped disks' worth of performance
* Costs 4 disks
* Slow to rebuild
* RAID 5 write hole, if you don't have a UPS

RAID 5 is best suited. RAID 0, like you have set up now, is a real gamble because it cannot tolerate any drive failures at all.
|
# ? Jul 30, 2011 20:45 |
|
Has anybody used a slim case or barebones PC when building their own NAS/fileserver? I want something with a 2-drive RAID-1 setup (mostly for backup), but would rather have something physically small that doesn't consume much power. Trouble is, those small cases don't seem to have 2 internal 3.5" bays for RAID 1. Since my main purpose is backup, I might just get a 2-drive NAS (something like this), but I'm curious about WHS, and since a good NAS/HDD combo is going to be a few hundred bucks anyway, figured I'd see what my options are for a more powerful custom build.
|
# ? Jul 30, 2011 21:57 |
|
Factory Factory posted:You're not quite right there. The 3 disk RAID0 I run right now contains files that can sustain a loss. When you say the RAID1+0 "rebuilds more quickly", how "quicklier" does it rebuild? What I meant about RAID5 having better capacity is that it grows faster than RAID1+0, since you only need to add one drive to increase the capacity, compared to two with RAID1+0. I am probably going to run a UPS on the system; do you know what the price for a suitable UPS would be? Trapdoor fucked around with this message at 23:03 on Jul 30, 2011 |
# ? Jul 30, 2011 22:47 |
|
Trapdoor posted:The 3 disk RAID0 I run right now contains files that can sustain a loss. 2-5 hours instead of 4-24, depending on the controller and how much data there is. You can get a sufficient one for about $100 for a low-power system with few accessories. APC and Cyberpower are the brands to look for. If you can get one with a USB connector cable, then you can configure your PC to shut down much like you would a laptop, as the UPS appears as an ACPI battery to the computer.
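On Linux, the USB-connected route is usually handled by a daemon like apcupsd rather than the UPS showing up as an ACPI battery; a minimal config sketch (values are illustrative, tune them to your runtime):

```ini
# /etc/apcupsd/apcupsd.conf -- minimal settings for a USB APC unit
UPSCABLE usb
UPSTYPE usb
DEVICE
# Begin a clean shutdown when under 5 minutes of runtime remain:
MINUTES 5
```

Cyberpower units have an equivalent in PowerPanel; either way the point is the same: the server shuts itself down gracefully before the battery runs out.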
|
# ? Jul 30, 2011 23:07 |
|
My eVGA RMA board finally came in, megatron is alive again! Went from an E6600 to an i7-930 with 24GB of RAM. Scrub performance went way up.
code:
1x 6-drive RAID-Z2 (2TB Barracuda LP)
1x 6-drive RAID-Z2 (2TB Barracuda LP)
1x 4-drive RAID-Z1 (3TB Hitachi)
or should I just stick with my original plan and add the last 6-drive RAID-Z2? e: write speeds went way up, maxing out at 120MB/s writes compared to the previous 70 or so. GigE limited! movax fucked around with this message at 05:09 on Aug 3, 2011 |
# ? Aug 3, 2011 03:18 |
|
movax posted:
Isn't RAIDZ performance limited to that of a single drive? If that's the case then you're just comparing the performance of the Hitachi vs. the Barracudas. Also tables.
|
# ? Aug 3, 2011 03:26 |
|
Longinus00 posted:Isn't RAIDZ performance limited to that of a single drive? If that's the case then you're just comparing the performance of the Hitachi vs. the Barracudas. quote:Also tables. fixed, sorry!
|
# ? Aug 3, 2011 04:07 |
|
For anyone that is interested, I've been using ZFS on Linux (http://zfsonlinux.org/) for a little while now, and it seems pretty drat stable, and there doesn't seem to be any performance hit compared to the XFS/mdadm setup that I used to run. It's obviously still beta software, but so far it seems to work fine; zpool/zfs commands behave as expected. I'm a little annoyed that setting up an encrypted array is a choice between:
- cryptsetup each individual drive, enter 5 passwords to mount them all, and then ZFS rebuilds it every boot, OR
- zpool over the bare drives, make a zvol and encrypt that, which then requires manual work in order to increase the size.

I had initially planned to try out btrfs, but it currently doesn't support RAID5/6-style arrays. Also, what are the current recommendations for SATA/SAS controller cards? My Perc-5i's worked out well for a while, but there were some finicky issues that eventually made me stop using them, and a LOT of raid cards use teamdest fucked around with this message at 04:40 on Aug 3, 2011 |
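For anyone weighing the same choice, the second option (zvol + LUKS) looks roughly like this; the pool layout, names, and sizes are made up for illustration:

```shell
# zpool over bare drives, then one encrypted volume on top.
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
zfs create -V 3T tank/securevol

cryptsetup luksFormat /dev/zvol/tank/securevol   # one passphrase total
cryptsetup luksOpen   /dev/zvol/tank/securevol secure
mkfs.ext4 /dev/mapper/secure
mount /dev/mapper/secure /mnt/secure

# Growing it later is the manual part mentioned above: enlarge the
# zvol (zfs set volsize=...), then cryptsetup resize, then resize2fs.
```

The trade-off is exactly as described in the post: one passphrase instead of five, at the cost of an extra filesystem layer that doesn't grow automatically with the pool.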
# ? Aug 3, 2011 04:28 |
|
teamdest posted:For anyone that is interested, I've been using ZFS on Linux (http://zfsonlinux.org/) for a little while now, and it seems pretty drat stable, and there doesn't seem to be any performance hit compared to the XFS/mdadm setup that I used to run. Good to know. I've been using the ZFS-FUSE implementation and the I/O overhead from FUSE is just abysmal.
|
# ? Aug 3, 2011 04:35 |
|
Supposedly each vdev is limited to the performance of one drive. So to get high performance you need several vdevs, and he has vdevs of similar performance, so I would not worry about the difference in speed. More annoying for performance is that ZFS is unable to automatically rebalance old data over all the vdevs. I did some duct-tape script to try and rebalance; don't know if it made it better or worse, though. I copied all folders and deleted the originals. Did some rebalance, but what that does for fragmentation I don't know.
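The copy-and-delete approach amounts to rewriting the data so new allocations spread across all current vdevs. A sketch of the idea follows; on a real pool, DATA would be the pool's mountpoint (a temp directory is used here so the sketch is runnable), and note it temporarily doubles the space each directory uses:

```shell
# Naive vdev "rebalance": copy each top-level directory, remove the
# original, move the copy back. Rewritten blocks get allocated across
# all vdevs that exist now.
DATA=$(mktemp -d)
mkdir -p "$DATA/movies" && echo hi > "$DATA/movies/a.txt"

for dir in "$DATA"/*/; do
    d="${dir%/}"
    cp -a "$d" "$d.rebal" && rm -rf "$d" && mv "$d.rebal" "$d"
done
```

This says nothing about fragmentation, as the poster notes, and it needs enough free space to hold the largest directory twice.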
|
# ? Aug 3, 2011 12:06 |
|
On the topic of ZFS performance: http://constantin.glez.de/blog/2010/06/closer-look-zfs-vdevs-and-performance In early 2010 I did a bunch of research to make sure we didn't gently caress up our thumper at work, and everything I found matches up with what he's saying, so I'd trust that. That will answer questions about RAIDZ performance and mixed vdevs for movax.
|
# ? Aug 3, 2011 16:13 |
|
Last night I rebuilt my NAS, installed OpenIndiana and tried to import my single disk vdev on a USB drive and it throws this error:code:
Here's the output of "zdb -l /dev/dsk/c8t0d0p0" code:
I tried importing with the -f option and it says there's a problem with the vdev, but the drive looks fine (as stated above).
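A few hedged things worth trying when an import fails like this; the pool name and device path below are examples, not taken from the post:

```shell
# Point import at a specific device directory -- labels are sometimes
# found on a different slice/partition node than the default search:
zpool import -d /dev/dsk

# If the pool is listed, try importing by name or by its numeric GUID:
zpool import -f tank

# Compare all four labels on the device; a healthy disk shows the same
# pool GUID in labels 0 through 3:
zdb -l /dev/dsk/c8t0d0p0
```

If the pool was created on another OS (e.g. FreeBSD/FreeNAS), the labels may sit at a different offset than OpenIndiana expects, which can produce exactly this kind of "vdev problem" on a disk that is otherwise fine.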
|
# ? Aug 3, 2011 16:16 |
|
So I'm looking to buy/build a 12TB DAS for video editing/archival footage for my small TV station. My only requirements are that it connect via FireWire 800; I don't think a NAS would be right for our facility, and a SAN/Fibre Channel setup would be cost-prohibitive. The best bang for my buck so far that I've come across would be this offering from OWC (the Mercury Elite Qx2). Does anyone have any other recommendations/thoughts? Based on my calculations I could build the OWC unit myself for about $100 cheaper if I bought the enclosure and drives separately, but I'd be inclined to spend the extra $100 so that I can call OWC when something goes wrong.
|
# ? Aug 3, 2011 19:05 |
|
frogbs posted:Based on my calculations I could build the OWC unit myself for about $100 cheaper if I bought the enclosure and drives separately, but I'd be inclined to spend the extra $100 so that I can call OWC when something goes wrong. If this is for a professional client then yes, the $100 is worth it. Don't ever build things yourself for real jobs like that, especially if the difference is only $100 and you're not paying for it anyhow.
|
# ? Aug 3, 2011 19:34 |
|
jeeves posted:If this is for a professional client then yes the 100$ is worth it. Don't ever build things yourself for real jobs like that, especially if the difference is only 100$ and you're not paying for it anyhow. I agree. It's for my employer, so I think it'd make sense to just pay the extra $100 or so.
|
# ? Aug 3, 2011 19:46 |
|
frogbs posted:I agree. It's for my employer, so I think it'd make sense to just pay the extra $100 or so. How much actual storage do you need? You mentioned 12tb, that is only a 4 bay unit, so 12tb is before any formatting/redundancy. Is this just for archival, or are users going to be working from this as well?
|
# ? Aug 3, 2011 20:56 |
|
Moey posted:How much actual storage do you need? You mentioned 12tb, that is only a 4 bay unit, so 12tb is before any formatting/redundancy. Someone will be working from this as well as storing old footage. I figure that we'd use Raid 5 and thus get around 9tb from an initial 12tb of disks. 8tb to 6tb after Raid 5 would also be sufficient for now as well.
|
# ? Aug 3, 2011 21:07 |
|
You should probably use RAID6.
|
# ? Aug 3, 2011 23:08 |
|
what is this posted:You should probably use RAID6. At which point (4 drive R6) you might as well use a Striped pair of Mirrors to avoid the write-hole and probably boost performance a bit.
|
# ? Aug 3, 2011 23:31 |
|
|
frogbs posted:Someone will be working from this as well as storing old footage. I figure that we'd use Raid 5 and thus get around 9tb from an initial 12tb of disks. 8tb to 6tb after Raid 5 would also be sufficient for now as well. As well as what others said, are you doing off-site backup and storage of this? If this is for a TV station, the array failing and losing data could be devastating. Also, depending on what kind of work they're doing, I wonder about the performance of that enclosure, and whether it will have a negative impact on their work (video editing?).
|
# ? Aug 3, 2011 23:43 |