|
PopeOnARope posted:That's it, I think I'm fed up with using consumer drives inside RAID arrays. You always get what you pay for. I've had my 8 WD RE2 drives running nearly 24/7 for the past 6 months, and except for the 2-3 drives that came DOA or had shipping damage, they've been flying smooth.
|
# ? Jun 13, 2010 00:40 |
|
|
Jamy posted:You always get what you pay for. I've had my 8 WD RE2 drives running nearly 24/7 for the past 6 months, and except for the 2-3 drives that came DOA or had shipping damage, they've been flying smooth. And I do really appreciate the difference in standards; I just can't stomach it when the RAID drives are TWICE the loving price.
|
# ? Jun 13, 2010 00:47 |
|
Jamy posted:You always get what you pay for. I've had my 8 WD RE2 drives running nearly 24/7 for the past 6 months, and except for the 2-3 drives that came DOA or had shipping damage, they've been flying smooth. No matter the vendor or model, someone has had a bad experience while many others have had good ones. Other than the 1.5TB Seagate fiasco, a (consumer) drive is pretty much a drive.
|
# ? Jun 13, 2010 00:51 |
|
Hello again all, it's time to upgrade my raidz OpenSolaris box from 4x1TB to 6x1TB, which means it's outgrown its original Antec Sonata II case. I've been looking through Newegg for a suitable replacement, but it looks like the 10x5.25 bay Rosewill I had picked out months ago has been discontinued, and I'm not finding anything interesting in the mid-tower category. However, I've seen a few 4U rackmounts for <$80 that seem nice, but this would be the first rackmount I've had at home, so what am I getting myself into? At some point a rack may be plausible, but it needs to serve double duty as an HTPC for a little while first. Can I just leave it on a well-ventilated floor or stand? Coffee table or something? Any suggestions for cheap cases? I need at least 7x3.5 bays, plus 3x5.25 or more 3.5 for future expansion...
|
# ? Jun 13, 2010 09:38 |
|
So, I've managed to get my hands on a Dell Perc 6/i card, and I was thinking I'd build a RAID5 array on it with four of either the WD Green 1.5TB drives or the Samsung EcoGreen 1.5TB drives. Now... After some googling and reading various threads I have found two things: * Running a RAID5 array on consumer disks is hard, due to TLER loving things up. * Running a RAID5 array on consumer disks works just fine. What's the real verdict? Will TLER gently caress me, or does it run just fine? Will the variable spin speed on the WD Green disks cause trouble?
|
# ? Jun 13, 2010 17:08 |
|
I have a similar question. I'm planning a machine to use as a freeNAS box, running a 5 drive Raidz array as a start. Is there any issue with using the Green drives like this? I know running them in a "real" raid setup can/does cause issues, but how do they work in a zfs pool?
|
# ? Jun 13, 2010 17:43 |
|
PopeOnARope posted:And I do really appreciate the difference in standards, I just can't stomach it when the RAID drives are TWICE the loving price. I can't argue with that. It is a bit ridiculous.
|
# ? Jun 13, 2010 18:28 |
|
Scuttle_SE posted:So, I've managed to get my hands on a Dell Perc 6/i card, and I was thinking I'd build a raid5-array on it. I was thinking getting four of either the WD Green 1.5TB drives or the Samsung Ecogreen 1.5TB drives. I'm currently running a RAID-5 array on 4x WD and 1x Seagate drives. The WDs have TLER enabled and have since stopped dropping out of the array; the Seagate doesn't. I've had to do two 50-hour rebuilds this week. So uh, yeah. Use WD drives, turn TLER on and head parking off, and hope it doesn't gently caress up? To clarify: the Seagate usually drops from the array when the system comes back from sleep mode and doesn't spin up in time.
|
# ? Jun 13, 2010 19:11 |
|
PopeOnARope posted:So uh yeah. Use WD drives, turn TLER on, and head parking off, and hope it doesn't gently caress up? Can I mess with TLER on newer WD drives? I seem to recall WD locking that down or something... Edit: Found this on Wikipedia:
Scuttle_SE fucked around with this message at 20:48 on Jun 13, 2010 |
# ? Jun 13, 2010 20:43 |
|
So I've pretty much filled my existing RAID-Z2 array (8x1.5TB)...I want to add another 8 drives to the zpool as another vdev (pretty sure you can do that), am I going to kill performance by using 8x2TB as opposed to matching the existing drives (7200rpm Seagate 1.5s)?
|
# ? Jun 13, 2010 23:55 |
|
Scuttle_SE posted:Can I mess with TLER on newer WD drives? I seem to recall WD locking that down or something...
|
# ? Jun 14, 2010 00:24 |
|
movax posted:So I've pretty much filled my existing RAID-Z2 array (8x1.5TB)...I want to add another 8 drives to the zpool as another vdev (pretty sure you can do that), am I going to kill performance by using 8x2TB as opposed to matching the existing drives (7200rpm Seagate 1.5s)? Nope. It'll write most of the new data to the second vdev anyway, by virtue of the first one being full.
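For reference, growing the pool that way is a one-liner. This is a sketch only: the pool name and device names below are assumptions, not movax's actual layout.

```shell
# Add a second 8-disk raidz2 vdev to an existing pool named 'tank'.
# ZFS stripes new writes across vdevs, weighted toward free space, so
# with the first vdev full, most new data lands on the new one.
zpool add -n tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0   # dry run first
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
zpool status tank    # both raidz2 vdevs should now be listed under the pool
```

The `-n` dry run is worth the habit: `zpool add` takes effect immediately and, on the ZFS of this era, cannot be undone.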
|
# ? Jun 14, 2010 00:25 |
|
necrobobsledder posted:WDXXEADS drives have them disabled after those dates (you must get EACS or older EADS drives, most of which have been taken out of supply lines by now). Regardless, if you're going to be using consumer drives in a regular RAID array, you MUST NOT USE WDXXEARS DRIVES. They have WDTLER and WDIDLE disabled and you will get problems with no possible fix but to get new drives. Seagate and Samsung are worth looking into. Given Seagate's wonky reliability problems (bursts of bad drives seem to happen, judging by the reviews I'm reading, though in aggregate they're unlikely to be much worse than the others), I'd go with Samsung drives now for cheap, reliable home mass storage. Why is this so loving confusing. loving Western Digital. How are Hitachi in terms of reliability and warranty (e.g. do they offer advance replacement)?
|
# ? Jun 14, 2010 00:34 |
|
PopeOnARope posted:Why is this so loving confusing. loving Western Digital. How are Hitachi in terms of reliability and warranty (e.g. do they offer advanced replacement)
|
# ? Jun 14, 2010 00:39 |
|
FISHMANPET posted:Nope. It'll right most of the new data to the second vdev anway, by virtue of the first one being full. Gotcha. Now to wait for bank account stabilization so I can buy 8 2TB drives.
|
# ? Jun 14, 2010 00:40 |
|
PopeOnARope posted:Why is this so loving confusing. loving Western Digital. How are Hitachi in terms of reliability and warranty (e.g. do they offer advanced replacement) I decided to stop giving a crap about the drives and to just use ZFS. I'm still migrating over the drives I put into my Thecus NAS that I bought as a literal omg, I need it now emergency (my company laptop broke and I needed to use my fileserver as my workstation).
|
# ? Jun 14, 2010 00:44 |
|
I've heard anecdotally that Hitachi consumer drives seem to do pretty well in RAID environments even without a TLER-alike option available to modify.
|
# ? Jun 14, 2010 00:54 |
|
necrobobsledder posted:Hitachis are on par with Samsung, although they seem to be worse (by like 1 W) in terms of power consumption and heat. To their credit, they're the ones that came out with THE first 1TB drives, so it's not like they have no R&D muscle. It's just that being a technological innovator doesn't actually mean you make better products. Yeah, ZFS is nice like that because it's so stupidly robust when it comes to the bizarre kinds of errors that crop up sometimes with consumer-level stuff.
|
# ? Jun 14, 2010 01:57 |
|
I was messing around with my ZFS mirrored pool this morning and decided to throw in 2x750GB HDs and add them to the existing 2x2TB HD mirrored pool. So I went from this:code:
code:
|
# ? Jun 14, 2010 22:07 |
|
md10md posted:I was messing around with my ZFS mirrored pool this morning and decided to throw in 2x750GB HDs and add them to the existing 2x2TB HD mirrored pool. So I went from this: Nope, you're stuck.
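Worth spelling out why, since this bites a lot of people: `zpool attach` and `zpool add` look similar but only one is reversible. A sketch with hypothetical device names:

```shell
# 'attach' grows an existing mirror sideways - reversible:
zpool attach tank c1t0d0 c1t4d0    # c1t4d0 becomes another leaf of that mirror
zpool detach tank c1t4d0           # and can be pulled back out later

# 'add' creates a new top-level vdev - NOT removable on the ZFS of this era:
zpool add tank mirror c1t2d0 c1t3d0
```

Once a top-level vdev is in the pool, the only way out is to back up the data, destroy the pool, and rebuild it.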
|
# ? Jun 14, 2010 22:12 |
|
FISHMANPET posted:Nope, you're stuck.
|
# ? Jun 14, 2010 22:21 |
|
Just wired a 30A 125V twist-lock to a 15A wall-outlet pigtail for the SmartUPS 3000XL I got from a dumpster. With the new set of batteries I put in, I should get an hour or so out of my server, core switch, PoE WAP, and AT&T U-verse gateway. Need those batteries for your RAID5s.
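A back-of-envelope sanity check on that "hour or so" claim. The battery capacity, load, and efficiency figures below are assumptions for illustration, not measurements of this particular UPS:

```shell
# Assume four 12 V / 18 Ah batteries, ~700 W combined load, ~90% inverter efficiency.
WH=$((4 * 12 * 18))                      # stored energy: 864 Wh
LOAD=700                                 # watts drawn by the gear listed above
MIN=$((WH * 90 / 100 * 60 / LOAD))       # usable minutes after inverter losses
echo "~${MIN} minutes of runtime"        # ~66 minutes - about an hour, as claimed
```

Halve the load and the runtime roughly doubles, which is why trimming idle drives off a UPS circuit pays off.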
|
# ? Jun 15, 2010 03:09 |
|
Has anyone used the Amazon S3 service to backup their home NAS? I was looking at how cheap the storage was and curious if it was worth it for the little guy.
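Rough math on "worth it for the little guy". The per-GB rate below is S3's approximate first-tier price circa 2010; treat it as an assumption and check current pricing, and note transfer fees are ignored:

```shell
# Storing a 2 TB NAS backup at ~$0.15/GB-month.
GB=2048
CENTS_PER_GB=15
MONTHLY=$((GB * CENTS_PER_GB / 100))     # dollars per month
echo "~\$${MONTHLY}/month"               # ~$307/month: fine for irreplaceable
                                         # documents, brutal for bulk media
```

The usual compromise is to push only the small, irreplaceable slice (photos, documents) offsite and keep re-rippable media local.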
|
# ? Jun 15, 2010 03:14 |
|
necrobobsledder posted:MUST NOT USE WDXXEARS DRIVES Ah, gently caress. How long do I have before my Synology Raid craps out? It's been working fine for about almost two months now.
|
# ? Jun 15, 2010 04:35 |
|
Almost all SOHO NASes actually use md RAID on Linux, which is software RAID. The RAID problems with TLER and such matter only for hardware RAID platforms.
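For the curious, the md layer those NASes ship is the same one exposed by `mdadm` on any Linux box. Device names below are hypothetical:

```shell
# Create a 4-disk software RAID5 the way a SOHO NAS firmware would.
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
cat /proc/mdstat            # watch the initial resync progress
mdadm --detail /dev/md0     # the md layer tolerates slow drive error recovery
                            # far better than hardware controllers do, which is
                            # why TLER matters less here
```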
|
# ? Jun 15, 2010 05:43 |
|
augustob posted:Ah, gently caress. How long do I have before my Synology Raid craps out? It's been working fine for about almost two months now.
|
# ? Jun 15, 2010 05:46 |
|
I've read back 10 pages, but I've probably missed something and I'm a little confused by all the TLER, IDLE, EARS, EADS, mdadm, and ZFS stuff. Could somebody please clarify? * You want to turn off IDLE/TLER when using WD Green disks in RAID (why?) * The new 2TB EARS drives can't turn TLER/IDLE on or off * Neither TLER nor IDLE matters if you're using software RAID, i.e. mdadm/ZFS * RAID-Z > RAID-6 > RAID-5 I've currently got a ~3-year-old 5x1TB RAID-5 mdadm array on WD Green disks but want to bump it up to 8TB, preferably with all new disks considering their age. What's the suggested path? WD or Samsung disks? FreeBSD+RAID-Z, or Ubuntu+RAID-6? Horse Clocks fucked around with this message at 09:47 on Jun 15, 2010 |
# ? Jun 15, 2010 07:57 |
|
/\ You want TLER ON. It basically forces the drive to give up on recovering from an error sooner, meaning that if it fucks up, the RAID controller doesn't drop it for not responding for too long. You want idle OFF. For some asinine reason, these drives wait something like two seconds before parking their heads. Now, while parking heads may be all well and good in theory, all it means is that the head armature has to go from parked to active FAR more often than it should, again possibly incurring that delay. Now - the WD Green drives are rated for about 300k park/unpark cycles. I have two drives in my array that are about two years old with 180k parks each; people have seen worse. strwrsxprt posted:Because they don't want you using consumer drives in RAID configurations. They want you to pay more for the RE drives. What a clusterfuck. So basically, if at some point I decide to do this right, my options are limited to: WD RE ($300 a piece), Samsung (I'm worried because of their firmware issues), Hitachi (I don't know yet?). Really though, it generally seems like a bait and switch on the market. They get heralded for "bringing huge drives to the masses for cheap!" but what you really get are poo poo drives. PopeOnARope fucked around with this message at 08:31 on Jun 15, 2010 |
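On drives that still allow it, both knobs are visible from a Linux box: the error-recovery timeout via SCT Error Recovery Control in `smartctl`, and head parking via the APM setting in `hdparm`. Whether a given Green drive honors these is exactly the gamble being discussed; device name assumed:

```shell
smartctl -l scterc /dev/sda          # query current error-recovery timeouts
smartctl -l scterc,70,70 /dev/sda    # cap read/write recovery at 7.0 s (TLER-like)
hdparm -B 254 /dev/sda               # highest APM value: discourages head parking
smartctl -A /dev/sda | grep Load_Cycle_Count   # watch the park counter climb
```

Both settings can reset on power cycle on many drives, so they usually go in a boot script.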
# ? Jun 15, 2010 08:29 |
|
Don't get the WD-RE, get the WD Black. Same drive mechanics and I think also electronics. Ability to change TLER is the unknown here.
|
# ? Jun 15, 2010 12:39 |
|
xlevus posted:* RAID-Z > RAID-6 > RAID-5 RAIDZ > RAID5 and RAIDZ2 > RAID6, but I'm not sure where I'd fall on RAIDZ vs RAID6
|
# ? Jun 15, 2010 15:16 |
|
It depends on the setup. RAID-Z is not a classic "RAID" architecture in the sense of RAID-0, 1, 5, 1+0, 6, etc. It's a (clever) filesystem that is software-RAID aware. You can't run NTFS on RAID-Z. This isn't a big deal in 99% of cases, but it is a distinction that I think needs to be made. You could, in theory, run a RAID-Z on top of RAID-6 arrays. Saying RAID-Z > RAID-5 is an apples-and-oranges comparison. RAID-5 on proper hardware can be just as fault tolerant and significantly faster. RAID-Z, on the other hand, can cope with consumer-level drives better because of its higher level of abstraction. It's completely situational. RAID-Z leaves you with fewer OS options at the benefit of a wider (and less expensive) selection of hardware; RAID-5/6 will generally perform better and gives you more software options at the cost of hardware selection (and cost).
|
# ? Jun 15, 2010 16:11 |
|
KennyG posted:It depends on the setup. RAID-Z is not a classic "RAID" architecture in the sense of Raid-0, 1, 5, 1+0, 6 etc. It's a (clever) file system that is software raid aware. You can't run NTFS on RAID-Z. This isn't a big deal in 99% of the cases, but it is a distinction that I think needs to be made. You could, in theory, run a RAID-Z of RAID-6 arrays. Data integrity? ZFS makes sure your data is correct; RAID just blindly copies bits around without knowing if they're good or bad. Another fact to consider.
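That integrity checking is something you can exercise rather than just trust: a periodic scrub walks every allocated block and verifies it against its checksum. Pool name assumed:

```shell
zpool scrub tank        # read and verify every allocated block in the pool
zpool status -v tank    # the CKSUM column counts errors found; with redundancy
                        # (mirror/raidz), ZFS repairs them from the good copy
```

A conventional RAID controller has no equivalent: a flipped bit that the drive itself doesn't flag gets returned to the application as-is.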
|
# ? Jun 15, 2010 16:28 |
|
KennyG posted:It depends on the setup. RAID-Z is not a classic "RAID" architecture in the sense of Raid-0, 1, 5, 1+0, 6 etc. It's a (clever) file system that is software raid aware. You can't run NTFS on RAID-Z. This isn't a big deal in 99% of the cases, but it is a distinction that I think needs to be made. You could, in theory, run a RAID-Z of RAID-6 arrays. Actually, speed is a non-issue these days. You're offloading into hardware what a single core and half a gig of RAM can do without any issue on modern hardware. Given most modern file servers have 4 cores and 8-16 gigs, it really doesn't make a difference. It all comes down to interoperability, fault tolerance, and featureset at that point. xlevus posted:After reading back 10 pages but I've probably missed something and I'm a little confused with all the TLER, IDLE, EARS, EADS, mdadm, zfs stuff. Could somebody please clarify? I use RAIDZ2 on an OpenSolaris installation. All 8 of the disks are new 4K-format WD1500EARS drives, with Intel SAS HBAs and an AMD 4-core processor. It also runs a bunch of VMs through VirtualBox, which makes life a shitload easier when it comes to using the server box for things that linux/solaris blow rear end at, like multimedia streaming, uTorrent, and poo poo that only comes on windows flavored boxes. Once you get used to the wonky way Solaris wants you to do things, it's really not that bad at all. Plus it's stable like a tank; I have yet to have any major issue that wasn't caused by a dying boot HDD or a dumbass command. The best part is that if your system catches fire, as long as you can rescue 6 out of 8 of the drives, you can plug them into ANY computer, run the LiveCD, 'zpool import -f' and your data is right back where you left it. No flakey RAID cards, no dropping $500-600 on hardware RAID; you get to spend that on a Norco 4220 instead! Methylethylaldehyde fucked around with this message at 19:04 on Jun 15, 2010 |
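In practice that rescue sequence looks like this (pool name assumed):

```shell
zpool import            # scan attached disks and list any importable pools found
zpool import -f tank    # -f forces import of a pool that was never cleanly
                        # exported (e.g. the old host died mid-flight)
zfs list                # datasets are back, mountpoints and all
```

On a planned migration you'd run `zpool export tank` on the old box first, and plain `zpool import tank` suffices on the new one.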
# ? Jun 15, 2010 18:57 |
|
Just a note, but Newegg has 2TB Samsung drives for $110 each. I was waiting for them to hit about this pricepoint before expanding out. I should be set for a while with this next order, wheee. The Black drives get you a better warranty and electronics over the regular ol' variety of consumer drives, but the extra warranty hardly matters if you keep upgrading drives every few years anyway. 3 years ago 1TB drives cost a bit more than what 2TB drives cost now. If the pace keeps a decent stride, 4TB for $100 is likely in 2013, while SSDs will have gone down significantly in price. KennyG posted:It's completely situational. RAID-Z leaves you with fewer OS options... On top of the other benefits mentioned, ZFS offers dedup options, which can be a huge cost (and even performance) saver. Its performance gains come mostly from the ARC (Adaptive Replacement Cache), which means you need a bit more RAM to get good performance than you would with straight hardware RAID. RAIDZ also reduces the critical need for ECC RAM on your fileserver, further reducing hardware costs. With 10TB+ of data, even a 1.0E-15 error rate in memory will get you some bad writes once in a while on a regular hardware RAID.
|
# ? Jun 15, 2010 19:33 |
|
necrobobsledder posted:And this is part of why I'm putting VMs on my OpenSolaris box. Lets me run whatever OSes I need on the machine with local filesystem access speeds. It's not like I'm going to play games on the fileserver, right? SMB sucks balls (and iSCSI over ethernet scares me with my grade of equipment), so I'd rather put as many services as possible local to the file server anyway. Purely anecdotal evidence, but virtualization networking has been a huge pain in my rear end on build 134. First I tried using Xen, and the network would occasionally just freeze. Now I'm using Virtual Box, and having the same problem. Also, wondering what you mean by "SMB sucks balls"
|
# ? Jun 15, 2010 22:01 |
|
FISHMANPET posted:Purely anecdotal evidence, but virtualization networking has been a huge pain in my rear end on build 134. First I tried using Xen, and the network would occasionally just freeze. Now I'm using Virtual Box, and having the same problem. SMB is kinda annoying sometimes with OpenSolaris. The kernel-mode CIFS driver I've been using hasn't had any real issues, but I'm also not exactly hammering the hell out of it either. The networking really doesn't like unusual NICs in your system. I used an Intel e1000 Ethernet card, and it works fine in the fancy bridging mode that gives each VM its own un-NATed IP.
|
# ? Jun 15, 2010 22:36 |
|
Be careful about the NICs you use and the driver you have. OpenSolaris drivers for lots of common consumer NICs are pretty bad. There's alternative NIC drivers out there (Realtek ones come to mind) that will help with stability. OpenSolaris is about where Linux was in 2003 in terms of support for consumer devices IMO, so it's Intel NICs and server-grade hardware or expect problems.FISHMANPET posted:Also, wondering what you mean by "SMB sucks balls" Bridged NATs are better typically for VMs on workstations mostly because you don't have to fire up a DHCP client on the virtual ethernet devices. Also host to guest filesystem sharing on Virtualbox fires up a lightweight NetBIOS daemon along with some DNS-level override for the VBOXSVR share and other stuff to work with Windows last I saw.
|
# ? Jun 15, 2010 23:15 |
|
necrobobsledder posted:Just a note, but Newegg has 2TB Samsung drives for $110 each. I was waiting for them to hit about this pricepoint before expanding out. I should be set for a while with this next order, wheee. It's not so much the drives as the cost of the infrastructure, though. The simplest route is mdadm or ZFS, but then you need to consider housing, powering, and controlling the drives. I'd venture that properly building a multi-TB array these days easily costs into the thousands.
|
# ? Jun 15, 2010 23:47 |
|
necrobobsledder posted:RAIDZ also reduces the critical need for ECC RAM on your fileserver, further reducing hardware costs. With 10TB+ of data, even a 1.0E-15 error rate in memory will get you some bad writes once in a while on a regular hardware RAID. You should use ECC RAM if your data is actually important. PopeOnARope posted:I would venture that the simplest route is mdadm or ZFS, but then you need to consider housing, powering, and controlling the drives. I'd venture to say that to properly build a multi-tb array these days would easily cost into the thousands. adorai fucked around with this message at 23:56 on Jun 15, 2010 |
# ? Jun 15, 2010 23:53 |
|
|
necrobobsledder posted:Be careful about the NICs you use and the driver you have. OpenSolaris drivers for lots of common consumer NICs are pretty bad. There's alternative NIC drivers out there (Realtek ones come to mind) that will help with stability. OpenSolaris is about where Linux was in 2003 in terms of support for consumer devices IMO, so it's Intel NICs and server-grade hardware or expect problems. I've got an Intel NIC. I've got a second one, and I'm considering throwing it in and wiring it directly to the VM. I'm using Samba (not the built-in CIFS stuff) and I have my 120 GB iTunes library shared via SMB. vv
|
# ? Jun 16, 2010 00:43 |