|
Cyberdud posted:Am i better off just telling them that for this kind of money we won't be getting a reliable enterprise solution? YES. Walk away from the project. That way you don't get blamed when things go spectacularly wrong. You could hack together some stuff out of FreeNAS and spare parts, and it would work great! Until it didn't, and you had to build a new box and spend a week recovering your data. Would you get fired for a week of downtime? If the answer is anything approaching "maybe", then go with a real enterprise system with a 4-hour onsite service plan. what is this fucked around with this message at 15:39 on May 12, 2010 |
# ? May 12, 2010 14:14 |
|
|
|
Cyberdud posted:Thats the problem, its a serious project but the budget is crap. It's like trying to turn toothpicks into a functioning dell server. Any suggestions to unlock more budget or make a miracle? With the network infrastructure, i'm not supposed to go over 5-6k CAD. Not sure if it meets all your requirements, but you could get a Drobo Elite or Pro. http://www.drobo.com/products/droboelite.php Pros are only $1300 last time I checked, and rack-mountable.
|
# ? May 12, 2010 17:45 |
|
How are things with BTRFS these days? I keep looking for information, but most of it, especially that on "official" sites (like Oracle's), is outdated. It all seems to relate to kernel 2.6.31, while 2.6.34 is already out. I'm interested because I'm looking at a parachute option, since I'm growing weary of Oracle's bullshit related to OpenSolaris.
|
# ? May 12, 2010 18:22 |
|
Combat Pretzel posted:How are things with BTRFS these days? I keep looking for information, but most of it, especially that on "official" sites (like Oracle's), are outdated. All seems to relate to kernel 2.6.31, while there seems to be 2.6.34 already. Why not use FreeBSD with ZFS?
|
# ? May 12, 2010 18:51 |
|
Combat Pretzel posted:How are things with BTRFS these days? I keep looking for information, but most of it, especially that on "official" sites (like Oracle's), are outdated. All seems to relate to kernel 2.6.31, while there seems to be 2.6.34 already. Nthing this poo poo. I'm stuck at B134 with all sorts of stupid problems (the network doesn't come up on boot, I have to manually restart physical:nwam a couple of times, and I can't set my XVM VMs to start on boot) and I'd love to hit a stable release. But I can't go back to 2009.06, because my data pool is on the most recent ZFS version. How's BSD with virtualization? I've got an Ubuntu VM that I'd like to keep running. Whether it's something like XVM or something like VirtualBox, I don't really care; I just want something that runs.
|
# ? May 12, 2010 19:57 |
|
Cyberdud posted:Am i better off just telling them that for this kind of money we won't be getting a reliable enterprise solution? I've been in your sort of position for most of my career now and here's what I've done thus far:
1. Write up a detailed analysis of the pros and cons of the technical options at the given budget, alongside the ideal solution and one step down from ideal. Show that the budget doesn't allow you to meet critical business requirements, and that by not budgeting for the right solution they're basically throwing money away on top of taking a great risk.
2. Bite the bullet and do a hosed up, crazy implementation on a shoestring budget that gets you incredible amounts of praise.
3. Look for a new job, because the organization will likely not be in business much longer, and it's probably stressful working there anyway.
4. Not give a poo poo about the job beyond the bare minimum and looking good enough for your next job.
5. Quit for a better job and pray that they don't (de)evolve into your previous job, ironically enough, through business success.
Combat Pretzel posted:I'm interested, because I'm looking at a parachute option, since I'm growing weary of Oracle's bullshit related to OpenSolaris. BTRFS will probably be production ready by the end of 2011, is my guess. I've been reading occasional forum posts by BTRFS early adopters, and so far they've basically had to have a marriage with the developers to get it working. The disk format is still not stable, so I wouldn't use it for anything long-term either. It took a few years for Linux to be usable for anything beyond hobby computing; perhaps if we're lucky something workable will be out before 2012.
|
# ? May 12, 2010 20:07 |
|
necrobobsledder posted:I've been in your sort of position for most of my career now and here's what I've done thus far: There was some talk that Oracle could just GPL-ize ZFS now that it owns Sun. I am not a lawyer, and have no idea how that works from a legal standpoint.
|
# ? May 12, 2010 21:01 |
|
three posted:There was some talk that Oracle could just GPL-ize ZFS now that it owns Sun. I am not a lawyer, and have no idea how that works from a legal standpoint. I think that would be very possible under the CDDL (Sun's license), because it places no requirements on the licensing of subsequent versions of the software.
|
# ? May 12, 2010 21:33 |
|
three posted:Why not use FreeBSD with ZFS? FISHMANPET posted:Nthing this poo poo. I'm stuck at B134 with all sorts of stupid problems (Network doesn't come up on boot, have to manually restart physical:nwam a couple of times, I can't set my XVM vms to start on boot) and I'd love to hit a stable release. But I can't go back to 2009.06 because my data pool is the most recent ZFS version. Build 134 itself is stable for me. Looking for an exit strategy, tho. necrobobsledder posted:Linux is what is in line with Oracle's long-term product strategy, it seems, so I would expect OpenSolaris to be more or less dead by 2013 anyway, and ZFS by 2016, since enterprise customers take a while to be weaned off anything. But I don't actually care, so long as I have a solid storage backend. Then again, Larry Ellison is a human being. necrobobsledder posted:BTRFS will probably be production ready by the end of 2011 is my guess. HAMMER looks interesting, but it's DragonflyBSD-only. If at least FreeBSD would adopt it...
|
# ? May 12, 2010 21:59 |
|
What's the consensus on WD AV-GP drives in RAID arrays?
|
# ? May 12, 2010 22:42 |
|
I went ahead and ignored everyone telling me not to RAID 5 my three Western Digital Caviar Green drives and did it anyway. Don't do it.
|
# ? May 13, 2010 01:11 |
|
IT Guy posted:I went ahead and ignored everyone telling me not to RAID 5 my three Western Digital Caviar Green drives and did anyway. I Raid-5'd 4 of them and had no problems. The random IOPS is kinda bad, but I basically just used it for bulk storage and serving sequential media files to hosts, and it worked out well enough. Now that I've crammed them into a ZFS RAIDZ, its performance has gone up by about 15% for my usage patterns.
|
# ? May 13, 2010 01:56 |
|
I have 2x2TB WD AV-GPs and plan on adding 2 more when prices come down. Right now I have 1.81TB usable while mirrored, but I'd love to move to a 4-drive RAIDZ1. I'm using FreeNAS right now, and I really don't have anywhere to shuffle all my current files while I rebuild a 4-disk array. Can anyone comment on whether this technique for transitioning a ZFS mirror to RAIDZ1 would work? Link: http://i18n-freedom.blogspot.com/2008/01/how-to-turn-mirror-in-to-raid.html It's a bit old (2 years), but it sounds like it would work just fine. I guess for the 12 or so hours it takes me to transfer everything mid-procedure, I'd lose everything if that one drive failed. However, I'm probably willing to risk that when the time comes.
|
# ? May 13, 2010 05:17 |
|
Methylethylaldehyde posted:I Raid-5'd 4 of them and had no problems. The random IOPS is kinda bad, but I basically just used it for bulk storage and serving sequential media files to hosts, and it worked out well enough. I've got a 5-drive RAID5 under mdadm, and was offered either 5 2TB WD AV-GPs or 5 1TB WD Blacks in exchange for some work I'm doing. The sheer size of the 2TB drives was appealing, but I had to decide within about 30 minutes which set I wanted, so I went with the Blacks. I'm replacing Seagate 750GB ES.2s. NeuralSpark fucked around with this message at 05:55 on May 13, 2010 |
# ? May 13, 2010 05:52 |
|
md10md posted:I have 2x2TB WD AV-GPs and plan on adding 2 more when prices come down. Right now I have 1.81TB usable while mirrored, but I'd love to move that to a 4 drive RAIDZ1. I'm using FreeNAS right now and I really don't have anywhere to shuffle all my current files to while I rebuild a 4 disk array. It seems pretty straightforward. Dump everything to a single drive, create a degraded 2+1 RAIDZ, copy everything to the RAIDZ, and then add your single drive to the array to un-degrade it. If your source drive dies during the copy you're boned, but if the 2 destination drives die you're still fine, apart from the lost time. Also, depending on how much space you've got on the drives compared to how much data you have, you could do some crazy poo poo like this:
Break apart the mirror so each disk stands alone. On the new disk, create 3 sparse files, create a 2+1 RAIDZ out of them, and copy the data to that array. Now that the data is on all 3 drives, swap one sparse file for a mirror disk. Next, create a sparse file on the first untouched disk and swap one of your RAIDZ sparse files for this new file. So on Disk1 we have a complete copy of the data plus 1/3 of the data; Disk2 has 1/3 of the data on disk; and Disk3 has the final 1/3 in a file. Still two copies. Drop Disk3's file out of the RAIDZ, then put the whole of Disk3 in the RAIDZ. So now Disk1 has a full copy plus 1/3 in a file, and both Disk2 and Disk3 have 1/3 of the data on disk. Finally, drop the sparse file on Disk1 and attach the whole disk to your array. There, completely convoluted, but you always have 2 copies of the data. E: Saw you have 4 drives. That should make it a lot easier to always have 2 copies if you want.
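To make the degraded-RAIDZ trick above concrete, here's a rough sketch of the key commands. The zpool lines are commented out because they need real disks and a ZFS-capable OS; all pool names, device names, and sizes are made-up examples, not anything from the thread:

```shell
# A sparse file costs almost no real space until written. For the actual
# migration you'd size it to match your disks (e.g. 2T); 2G here for demo.
truncate -s 2G /tmp/fake_vdev
ls -ls /tmp/fake_vdev        # apparent size 2G, ~0 blocks allocated

# The ZFS half of the trick, commented out since it needs real disks and a
# ZFS-capable OS. Pool and device names are examples only:
# zpool create tank raidz /dev/ada1 /dev/ada2 /tmp/fake_vdev
# zpool offline tank /tmp/fake_vdev      # run degraded; nothing hits the file
# ...copy the data into the degraded pool...
# zpool replace tank /tmp/fake_vdev /dev/ada0   # swap in the freed-up disk
```

Remember that while the sparse member is offline the pool has no redundancy, so a single real-disk failure mid-copy loses everything, which is exactly the risk described above.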
|
# ? May 13, 2010 05:54 |
|
IT Guy posted:I went ahead and ignored everyone telling me not to RAID 5 my three Western Digital Caviar Green drives and did anyway. What's wrong with them? I've got 8 WD20EADS in a RAID6 and haven't had a single issue so far. Before that I ran 4 of them in RAID-Z, no issues either. They saturate my GigE link with bulk transfers, and that's really all they need to do. And on the topic of growing weary of Oracle's bullshit, I couldn't agree more. I gave up on ZFS around snv132 because OpenSolaris was crippled and broken in too many ways for my application. Moved to Linux with the conventional mdadm/lvm/ext4 stack plus regular backups, and I'm not looking back at all.
|
# ? May 13, 2010 07:40 |
|
I'm planning to upgrade from a DAS box (4 drives in a box with two USB/eSATA bridges) to a home server built from spare bits and pieces. The plan is a separate drive for the OS and two sets of four drives (one set connected to the motherboard). Looking to add another 4 SATA ports on the cheap; unfortunately I'm UK-based, so I'm restricted in what I can get my hands on (no Rosewill cards, etc). Ideally I'd buy something like a Supermicro AOC, but card and cables are more than I want to spend. A port multiplier or a SiI 3124-based card are options, but I was considering a cheap SiI 3114 card, as performance isn't the main priority. Am I likely to have issues getting SiI 3114 cards to work with recent 1TB and 1.5TB drives? From googling, it seems most drives no longer have SATA 150 jumpers. Also, I can't decide what OS to install. I was considering WHS, but I'm concerned that version 2 is coming out, that it uses a non-standard format (unless this was dropped), and that it will have issues after release like the first version did. I'm experienced with Linux/FreeBSD, so I'm considering FreeNAS/Openfiler, but other than easy administration, do they really offer much over a plain install? I'd also like to use the box as a syslog server, for general monitoring, and for other light stuff, so I'm tending towards a generic distro.
|
# ? May 13, 2010 09:29 |
|
If you're using WD Green drives, use WDIDLE3.EXE to disable the loving Intellipark. And WDTLER.EXE to enable TLER. There is talk that the tools don't work with the newest drives, but they did for my WD15EADS, so YMMV.
|
# ? May 13, 2010 11:03 |
|
eames posted:Whats wrong with them? I haven't tried them in a software RAID such as mdadm yet, but just using my Intel chipset fake RAID, they are actually slower than standalone. If I transfer one or two large files they are fine, but if I transfer a batch of files, maybe 50GB worth, they slow to a halt. I don't know, maybe it's the fake RAID. Going to reinstall Ubuntu Server on the weekend and give mdadm a shot.
|
# ? May 13, 2010 13:06 |
|
Combat Pretzel posted:If you're using WD Green drives, use WDIDLE3.EXE to disable the loving Intellipark. And WDTLER.EXE to enable TLER.
|
# ? May 13, 2010 13:24 |
|
Drizzt01 posted:Not sure if it meets all your requirements but you could get a drobo elite or pro. Anyone used one of these (Pro/Elite/FS)? How's the speed compared to other commercial SOHO/SMB NASes on the market? Basically I need an NFS target for backups for 2 servers, and would rather go with something prebuilt, as I have an aversion to cobbled-together stuff.
|
# ? May 13, 2010 17:34 |
|
Combat Pretzel posted:If you're using WD Green drives, use WDIDLE3.EXE to disable the loving Intellipark. And WDTLER.EXE to enable TLER. I bought a bunch of WD15EADS back in August/September and they all ran WDTLER fine, but I bought another couple of them in December and they didn't; supposedly October was when the change occurred. I've heard the Hitachi 2TB drives tend to play nicely in RAID environments.
|
# ? May 13, 2010 18:05 |
|
bob arctor posted:Anyone used one of these? (Pro/Elite/FS) How's the speed compared to other commercial SOHO/SMB NASes on the market. Basically I need an NFS target for backups for 2 servers and would rather go with something prebuilt as I have an aversion to cobbled together stuff. They're a joke, don't buy a drobo. We've been over this. Buy a QNAP, Synology, Thecus, or Netgear (pro line only). I'd recommend one of the first two.
|
# ? May 13, 2010 18:47 |
|
Farmer Crack-rear end posted:I've heard the Hitachi 2TB drives tend to play nicely in RAID environments. I know a large RAID enclosure maker has started shipping them with their gear.
|
# ? May 13, 2010 19:00 |
|
IT Guy posted:I went ahead and ignored everyone telling me not to RAID 5 my three Western Digital Caviar Green drives and did anyway. Dude, 3? I bought 12 of the fuckers (10 in the RAID-Z2 and 2 spares) - used WDIDLE to increase the timeout to 24 seconds and they've been great.
|
# ? May 13, 2010 20:48 |
|
roadhead posted:Dude, 3? I bought 12 of the fuckers (10 in the RAID-Z2 and 2 spares) - used WDIDLE to increase the timeout to 24 seconds and they've been great. I don't have that much disposable income to be dropping a grand on hard drives.
|
# ? May 13, 2010 20:58 |
|
I don't think it'd be very cost-effective to buy that many drives unless you expect to need that much storage within the next 12 months or so. I'm only going to have 9 disks in two separate zpools soon, and even then I'm going to phase them out with larger disks as the drives die off. To me, it's the best compromise between the WHS one-drive-at-a-time method and the ZFS "replace the entire array's drives to expand" approach.
|
# ? May 13, 2010 23:14 |
|
md10md posted:Yeah, I still need to do this. I have 2x750GB WD GPs and they just thrash the load_cycle. One drive has 1.3 million cycles. I really don't get the point of early head parking. It just fucks up your drive mechanics and offers minimal savings.
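For anyone wanting to check their own counters, smartmontools reports head parking as SMART attribute 193 (Load_Cycle_Count). A sketch, parsing a canned line laid out like smartctl's attribute table so it runs without a physical drive — the raw value here is illustrative, not real output:

```shell
# Real usage (needs smartmontools and a physical drive):
#   smartctl -A /dev/sda | grep Load_Cycle_Count
# Below, a canned line in smartctl's column layout
# (ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE),
# with a made-up raw value, so the parsing is demonstrable without hardware:
sample='193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 1300123'
echo "$sample" | awk '$1 == 193 { print "load cycles:", $NF }'
# prints: load cycles: 1300123
```

Watching that raw value over a day or two tells you immediately whether Intellipark is still thrashing the heads.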
|
# ? May 13, 2010 23:36 |
|
roadhead posted:Dude, 3? I bought 12 of the fuckers (10 in the RAID-Z2 and 2 spares) - used WDIDLE to increase the timeout to 24 seconds and they've been great. What's the advantage of the AV-GP over the regular GP drives?
|
# ? May 13, 2010 23:39 |
|
Combat Pretzel posted:Holy poo poo. They're only rated like 300000 cycles.
Farmer Crack-rear end posted:What's the advantage of the AV-GP over the regular GP drives? From what I can tell: better (hopefully) quality control, better for long-term storage, and better temperature tolerances. We'll see.
|
# ? May 14, 2010 03:19 |
|
This is weird as gently caress. I took a gander at my drives' SMART info, and of my 4x WD10EACS, the 2 -e series (4-platter drives) have 1,620 parks each, while the 2 -z series (3-platter drives) have 108,000 parks each. I've since used WDIDLE3 to set the idle timer to 25.5 seconds (can't disable fuckin' parking for some reason); hopefully it works out. I might just set fire to the barn and get 4x Hitachis to put in RAID 5. It'd be a lot easier. md10md posted:Yeah, I still need to do this. I have 2x750GB WD GPs and they just thrash the load_cycle. One drive has 1.3 million cycles. For my new drives, I've found a way around it if WDIDLE3.EXE doesn't work. Just make a shell script that touches the disk (I do: date > .poke) every 5 seconds so the heads never park. It works great. Hopefully THAT won't decrease the lifespan of the drives, though. Can I do something like this in Windows? \/ Christ. Do I buy Seagate or Hitachi when it's array time, anyway? PopeOnARope fucked around with this message at 09:07 on May 14, 2010 |
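The poke-the-disk workaround amounts to a loop like the one below. This is a sketch, not anything actually posted in the thread: paths and timings are examples, and the demo defaults terminate after a few iterations so it can be run safely (in real use you'd point it at the array and let it run forever). One catch: a write absorbed by the OS or filesystem cache may never reach the platters, so the `sync` matters.

```shell
#!/bin/sh
# poke.sh -- touch a file on the array every few seconds so the heads
# never sit idle long enough for Intellipark to fire. Real use would be
#   ./poke.sh /mnt/array/.poke 5 0     # 0 iterations = run forever
# Demo defaults are deliberately small so this terminates quickly.
FILE="${1:-/tmp/.poke}"
INTERVAL="${2:-1}"
COUNT="${3:-3}"

i=0
while :; do
    date > "$FILE"      # the actual disk activity
    sync                # try to push the write past the OS cache
    i=$((i + 1))
    if [ "$COUNT" -ne 0 ] && [ "$i" -ge "$COUNT" ]; then
        break
    fi
    sleep "$INTERVAL"
done
echo "wrote $i pokes to $FILE"
```

Whether this beats just fixing the idle timer with WDIDLE3 is debatable — the heads stay loaded, but the drive never idles at all.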
# ? May 14, 2010 06:06 |
|
PopeOnARope posted:Can I do something like this in windows? Edit: This is why I also kind of hate these drives...
md10md fucked around with this message at 07:39 on May 14, 2010 |
# ? May 14, 2010 07:33 |
|
PopeOnARope posted:Can I do something like this in windows? --edit: Whoops, never mind that. Well, setting up such a script requires disabling any write caching whatsoever. I think the client versions of Windows have it enabled by default, and I don't know what the timeout on the cache is. You can disable it, tho, but it involves some performance hit. Such a script wouldn't work for me, for instance, since ZFS groups all writes up for up to 30 seconds here when there's no considerable activity. --edit2: Wait, you should be able to stop the drive from doing this via APM. I got my laptop drive under control by running hdparm on Linux (it was the installed OS). It should work with WD Green drives, too. This is a Windows equivalent, try it: http://sites.google.com/site/quiethdd/ PopeOnARope posted:\/ Christ. Do I buy seagate or hitachi when it's array time, anyway? As soon as mine start doing poo poo, they're immediately flying outta the case. Should have gone with the Black series to begin with, but the Green ones were terribly cheap at 1.5TB, while the Black ones capped out at 1TB back then. Combat Pretzel fucked around with this message at 11:22 on May 14, 2010 |
# ? May 14, 2010 11:15 |
|
I only just now noticed that WD's Blacks have a 5-year warranty while the Greens have 3 years. Maybe I should replace my 20EADS and put them on eBay before they start dying. Samsung's 2TB EcoGreen F3 (HD203WI) seems like a decent alternative for low-power bulk storage.
|
# ? May 14, 2010 12:56 |
|
One of my new Advanced Format drives from WD goes NUTS connected to my computer. It clicks back and forth every 10 seconds or so, and to top it off, power management spins the drive down completely after ~20 seconds of no use. The computer's power management settings have no bearing; it's the only drive that does this. Tried changing the power settings using hdparm on a Knoppix disc without any luck either. It's a massive bitch because on top of the wear, it adds another 5 seconds to every file request while the drive spins back up.
|
# ? May 14, 2010 16:03 |
|
hdparm has to be run on each boot. Running it from a Linux live CD and then rebooting into your actual OS resets the setting to default. If you're using Linux, AFAIK you can run hdparm at boot time; at least Ubuntu had an init script for it. For Windows, use QuietHDD, which I linked earlier. There was something for OS X, which I've tried on my hackpro laptop. Any other OS, like Solaris or BSD, and you seem to be out of luck.
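On Debian/Ubuntu specifically, the hdparm init script reads /etc/hdparm.conf at boot, so the setting can be persisted as a config fragment rather than rerun by hand. A sketch, assuming the drive honours APM at all; the device path is an example:

```
# /etc/hdparm.conf -- read by /etc/init.d/hdparm at boot (Debian/Ubuntu)
# Device path is an example; check which one is yours first.
/dev/sda {
    apm = 254          # least aggressive power management; 255 = APM off
    spindown_time = 0  # never spin down (always-on file server)
}
```

The one-off equivalent from a shell is `hdparm -B 254 -S 0 /dev/sda`, which has to be repeated after every boot.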
|
# ? May 14, 2010 16:21 |
|
Are 2TB drives with multiple platters less reliable than the Samsung F3 series or WD Black series drives? I realize the obvious answer is "yes, they are less reliable", but what I want to know is whether the difference is significant. I want to set up some fault tolerance, and I'm having a hard time deciding between fewer drives that cost less in total but have more platters, or more drives that cost more in total and cost more to run but are more reliable.
|
# ? May 14, 2010 17:11 |
|
Combat Pretzel posted:IIRC, these tools only work in real DOS. I had to set up a FreeDOS USB boot stick for that. Black drives are at least $70 more, each. Edit - I'm really starting to hate fake softraid, mostly because it robs you of the ability to see the drives individually in the OS. But I can't do much about that, as my array has nowhere to go. PopeOnARope fucked around with this message at 17:36 on May 14, 2010 |
# ? May 14, 2010 17:30 |
|
Combat Pretzel posted:hdparm has to be run on each boot. Running it from a Linux live CD and then rebooting into your actual OS resets the setting to default. Solaris has pretty much said gently caress you to SMART. I've got a 4+1 RAIDZ with 1.5TB Samsungs, but no way to see how they're doing...
|
# ? May 14, 2010 17:48 |
|
|
|
Combat Pretzel posted:hdparm has to be run on each boot. Running it from a Linux live CD and then rebooting into your actual OS resets the setting to default. On a Seagate 7200.2, I was able to have hdparm settings persist from a Linux session into Windows. That one also had the odd parking thing, and it stopped after that.
|
# ? May 14, 2010 18:09 |