what is this
Sep 11, 2001

it is a lemur

Cyberdud posted:

Am I better off just telling them that for this kind of money we won't be getting a reliable enterprise solution?

YES. Walk away from the project. That way you don't get blamed when things go spectacularly wrong.

You could hack together some stuff out of FreeNAS and spare parts, and it would work great! Until it didn't, and you had to build a new box and spend a week recovering your data. Would you get fired for a week of downtime? If the answer is anything approaching "maybe", then go with a real enterprise system with a 4-hour onsite service plan.

what is this fucked around with this message at 15:39 on May 12, 2010


ClassH
Mar 18, 2008

Cyberdud posted:

That's the problem: it's a serious project, but the budget is crap. It's like trying to turn toothpicks into a functioning Dell server. Any suggestions to unlock more budget or work a miracle? With the network infrastructure, I'm not supposed to go over 5-6k CAD.

Am I better off just telling them that for this kind of money we won't be getting a reliable enterprise solution?

Not sure if it meets all your requirements, but you could get a Drobo Elite or Pro.
http://www.drobo.com/products/droboelite.php
The Pro was only $1300 last time I checked, and it's rack-mountable.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
How are things with BTRFS these days? I keep looking for information, but most of it, especially that on "official" sites (like Oracle's), is outdated. Everything seems to relate to kernel 2.6.31, even though 2.6.34 is already out.

I'm interested, because I'm looking at a parachute option, since I'm growing weary of Oracle's bullshit related to OpenSolaris.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Combat Pretzel posted:

How are things with BTRFS these days? I keep looking for information, but most of it, especially that on "official" sites (like Oracle's), is outdated. Everything seems to relate to kernel 2.6.31, even though 2.6.34 is already out.

I'm interested, because I'm looking at a parachute option, since I'm growing weary of Oracle's bullshit related to OpenSolaris.

Why not use FreeBSD with ZFS?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Combat Pretzel posted:

How are things with BTRFS these days? I keep looking for information, but most of it, especially that on "official" sites (like Oracle's), is outdated. Everything seems to relate to kernel 2.6.31, even though 2.6.34 is already out.

I'm interested, because I'm looking at a parachute option, since I'm growing weary of Oracle's bullshit related to OpenSolaris.

Nthing this poo poo. I'm stuck at B134 with all sorts of stupid problems (the network doesn't come up on boot, I have to manually restart physical:nwam a couple of times, and I can't set my XVM VMs to start on boot) and I'd love to hit a stable release. But I can't go back to 2009.06 because my data pool is the most recent ZFS version.

How's BSD with virtualization? I've got an Ubuntu VM that I'd like to keep running. Whether it's something like XVM or something like VirtualBox, I don't really care; I just want something that runs.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Cyberdud posted:

Am I better off just telling them that for this kind of money we won't be getting a reliable enterprise solution?
I've been in your sort of position for most of my career now and here's what I've done thus far:

1. Write up a detailed analysis of the pros and cons of the technical options at the given budget, alongside the ideal solution and one step down from ideal. Show that the budget doesn't allow you to meet critical business requirements, and that you're basically throwing money away on top of taking a great risk by not budgeting for the right solution
2. Bite the bullet and do a hosed up, crazy implementation on a shoestring budget that gets you incredible amounts of praise
3. Look for a new job, because the organization will likely not be in business for much longer, and it's probably stressful working there anyway
4. Not give a poo poo about the job beyond doing the bare minimum and looking good enough for your next job
5. Quit for a better job and pray that they don't devolve into your previous job, ironically enough, through business success

Combat Pretzel posted:

I'm interested, because I'm looking at a parachute option, since I'm growing weary of Oracle's bullshit related to OpenSolaris.
You know what the hilarious thing is? Even BTRFS is partly an investment by Oracle, in a sense, since the main guy works for Oracle. The real difference between ZFS and BTRFS is that BTRFS is more compatible with the GPL than ZFS ever will be, and Linux is what's in line with Oracle's long-term product strategy, it seems, so I would expect OpenSolaris to be more or less dead by 2013 anyway and ZFS by 2016, since enterprise customers take a while to be weaned off anything. But I don't actually care so long as I have a solid storage backend.

BTRFS will probably be production-ready by the end of 2011, is my guess. I've been reading occasional forum posts by BTRFS early adopters, and they've basically had to be married to the developers to get it working so far. The disk format is still not stable either, so I wouldn't use it for anything long-term yet. It took a few years for Linux to be usable for anything beyond hobby computing; perhaps if we're lucky something workable will be out before 2012.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

necrobobsledder posted:

I've been in your sort of position for most of my career now and here's what I've done thus far:

1. Write up a detailed analysis of the pros and cons of the technical options at the given budget, alongside the ideal solution and one step down from ideal. Show that the budget doesn't allow you to meet critical business requirements, and that you're basically throwing money away on top of taking a great risk by not budgeting for the right solution
2. Bite the bullet and do a hosed up, crazy implementation on a shoestring budget that gets you incredible amounts of praise
3. Look for a new job, because the organization will likely not be in business for much longer, and it's probably stressful working there anyway
4. Not give a poo poo about the job beyond doing the bare minimum and looking good enough for your next job
5. Quit for a better job and pray that they don't devolve into your previous job, ironically enough, through business success
You know what the hilarious thing is? Even BTRFS is partly an investment by Oracle, in a sense, since the main guy works for Oracle. The real difference between ZFS and BTRFS is that BTRFS is more compatible with the GPL than ZFS ever will be, and Linux is what's in line with Oracle's long-term product strategy, it seems, so I would expect OpenSolaris to be more or less dead by 2013 anyway and ZFS by 2016, since enterprise customers take a while to be weaned off anything. But I don't actually care so long as I have a solid storage backend.

BTRFS will probably be production-ready by the end of 2011, is my guess. I've been reading occasional forum posts by BTRFS early adopters, and they've basically had to be married to the developers to get it working so far. The disk format is still not stable either, so I wouldn't use it for anything long-term yet. It took a few years for Linux to be usable for anything beyond hobby computing; perhaps if we're lucky something workable will be out before 2012.

There was some talk that Oracle could just GPL-ize ZFS now that it owns Sun. I am not a lawyer, and have no idea how that works from a legal standpoint.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

three posted:

There was some talk that Oracle could just GPL-ize ZFS now that it owns Sun. I am not a lawyer, and have no idea how that works from a legal standpoint.

I think that would be very possible with CDDL (the Sun license) because it makes no requirements for licensing of subsequent software versions.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

three posted:

Why not use FreeBSD with ZFS?
They're kinda behind in support. I think FreeBSD does ZFS pool version 13, while my pools are at 22. Also, I'm still waiting for 64-bit NVidia driver support to stabilize before I consider that an option.

FISHMANPET posted:

Nthing this poo poo. I'm stuck at B134 with all sorts of stupid problems (the network doesn't come up on boot, I have to manually restart physical:nwam a couple of times, and I can't set my XVM VMs to start on boot) and I'd love to hit a stable release. But I can't go back to 2009.06 because my data pool is the most recent ZFS version.
There is a respin of build 134 out in internal testing; if it passes, 2010.H1 will be based on it and released, and build 138 or something newer will hit /dev. One of the issues I have with Oracle is this secrecy bullshit. Another one, while not directly affecting OpenSolaris yet, is putting up a fence around Solaris and mentioning poo poo like Solaris Next-only premium features. Not sure how that'd pan out long-term.

Build 134 itself is stable for me. Looking for an exit strategy, tho.

necrobobsledder posted:

Linux is what's in line with Oracle's long-term product strategy, it seems, so I would expect OpenSolaris to be more or less dead by 2013 anyway and ZFS by 2016, since enterprise customers take a while to be weaned off anything. But I don't actually care so long as I have a solid storage backend.
If Oracle wasn't talking bullshit, OpenSolaris is going to stay. If anything, ZFS is going to be GPLized and hacked into Linux by Oracle. From what's being said, OpenSolaris, or rather Solaris Next, is going to stay the base for big-iron databases.

Then again, Larry Ellison is a human being.

necrobobsledder posted:

BTRFS will probably be production-ready by the end of 2011, is my guess.
I'm only considering BTRFS due to feature parity. In practice I'm not particularly impressed by it, because looking at its development pace and history, it's the usual throw-stuff-into-a-pot-and-make-it-work philosophy: a rough overall design that they're trying to beat into working shape. It's just that there isn't anything else like it in the Linux world yet (don't mention md).

HAMMER looks interesting, but DragonflyBSD :psyduck: If only FreeBSD would adopt it...

NeuralSpark
Apr 16, 2004

What's the consensus on WD AV-GP drives in RAID?

IT Guy
Jan 12, 2010

You people drink like you don't want to live!
I went ahead and ignored everyone telling me not to RAID 5 my three Western Digital Caviar Green drives and did it anyway.

Don't do it.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

IT Guy posted:

I went ahead and ignored everyone telling me not to RAID 5 my three Western Digital Caviar Green drives and did it anyway.

Don't do it.

I RAID-5'd four of them and had no problems. The random IOPS are kinda bad, but I basically just used it for bulk storage and serving sequential media files to hosts, and it worked out well enough.

Now that I've crammed them into a ZFS RAIDZ, its performance has gone up by about 15% for my usage patterns.

md10md
Dec 11, 2006
I have 2x2TB WD AV-GPs and plan on adding 2 more when prices come down. Right now I have 1.81TB usable while mirrored, but I'd love to move that to a 4 drive RAIDZ1. I'm using FreeNAS right now and I really don't have anywhere to shuffle all my current files to while I rebuild a 4 disk array.

Can anyone comment on whether this technique for transitioning a ZFS Mirror to RAIDZ1 would work? Link: http://i18n-freedom.blogspot.com/2008/01/how-to-turn-mirror-in-to-raid.html

It's a bit old (2 years) but it sounds like it would work just fine. I guess for the 12 or so hours it takes me to transfer everything mid-procedure, I'd lose everything if the one drive failed. However, I'm probably willing to risk that when the time comes.

NeuralSpark
Apr 16, 2004

Methylethylaldehyde posted:

I RAID-5'd four of them and had no problems. The random IOPS are kinda bad, but I basically just used it for bulk storage and serving sequential media files to hosts, and it worked out well enough.

Now that I've crammed them into a ZFS RAIDZ, its performance has gone up by about 15% for my usage patterns.

I've got a 5-drive RAID5 under mdadm, and I was offered either 5 2TB WD AV-GPs or 5 1TB WD Blacks in exchange for some work I'm doing. The sheer size of the 2TB drives was appealing, but I had to decide which set I wanted in the space of about 30 minutes, so I went with the Blacks. I'm replacing Seagate 750GB ES.2s.

NeuralSpark fucked around with this message at 05:55 on May 13, 2010

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

md10md posted:

I have 2x2TB WD AV-GPs and plan on adding 2 more when prices come down. Right now I have 1.81TB usable while mirrored, but I'd love to move that to a 4 drive RAIDZ1. I'm using FreeNAS right now and I really don't have anywhere to shuffle all my current files to while I rebuild a 4 disk array.

Can anyone comment on whether this technique for transitioning a ZFS Mirror to RAIDZ1 would work? Link: http://i18n-freedom.blogspot.com/2008/01/how-to-turn-mirror-in-to-raid.html

It's a bit old (2 years) but it sounds like it would work just fine. I guess for the 12 or so hours it takes me to transfer everything mid-procedure, I'd lose everything if the one drive failed. However, I'm probably willing to risk that when the time comes.

It seems pretty straightforward. Dump everything to a single drive, create a degraded 2+1 RAIDZ, copy everything to the RAIDZ, and then add your single drive to the array to un-degrade it. If your source drive dies during the copy you're boned, but if the two destination drives die you're still fine, other than the lost time.

Also, depending on how much space you've got on the drives compared to how much data you have, you could do some crazy poo poo like this (rough zpool sketch below):
1. Break apart the mirror so each disk stands alone.
2. On the new disk (Disk3), create 3 sparse files, build a 2+1 RAIDZ out of them, and copy the data into that array.
3. Now that the data is on all 3 drives, swap one sparse file for one of the old mirror disks (Disk2).
4. Create a sparse file on the untouched disk (Disk1) and swap another of the RAIDZ sparse files for it. At this point Disk1 holds a complete copy of the data plus 1/3 of the array in a file, Disk2 holds 1/3 of the array on disk, and Disk3 holds the final 1/3 in a file.
5. Drop Disk3's file from the RAIDZ, then put the whole of Disk3 in its place. Now Disk1 has a full copy plus 1/3 in a file, and Disk2 and Disk3 each have 1/3 of the array on disk.
6. Finally, drop the sparse file on Disk1 and attach the whole disk to your array.

There: completely convoluted, but you always have two copies of the data.

E: Saw you have 4 drives. That should make it a lot easier to always have 2 copies if you want.
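
For the curious, a minimal sketch of what the degraded-RAIDZ trick looks like in zpool commands. The pool name, FreeBSD-style device names, and file path are all hypothetical, and the -f is there because zpool complains about mixing disks and files; check your platform's zpool man page before trusting data to this:
code:
# make a sparse file the same size as a real member disk
truncate -s 1500G /tmp/fake.img
# build the 2+1 RAIDZ from two real disks plus the placeholder file
zpool create -f tank raidz /dev/ad4 /dev/ad6 /tmp/fake.img
# offline the file immediately so no data ever lands on it
zpool offline tank /tmp/fake.img
# ...copy everything into the (now degraded) pool...
# then swap the placeholder for the real third disk and let it resilver
zpool replace tank /tmp/fake.img /dev/ad8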

eames
May 9, 2009

IT Guy posted:

I went ahead and ignored everyone telling me not to RAID 5 my three Western Digital Caviar Green drives and did it anyway.

Don't do it.

What's wrong with them?

I’ve got 8 WD20EADS in a RAID6 and haven’t had a single issue so far. Before that I ran 4 of them in RAID-Z, no issues either. They saturate my GigE link with bulk transfers and that is really all they need to do.

And on the topic of growing weary of Oracle's bullshit, I couldn’t agree more.
I gave up on ZFS around snv132 because OpenSolaris was crippled and broken in too many ways for my application. Moved to Linux with the conventional mdadm/lvm/ext4 stack plus regular backups, and I’m not looking back at all.

Boody
Aug 15, 2001
I'm planning to upgrade from a DAS box (4 drives in a box with two USB/eSATA bridges) to a home server built from spare bits and pieces. The plan is a separate drive for the OS and two sets of four drives (one set connected to the motherboard).

Looking to add another 4 SATA ports on the cheap; unfortunately I'm UK-based, so I'm restricted in what I can get my hands on (no Rosewill cards, etc). Ideally I'd buy something like a Supermicro AOC, but the card and cables cost more than I want to spend. A port multiplier or a SiI 3124-based card are options, but I was considering a cheap SiI 3114 card, as performance isn't the main priority. Am I likely to have issues getting SiI 3114 cards to work with recent 1TB and 1.5TB drives? From googling, it seems most drives no longer have SATA 150 jumpers.

Also, I can't decide what OS to install. I was considering WHS, but I'm concerned that version 2 is coming out, that it uses a non-standard format (unless that was dropped), and that it will have issues after release like the first version did. I'm experienced with Linux/FreeBSD, so I'm considering FreeNAS/Openfiler, but other than easy administration do they really offer much over a plain install? I'd also like to use the box as a syslog server, for general monitoring, and for other light stuff, so I'm tending towards a generic distro.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
If you're using WD Green drives, use WDIDLE3.EXE to disable the loving Intellipark. And WDTLER.EXE to enable TLER.

There is talk that the tools don't work with the newest drives, but they did for my WD15EADS, so YMMV.
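
For reference, WDIDLE3 runs from real DOS (a bootable FreeDOS stick works). The commonly cited switches, from memory, so verify against the readme that ships with the tool; WDTLER's interface varies by version, so it's not shown:
code:
WDIDLE3 /R     (report the current IntelliPark idle timer)
WDIDLE3 /S300  (set the timer to 300 seconds)
WDIDLE3 /D     (disable the timer entirely)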

IT Guy
Jan 12, 2010

You people drink like you don't want to live!

eames posted:

What's wrong with them?

I’ve got 8 WD20EADS in a RAID6 and haven’t had a single issue so far. Before that I ran 4 of them in RAID-Z, no issues either. They saturate my GigE link with bulk transfers and that is really all they need to do.

I haven't tried them in software RAID such as mdadm yet, but just using my Intel chipset fake RAID, they're actually slower than standalone. If I transfer one or two large files they're fine, but if I transfer a batch of files, maybe 50GB worth, they slow to a halt.

I don't know, maybe it's the fake RAID. Going to reinstall Ubuntu Server on the weekend and give mdadm a shot.
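
For anyone else making the same jump, a minimal mdadm sketch for a three-disk RAID 5. Device names and mount point are hypothetical, and the mdadm.conf path is the Debian/Ubuntu one:
code:
# build the array from three whole disks
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# watch the initial sync progress
cat /proc/mdstat
# filesystem, mount, and make the array persistent across reboots
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage
mdadm --detail --scan >> /etc/mdadm/mdadm.conf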

md10md
Dec 11, 2006

Combat Pretzel posted:

If you're using WD Green drives, use WDIDLE3.EXE to disable the loving Intellipark. And WDTLER.EXE to enable TLER.

There is talk that the tools don't work with the newest drives, but they did for my WD15EADS, so YMMV.
Yeah, I still need to do this. I have 2x750GB WD GPs and they just thrash the load_cycle. One drive has 1.3 million cycles. For my new drives I've found a way around it if WDIDLE3.EXE doesn't work. Just make a shell script that touches the disk (I do, date > .poke) every 5 seconds so the heads never park. It works great. Hopefully THAT won't decrease the lifespan of the drives, though.
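
The script amounts to something like this (mount point hypothetical; note Combat Pretzel's caveat a few posts down about write caching soaking these writes up):
code:
#!/bin/sh
# nudge the array every 5 seconds so the heads never get a chance to park
while true; do
    date > /mnt/array/.poke
    sleep 5
done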

CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel
campaigns to drive
demand and increase
brand engagement
across web, mobile,
and social touchpoints,
bitch!
:yaycloud::smithcloud:

Drizzt01 posted:

Not sure if it meets all your requirements, but you could get a Drobo Elite or Pro.
http://www.drobo.com/products/droboelite.php
The Pro was only $1300 last time I checked, and it's rack-mountable.

Anyone used one of these (Pro/Elite/FS)? How's the speed compared to other commercial SOHO/SMB NASes on the market? Basically I need an NFS target for backups for two servers, and I'd rather go with something prebuilt, as I have an aversion to cobbled-together stuff.

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl

Combat Pretzel posted:

If you're using WD Green drives, use WDIDLE3.EXE to disable the loving Intellipark. And WDTLER.EXE to enable TLER.

There is talk that the tools don't work with the newest drives, but they did for my WD15EADS, so YMMV.

I bought a bunch of WD15EADS back in August/September and they all ran WDTLER fine, but I bought another couple of them in December and they didn't; supposedly October was when the change occurred.

I've heard the Hitachi 2TB drives tend to play nicely in RAID environments.

what is this
Sep 11, 2001

it is a lemur

bob arctor posted:

Anyone used one of these (Pro/Elite/FS)? How's the speed compared to other commercial SOHO/SMB NASes on the market? Basically I need an NFS target for backups for two servers, and I'd rather go with something prebuilt, as I have an aversion to cobbled-together stuff.

They're a joke; don't buy a Drobo. We've been over this.

Buy a QNAP, Synology, Thecus, or Netgear (Pro line only).

I'd recommend one of the first two.

NeuralSpark
Apr 16, 2004

Farmer Crack-rear end posted:

I've heard the Hitachi 2TB drives tend to play nicely in RAID environments.

I know a large RAID enclosure maker has started shipping them with their gear.

roadhead
Dec 25, 2001

IT Guy posted:

I went ahead and ignored everyone telling me not to RAID 5 my three Western Digital Caviar Green drives and did it anyway.

Don't do it.

Dude, 3? I bought 12 of the fuckers (10 in the RAID-Z2 and 2 spares) - used WDIDLE to increase the timeout to 24 seconds and they've been great.

IT Guy
Jan 12, 2010

You people drink like you don't want to live!

roadhead posted:

Dude, 3? I bought 12 of the fuckers (10 in the RAID-Z2 and 2 spares) - used WDIDLE to increase the timeout to 24 seconds and they've been great.

I don't have that much disposable income to be dropping a grand on hard drives :(

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I don't think it'd be very cost-effective to buy that many drives unless you expect to need that much storage in the next 12 months or so. I'm only going to have 9 disks in two separate zpools soon, and even then I'm going to phase them out for larger disks as the drives die off. To me that's the best compromise between the WHS one-drive-at-a-time method and the ZFS replace-the-entire-array's-drives-to-expand approach.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

md10md posted:

Yeah, I still need to do this. I have 2x750GB WD GPs and they just thrash the load_cycle. One drive has 1.3 million cycles.
Holy poo poo. They're only rated for something like 300,000 cycles.

I really don't get the point of early head parking. It just fucks up your drive mechanics and offers minimal savings.

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl

roadhead posted:

Dude, 3? I bought 12 of the fuckers (10 in the RAID-Z2 and 2 spares) - used WDIDLE to increase the timeout to 24 seconds and they've been great.

What's the advantage of the AV-GP over the regular GP drives?

md10md
Dec 11, 2006

Combat Pretzel posted:

Holy poo poo. They're only rated for something like 300,000 cycles.

I really don't get the point of early head parking. It just fucks up your drive mechanics and offers minimal savings.
:(
code:
  9 Power_On_Hours          0x0032   080   080   000    Old_age   Always       -       14811
193 Load_Cycle_Count        0x0032   001   001   000    Old_age   Always       -       1242235
Yeah, when I saw this I immediately bought the new drives and took these offline. They're serving as scratch drives for now until they die. Only about 15 months of action too.
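
For anyone wanting to check their own drives, those two attributes come straight out of smartmontools (device name hypothetical):
code:
smartctl -A /dev/sda | egrep 'Power_On_Hours|Load_Cycle_Count'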

Farmer Crack-rear end posted:

What's the advantage of the AV-GP over the regular GP drives?

From what I can tell: better (hopefully) quality control, better for long-term storage, and better temperature tolerances. We'll see.

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!
This is weird as gently caress. I took a gander at my drive's SMART info, and of my 4x WD10EACS,

the 2 -e series (4 platter drives) have 1,620 parks each
the 2 -z series (3 platter drives) have 108,000 parks each.

I've since used WDIDLE3 to set the timer to 25.5 seconds (can't disable fuckin' parking for some reason); hopefully it works out. I might just set fire to the barn and get 4x Hitachis to put in RAID 5. It'd be a lot easier.

md10md posted:

Yeah, I still need to do this. I have 2x750GB WD GPs and they just thrash the load_cycle. One drive has 1.3 million cycles. For my new drives I've found a way around it if WDIDLE3.EXE doesn't work. Just make a shell script that touches the disk (I do, date > .poke) every 5 seconds so the heads never park. It works great. Hopefully THAT won't decrease the lifespan of the drives, though.

Can I do something like this in Windows?


\/ Christ. Do I buy Seagate or Hitachi when it's array time, anyway?

PopeOnARope fucked around with this message at 09:07 on May 14, 2010

md10md
Dec 11, 2006

PopeOnARope posted:

Can I do something like this in Windows?
I don't see why not. I'm not really familiar with Windows batch scripts, but I imagine there's something similar to the Unix "date" or "touch" commands in Windows. As long as something is hitting the drives before the heads can park, they shouldn't idle. Whether this is worse or better for the drive, I have no idea. I guess I'm betting that since these drives are meant to be used constantly, a little nudge every 5 seconds can't be terrible for them.
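
Presumably a batch file along these lines would do the same job (drive letter and file name are hypothetical; timeout ships with Vista and later, so on XP you could abuse ping for the delay):
code:
@echo off
:loop
rem write a timestamp to the array so the heads stay loaded
echo %date% %time% > D:\poke.txt
timeout /t 5 /nobreak > nul
goto loop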

Edit: This is why I also kind of hate these drives...
code:
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       221
193 Load_Cycle_Count        0x0032   199   199   000    Old_age   Always       -       3167
These are the new 2TB HDs. I restarted the computer this morning and just noticed (as in 2 minutes ago) that the Load_Cycle_Count jumped from ~300 to 3167 in about 18 hours. Ridiculous. The script's back on and it hasn't crept up any more. By the way, (3167-300)/18 hrs = ~160 cycles/hr. Insanity.

md10md fucked around with this message at 07:39 on May 14, 2010

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

PopeOnARope posted:

Can I do something like this in Windows?
IIRC, these tools only work in real DOS. I had to set up a FreeDOS USB boot stick for that.

--edit: Whoops, nevermind that.

Well, setting up such a script requires disabling any write caching whatsoever. I think the client versions of Windows have it enabled by default, and I don't know what the timeout on the cache is. You can disable it, tho it involves some performance hit.

Such a script wouldn't work for me, for instance, since ZFS groups writes up for as long as 30 seconds here when there's no significant activity.

--edit2: Wait, you should be able to stop the drive from doing this via APM. I got my laptop drive under control by running hdparm on Linux (that was the OS installed on it). It should work with WD Green drives, too. Here's a Windows equivalent; try this:

http://sites.google.com/site/quiethdd/
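
The hdparm side of that is the -B (APM) flag, roughly like this; device name hypothetical, and some drives accept 255 to switch APM off outright while others only go as high as 254:
code:
# show the current APM level
hdparm -B /dev/sdb
# disable APM (fall back to 254 if the drive rejects 255)
hdparm -B 255 /dev/sdb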

PopeOnARope posted:

\/ Christ. Do I buy Seagate or Hitachi when it's array time, anyway?
No, just don't get Green drives. Spend a few more dollars and get the Black ones. They don't do that poo poo by default. They're faster in the end, too, since Green drives spin at only 5400 RPM.

As soon as mine start doing poo poo, they're immediately flying outta the case. I should have gone Black series to begin with, but the Green ones were terribly cheap at 1.5TB, while the Black ones capped out at 1TB back then.

Combat Pretzel fucked around with this message at 11:22 on May 14, 2010

eames
May 9, 2009

I only just now noticed that WD’s Blacks have a 5-year warranty while the Greens have 3 years. Maybe I should replace my 20EADS and put them on eBay before they start dying. :ohdear:

Samsung’s 2TB EcoGreen F3 (HD203WI) seems like a decent alternative for low-power bulk storage.

dietcokefiend
Apr 28, 2004
HEY ILL HAV 2 TXT U L8TR I JUST DROVE IN 2 A DAYCARE AND SCRATCHED MY RAZR
One of my new Advanced Format drives from WD goes NUTS connected to my computer. It clicks back and forth every 10 seconds or so, and to top it off the power management spins the drive down completely after ~20 seconds of no use.

Power management on the computer has no bearing; it's the only drive that does this. Tried changing the power settings using hdparm on a Knoppix disc without any luck either. It's just a massive bitch because, outside of the wear, it adds another 5 seconds to a file request while the drive spins back up. :mad:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
hdparm has to be run on each boot. Running it from a Linux live CD and then rebooting into your actual OS resets the setting to default.

If you're using Linux, AFAIK you can apply hdparm settings at boot time; at least Ubuntu had an init script for it. For Windows, use the QuietHDD I linked earlier. There was something for OS X, which I've tried on my hackpro laptop. On any other OS, like Solaris or BSD, you seem to be out of luck.
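
On Ubuntu/Debian the init script reads /etc/hdparm.conf, so the setting can be made to stick across reboots; a minimal sketch (device name hypothetical):
code:
# /etc/hdparm.conf
/dev/sdb {
    # 255 disables APM; use 254 if the drive rejects 255
    apm = 255
}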

Comatoast
Aug 1, 2003

by Fluffdaddy
Are 2TB drives with multiple platters less reliable than the Samsung F3 series or WD Black series drives? I realize the obvious answer is "yes, they are less reliable", but what I want to know is whether the difference is significant. I want to set up some fault tolerance, and I'm having a hard time deciding between fewer drives that cost less in total but have more platters, and more drives that cost more in total and cost more to run but are more reliable.

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

Combat Pretzel posted:

IIRC, these tools only work in real DOS. I had to set up a FreeDOS USB boot stick for that.

--edit: Whoops, nevermind that.

Well, setting up such a script requires disabling any write caching whatsoever. I think the client versions of Windows have it enabled by default, and I don't know what the timeout on the cache is. You can disable it, tho it involves some performance hit.

Such a script wouldn't work for me, for instance, since ZFS groups writes up for as long as 30 seconds here when there's no significant activity.

--edit2: Wait, you should be able to stop the drive from doing this via APM. I got my laptop drive under control by running hdparm on Linux (that was the OS installed on it). It should work with WD Green drives, too. Here's a Windows equivalent; try this:

http://sites.google.com/site/quiethdd/

No, just don't get Green drives. Spend a few more dollars and get the Black ones. They don't do that poo poo by default. They're faster in the end, too, since Green drives spin at only 5400 RPM.

As soon as mine start doing poo poo, they're immediately flying outta the case. I should have gone Black series to begin with, but the Green ones were terribly cheap at 1.5TB, while the Black ones capped out at 1TB back then.

Black drives are at least $70 more, each.

Edit - I'm really starting to hate fake softraid, mostly because it robs you of the ability to see the drives individually in the OS. But I can't do much about that, as my array has nowhere else to go.

PopeOnARope fucked around with this message at 17:36 on May 14, 2010

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Combat Pretzel posted:

hdparm has to be run on each boot. Running it from a Linux live CD and then rebooting into your actual OS resets the setting to default.

If you're using Linux, AFAIK you can apply hdparm settings at boot time; at least Ubuntu had an init script for it. For Windows, use the QuietHDD I linked earlier. There was something for OS X, which I've tried on my hackpro laptop. On any other OS, like Solaris or BSD, you seem to be out of luck.

Solaris has pretty much said gently caress you to SMART. I've got a 4+1 RAIDZ with 1.5TB Samsungs, but no way to see how they're doing...


dietcokefiend
Apr 28, 2004
HEY ILL HAV 2 TXT U L8TR I JUST DROVE IN 2 A DAYCARE AND SCRATCHED MY RAZR

Combat Pretzel posted:

hdparm has to be run on each boot. Running it from a Linux live CD and then rebooting into your actual OS resets the setting to default.

If you're using Linux, AFAIK you can apply hdparm settings at boot time; at least Ubuntu had an init script for it. For Windows, use the QuietHDD I linked earlier. There was something for OS X, which I've tried on my hackpro laptop. On any other OS, like Solaris or BSD, you seem to be out of luck.

On a Seagate 7200.2 I was able to have hdparm settings persist from a Linux session into Windows. That drive also had the odd parking thing, and it stopped after that.

  • Reply