IOwnCalculus
Apr 2, 2003





CeciPipePasPipe posted:

mdadm

I love mdadm :swoon:

I used this site when setting up the current iteration of my fileserver:

Managing RAID and LVM with Linux (v0.5)

This one I haven't used directly yet since I haven't yet bumped up against my current RAID's total capacity or close to it, but it looks incredibly interesting and I may give it a shot at some point later this year if I get a good deal on a 500GB/750GB drive:

Growing a RAID5 in MDADM

For the record, my fileserver is a pretty simple setup, hardware-wise. Mid-tower Antec case on its side, some cheap Foxconn 945-based motherboard, a Pentium E2140 CPU, a gig of RAM, and the main array I care about is 4x500GB in RAID5 via mdadm. I also have a secondary array I use as a scratch disk / storage for things easily replaced, which is two of my old 250GB drives in a RAID0.
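For anyone curious, creating an array like that is basically a one-liner. A rough sketch (the device names here are just examples - check what yours actually are first):

code:
# build a 4-drive RAID5 out of four partitions (device names are examples)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# record the array so it assembles itself at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf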

It went smoothly enough and worked well enough (especially compared to my old abortion of a shitload of ~200GB drives and Windows Server 2003 Dynamic Disk RAID5) that I took what was left of my old fileserver and rebuilt it as another Ubuntu server box with a smaller RAID5 array (3x200GB), added BackupPC, and stuck it at my mom's house as an offsite backup for the files I really, really don't want to lose. Both boxes are set to email me in the event a drive goes down.
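For the email alerts: the mdadm package on Ubuntu already runs a monitor daemon for you, so it's mostly just a matter of giving it an address. A sketch, not my exact config:

code:
# /etc/mdadm/mdadm.conf - where the monitor daemon sends its alerts
MAILADDR you@example.com

# send a test alert for every array to make sure mail actually goes out
mdadm --monitor --scan --test --oneshot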

IOwnCalculus fucked around with this message at 17:39 on Mar 18, 2008


IOwnCalculus
Apr 2, 2003





CeciPipePasPipe posted:

Also, here's another tip for Linux software RAID: make sure you put your swap partition on RAID as well (instead of striping the swap partition). I learned this the hard way when one of the drives tanked and the system couldn't do a clean shutdown, since it was unable to swap back in from the failed drive. Fortunately it didn't hurt anything important.

Are you setting yours up with a separate drive for the OS or no? I seem to have a never-ending supply of sub-100GB drives, which are still good for boot / swap disks - and it seems a lot easier to me, in the long run at least, to keep the OS completely separate from the data disks.
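For reference, mirroring swap like CeciPipePasPipe describes is only a couple of commands. A sketch, with made-up partition names:

code:
# mirror two small partitions and use the result as swap
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkswap /dev/md1
swapon /dev/md1
# then point the swap entry in /etc/fstab at /dev/md1 instead of the raw partitions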

IOwnCalculus
Apr 2, 2003





That's normal - it's effectively treating it as a degraded 3-disk array that you've just added the third disk to, and it's rebuilding onto that disk. Once it finishes it will move to 3 active and clean.
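You can keep an eye on it with something like:

code:
# rebuild progress and ETA
cat /proc/mdstat

# or the full array state
mdadm --detail /dev/md0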

IOwnCalculus
Apr 2, 2003





Syano posted:

Useless in the form of performance or useless for fault tolerance?

Useless for fault tolerance. For any type of failure that renders the drive unreadable from start to finish, this may as well be RAID0.

It will probably also kick performance in the balls since you're writing twice to each drive for one block of data.

IOwnCalculus
Apr 2, 2003





H110Hawk posted:

The real bitch is racking them without bending the rails.

Between this and the power requirements I'd guess it has...yeah, that's one big loving box.

IOwnCalculus
Apr 2, 2003





Triple Tech posted:

Doesn't RAID5 by definition use the smallest size disk as the maximum size of the fundamental unit of the array? Do you mean you want to constantly swap out small disks and then have it auto grow to inherit a new smallest size? Can RAID cards even do that?

Yeah, it would be a very difficult thing to implement on the RAID card level in my opinion - you'd need to have the card check all of the drives and determine the smallest available size, and then it would also need to automatically grow the array (not an easy task) if the smallest drive was replaced by a larger one.

Plus, it would be kind of pointless, since if you have multiple 'smallest' drives (let's say, hypothetically, 2x500 and 2x750) you wouldn't be able to touch the extra space on the larger drives until you replace all of the smaller ones.

IOwnCalculus
Apr 2, 2003





roflsaurus posted:

I guess my concern is if I go with a software raid (with the OS on a separate physical drive), what happens if the OS shits itself / OS drive dies? Likewise, what if I want to completely upgrade? Can I throw the 5 or 6 drives in a new box, new mb, etc, and the raid will "just work"? Also, is it easy to dynamically expand RAID 5 arrays via the command line / etc?

It is supposed to, in fact, 'just work' with minimal tweaking - I haven't had to test it yet, but your proposed setup is extremely similar to mine - 4x500GB in RAID5, 2x250GB in RAID0 as a scratch drive for torrents / whatever.
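The reason it should 'just work' is that the array configuration lives in superblocks on the data drives themselves, not on the OS drive. On a fresh install it should be roughly this (a sketch - like I said, I haven't actually had to do it yet):

code:
# mdadm scans the drives' superblocks and reassembles the arrays it finds
mdadm --assemble --scan

# then write the result into the config so it comes up on every boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf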

IOwnCalculus
Apr 2, 2003





roflsaurus posted:

do you have any links on how to recover from a software raid array on another machine? i'd like to see how easy / hard / problematic it is before i decide software or hardware raid.

I can't say that I have any guides on doing that exactly, but most of what I've done, I picked up from this site and this site.

IOwnCalculus
Apr 2, 2003





SnatchRabbit posted:

Sorry, I should have mentioned that the $300 cap includes drives.

There's pretty much nothing useful on the market that has any real functionality and comes with drives for under $300.

IOwnCalculus
Apr 2, 2003





Mr Chips posted:

Check this thread, it's about the new Intel Core 2 Duo mini-itx boards. Reasonable Gb ethernet, 5 SATA ports and a PCI-express 1x slot. Chuck in an E2180 and I'd like to think it would fly along with soft raid in linux (or at least do better than the 4-drive NAS appliances in an equivalent price range). I was looking at that chenbro case today, there's also a nice little two-bay version that's not quite so expensive as the 4 bay one.

I've got an E2140 in a different board (some i965-based Mini-ATX) and yes, the damned thing flies as a fileserver, even with both a RAID5 array and a RAID0 array on the same system, all via software RAID.

IOwnCalculus
Apr 2, 2003





kalibar posted:

The OS hard drive (an old IDE Maxtor 120GB) in my home filebox just died on me today. I don't have any unused SATA ports, and I don't really want to buy another IDE drive.

I do have an open PCI-e x16 slot, an open PCI slot, and a spare 100GB 2.5" laptop SATA hard drive laying around. Is there some kind of magic part I can buy that would let me get this drive into the computer and use it?

While I haven't had to look at any laptop SATA drives in person yet (only have one SATA laptop and it's already got more than enough hard drive space :) ) I'm 99% sure the ports on it are identical to the ones on a 3.5" SATA drive. So, all you would need is an addon card with a SATA controller that is compatible with your system. If your motherboard already has an external SATA chipset (i.e. some of the SATA ports are not directly on the southbridge) you may need to make sure your addon card uses the same chipset manufacturer.

You'll also need a 2.5-3.5 drive adapter to mount it up, assuming you don't want it flopping around in the case.

I ran into this a few years ago with my old AthlonXP based fileserver. The motherboard (A7N8X Deluxe or something, Asus NForce2 board) had a Silicon Image chipset and trying to use a Highpoint based controller meant I could only use one or the other. Using another SiI chip on a PCI card meant I could use both, though.

IOwnCalculus
Apr 2, 2003





kalibar posted:

Maybe I'm off the mark here, but I feel like adding yet another motherboard-slot to SATA converter to my system might be more of a headache than I want to give myself.

Really, at least in my experience, if you can get the computer to POST with both of them enabled you're golden. The two on your board are probably provided by the southbridge, so you shouldn't need to worry about them - in your case I would just go find the cheapest SATA card around with a Silicon Image chipset that fits whatever expansion slots your computer still has available. It should work, and even if it doesn't, you're risking maybe $20 new.

IOwnCalculus
Apr 2, 2003





Any cheap RAID card is going to rape the hell out of your CPU for the XOR calculations, so I really don't know how much of a performance benefit you're going to see. RAID5 isn't terribly fast anyway; the main point is that out of n drives you get n-1 drives of capacity plus protection against a single drive failing.

If performance is your goal, it might well be cheaper to build a separate low-power box to just run as a fileserver, install Linux, and run a software array with md. At least then the CPU it's raping won't be the one that matters to you.

Plus, online expansion with md/lvm is loving awesome. :swoon:

code:
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Jan  3 22:07:31 2008
     Raid Level : raid5
     Array Size : 1953535744 (1863.04 GiB 2000.42 GB)
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 5
  Total Devices : 5
This started out as a four-disk array, and took maybe five commands to grow to a five-disk array, none of which were 'umount'. I'll probably throw another one in this weekend, since I got a good deal on another drive.
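The sequence went roughly like this - a sketch rather than my exact shell history, assuming LVM on top of the md device, and the volume group / logical volume names (vg0/lv0) are placeholders for whatever yours are called:

code:
# add the new drive, then reshape the array to use five devices
mdadm --add /dev/md0 /dev/sdf1
mdadm --grow /dev/md0 --raid-devices=5

# once the reshape finishes: grow the LVM physical volume, the LV, then the filesystem - all online
pvresize /dev/md0
lvextend -l +100%FREE /dev/vg0/lv0
resize2fs /dev/vg0/lv0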

IOwnCalculus fucked around with this message at 07:05 on Oct 15, 2008

IOwnCalculus
Apr 2, 2003





I don't have any experience with it but the Rosewill branded version (exact same loving hardware from the look of the photos, heh) is being sold with a 500GB WD hard drive for $40 more than the Sans Digital version. I think whenever I run out of space in my current case (I have room for two more 500GB drives, leaving me either a 2.5TB array with a hot spare or 3.0TB without a spare) I may pick one of those up and switch to 1TB drives. It really looks like a nice solution for a reasonable price, and gives you a bit more flexibility in how you arrange things compared to, say, trying to cram a CM Stacker somewhere.

IOwnCalculus
Apr 2, 2003





whatever7 posted:

I don't know, that means I have to throw out the IDE drives? I couldn't bring myself to do it. I recently sent my A-data 2GB SD card back for replacement, even though I can get a new one for $3 or something, because I spent 120 freaking dollars for it.

Use 'em for target practice. Fact of the matter is, those drives are hardly worth the metal they're made from these days.

Plus, with drives that old I wouldn't consider them reliable anymore - it's a matter of time before they start dropping like flies.

IOwnCalculus
Apr 2, 2003





deimos posted:

EDIT: O man, if I had to do it all over again I would've started with this motherboard instead

drat, that thing is pretty awesome in concept but it looks like you're SOL if you want to get one - looks like it's only being sold in bulk to NAS manufacturers, or to a few places that sell it for $500 on its own.

IOwnCalculus
Apr 2, 2003





Trash Heap posted:

I have both Windows Vista and OSX 10.5 machines that I would like to have access a 'network drive' wirelessly. To be crystal clear, I want the Vista machine to have a mapped network drive where I can drag/drop files to in Windows Explorer. I want to be able to read/write to that same drive with OSX Finder.

This shouldn't be a problem. You can share to both using anything Samba based, though personally I'm not a fan of how OSX handles connecting to a Samba share. On my fileserver (running Ubuntu) I set it up to offer files via AFP as well, with shares set up pointing at the same locations on the fileserver. It doesn't matter if the system mounting the share is a Windows/Linux box via Samba or the Macbook Pro accessing it via AFP, they all see the same things.
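As a rough sketch of what that looks like (paths, share names, and usernames here are made up), the Samba side is a stanza in smb.conf and the netatalk side is a one-line entry pointing at the same directory:

code:
# /etc/samba/smb.conf - share for the Windows / Linux boxes
[media]
   path = /srv/media
   read only = no
   valid users = youruser

# /etc/netatalk/AppleVolumes.default - same directory, served over AFP for the Mac
/srv/media "Media"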

Trash Heap posted:

I wanted to take advantage of my Mac's built-in backup software, Time Machine, because this is my work computer and backups are absolutely vital!

Perhaps I need two solutions? One drive that is solely dedicated to backing up my mac, and another device/drive that can act as storage for both OSX and Vista?

This is where it gets sticky, since it looks like Time Machine wants to use local external storage (i.e. attached USB drive) over network storage. It looks like it can use network storage if you're backing it up to "another Mac" but that's not the case here. I can't seem to find out what Time Machine needs on whatever is acting as the fileserver to work properly.

Edit: go with what macx said on Time Machine, that's what I was trying to find out :)

IOwnCalculus
Apr 2, 2003





Samba / SMB / CIFS is pretty much the standard for filesharing, since it's what Windows wants to use by default - I'd be shocked if there was a consumer NAS on the market that can't do that. I would still try and find one that does AFP as well, though.

IOwnCalculus
Apr 2, 2003





Combat Pretzel posted:

Oh, missed that. No idea, depends on how the spanned volume code reacts to a degraded array.

Theoretically it shouldn't care, since assuming that the RAID5 array that's a member of the span is degraded and not gone, all of the data is still available - either read directly or reconstructed from parity.

What you're essentially looking at is a bastardized version of RAID50 - except that in a RAID50, if you lose two drives in one of the individual RAID5 arrays, all data is lost. With it set up as a span, if you lose two drives in one of the RAID5 arrays, you can theoretically recover whatever data is on the non-dead RAID5. Of course, RAID50 would be faster, especially on writes (assuming that the controllers are doing the XOR calculations, or that the CPU has nothing else to do).

I'd probably just run software RAID6 well before trying some amalgam of hardware and software RAID like that, though.
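Setting up RAID6 in mdadm is basically the same one-liner as RAID5, just with a second drive's worth of parity - device names are examples again:

code:
# six devices, two of which are 'spent' on parity, so it survives any two drive failures
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[bcdefg]1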

IOwnCalculus
Apr 2, 2003





I think you're going to have to just get a tower and install a few of these in it:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817121405

While I managed to cram it into a NSK4480, I recommend getting something with actual space inside, like the CM Stacker or something similar :)

IOwnCalculus
Apr 2, 2003





Animaniac posted:

Finding the right case is the only thing holding me back for the last year or so. Thanks!

Arguably there's drat good reason for this - I have nine hard drives crammed into an Antec mid-tower (6x500GB RAID5, 2x200GB RAID0, 80GB boot) and a few of them run pretty drat hot. Now, granted, the airflow is pretty poor for as many drives as there are in here, and a properly designed case for this many drives would likely improve on that problem.

All that said, I also wish that what you're asking for existed.

IOwnCalculus
Apr 2, 2003





necrobobsledder posted:

Cooling shouldn't be as important for home users buying those low power drives because they happen to use a lot less power and as a result produce less heat potentially. If you've got 15k rpm SAS drives like in a business environment, then cooling is a lot bigger concern and you should be looking at a serious case with serious cooling... or just buy the dumbass 4u file server like you're supposed to.

You'd be impressed at the heat standard 7200RPM desktop drives put out in close proximity to one another. Right now my fileserver is reporting drive temperatures ranging from 35°C to 48°C depending on where the drive is in the case, and it's a bit cool in the room - I'll see peak drive temperatures in the 50-55°C range for some of them.
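Those numbers are just coming off SMART, for what it's worth; assuming smartmontools or hddtemp is installed, either of these will report them (device names are examples):

code:
# read the temperature attribute straight from SMART
smartctl -A /dev/sda | grep -i temperature

# or one line per drive via hddtemp
hddtemp /dev/sd[abcd]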

On top of that, I once had the RAID5 array rebuilding with the 5-in-3 backplane sitting out of the case, out of the airflow and without a fan directly attached. It actually triggered the audible alarm that was set to go off at 80°C, and the whole enclosure was so hot I had to be careful handling it for a few minutes. Amazingly, none of the drives have shown any ill effects from that, and that was months ago.

IOwnCalculus
Apr 2, 2003





adorai posted:

Regardless, I got smacked down by this a while ago - Google published a paper regarding drive failures in its datacenters, and found no correlation between high temperature and drive failure rates (after an initial 6 month period). In fact, they found that low temps (68 degrees) were more likely to cause a failure.

http://labs.google.com/papers/disk_failures.pdf

Yeah, I just read that before my post and probably should have mentioned this line...

Google posted:

In fact, there is a clear trend showing that lower temperatures are associated with higher failure rates. Only at very high temperatures is there a slight reversal of this trend.

By their reckoning, 'very high' = 45°C or higher. I've got at least three drives averaging close to 50 degrees. All that said, I'm not really worried... it is, by their own admission, a slight reversal, and it's a failure rate that doesn't become prevalent until the 3+ year mark. I just wish it were possible to get this many drives into a small tower and at least keep them below that 45°C mark. You obviously need some form of airflow, otherwise you end up with the too-hot-to-touch drive case I had when I was rebuilding my array.

IOwnCalculus
Apr 2, 2003





jeeves posted:

:words:

This can be summed up in a few points:

*Hard drives are mechanically complex and delicate devices that will all inevitably fail, no matter how well designed
*People who have negative experiences are far quicker to post reviews than positive ones
*The average Newegg reviewer is a mongoloid who has never bothered to understand the first point, and doesn't back up data properly

I think that outside of the issues Seagate had with firmware, there hasn't been a case of one manufacturer putting out a seriously inferior drive for a long time, and the firmware issues are easily resolved. If you're putting 1.5TB of data you give a poo poo about on that drive, back it up somewhere. If you don't care about it, don't bother backing it up and realize that sooner or later that drive will die.

IOwnCalculus
Apr 2, 2003





Torrentflux is a goddamn clusterfuck these days. wTorrent + rtorrent is a bit annoying to set up (at least in 8.04, Apache's SCGI module is broken, so I wasted a good bit of time trying to make it play nice instead of just using lighttpd), but the interface and resource usage are worlds better.
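For anyone going down that road, the lighttpd side is just wiring SCGI through to rtorrent's XMLRPC socket. Roughly this - the port is just an example, use whatever you like:

code:
# ~/.rtorrent.rc - have rtorrent listen for XMLRPC over SCGI
scgi_port = 127.0.0.1:5000

# /etc/lighttpd/lighttpd.conf - hand /RPC2 off to rtorrent
server.modules += ( "mod_scgi" )
scgi.server = ( "/RPC2" =>
  ( "127.0.0.1" =>
    ( "host" => "127.0.0.1",
      "port" => 5000,
      "check-local" => "disable"
    )
  )
)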

IOwnCalculus
Apr 2, 2003





Terpfen posted:

I understand that, but at the end of the day it's still just an empty external drive with a different plug. I know networking hardware is more expensive than a USB controller, but for an external enclosure to cost triple digits is really absurd.

Not really - when operating as a USB / Firewire / eSATA enclosure, the enclosure has very little to deal with. It's just operating as a means for the computer to physically see the drive. Your computer still handles actually mounting the filesystem, reading files, etc.

For an enclosure to offer file access over ethernet, now it has to add on all of the following tasks:

*IP networking - be able to connect and communicate over a network, get an IP address, or have one manually defined
*Web management interface - at minimum, be able to define what subnet the device is on and whether or not it is using DHCP, along with other features usually
*Partition the drive, format the drive with a usable filesystem, and mount the drive
*Share the files on the drive using any of a number of protocols (SAMBA/CIFS typically, maybe FTP/HTTP, maybe NFS / other protocols)
*Offer user management on the filesharing side to offer some form of authentication to access / control what files belong to which user

Then on top of that consider that most of these devices offer more than bare minimum functionality; some are essentially very small servers that will run simple programs, such as bittorrent.

It's really not just "a different plug".
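To put it another way: the cheapest possible version of that list is a tiny embedded Linux box doing roughly this (purely schematic - names and paths are made up):

code:
dhclient eth0                    # get itself an IP address
mkfs.ext3 /dev/sda1              # format the drive it was handed (assuming it's partitioned)
mount /dev/sda1 /mnt/share       # mount it
smbpasswd -a youruser            # manage users for the share
/etc/init.d/samba start          # serve it over SMB/CIFS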

Edit: That said, for under $50, there are devices like this, though I have absolutely no idea if it's anything other than lovely: http://www.dealextreme.com/details.dx/sku.26320

IOwnCalculus fucked around with this message at 18:20 on Jul 17, 2009

IOwnCalculus
Apr 2, 2003





Terpfen posted:

Conversely, I don't see the justification in charging triple digits for anything involved with creating a network drive enclosure. I'm not denying the complexity of the technology over a generic USB external drive, I'm just saying I don't think the stuff I've seen is worth hundreds of dollars.


Thanks for the link. I would prefer something with Gigabit Ethernet, but I might bite the bullet anyway.

It's a niche market that currently has low demand - most people either stick with internal / USB directly attached drives, or they want a full-featured NAS.

Reading more on that DealExtreme NAS...from the sound of it, Gigabit would be thoroughly useless as it's apparently slow as poo poo.

IOwnCalculus
Apr 2, 2003





Unless Server 2008 has changed significantly from Server 2003 in this regard, Windows Server just doesn't work all that well for home use. It won't do what you're asking in terms of combining drives (for that matter, I don't think any server OS will do that if you're using drives that aren't blank and you don't want the data destroyed).

On top of that, software RAID5 in Windows Server is completely unable to expand. It sucks.

IOwnCalculus
Apr 2, 2003





frunksock posted:

I need recommendations on 5.25" -> 3.5" SATA enclosures. That is - one of those things that takes up 3 5.25" bays and converts them to 4 or 5 3.5" bays, ideally with power and SATA connections (doesn't need to be hot-swap tho).

Also, if anyone offhand knows a good cheap 4 port PCIe SATA controller that works with Solaris. Failing that, I'd buy (2) 2 port PCI SATA controllers, if someone has a recommendation on those. I don't need any RAID features.

I have this Supermicro and I like it a lot, though keep in mind it is pretty drat long. If you're trying to stuff this in a mid-tower case, you may run into interference issues with the motherboard like I did. Also, it's not really meant to slide into a desktop case which has all sorts of tabs and poo poo for mounting single-bay 5.25" devices, so get ready to get creative with the Dremel.

I haven't seen any cheap 4-port PCIe SATA controllers; the only cheap PCIe ones right now are 2-port. I grabbed a cheap generic one with a Silicon Image chipset and it worked great in Ubuntu, no idea on Solaris support.

IOwnCalculus
Apr 2, 2003





I switched from 500GB 7200RPM Western Digitals to 1.5TB 5400RPM Samsungs, and at least in straight reads/writes the performance actually went up, presumably due to the increased platter density.

I don't do a whole lot of random reads/writes on my array (I've got a separate RAID0 for data that I don't care about that needs to be fast) so take this with a grain of salt if you're doing everything on one array.

IOwnCalculus
Apr 2, 2003





Farmer Crack-rear end posted:

I'm about to put together a RAID-6 array and I'd like to really beat the hell out of it for awhile - hopefully to get any premature failures or unforeseen incompatibilities out of the way. Does anyone know of a good utility program to do this?

It's probably a ghetto solution, but if you've got a fast internet connection and don't live somewhere they screw you over if you use a lot of data, I'd grab a bunch of torrents for Linux ISOs and other large files and run them full tilt. Shitload of random writes and reads; on my original fileserver (a clusterfuck running Win2k3 Server), running a few torrents was enough to actually make uTorrent wait for the array to catch up.

IOwnCalculus
Apr 2, 2003





Atom wasn't an option when I built mine, but for me the issue I've seen with most Atom boards is you're still limited to one PCI slot. I don't ever plan on running a single array with consumer hardware so large that I need to run more than one add-in controller, but when I migrated from one array to the other it was nice being able to drop in two more add-in cards I had laying around so I could run both arrays at the same time.

For what it's worth, good job Antec on the Earthwatts line; I had my Earthwatts 380W PSU powering the base system itself (Dual-Core Pentium E2140, 945-based motherboard with onboard graphics) as well as an ungodly number of drives. 80GB boot, 6x500GB WD Caviar Blues, 2x200GB Maxtors, and 4x1.5TB 5400RPM Samsungs.

IOwnCalculus
Apr 2, 2003





Boot drive: Do you not have an old drive laying around to use for that? Or pick one up on SA-Mart...

Storage: I think Dell is running the WD 1.5TB greens at a bit over $90, check Slickdeals.

Motherboard + video: Gigabyte's P45 boards kick rear end but this is way overkill. Get one with onboard video, add a PCIe SATA controller down the road when you need more ports.

CPU: I run a Pentium E2140 in mine, but I'm not trying to virtualize anything with it. Also, at least originally, wasn't VT support actually slower than just doing it the old way?

RAM: Even with that CPU, will you see any benefit from DDR2 1066 over DDR2 800, assuming you're not overclocking?

PSU: Probably just fine, maybe slight overkill given what I was able to run on a 380W temporarily. Then again, that C2Q draws a lot more than a 2140.

IOwnCalculus
Apr 2, 2003





I have the Gigabyte EP45-DS3L in my main desktop, it supports DDR2 800 just fine.

How are you planning on handling the RAID? That may throw a wrench into things for your controllers. I run everything on software raid in Linux so I don't need to worry if an array spans multiple controllers (or even if the controllers are all the same type), but if you're planning on using the controllers to handle the array then you may want to go ahead and get a board with 8 SATA ports built in.

IOwnCalculus
Apr 2, 2003





Ha, a few of my coworkers and I discussed that today. Some of the highlights:

*I feel sorry for any customer whose data ends up on the three backplanes that are sharing one PCI controller and not one of the other six backplanes that split the load across three PCIe controllers. That thing is going to rape the PCI bus.
*Only one boot drive, and nonredundant power supplies? I hope they have all customers redundant across multiple boxes.
*How do you swap drives in that thing readily? The amount of downtime has got to be painful. You've got to take it offline, power down both PSUs, wait for everything to spin down, then unrack a nearly-100lb-server (over 70lb of drives alone!). Pop it open, locate the failed drive (how will they know which one failed at this point? Match serial numbers?), swap it, button it all up. Re-rack it, power it on (two steps), start the rebuild (mdadm is not exactly fast), and hope none of the other drives decided to use the spindown as an excuse to die.
*Why not low-speed green drives? Clearly (see bus concern above) performance is a nonissue. Low-power drives would have made this a lot easier to power and cool.

It's really a pretty damned neat idea and very clever, but I feel like they're going to regret some of their choices in a few years when drives start popping left and right.

IOwnCalculus
Apr 2, 2003





H110Hawk posted:

Since they build them all alike, they know the mapping exactly of how Linux will enumerate the disks. They probably just have a cheat sheet that shows exactly which slot holds /dev/sdh or whatever.

I wouldn't trust that, though; I've had Ubuntu randomly reassign drive mappings from one reboot to the next. I just replaced a drive in my backup server - the failed drive was /dev/sdc, but when it came back up it had remapped something else to /dev/sdc and the replacement for the failed drive to /dev/sdb.
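The only reliable way I've found to figure out which physical drive is which is to go by model and serial number instead of the /dev/sdX name - something like this (device name is an example):

code:
# the by-id names follow the drive itself, not whichever port it landed on
ls -l /dev/disk/by-id/ | grep -v part

# or pull the serial straight off one drive and match it to the label
smartctl -i /dev/sdb | grep -i serial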

Don't get me wrong, I like what they've done, but I think they haven't made enough concessions to long-term maintenance and reliability; it seems like for a relatively marginal increase in cost (especially compared to even the next-cheapest solution) they could make things considerably more reliable.

IOwnCalculus
Apr 2, 2003





adorai posted:

I wouldn't care about speed either, but I would strongly recommend going with a dual core proc. If you plan to use compression or dedupe in the future, you don't want your only core to be pegged, slowing down all other access.

Seconded. When I built mine, I went for the cheapest dual-core available (Pentium E2140) because I'd dealt with single-core CPUs in that role before, and the second core makes a remarkable difference.

Plus, looking around Newegg, you can get combo deals with a dual-core Athlon for somewhere between $10-$20 more at most. There's just no reason to get a single-core these days.

IOwnCalculus
Apr 2, 2003





nikomo posted:

I wonder what kind of HDDs you guys buy if you don't want to do RAID0 because the chance of malfunction is doubled. I have not had an HDD die on my hands during my (short) lifetime.

If you've never had a HD fail, you simply haven't used enough of them for a long enough time.

IOwnCalculus
Apr 2, 2003





Nice. I can't say I'm that paranoid, but at the same time I simply don't store any data I care about on anything less than a RAID1 or RAID5 volume. Primary storage for me is a 4x 1.5TB RAID5 at home, and the stuff I really do care about gets backed up to a 2x750GB RAID1 stashed away at my mom's house. The stuff I really really care about will eventually also get backed up to a single 1.5TB drive, sitting in my server in a datacenter.

Data I don't give a flying gently caress about goes on either a RAID0 scratch drive in the same box as the RAID5, or on a single drive.


IOwnCalculus
Apr 2, 2003





jeeves posted:

I've read that WD Green drives are not good for raids, since they spin down their heads after a few seconds of idleness. Are most 'green' drives like this? Like the Samsung line as well?

I have Samsung 1.5TB 5400RPM drives in my home server right now (4x in RAID5 using mdadm in Ubuntu). Previously, in an otherwise identical setup in the same server, I was running anywhere from 4 to 6 7200RPM WD 500GB drives.

For straight-up reads that don't require a lot of seeks, the new array is actually faster. It doesn't give up much in random seeks, and I haven't noticed any increased amount of spindown.

They do use a shitload less energy, best quantified as the significant reduction in waste heat output - they run at 39°C, as opposed to 50-60°C in the same enclosure and environment.
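If you're worried about the aggressive head parking on a particular drive, the SMART load cycle counter is the thing to watch - if it's climbing quickly, the drive is parking constantly (device name is an example):

code:
# every head park bumps this attribute; compare it over a day or two
smartctl -A /dev/sda | grep -i load_cycle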
