angelfoodcakez
Mar 22, 2003
crank dat robocop
So I want an enclosure connected to my main PC via eSATA and then shared out on the network via said PC.

There appear to be two kinds of boxes: ones that handle the RAID for you in the box, and ones that are just a dumb enclosure connected via one or more eSATA cables, leaving your PC to handle the RAID (via an onboard controller or a RAID add-on card?). What are the benefits of either setup?

I suppose the benefit of a dumb enclosure is that you let Windows/RAID card handle the drives, so you don't need to worry about some wacky custom-made firmware that may or may not eat all your data for no reason. Do most RAID cards have monitoring programs that will let you know when a drive starts flaking out?

I want sirens to go off as soon as a drive starts acting up, like I've seen with email reports on the ReadyNAS boxes I've set up for clients. I'm losing my mind with these lovely Seagate ST31500341AS 7200.11 drives that are just dropping like flies. I've had 3 die in the span of two months, another 2 are giving me bad sectors now, and now I see that 6/12 disks in my WHS box are these loving drives.

Just like I got over overclocking years ago, I'm done loving around with fun and exciting storage toys and just want something that works, even if it is more limited in functionality and storage capacity.

Rosoboronexport
Jun 14, 2006

Get in the bath, baby!
Ramrod XTreme
I'll answer my own questions now, after spending the weekend and today wrestling with a cheapskate NAS:

Rosoboronexport posted:

And the iBox thingy is a bit questionable as well. It comes with the ISP's own firmware which I no longer have access to. Does anyone know if some other manufacturer has some combo ADSL+WLAN+NAS router-thingie? Or does someone else have experience on the Bewan iBox (It's branded as a Wippies router or Elisa/Saunalahti Kotiboksi here in Finland)?

My experience is: if all your computers are connected to the iBox through RJ45, no problem. Transfer speed is around 4-5 MB/sec, so it's suitable for media storage and playback. It can be easily mounted as a network drive in both Linux and Windows; you just have to remember the 2 GB file limit if you plan on recording TV, for instance.
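For reference, mounting it from Linux is just an ordinary CIFS mount, roughly like this (the address, share name, and mount point are placeholders for whatever the iBox actually exposes):

sudo mkdir -p /mnt/ibox
sudo mount -t cifs //192.168.1.1/usbdisk /mnt/ibox -o guest,iocharset=utf8    # guest access; adjust options to taste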

If you are using it as a WLAN router, no go. A vague message from the ISP's support claims that "WLAN and USB use the same processor"; my interpretation is that USB and WLAN go through the same PCI bus, which gets saturated when both are hit with requests at the same time. This leads to the WLAN spontaneously shutting down while the wired connection and the NAS still work. The AP can still be seen, but no successful connection is made until you reboot the router.

I wrestled with the issue through the weekend and then I caved and got a b/g/n Asus router, which I connected to the iBox with RJ45 and set up as a regular access point. Unfortunately I can't disable the WLAN AP on the iBox, so now I have two WLAN stations saturating the airwaves. When I asked ISP support to disable WLAN from their control panel (which I no longer have access to), the response was "contact this private IT support firm, which costs $12.99/min, and they'll do it for you". I'll try emailing them and see if that gets any better results.

Oh, and I messed something up when I formatted the USB drive to ext3 with a Partition Magic trial: the drive is now 478 GB instead of 935 GB and cfdisk reports some errors, so it looks like at some point I'll need to empty it out and try reformatting it again. Sigh.

Secx
Mar 1, 2003


Hippopotamus retardus
Is there a catch or something that I am missing with this: SANS DIGITAL TowerRAID TR5M-BP 5 Bay eSATA RAID 0/1/10/5/JBOD Performance Tower w/ 6G PCIe Card (Black)? It is $239.99. They don't refer to it as a NAS, but it seems to do everything I want in a NAS.

A 2 bay NAS is $200+ everywhere I've looked.

Edit: Duuuuurf. I just noticed it doesn't have network connectivity.

Secx fucked around with this message at 21:28 on Oct 4, 2010

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Secx posted:

Is there a catch or something that I am missing with this: SANS DIGITAL TowerRAID TR5M-BP 5 Bay eSATA RAID 0/1/10/5/JBOD Performance Tower w/ 6G PCIe Card (Black)? It is $239.99. They don't refer to it as a NAS, but it seems to do everything I want in a NAS.

A 2 bay NAS is $200+ everywhere I've looked.

Edit: Duuuuurf. I just noticed it doesn't have network connectivity.

You could hook that up to a pretty cheap Atom PC running Linux and have yourself a nice little thing, if that's what you're into. It says the controller card supports Linux.

Ceros_X
Aug 6, 2006

U.S. Marine
Hey, got some trouble. I'm running a MegaRAID SAS 8344ELP card and just got an alert - two of my drives are marked 'missing' (two WD10EACS-00Z) and when I reboot I'm getting 'Foreign Config found' when the array initializes. Any ideas? I'm running a RAID5 with 4 1TB drives and a few others on JBOD.

devmd01
Mar 7, 2006

Elektronik
Supersonik
Hope you have a backup.

ErIog
Jul 11, 2001

:nsacloud:
I have a question, and this seems like the best place to ask.

I'm the lead on a project that involves archiving old magazines. Right now we're humming along on an 8x2TB QNAP NAS being backed up to an 8x2TB Drobo that has its stock of drives rotated off-site weekly. We have 3TB of storage capacity left, and I'm trying to do my planning for next year's upgrades.

When I first started this project 18 months ago, I assumed that since we were amassing data so slowly (about 45GB a day), hard drive sizes would keep pace and I would just be able to keep upgrading the drives. It looks like I'm definitely going to have to upgrade before the end of the year or just after January.

Meanwhile, loving 3TB internal drives or even 2.5TB internal drives are nowhere to be seen. Seagate released a 3TB external in July. WD announced 3TB externals yesterday or the day before. I'm fairly certain both the QNAP and the Drobo will support 3TB drives just fine once they come out, or will with firmware updates.

Do I just need to bite the bullet and drop a whole bunch of money on dual 12x2TB ReadyNAS's or does someone who reads this thread have any well-founded confidence that 3TB bare SATA drives will be available soon-ish? The reason I would have to go with dual ReadyNAS's is that I need some sort of backup solution, and the 8-bay Drobo won't cut it anymore if the total size of the data reaches 24TB.

The problem with this solution is that because of the storage redundancy and the 1000 versus 1024 problem I'm only really getting 10TB usable from a 16TB array in both systems. This means that if I step up to a 2x12TB array then it's really only buying me about 4 months worth of free space, and I really have no idea when the hell 3TB bare SATA drives are gonna be out.

I could also just split the data across two 8x2TB QNAP models, and then double the rotation on the Drobo backups to compensate. Splitting the data set kind of throws a bunch of tiny wrenches into other pieces of the project that hinge on this high-quality storage. I could rewrite various pieces of my software to compensate for it, but it just feels hacky to do that.

Seagate keeps blaming the lack of 3TB bare drives on BIOS and OS manufacturers, but they're creating this stupid chicken and egg scenario where those players won't budge until they have real compatibility problems with products on the market. I'm ready to start ordering the 3TB externals and doing surgery on them. The thing that bothers the piss out of me is that when all is said and done this dataset will only be about 25TB, but it seems like that goal is right on the edge of the Maginot Line of diminishing returns in this segment of the storage market.

The one time storage innovation slows in 30 loving years, and I have to be in a job where that matters.

Star War Sex Parrot
Oct 2, 2003

ErIog posted:

Seagate keeps blaming the lack of 3TB bare drives on BIOS and OS manufacturers, but they're creating this stupid chicken and egg scenario where those players won't budge until they have real compatibility problems with products on the market.
I'd be afraid to put one of their 3TB drives from an external in mission critical applications, especially if Seagate is anything like WD where their worst drives are usually the ones that end up in externals. Not to mention they probably wouldn't play nice with a RAID environment.

I'm going to try out a Drobo with a >2.2TB drive soon, so I can let you know if it works. I know it supports 4K-sectors, but I'm not so sure about 64-bit LBA.

Star War Sex Parrot fucked around with this message at 02:46 on Oct 6, 2010

Falcon2001
Oct 10, 2004

Eat your hamburgers, Apollo.
Pillbug
So, I'm preparing to set up a new box for my home network. I posted a ways back about it and it sounds like my best bet is an OpenSolaris box.

To recap needs:
* Must interface with Windows 7 as well as SMB does now, if not better. A little setup time is fine though.
* Must hold at least 6 TB of usable space, prefer to have more.
* Preferably fire and forget, I'd like to set it up, throw it under my desk on my UPS and forget about it for a few months at a time.
* ZFS / RAID-Z support.

What I don't need:
* User-friendliness on the setup. I'm more than capable of operating in a *nix environment given some tinker time.
* Super high speed file handling.
* Corporate level drive protection, although I might use double-parity as a slight failsafe and keep a few extra drives around.
* Small form-factor: I have space for a tower under my desk, don't mind putting it there.

So here's my current plan - pick up a ton of cheap disks, throw together a RAID-Z array on a box using OpenSolaris. My biggest blocker here is hardware choices. Can anyone recommend a build setup for this, or is my best bet cross-referencing the opensolaris hardware compatibility guide with newegg and just hoping it works?

Alternately, does anyone have any alternative ideas?

Edit: So yeah apparently for some weird reason, OpenSolaris has an HCL that doesn't suck rear end. This kind of blows my mind. I can easily build a parts list from this.

Falcon2001 fucked around with this message at 10:38 on Oct 6, 2010

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

ErIog posted:

I have a question, and this seems like the best place to ask.

Depending on how much your budget is, how much space you need, and how willing you are to be the one who gets blamed when something breaks, you can't really go wrong with an OpenSolaris box.

A Norco 4220, a decent set of computer guts, 2x 2-port SAS HBA cards, and as many drives as you need to fill the slots. Assuming you don't need really high-speed writes, you can get ~30 TB of useful space per Norco 4220 using RAIDZ2 (RAID 6). And if you need more space, it's fairly easy to buy an expander chassis, load it with drives, hook it to the SAS card, and add more vdevs to the zpool. Some of the guys over at the [T]ard forums have racks at home with north of 100TB in them, which can push something stupid like 2.2 GB/sec to the host system.
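Creating the pool itself is basically a one-liner; a rough sketch with placeholder Solaris device names, assuming eight drives and double parity:

zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
zpool status tank    # sanity-check the layout and health afterwards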

I set mine up about 6 months ago with 8x 1.5 TB drives, and aside from one problem I resolved fairly easily, I've had zero issues with it.


If you need redundancy, you can set up two of these boxes and run a nightly rsync or zfs send/receive to bring both filesets up to date, and if you want, you can also snapshot the data so if some chucklefuck deletes something important, it's trivial to recover.
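The snapshot/replication side is only a couple of commands; roughly, with made-up pool, dataset, and host names:

zfs snapshot tank/data@nightly-20101006                                        # cheap point-in-time snapshot
zfs send tank/data@nightly-20101006 | ssh backupbox zfs receive backup/data    # push it to the second box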

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Falcon2001 posted:

Edit: So yeah apparently for some weird reason, OpenSolaris has an HCL that doesn't suck rear end. This kind of blows my mind. I can easily build a parts list from this.

How many drives? I've got personal experience on OpenSolaris with 2-port and 8-port SATA cards, and can interpolate a recommendation for a 4-port card. Later I'll post my OpenSolaris build.

You'll also want to buy an Intel NIC.

angelfoodcakez
Mar 22, 2003
crank dat robocop

Secx posted:

Is there a catch or something that I am missing with this: SANS DIGITAL TowerRAID TR5M-BP 5 Bay eSATA RAID 0/1/10/5/JBOD Performance Tower w/ 6G PCIe Card (Black)? It is $239.99. They don't refer to it as a NAS, but it seems to do everything I want in a NAS.

A 2 bay NAS is $200+ everywhere I've looked.

Edit: Duuuuurf. I just noticed it doesn't have network connectivity.
A device like this looks like it's exactly what I want. Does anyone have any experience with these devices? Is Sans Digital a decent company?

For only $100 more, I'm really liking the 8-bay one:

http://www.newegg.ca/Product/Product.aspx?Item=N82E16816111141

It looks like there is a bundled eSATA/RAID card with it. Does that mean that the RAID is being handled by the host computer? Or is that just included for completeness?

edit: ah, it looks like these are just enclosures, with no internal logic/firmware? So it would be up to the RAID card and its drivers to provide good performance?

angelfoodcakez fucked around with this message at 02:51 on Oct 7, 2010

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I'd recommend the Intel / LSI SASUC8I controller. It'll be useful after an initial ZFS setup even if you go out to SAS expanders and backplane connections.

Drevoak
Jan 30, 2007
I'm looking to get a simple NAS setup for when Boxee Box is released. Is the D-Link 323 still considered the champ? I just find it surprising that it has been out for like 4 years and nothing really has improved.

movax
Aug 30, 2008

FISHMANPET posted:

How many drives? I've got personal experience on OpenSolaris with 2-port and 8-port SATA cards, and can interpolate a recommendation for a 4-port card. Later I'll post my OpenSolaris build.

You'll also want to buy an Intel NIC.

Yep, I've been running OpenSolaris for about 1.5 years or so, currently 8x1.5TB RAID-Z2 (1.93 GB free :ohdear:) in a Norco RPC-4020. Been pretty smooth, other than having to learn Solaris idiosyncrasies. My main HBA is the USAS-L8i from Supermicro; just bought another one to power another 8 drives.

To hijack a bit...right now I have 8 7200rpm Seagate 1.5s. Looking at 2TB drives for the next 8...what brand(s)/model should I get? Limiting I/O is a single-port Intel NIC at the moment, so I guess my preference is currently leaning towards 5400rpm drives for less heat. I've heard good things about the Hitachi 2TB.

Also... I hear you get an insane boost in (write?) speed by tossing in an SSD and turning it into an L2ARC device? If that SSD doesn't need to be very big, I can just get some 30GB drive with a decent controller and toss it in.
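From what I've read, the L2ARC mainly speeds up reads, and a separate log device is what helps sync writes. Either way, adding one looks like a one-liner; the pool and device names below are made up:

zpool add tank cache c7t0d0    # SSD as an L2ARC read cache
zpool add tank log c7t1d0      # or as a separate log (SLOG) device for sync writes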

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

FISHMANPET posted:

You'll also want to buy an Intel NIC.
Hilariously, removing the Intel NIC from my OpenSolaris box fixed a problem I had with horrendously slow file transfers. Now I am using the onboard Realtek gigabit NIC.

Telex
Feb 11, 2003

Okay, I don't know if I'm doing this right AT ALL, and I'm at the point where starting over won't suck, so here's my situation:

I want to rework my file server, which was previously a bunch of disks in a Vista machine with Windows shares, which was a hassle.

Yesterday I got 4 of the WD 2TB Green drives, installed Ubuntu and got the zfs-fuse package. I created a 4 drive pool and I'm in the process of attempting to migrate all the data on the drives left in the system.

I have two more 2TB drives in the machine (Hitachi, if that matters, but from what I read ZFS doesn't care if your drives are mismatched in size?) as well as a few 1TB drives and some 750s (10 drives total).

The 4 WD Green drives are hooked up with Si3132 adapters. I made the raidz using

zpool create myzfs raidz /disk1 /disk2 /disk3 /disk4

which I understand to be the one-redundant-disk type of array, such that one of my drives can die and I'm good as long as another disk doesn't die while it rebuilds the array.

Currently I'm trying to move my data from the other drives to the new volume by just dragging and dropping in Ubuntu (I may try using rsync instead), and I'm getting what I figure are lovely transfer speeds, like 10-12 MB/sec, when I figure I should get at least 30-50 MB/s if not a lot faster.
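If I do switch to rsync it'll probably be something like this, with placeholder paths; -a preserves attributes and --progress at least shows the throughput:

rsync -a --progress /mnt/old-ntfs-disk/ /myzfs/media/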

Have I got this thing configured lovely, is Ubuntu lovely at copying from NTFS, or is raidz not that fast at all?

edit bonus question: I CAN non-destructively add the two other 2TB drives to this array without having to erase and start over right? What about 2-3 1TB drives too?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
You can't expect a filesystem with crucial parts not properly implemented (things like the ARC), hammered into FUSE, to perform like mad.

Telex
Feb 11, 2003

Combat Pretzel posted:

You can't expect a filesystem with crucial parts not properly implemented (things like the ARC), hammered into FUSE, to perform like mad.

"Like mad" and 12 MB/s are two different realms. I was expecting at least the normal speed of one drive under Windows; I'm getting something that's around 2-3x less. It's a bit disappointing.

Does the read speed get better? I don't think I can live with it being this slow, especially for a "RAID", and I guess I'd better suck it up and figure out FreeBSD tonight instead of wasting my time, if ZFS on Ubuntu is this lovely.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Telex posted:

edit bonus question: I CAN non-destructively add the two other 2TB drives to this array without having to erase and start over right? What about 2-3 1TB drives too?

Depends how you mean add them. You can add another vdev, but you can't add those drives to your existing RaidZ vdev.

And seconding the fact that your speeds are always going to be poo poo. ZFS-fuse doesn't completely implement ZFS, and FUSE runs in userland, not the kernel, so any FUSE file system is pretty much going to run like poo poo by default.

Telex
Feb 11, 2003

FISHMANPET posted:

Depends how you mean add them. You can add another vdev, but you can't add those drives to your existing RaidZ vdev.

And seconding the fact that your speeds are always going to be poo poo. ZFS-fuse doesn't completely implement ZFS, and FUSE runs in userland, not the kernel, so any FUSE file system is pretty much going to run like poo poo by default.

So if I have the 4-drive set right now, I can't add two more drives to it to make a single 10TB volume; if I want that, I have to clear off the two extra 2TB drives I have?

Clearly I listened to the wrong person at work. I was hoping I could make the array with the new drives, clear off the old drives, and just expand the array with the old drives, letting ZFS do all the magic of balancing out the data and parity.

Looks like I have a lot of work to re-do tonight but it's good to learn.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Telex posted:

So if I have the 4-drive set right now, I can't add two more drives to it to make a single 10TB volume; if I want that, I have to clear off the two extra 2TB drives I have?

Clearly I listened to the wrong person at work. I was hoping I could make the array with the new drives, clear off the old drives, and just expand the array with the old drives, letting ZFS do all the magic of balancing out the data and parity.

Looks like I have a lot of work to re-do tonight but it's good to learn.

It's the feature that every home user wants but that Sun/Oracle will never do, because there's no business need for it. Enterprises don't add storage a disk at a time, they add it a tray at a time.

Here's what can be done:
A ZFS pool is built out of vdevs. A vdev can be a single disk, a file, or a RaidZ set. You can create a mirror after the fact, but you can't expand a vdev; you can only add more vdevs to the pool. So if you have your 4 drives in a RaidZ, you could add another single-disk vdev, or if you had three new disks you could add those as a 3-disk RaidZ vdev.
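In zpool terms, with made-up pool and device names, the two options look roughly like this:

zpool add tank raidz c1t0d0 c1t1d0 c1t2d0    # add a new 3-disk RaidZ vdev alongside the old one
zpool add tank c1t3d0                        # or add a lone-disk vdev (no redundancy on that disk)
# growing the existing 4-disk RaidZ vdev itself is the part that isn't possible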

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The problem with RAIDZ expansion is also the way it arranges data, which makes it a whole huge clusterfuck to expand. The base functionality required to do that stuff is, however, already in. Dedupe depends partly on it, and vdev removal, which I hope is coming, depends on it too.

TerryLennox
Oct 12, 2009

There is nothing tougher than a tough Mexican, just as there is nothing gentler than a gentle Mexican, nothing more honest than an honest Mexican, and above all nothing sadder than a sad Mexican. -R. Chandler.
I'm considering buying a Drobo FS or a ReadyNAS NVX Pioneer. It's not going to be used for business purposes; I'm just running out of space too quickly, and adding drives piecemeal just complicates backing it up. I thought a nice NAS with RAID 5 could help me with this.

Before I commit to buying an NAS I have some questions that I haven't been able to find the answers for anywhere else.

1) How important is it to use enterprise hard drives in the NAS? I'm not rich and would like to keep the price as low as possible while not setting myself up for failure (i.e. no Fujitsu or Hitachi Deathstar drives).

2) Premade NAS or DIY? I'm not afraid of a little DIY as I build my own systems. Will building my own be cheaper than buying a premade? The only plus I see in doing home brew is that I could choose whether I wanted to use RAID Z or something more exotic. I'm completely clueless regarding Unix, Solaris and such. I have used RHEL 4 and 5 at work and Ubuntu at home.

Any help will be appreciated.

what is this
Sep 11, 2001

it is a lemur
Whatever you do, don't buy a drobo. Literally anything else will be better.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Combat Pretzel posted:

The problem with RAIDZ expansion is also the way it arranges data, which makes it a whole huge clusterfuck to expand. The base functionality required to do that stuff is, however, already in. Dedupe depends partly on it, and vdev removal, which I hope is coming, depends on it too.

That's the Block Pointer Rewrite, and it looks like it'll be coming out as an actual feature in Solaris 11 Express. Then at long last ZFS can claim feature parity or better against traditional RAID implementations.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
For most of us running these ZFS setups at home, we probably won't be paying for Solaris 11 Express though, so we'd only care if it came out on OpenIndiana or Illumos, for example.

TerryLennox posted:

How important is it to use enterprise hard drives in the NAS? I'm not rich and would like to keep the price as low as possible while not setting myself up for failure (i.e. no Fujitsu or Hitachi Deathstar drives).
There's a lot of us in this thread who know the differences well and still use green drives for home setups. The price premium is really not worth it for almost every home storage use case. I've been using Green drives in my software RAID setups since they came out and they've been more than fine. It's just that they present problems when used with hardware RAID controllers that try to do a bunch of other crap. Funny enough, I get the impression that among home RAID folks more data has been lost to RAID complications than to drive failure.

Should we update the OP with "don't bother with the Drobo. Don't say we didn't warn you"? Because that's about as far as we need to explain ourselves I feel. Just type it into Google and you'll see the consensus.

what is this
Sep 11, 2001

it is a lemur

necrobobsledder posted:

Should we update the OP with "don't bother with the Drobo. Don't say we didn't warn you"? Because that's about as far as we need to explain ourselves I feel. Just type it into Google and you'll see the consensus.

Yes. It's annoying having to repeat the advice constantly.

Star War Sex Parrot
Oct 2, 2003

TerryLennox posted:

I'm considering buying a Drobo FS or a ReadyNAS NVX Pioneer.
Why would you buy the 6 year-old NVX instead of the brand new (and slightly cheaper) ReadyNAS Ultra?

TerryLennox
Oct 12, 2009

There is nothing tougher than a tough Mexican, just as there is nothing gentler than a gentle Mexican, nothing more honest than an honest Mexican, and above all nothing sadder than a sad Mexican. -R. Chandler.

Star War Sex Parrot posted:

Why would you buy the 6 year-old NVX instead of the brand new (and slightly cheaper) ReadyNAS Ultra?

Hurr. That's what I get for not doing enough research. The ReadyNAS Ultra 4 or 6 seem nice. X-RAID2 is basically all I need for a no-nonsense, no-complications setup. Thanks for the heads up!

necrobobsledder posted:

There's a lot of us in this thread who know the differences well and still use green drives for home setups. The price premium is really not worth it for almost every home storage use case. I've been using Green drives in my software RAID setups since they came out and they've been more than fine. It's just that they present problems when used with hardware RAID controllers that try to do a bunch of other crap. Funny enough, I get the impression that among home RAID folks more data has been lost to RAID complications than to drive failure.

Interesting. Software RAID is not as much of a problem as hardware RAID. I'm not too concerned about performance, as the NAS will be used to store a crapload of data that will be accessed sporadically. Perhaps as a movie and music store too. My 1.5 TB and 1 TB drives are not cutting the mustard. Could I impose on you a little more and ask you to elaborate a bit on what causes the drives to fail in hardware arrays and how to prevent that in software RAID? I recall reading a few posts at the beginning regarding head cycles or some such being overworked by the RAID controller, which causes drives to fail even though the platters themselves are probably fine.

Guys you have been more helpful than the storage support department at work (major OEM). :D

IOwnCalculus
Apr 2, 2003

The biggest issue by far is that green drives try to spin down quickly to reduce power consumption. Hardware RAID controllers interpret the spin down (and resultant spin up and the delays caused by it) as a failed drive.

Software RAID typically just shrugs and waits for it.

edit: \/\/ Nevermind he's right, it's been so long since I looked into it that I had forgotten the error-recovery aspect.

IOwnCalculus fucked around with this message at 21:18 on Oct 12, 2010

Zhentar
Sep 28, 2003

Brilliant Master Genius

IOwnCalculus posted:

The biggest issue by far is that green drives try to spin down quickly to reduce power consumption. Hardware RAID controllers interpret the spin down (and resultant spin up and the delays caused by it) as a failed drive.

That's not what TLER (or rather, the lack of it) is about. The drive can potentially spend a long time trying to read or verify bad sectors, which is not only pointless in a RAID environment, but can cause timeouts, and thus false alarms for drive failures.
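On drives that still expose SCT Error Recovery Control you can cap that timeout yourself; a rough example with a placeholder device name (plenty of desktop and green drives simply refuse the command):

smartctl -l scterc,70,70 /dev/sdX    # limit read/write error recovery to 7.0 seconds each
smartctl -l scterc /dev/sdX          # query the current setting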

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
There are two properties of low-power drives in the industry that are of interest to those running RAID setups. The aggressive head-parking behavior, which likely shortens the lifecycle through plain mechanical wear, is one, and the TLER / timeout behavior is the other. These are not a problem in any other drive line. Otherwise, the behaviors and features come down to warranty, certain electro-mechanical longevity, and 4k / 512b sector size issues (this is relevant in storage systems overall).
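For what it's worth, the head-parking side can sometimes be tamed from the OS; a rough sketch with a placeholder device name, though on WD Greens the idle timer lives in firmware and really wants WD's own wdidle3 utility instead:

hdparm -B 255 /dev/sdX                       # disable APM-driven parking, where the drive honors it
smartctl -A /dev/sdX | grep -i load_cycle    # keep an eye on Load_Cycle_Count over time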

If anyone wants a heads up, I'm about to post my 1.3-year-old Thecus N4100Pro 4-bay NAS on SA-Mart for $250, with the most recent firmware as of August 2010. It's been a great NAS, but after spending a good while with my ZFS setup, I don't have a need for it anymore. Smoke-free home :)

Star War Sex Parrot
Oct 2, 2003

necrobobsledder posted:

There are two properties of low-power drives in the industry that are of interest to those running RAID setups. The aggressive head-parking behavior, which likely shortens the lifecycle through plain mechanical wear, is one, and the TLER / timeout behavior is the other. These are not a problem in any other drive line.
What about enterprise green drives? TLER should be taken care of, and I can't imagine their head-parking is as aggressive. However you still get some energy benefits due to the lower spindle speed.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Since when did they start making "enterprise" green drives? And at $260, I can't help but laugh a little that they probably took some of the regular RE4 2TB drives, spun the spindle speed down, and tada! GREEEENNNN

I can only think of so many situations where a "green" drive would be useful in enterprise storage or actually worth the cost for a home user. Perhaps as a hot spare, where reduced performance isn't terrible but keeping it powered on is a requirement, or in a VTL that mostly sits collecting dust (figuratively) after a backup or sync for, say, a disaster recovery scenario. Seriously, a 2TB WD20EARS is now like $94. Not a huge deal with a Windows Home Server or most software RAID setups, so long as your OS (and your drive jumpers) can get the partitions aligned on 4k boundaries.
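Getting that alignment right mostly just means starting partitions on a 1 MiB boundary; a rough example with a placeholder device name (or skip partitioning entirely and hand the whole disk to your software RAID/ZFS):

parted -s -a optimal /dev/sdX mklabel gpt mkpart primary 1MiB 100%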

If we want to get enterprisey, there's always that SAN megathread over yonder. I thought it was kinda implied that this thread is for home users with their Linux isos and homegrown collections.

CrazyLittle
Sep 11, 2001

Clapping Larry
The biggest cost of colocation today is power and heat. "Green" drives help solve both problems, so it makes a lot of sense even if they're just kneecapping faster drives. Think of the power and heat savings when you stretch that over 1000 disks.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

CrazyLittle posted:

The biggest cost of colocation today is power and heat. "Green" drives help solve both problems, so it makes a lot of sense even if they're just kneecapping faster drives. Think of the power and heat savings when you stretch that over 1000 disks.
And now imagine you took half that savings, dumped it into read and write cache, and kept your overall speed the same.

Star War Sex Parrot
Oct 2, 2003

necrobobsledder posted:

Since when did they start making "enterprise" green drives?
Several years. And from what I can tell the green enterprise drives are just the consumer green drives with more RAID-friendly firmware, rather than the RE4 drives running slower.

Profane Obituary!
May 19, 2009

This Motherfucker is Dead
If I wanted to set up a ZFS file server serving a mixture of Linux and Windows (mostly Windows), what is the best choice for an OS? From what I can tell there is something murky going on with OpenSolaris? Should I try FreeBSD?

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Profane Obituary! posted:

If I wanted to set up a ZFS file server serving a mixture of Linux and Windows (mostly Windows), what is the best choice for an OS? From what I can tell there is something murky going on with OpenSolaris? Should I try FreeBSD?
For CIFS performance, FreeBSD is not going to beat OpenSolaris. If you do build it on OpenSolaris, the Illumos project will probably provide an eventual upgrade path onto one of the distros based on it.
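The in-kernel CIFS server on OpenSolaris is also dead simple to turn on; roughly, with a made-up pool/dataset name:

svcadm enable -r smb/server       # start the kernel CIFS service
zfs set sharesmb=on tank/share    # expose the dataset as a regular Windows share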
