Gozinbulx
Feb 19, 2004
I've been googling the gently caress out of it for what seems like hours now, trying to figure out what to do with my Dell Precision 690 case (which was discussed months ago in this thread). I got it free and it's enormous, so it seems like it would be great for filling with drives. The snag: it was built during that small, lovely window when Dell and a few others thought BTX motherboards would be the wave of the future, so it's mounted as such. What's confusing is that I've read two posts where someone claimed that the 690 in fact COULD hold an ATX or mini ATX mobo, but I can't find any real confirmation of that. Anyone have any experience with this? I think someone claimed it was reverse ATX instead of straight BTX; does that mean I can mount an ATX mobo in it? Cases are so cheap that I'm starting to wonder if it's even worth the hassle.


IOwnCalculus
Apr 2, 2003





Civil posted:

I've ordered 4 drives from Newegg this week, and they all shipped via Ontrac, which is what Amazon uses most of the time (for me).

I have Prime and prefer Amazon because they're generally awesome for everything, but Newegg had some serious sales, so I went with them. And they've always had excellent customer service as well. Not as good as Amazon, but better than shitbuckets like Tigerdirect.

It's not the carrier, it's the packing. Newegg wraps bare drives up like bubble-wrapped footballs and throws them into a loose box with peanuts. Amazon puts each drive in a single-drive cardboard box (like what WD or Seagate dictate their RMAs should be sent in) and then packs those boxes into a larger box; in my experience, usually using those large air bags or kraft paper.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".

IOwnCalculus posted:

It's not the carrier, it's the packing. Newegg wraps bare drives up like bubble-wrapped footballs and throws them into a loose box with peanuts. Amazon puts each drive in a single-drive cardboard box (like what WD or Seagate dictate their RMAs should be sent in) and then packs those boxes into a larger box; in my experience, usually using those large air bags or kraft paper.

My Toshiba drives were boxed retail, and they arrived in a cardboard box full of packing peanuts, which covered all sides. My Reds are apparently waiting for me in front of my house; I don't know if they're bare or retail boxed. I'll take a picture for kicks tonight. Maybe they'll have footprints or dents in them or something.

I am glad they're shipping Ontrac. Their free shipping used to take a week to go from LA to Sacramento. I ordered these yesterday and they're already there.

kalicki
Jan 5, 2004

Every King needs his jester
I got a drive in from Newegg the other day and it wasn't in standard bubble wrap; it was some sort of much stronger-seeming wrap, almost like bubble tube wrap, with the HDD securely enclosed inside, and it filled the entire box. Haven't had a chance to install it yet to test it, though.

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".



I should probably test these sooner rather than later. Backing up everything now. Decided to burn my NAS to the ground rather than swap in one disk at a time (bumping all four 2TBs to 3TBs).

Casull
Aug 13, 2005

:catstare: :catstare: :catstare:
All right, time to plan out a NAS for backing up my junk at home.

In terms of storage, I think I'm going to want a 6TB RAID5, so that means three 3TB WD RED drives. Going by Amazon's prices, that's $450 total for the drives.

Next, I need to figure out an enclosure and optionally a recommended OS. The enclosure should be:

1. Under $200
2. Capable of hardware RAID
3. Optionally capable of running FreeNAS (unless someone else can recommend a better OS)

Would the N40L in the OP be suitable, or can I get something with better bang for my buck nowadays?

BlankSystemDaemon
Mar 13, 2009




As I see it, the whole point of running FreeNAS (or any NAS appliance OS) is the software RAID it offers. Either the N40L or the N54L (depending on which is cheaper; I've sometimes seen the latter going for less than the N40L) will more than suit your needs.

grizzlepants
Jan 14, 2008

Casull posted:

In terms of storage, I think I'm going to want a 6TB RAID5, so that means three 3TB WD RED drives. Going by Amazon's prices, that's $450 total for the drives.

Since ZFS doesn't currently allow you to add a disk to an existing RAIDZ container and the N40L has 4 SATA slots, you might want to consider biting the bullet and getting that 4th disk if you think there's a chance you could outgrow 6TB. Finding places to put all your data temporarily so you can destroy your existing RAIDZ dataset and recreate it with an additional disk is a pain.
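A quick back-of-the-envelope for the 3-vs-4 disk decision (this assumes the usual RAIDZ1 rule of one disk's worth of space going to parity, and ignores ZFS metadata overhead):

```python
def raidz1_usable_tb(num_disks, disk_tb):
    # RAIDZ1 gives up one disk's worth of space to parity
    return (num_disks - 1) * disk_tb

print(raidz1_usable_tb(3, 3))  # 6 TB, the planned three-disk layout
print(raidz1_usable_tb(4, 3))  # 9 TB if the 4th disk goes in up front
```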

The_Franz
Aug 8, 2003

grizzlepants posted:

Since ZFS doesn't currently allow you to add a disk to an existing RAIDZ container and the N40L has 4 SATA slots, you might want to consider biting the bullet and getting that 4th disk if you think there's a chance you could outgrow 6TB. Finding places to put all your data temporarily so you can destroy your existing RAIDZ dataset and recreate it with an additional disk is a pain.

Speaking of, is any progress being made as far as being able to add devices to a ZFS vdev? Block pointer rewrite, which would enable this, has seemingly been coming "real soon" for years.

zorch
Nov 28, 2006

edit: looks like I missed an entire page of this thread :haw:

zorch fucked around with this message at 15:54 on Sep 20, 2013

BlankSystemDaemon
Mar 13, 2009




The_Franz posted:

Speaking of, is any progress being made as far as being able to add devices to a ZFS vdev? Block pointer rewrite, which would enable this, has seemingly been coming "real soon" for years.
Maybe now that open-zfs is a thing, we'll see some advances.

thebigcow
Jan 3, 2001

Bully!

The large bubble stuff isn't even rated for fragile things, just using the smaller bubble stuff would be a huge improvement. Of course if you're going to bother with that you could just get real packaging for the drives.

Lowen SoDium
Jun 5, 2003

Highen Fiber
Clapping Larry

D. Ebdrup posted:

Maybe now that open-zfs is a thing, we'll see some advances.

I saw this the other day and meant to ask about it here.

Am I correct in understanding that open-zfs is just a name for the collaboration between the various open source ZFS ports on Linux, Mac OS, FreeBSD, etc.?


Specifically, this is not a new port of ZFS, but an attempt to keep all of the ports on the same page feature-wise?

evol262
Nov 30, 2010
#!/usr/bin/perl

Lowen SoDium posted:

I saw this the other day and meant to ask about it here.

Am I correct in understanding that open-zfs is just a name for the collaboration between the various open source ZFS ports on Linux, Mac OS, FreeBSD, etc.?


Specifically, this is not a new port of ZFS, but an attempt to keep all of the ports on the same page feature-wise?

That's correct.

Longinus00
Dec 29, 2005
Ur-Quan

Casull posted:

All right, time to plan out a NAS for backing up my junk at home.

In terms of storage, I think I'm going to want a 6TB RAID5, so that means three 3TB WD RED drives. Going by Amazon's prices, that's $450 total for the drives.

Next, I need to figure out an enclosure and optionally a recommended OS. The enclosure should be:

1. Under $200
2. Capable of hardware RAID
3. Optionally capable of running FreeNAS (unless someone else can recommend a better OS)

Would the N40L in the OP be suitable, or can I get something with better bang for my buck nowadays?

Hardware RAID for something under $200 total doesn't really exist. There's also zero reason to run hardware RAID, so there you go.

The_Franz posted:

Speaking of, is any progress being made as far as being able to add devices to a ZFS vdev? Block pointer rewrite, which would enable this, has seemingly been coming "real soon" for years.

This is just one of the sore points of the ZFS architecture. Adding resizing and defragmentation is a huge PITA due to how all the block groups are set up. To work around some of the current issues they've even had to resort to creative solutions such as ganging up smaller block groups to get bigger ones if there's a disparity in block group usage. This is where the huge difference between the designs of BTRFS and ZFS becomes apparent, even as BTRFS is starting to reach feature parity with ZFS. ZFS also seems to be aimed at high-end enterprise/lab workloads, so I'm not sure how high a priority this is for them.

Longinus00 fucked around with this message at 20:05 on Sep 20, 2013

ilkhan
Oct 7, 2004

Ok then
So I've got my new Synology DS417 up. It's small, quiet, and low-power. I like it.

But I have a question.
I've got volume 1 which is the 3 disks in R5. I have a shared folder in there.
\volume1\vault

Can I share \volume1\vault\folder separately, with a separate permission/name? The user with access to vault can see everything in folder, but I want another user to be able to access folder without having any other access.

I just want to map one drive letter for me while giving boxee / roommate access to specific folders.

ilkhan fucked around with this message at 22:39 on Sep 20, 2013

Megaman
May 8, 2004
I didn't read the thread BUT...
Can someone explain why ZFS pools cannot be expanded by number of disks, and only by bigger disk sizes? Is this a limitation of the ZFS RAID levels? Is this something that can be patched? Or can it never be fixed ever?

ilkhan
Oct 7, 2004

Ok then

Megaman posted:

Can someone explain why ZFS pools can not be expanded by number of disks, and only by bigger disk sizes? Is this a limitation of the ZFS RAID levels? Is this something that can be patched? Or can it never be fixed ever?
AFAIK (and I'm not an expert by any means) it can't be expanded at all, neither by bigger disks nor by additional disks.

Megaman
May 8, 2004
I didn't read the thread BUT...

ilkhan posted:

AFAIK (and I'm not an expert by any means) it can't be expanded at all, neither by bigger disks nor by additional disks.

You can add bigger disks, at least I've been told so before, but not add disks.

Megaman
May 8, 2004
I didn't read the thread BUT...

ilkhan posted:

AFAIK (and I'm not an expert by any means) it can't be expanded at all, neither by bigger disks nor by additional disks.

From wikipedia

quote:

Thus, a zpool (ZFS storage pool) is vaguely similar to a computer's RAM. The total RAM pool capacity depends on the number of RAM memory sticks and the size of each stick. Likewise, a zpool consists of one or more vdevs. Each vdev can be viewed as a group of hard disks (or partitions, or files, etc.). Each vdev should have redundancy, because if a vdev is lost, then the whole zpool is lost. Thus, each vdev should be configured as RAID-Z1, RAID-Z2, mirror, etc. It is not possible to change the number of drives in an existing vdev (Block Pointer Rewrite will allow this, and also allow defragmentation), but it is always possible to increase storage capacity by adding a new vdev to a zpool. It is possible to swap a drive to a larger drive and resilver (repair) the zpool. If this procedure is repeated for every disk in a vdev, then the zpool will grow in capacity when the last drive is resilvered. A vdev will have the same base capacity as the smallest drive in the group. For instance, a vdev consisting of three 500 GB and one 700 GB drive, will have a capacity of 4× 500 GB.

So I guess I'm curious what the reason is. Or maybe there's something in this paragraph I'm not understanding?
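The capacity rule in that quote can be checked in a couple of lines (assuming, as the quote says, that every member of a vdev contributes only as much as its smallest drive):

```python
def vdev_base_capacity_gb(drive_sizes_gb):
    # each drive in a vdev contributes only as much as the smallest one
    return len(drive_sizes_gb) * min(drive_sizes_gb)

# the quote's example: three 500 GB drives plus one 700 GB drive
print(vdev_base_capacity_gb([500, 500, 500, 700]))  # 2000, i.e. 4x 500 GB
```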

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The algorithm that generates the equivalent of the RAID stripe and the checksumming / forward error correction parity units for RAIDZ vdevs is fixed and indexed around the number of disks, so changing it would require completely rebuilding the data and metadata / superblock equivalents, during which time the data would be unavailable. Note that zpools consist of 1 or more vdevs, and it's vdevs that can't have their structural layout changed. While it's possible to increase the size of each physical device within a vdev and have them resilvered later to the smallest of all drives in the vdev, you can't just remove and add a drive to a vdev that's an active device. You can add hot spares and cold spares to vdevs, though.

There have been a few patches made by ZFS devs in the past, but there's little expectation that they'll be merged back into mainline ZFS, especially because Oracle doesn't give a drat about home users' money. The few people with the knowledge / qualifications to patch it now have neither the time nor the will to add the dynamic / ephemeral multi-stripe sizing implementation to the reference ZFS implementation. Maybe Open ZFS will get it in there way down the line, but I expect by then that Dropbox will offer 1TB for free - that's how long I expect things to take.

The general recommendation for expanding zpools is to add a vdev consisting of a RAIDZ. This model is pretty close to how most people would expand a DAS or SAN (a new rack in your SAS expander array or a new shelf of disks in a chassis on a controller group). Otherwise, you can replace your existing array's disks one at a time and wait for them to resilver for days on end.
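A sketch of that expansion model (RAIDZ1 usable space approximated as one disk of parity per vdev, ignoring ZFS metadata and slop space):

```python
def raidz1_vdev_usable_tb(num_disks, disk_tb):
    # one disk's worth of parity per RAIDZ1 vdev
    return (num_disks - 1) * disk_tb

def pool_usable_tb(vdevs):
    # a zpool stripes across its vdevs, so usable space is the sum
    return sum(raidz1_vdev_usable_tb(n, size) for n, size in vdevs)

# start with one 4x2TB RAIDZ1 vdev, then grow by adding a 4x3TB vdev
print(pool_usable_tb([(4, 2)]))          # 6 TB
print(pool_usable_tb([(4, 2), (4, 3)]))  # 15 TB
```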

dotster
Aug 28, 2013

Here is a very basic deck that covers ZFS and some of the adding and expanding options, at least for FreeNAS.

http://forums.freenas.org/threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Here is a link to the page it is from if you want to read more.


http://doc.freenas.org/index.php/Hardware_Recommendations#ZFS_Overview

modeski
Apr 21, 2005

Deceive, inveigle, obfuscate.
Does anyone have a NAS with 8 drives? I'm speccing out my new server and need a PSU that will support up to eight HDDs (I'm starting off with 5). I'd like something efficient with enough connections for eight drives, but I have no clue what else I need to be considering. Does the PSU need 8 actual SATA power connections, or can I split off molex and SATA connectors according to my needs?

EDIT: I'm using a motherboard with 8 SATA ports, so no need for a backplane unless I want to do 10 drives down the line.

modeski fucked around with this message at 01:45 on Sep 22, 2013

dotster
Aug 28, 2013

modeski posted:

Does anyone have a NAS with 8 drives? I'm speccing out my new server and need a PSU that will support up to eight HDs (I'm starting off with 5). I'd like something efficient with enough connections for eight drives, but have no clue what else I need to be considering.

I have a server with 8 bays; it has a SAS/SATA backplane and takes two 4-pin molex connectors. Drives don't pull that much power, 5-10W per drive. I would also recommend RAID 6 over 5 (assuming you meant 5 disks for RAID 5).

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I'm currently installing and testing / validating my build based upon a U-NAS NSC-800 case, but I'm pretty sure that your motherboard there won't fit in that case.

But to answer the question, pretty much any PSU that has sufficiently decent amps available on the 12V rails to the molex connectors should be fine if using a backplane (they tend to take molex connectors with 1 molex for 4 drives to distribute the SATA power). You can get molex to SATA power adapters that split out to 4 SATA connectors but I suspect those have dubious quality given the reviews I've seen online. If looking for as little power as possible, you'll want to make sure that the controller used for the drives supports staggered spin-up and you can probably get a PSU somewhere in the 150w rating territory for maximum efficiency. I didn't opt for a picoPSU due to lack of sufficient info on whether the molex connector on those could be split effectively for my hard drives, but those PSUs are worth a look as well as a lot of flex ATX PSUs.

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!

Longinus00 posted:

Hardware raid for something under $200 total doesn't really exist. There is also zero reason to run hardware raid so there you go.

M1015 crossflashed to IR begs to differ.

e: hell the M5014 can be had for less than $200 sometimes.

deimos fucked around with this message at 02:41 on Sep 22, 2013

Longinus00
Dec 29, 2005
Ur-Quan

deimos posted:

M1015 crossflashed to IR begs to differ.

e: hell the M5014 can be had for less than $200 sometimes.

That's only the price of the card, he was talking about the whole package/enclosure like the N40L.

As far as power supplies go, you need to make sure you can supply enough current to handle starting up all the disks (better controllers will let you stagger starting them to reduce the load required here) and also the 5V draw, which in consumer power supplies tends not to go over 130W. For 8 drives this probably won't be an issue though.
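A rough 12V startup budget makes the spin-up point concrete (the 2 A peak and 0.45 A idle figures are typical 3.5" drive datasheet numbers assumed here, not from the post):

```python
def startup_amps_all_at_once(num_drives, spinup_amps=2.0):
    # worst case: every drive hits its 12V spin-up peak simultaneously
    return num_drives * spinup_amps

def startup_amps_staggered(num_drives, spinup_amps=2.0, idle_amps=0.45):
    # staggered spin-up: one drive peaks while the rest sit at idle draw
    return spinup_amps + (num_drives - 1) * idle_amps

print(startup_amps_all_at_once(8))  # 16.0 A on the 12V rail
print(startup_amps_staggered(8))    # ~5.15 A with staggering
```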

tarepanda
Mar 26, 2011

Living the Dream
Two questions:

1. What's the easiest way to transition from RAID-Z to regular RAID? Is it necessarily going to be a two-step process where I copy everything off the RAID-Z array?

2. Is there a way to share a BR drive on my NAS (FreeNAS) so that I can watch DVDs on my HTPCs?

Moey
Oct 22, 2010

I LIKE TO MOVE IT

tarepanda posted:

Two questions:

1. What's the easiest way to transition from RAID-Z to regular RAID? Is it necessarily going to be a two-step process where I copy everything off the RAID-Z array?

2. Is there a way to share a BR drive on my NAS (FreeNAS) so that I can watch DVDs on my HTPCs?

1. You will have to copy it to a temp location, then rebuild the array.

No advice on 2.

Why do you want off ZFS?

dotster
Aug 28, 2013

Only reason I can think of

tarepanda posted:

Two questions:

1. What's the easiest way to transition from RAID-Z to regular RAID? Is it necessarily going to be a two-step process where I copy everything off the RAID-Z array?

2. Is there a way to share a BR drive on my NAS (FreeNAS) so that I can watch DVDs on my HTPCs?

#1, what Moey said.

#2, you are better off ripping it and then playing it from the NAS; a BR will peak at 40 Mbps, and streaming that to an HTPC without glitching is hard. I have not done it through FreeNAS, but I am guessing that you will have HDCP issues.
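For scale, that 40 Mbps peak works out like this (the link-throughput figures in the comments are rough real-world numbers assumed here, not from the post):

```python
def mbps_to_mbytes(mbps):
    # megabits per second to megabytes per second
    return mbps / 8

BLURAY_PEAK_MBPS = 40  # peak Blu-ray bitrate mentioned above

print(mbps_to_mbytes(BLURAY_PEAK_MBPS))  # 5.0 MB/s sustained at peak
# Wired gigabit has plenty of headroom; 100 Mbit ethernet or 802.11g
# wireless (roughly 20-50 Mbps real throughput) is where a 40 Mbps
# peak starts to glitch.
```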

dotster
Aug 28, 2013

Moey posted:

1. You will have to copy it to a temp location, then rebuild the array.

No advise on 2.

Why do you want off ZFS?

Unless the system is low on memory and he is switching to UFS, there is no reason to switch off of ZFS.

SopWATh
Jun 1, 2000
I've got an Atom D525 box, a SuperMicro X7SPA-HF-D525 with the maximum 4GB of RAM, and 3x 2TB drives in software RAID5.

I originally ran WHS on it, but I had some driver issues and the backup client conflicted with some other stuff on my laptop. I dumped WHS and installed Windows Server 2008 R2 to a 4th 500GB drive. It runs slowly, which is fine for now, but I'm getting really tired of the server crashing. (That interferes with backups, and obviously if it keeps running consistency checks on the array it's down for days at a time.)

I need something reliable, always on, easy to manage, with the ability to assign access rights per user. These seem like pretty basic requirements, and I was thinking about getting a SATA Disk On Module and installing FreeNAS, but all the features listed on the FreeNAS page make it look like ZFS is a requirement, and with only 4GB of RAM that looks like it will be a problem.

I've basically got 3 questions:
1) If I've got 3 users, will the recommended 8GB minimum of RAM be a non-factor for FreeNAS (despite not being able to cache disk operations and such)?

2) If having only 4GB of RAM will be an issue, do FreeNAS features work with UFS? I'm not too worried about snapshots, but I need the access rights to work, as my wife's work stuff can't be shared even with me.

3) Should I skip FreeNAS and use something else, or get a Synology/Drobo/blah device and just use the Atom board as a pfSense firewall?

dotster
Aug 28, 2013

SopWATh posted:

I've got an Atom D525 box, a SuperMicro X7SPA-HF-D525 with the maximum 4GB of RAM, and 3x 2TB drives in software RAID5.

I originally ran WHS on it, but I had some driver issues and the backup client conflicted with some other stuff on my laptop. I dumped WHS and installed Windows Server 2008 R2 to a 4th 500GB drive. It runs slowly, which is fine for now, but I'm getting really tired of the server crashing. (That interferes with backups, and obviously if it keeps running consistency checks on the array it's down for days at a time.)

I need something reliable, always on, easy to manage, with the ability to assign access rights per user. These seem like pretty basic requirements, and I was thinking about getting a SATA Disk On Module and installing FreeNAS, but all the features listed on the FreeNAS page make it look like ZFS is a requirement, and with only 4GB of RAM that looks like it will be a problem.

I've basically got 3 questions:
1) If I've got 3 users, will the recommended 8GB minimum of RAM be a non-factor for FreeNAS (despite not being able to cache disk operations and such)?

2) If having only 4GB of RAM will be an issue, do FreeNAS features work with UFS? I'm not too worried about snapshots, but I need the access rights to work, as my wife's work stuff can't be shared even with me.

3) Should I skip FreeNAS and use something else, or get a Synology/Drobo/blah device and just use the Atom board as a pfSense firewall?

You could try FreeNAS; there are plenty of people running it on less than 8GB, but several people report performance and stability issues on low-memory systems. I have never run it with less than 16GB, so I don't have personal experience with this setup.

If you do run it you won't get any kind of caching, you need to make sure you do not enable L2ARC, and there is some other tuning you will want to follow from the FreeNAS docs for low-memory systems. If your onboard video uses system RAM, you might want to disable that and use a cheap video card just for setup. I would not expect great performance.

You could also look at NAS4Free, as it lists lower system requirements for a ZFS NAS than FreeNAS. I have not used it, but it reviews pretty well.

The last comment on your setup: I would recommend a fourth 2TB drive and RAIDZ2 instead of RAIDZ. Rebuild time on larger drives leaves plenty of time for a second disk failure (I have experienced this personally), and your chances of a successful rebuild go up greatly in a RAIDZ2 setup. If you need better write performance than RAIDZ2, then look at striped mirrored vdevs (RAID10). I would keep a spare drive on the shelf for when you have a disk failure.
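To put rough numbers on that rebuild risk: the sketch below assumes a typical consumer-drive unrecoverable read error (URE) spec of 1 error per 1e14 bits read and treats bit errors as independent, which is a simplification, not a figure from the post.

```python
def rebuild_read_error_prob(data_bytes, ure_rate=1e-14):
    # probability of hitting at least one URE while reading
    # data_bytes during a rebuild, with independent bit errors
    bits = data_bytes * 8
    return 1 - (1 - ure_rate) ** bits

# rebuilding a 4x2TB RAIDZ1 means reading ~6 TB of surviving data
p = rebuild_read_error_prob(6e12)
print(round(p, 2))  # roughly 0.38 at the 1e-14 consumer spec
```

RAIDZ2 survives such an error during a rebuild because a second parity copy is still available; RAIDZ1 does not, which is the substance of the recommendation above.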

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Here's something from some ex-worker of a Western Digital factory that posted on Reddit:

quote:

The production yields are always above 97%. That means 3 out of 100 drives will end up failing. The normal production output of hard drives from the plants in SEA is nearly 200000 drives A DAY.

However, that doesn't mean that they throw those drives away. Instead, the company has a recovery system whereby every failed drive is inspected and reworked in any possible manner to get it out to the consumer again.

For reworked drives, the success rate is typically 85% and above, which is decent. If you crunch the numbers, the number of failed drives will be roughly 6000 drives a day, and after rework, that number is reduced to 900 a day.

From the 900, all the drives will be torn apart, and the components will be checked to see if they can be reused or repaired for future use. Anything determined to be bad at this point becomes trash.

Still, the most common failures are drives being partially built when a machine fault occurs. It could be something simple like a screw failing to go all the way in, or one of the machines building the drive catastrophically failing. In any case, it's a day-to-day issue, which the maintenance team monitors every hour.

There is a rule whereby any drives made with 100% virgin components will be shipped as OEM to high tier customers like PC manufacturers, and reworked drives, or drives made with reworked components will be shipped as consumer goods like External Drives, or to computer shops. The quantity is small, but it generally happens for all the drive manufacturers.
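The quote's numbers are internally consistent; checking the arithmetic:

```python
daily_output = 200_000   # drives per day, per the quote
fail_rate = 0.03         # "yields are always above 97%"
rework_success = 0.85    # "success rate is typically 85%"

failed_per_day = daily_output * fail_rate
unrecovered_per_day = failed_per_day * (1 - rework_success)

print(round(failed_per_day))       # 6000, as stated
print(round(unrecovered_per_day))  # 900, as stated
```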

Geemer
Nov 4, 2010



My dad's taking his first foray into having a NAS because our WHS is an unreliable piece of poo poo. He's already bought a Synology DS212j, which was on sale and recommended to him by his friend who works at that store, and is now looking around for hard drives.

Right now, there's a sale on WD Caviar Green drives at that same computer store and he's thinking of getting two 2TB ones for the NAS. I seem to remember horror stories about Caviar Greens in servers and such, and would like to know if that's still the case (or if it ever really was). And if so, whether I really need to steer him towards the (much more) expensive Caviar Reds.

caberham
Mar 18, 2009

by Smythe
Grimey Drawer

necrobobsledder posted:

I'm currently installing and testing / validating my build based upon a U-NAS NSC-800 case, but I'm pretty sure that your motherboard there won't fit in that case.

But to answer the question, pretty much any PSU that has sufficiently decent amps available on the 12V rails to the molex connectors should be fine if using a backplane (they tend to take molex connectors with 1 molex for 4 drives to distribute the SATA power). You can get molex to SATA power adapters that split out to 4 SATA connectors but I suspect those have dubious quality given the reviews I've seen online. If looking for as little power as possible, you'll want to make sure that the controller used for the drives supports staggered spin-up and you can probably get a PSU somewhere in the 150w rating territory for maximum efficiency. I didn't opt for a picoPSU due to lack of sufficient info on whether the molex connector on those could be split effectively for my hard drives, but those PSUs are worth a look as well as a lot of flex ATX PSUs.

Any word on which ITX PSU? I'm looking at the same 8-bay U-NAS model too. I found this:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817104145

IT Guy
Jan 12, 2010

You people drink like you don't want to live!
never mind.

IT Guy fucked around with this message at 21:00 on Sep 23, 2013

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

caberham posted:

Any word on which ITX PSU? I'm looking at the same 8 bay UNAS model too, I found this

http://www.newegg.com/Product/Product.aspx?Item=N82E16817104145
It accepts 1U and smaller PSUs - there's really not much room for anything besides disks in this case, and if you're going with it you might as well go crazy on power efficiency too and try to aim for 30-85% of rated capacity. I couldn't find any flex ATX PSU with appropriate connectors exactly matching my setup to help reduce cable clutter, so I'm using one adapter.

The 300W FSP should do alright, but it may be overspecced and thus less efficient, just by the nature of how load affects PSU efficiency. Because picoPSUs didn't look like they'd work out for me at 150W, I picked out a Seasonic SS-250SU flex ATX PSU on eBay from a seller that included a bracket for a 1U opening. The fan noise is peculiar; it's not loud, but it's noticeable and distinctive, with what I thought sounded like something bumping against the fan. I honestly am not sure if it's a defective fan, since I've heard 20mm ball bearing fans that sound the same, but I'll find out soon enough; I've been running it for about a week straight now with moderate usage.

It's a challenging case, no question. If someone prebuilt machines with these cases, they could easily charge $900+ and be justified in doing so due to the labor of parts selection and installation.

Odette
Mar 19, 2011

Combat Pretzel posted:

Here's something from some ex-worker of a Western Digital factory that posted on Reddit:

Can I have a link to the thread? I'm interested in reading more about that.


modeski
Apr 21, 2005

Deceive, inveigle, obfuscate.

Geemer posted:

Right now, there's a sale on WD Caviar Green drives at that same computer store and he's thinking of getting two 2TB ones for the NAS. I seem to remember horror stories about Caviar Greens in servers and such, and would like to know if that's still the case (or if it ever really was). And if so, whether I really need to steer him towards the (much more) expensive Caviar Reds.

It's only anecdotal, but I've had my WHSv1 server for three years now with 2TB WD Caviar Green drives, and I've had to return/replace three of them over that time. I'll be going with Reds for my next server.
