Droo
Jun 25, 2003

I buy 6TB and 8TB Reds because, according to all the data I could find, they're the best combination of cheap, low-power, and quiet. I buy them via the MyBook Duos, since that works out cheaper than the bare drives. And because, like all good Americans, I literally hate the Earth, I participate in the ridiculous paradigm that sells a single 8TB Red for $320, or two of them plus a useless enclosure for $500.

IOwnCalculus
Apr 2, 2003





How does the warranty work on those?

EconOutlines
Jul 3, 2004

IOwnCalculus posted:

How does the warranty work on those?

As soon as you rip them out of the enclosure, the warranty is null and void, AFAIK. Worth the gamble for the savings, if you ask me, and the enclosed drives go on sale much more often, dropping them even lower versus stand-alone internals.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Is it always a Red in the Mybook, or do they sometimes build them with whatever they have surplus of? i.e. sometimes you might get Blue?

Droo
Jun 25, 2003

IOwnCalculus posted:

How does the warranty work on those?

The drives in the MyBook Duos are removable without causing any damage, and each drive carries a 2-year warranty. I believe I can warranty the drives individually (when I type in the serial number of an individual drive, I see a valid 2-year warranty with no note of it being part of a Duo). I haven't actually tried to claim a warranty yet, though, but it makes sense that they wouldn't force you to send the whole thing back, considering you could RAID 1 it and just replace the bad drive while continuing to use the other one.


apropos man posted:

Is it always a Red in the Mybook, or do they sometimes build them with whatever they have surplus of? i.e. sometimes you might get Blue?

They are always reds in the MyBook Duos now. On Amazon for example they even show a picture with half the enclosure made translucent and you can see the drive labels. I don't think the same is true for their single-drive arrays.

Platystemon
Feb 13, 2012

BREADS

Droo posted:

single-drive arrays.

[triggered]

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Droo posted:

They are always reds in the MyBook Duos now. On Amazon for example they even show a picture with half the enclosure made translucent and you can see the drive labels. I don't think the same is true for their single-drive arrays.

Ooh. Definitely something I will bear in mind when upgrade time comes. I already have two Reds, and one of them is approaching two years of running pretty much 24/7 with no problems.

IOwnCalculus
Apr 2, 2003





apropos man posted:

Ooh. Definitely something I will bear in mind when upgrade time comes. I already have two Reds, and one of them is approaching two years of running pretty much 24/7 with no problems.

:smuggo:
code:
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       11545
  9 Power_On_Hours          0x0032   062   062   000    Old_age   Always       -       28339
  9 Power_On_Hours          0x0032   073   073   000    Old_age   Always       -       19794
  9 Power_On_Hours          0x0032   073   073   000    Old_age   Always       -       19794
  9 Power_On_Hours          0x0032   062   062   000    Old_age   Always       -       28338
  9 Power_On_Hours          0x0032   069   069   000    Old_age   Always       -       23168
  9 Power_On_Hours          0x0032   073   073   000    Old_age   Always       -       19794
  9 Power_On_Hours          0x0032   082   082   000    Old_age   Always       -       13236
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       11545
In other words, yes, I definitely need to start thinking about migration plans. With 6TB and 8TB drives getting inexpensive on a $/GB basis, and my own data growing slower than it used to, I could easily cut my drive count big time.
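
(For anyone who wants to pull the same numbers on a Linux box, a quick loop over smartctl output does it. A minimal sketch, assuming smartmontools is installed; the drive letters are just examples, adjust for your own setup:)
code:
# print the raw power-on hours for each drive (run as root; device names are examples)
for d in /dev/sd[a-i]; do
    printf '%s: ' "$d"
    smartctl -A "$d" | awk '$2 == "Power_On_Hours" {print $10}'
done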

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
code:
9 Power_On_Hours          0x0032   069   069   000    Old_age   Always       -       22865
9 Power_On_Hours          0x0032   069   069   000    Old_age   Always       -       22818
9 Power_On_Hours          0x0032   069   069   000    Old_age   Always       -       22683
9 Power_On_Hours          0x0032   069   069   000    Old_age   Always       -       22910
9 Power_On_Hours          0x0032   054   054   000    Old_age   Always       -       34256
9 Power_On_Hours          0x0032   052   052   000    Old_age   Always       -       35450
9 Power_On_Hours          0x0032   052   052   000    Old_age   Always       -       35450
9 Power_On_Hours          0x0032   054   054   000    Old_age   Always       -       34257
Two RAIDZ1 vdevs in a single pool. All the 34k+ hour drives are 2TB Reds, and the 22k hours are 4TB Reds. The 2TBs need to be swapped out, but I'm trying to decide if I roll the dice doing an expansion on the pool, or splurge and build a new Z2/Z3 pool of 6-8 disks in a single vdev. Anything irreplaceable on the pool is backed up to disc anyway, and the growth of data isn't that large at this point.

I'll definitely replace them by the time they hit 40k hours. Probably with 6TB Reds, unless the failure rates between the 4TB and 6TB disks are significantly different. 4TB disks would give me plenty of space to last until the 22k drives hit 35-40k hours.
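
(If I do go the swap-in-place route instead of a fresh pool, it's the usual one-disk-at-a-time ZFS dance. A rough sketch, with the pool name "tank" and the device names as placeholders:)
code:
# let each vdev grow once all of its member disks have been replaced
zpool set autoexpand=on tank

# swap one old 2TB disk for a bigger one, then wait for the resilver to finish
zpool replace tank /dev/sdb /dev/sdj
zpool status tank     # repeat for the remaining disks in the vdev, one at a time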

Volguus
Mar 3, 2009

PitViper posted:

Two RAIDZ1 vdevs in a single pool. All the 34k+ hour drives are 2TB Reds, and the 22k hours are 4TB Reds. The 2TBs need to be swapped out, but I'm trying to decide if I roll the dice doing an expansion on the pool, or splurge and build a new Z2/Z3 pool of 6-8 disks in a single vdev. Anything irreplaceable on the pool is backed up to disc anyway, and the growth of data isn't that large at this point.

I'll definitely replace them by the time they hit 40k hours. Probably with 6TB Reds, unless the failure rates between the 4TB and 6TB disks are significantly different. 4TB disks would give me plenty of space to last until the 22k drives hit 35-40k hours.

These last few comments made me take a look at my NAS. The four drives I have in there (2x WD Red and 2x HGST, 3TB each) have 23,000 and 20,000 hours respectively. But Backblaze reports lifetimes of 400k hours and more. Plus, my usage is nowhere near Backblaze's. Should I worry about replacing them before 40k hours if I still have space? The NAS reports them as being in perfectly good health, and anything critical is backed up to a third location.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
For consumer usage, spin-up-cycles is probably the more relevant metric, along with a general "how old is it". The rule of thumb used to be "plan on replacing them at around 5 years", I assume that's still true.
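
Those counters are all in SMART too, if you want to see where a drive stands. Something like this, with the caveat that the exact attribute names vary a bit by vendor and the device name is just an example:
code:
# start/stop, power-cycle, and head-load counts for one drive
smartctl -A /dev/sda | grep -E 'Start_Stop_Count|Power_Cycle_Count|Load_Cycle_Count'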

zennik
Jun 9, 2002

Paul MaudDib posted:

For consumer usage, spin-up-cycles is probably the more relevant metric, along with a general "how old is it". The rule of thumb used to be "plan on replacing them at around 5 years", I assume that's still true.

For consumer use, this is absolutely correct. For 24/7 running use, I've found MTBF * 0.7 to be a good metric for when to really worry about the drives. Been using this metric for years, and outside of external factors causing drive damage (excess vibrations/heat/power supplies exploding) I've yet to have a drive fail on me. Hell, I've still got a small 8-drive array of 120GB Seagates that all have well over 120k hours on them and it keeps chugging along.

For consumer use, generally one spin-up/spin-down a day with no outside factors can realistically net you 6-7 years. 5 years is a good safe bet.
The only exceptions are the old Deathstars, Maxtors, and those god-awful 1.5TB Seagate drives. Haven't really seen failure rates like those in a long time.

EDIT: Also, OCZ and Mushkin SSDs. Screw those. I've yet to have one last more than 18 months.

phosdex
Dec 16, 2005

zennik posted:

Hell, I've still got a small 8-drive array of 120GB Seagates that all have well over 120k hours on them and it keeps chugging along.

drat, that's impressive. Thought my WD VelociRaptor with 65k was high.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

phosdex posted:

drat, that's impressive. Thought my WD VelociRaptor with 65k was high.

For a VelociRaptor, it probably is. Higher RPM is more wear on the bearings/etc.

I have a laptop drive that has hit 10 years of actual age... although it hasn't seen a lot of spinup cycles or runtime in the last ~5 years since I moved to an SSD boot disk and it became the bulk storage drive.


zennik posted:

Been using this metric for years, and outside of external factors causing drive damage (excess vibrations/heat/power supplies exploding) I've yet to have a drive fail on me.

The other thing to remember is that it's a bathtub curve... so for those reading along, you probably also shouldn't rely on a drive as the sole repository of data you care about for at least the first 3-6 months of its life, because apart from when the drive hits senility, the initial couple months are when the failure rate is most pronounced.

Back up always, and ideally don't trust any drive, ever. But especially during the first month or two you own it.
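
If you want to shake out infant-mortality failures before trusting a new disk, a common burn-in is a destructive badblocks pass plus a long SMART self-test. A rough sketch; note the first command wipes the drive, and the device name is a placeholder:
code:
# destructive burn-in of a brand-new, EMPTY disk -- this erases everything on it
badblocks -wsv /dev/sdX

# then kick off a long SMART self-test and check for reallocated/pending sectors afterwards
smartctl -t long /dev/sdX
smartctl -a /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending'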

RoadCrewWorker
Nov 19, 2007

camels aren't so great

Paul MaudDib posted:

The other thing to remember is that it's a bathtub curve... so for those reading along, you probably also shouldn't rely on a drive as the sole repository of data you care about for at least the first 3-6 months of its life, because apart from when the drive hits senility, the initial couple months are when the failure rate is most pronounced.
How much of that bathtub graph applies to SSDs these days? Does the absence of moving parts that fail due to bad production or long time wear avoid this behavior, or do other parts of the drive just fill similar "roles"? Besides the natural cell write wear leveling, of course.

RoadCrewWorker fucked around with this message at 08:37 on Mar 7, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

RoadCrewWorker posted:

How much of that bathtub graph applies to SSDs these days? Does the absence of moving parts that fail due to bad production or long time wear avoid this behavior, or do other parts of the drive just fill similar "roles"? Besides the natural cell write wear leveling, of course.

I've literally never had an SSD fail on the left side of the bathtub curve, N=15 including some lovely mSATA pulls from eBay. Not scientific just FWIW.

Technical explanation, they are engineered to detect hosed up flash cells and mark them as bad, and they ignore the "partition level" map in favor of their own independent sector map, so they should in theory clean up the lovely stuff pretty quickly just through the natural wear levelling process.

Don't buy the OCZ or Mushkin SSDs though, those are legit hosed up and will wreck you.
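
If you want to keep an eye on SSD wear yourself, smartctl exposes it too. Roughly like this, with the caveat that the attribute names vary by vendor and the device names here are just examples:
code:
# SATA SSDs: the wear attribute is vendor-specific (Samsung vs Intel shown here)
smartctl -A /dev/sda | grep -E 'Wear_Leveling_Count|Media_Wearout_Indicator|Total_LBAs_Written'

# NVMe drives report a single "Percentage Used" figure instead
smartctl -a /dev/nvme0 | grep -i 'percentage used'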

Paul MaudDib fucked around with this message at 08:47 on Mar 7, 2017

eames
May 9, 2009

On the topic of drive spinups/spindowns: I have two-month-old WD80EFZXs with fairly aggressive spindown (1 hr), and I extrapolated the SMART data to see what it'll look like in five years.
That's just to make sure the drives aren't getting totally trashed, because my last WD Reds with the bugged firmware racked up 700k load cycles in 4 years without me noticing.

43,800 h power-on, ~11,000 h spin-up time, ~9,000 start/stop counts, ~40,000 load cycles and power-off retract counts.

I know the drives are rated for 300k load cycles, but does anybody have an idea about start/stop counts and retract counts?
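
(The extrapolation itself is just linear scaling: two months of counters times 30 to get to five years. Something like this, where the current count is a placeholder for whatever attribute 193 actually reads on your drives:)
code:
# five years / two months = a factor of 30
months_elapsed=2
load_cycles_now=1333      # placeholder -- read this from smartctl attribute 193 (Load_Cycle_Count)
echo $(( load_cycles_now * 60 / months_elapsed ))   # -> ~40000 load cycles at the five-year mark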

psydude
Apr 1, 2008

Can anyone recommend a cheap RAID controller that works with vSphere 6?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
The ubiquitous M1015 fits both criteria, I believe.

phosdex
Dec 16, 2005

psydude posted:

Can anyone recommend a cheap RAID controller that works with vSphere 6?

I use an IBM M5015 (which is an LSI 9261) in my vSphere 6 server; I got it off eBay with a brand-new battery for around $100 about two years ago.

BlankSystemDaemon
Mar 13, 2009




IOwnCalculus posted:

Power_On_Hours
On my 4x Samsung HD204UI drives, I got four identical lines of:
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 49377

Oh boy do I want to replace them, as they're in raidz1, but I want to wait until a mini-ITX Supermicro motherboard with a Denverton SoC and 8 SATA ports (preferably more, via SAS Mini HD) comes out, so I can build a brand-new server and set everything up properly, then rebuild my old server as a cold-storage raidz3 backup server with 5 disks that isn't meant to run any longer than it takes to back up stuff.

Braincloud
Sep 28, 2004

I forgot...how BIG...
I'm looking to set up a NAS and was looking at the Synology DS416j with two 4TB WD Reds, since they had good reviews and the price is right. Part of the need is for backing up all of my work files (lots of large, design-heavy Photoshop files). The other half is a place to store my 'home movies'.

I use an HTPC with OpenELEC/Kodi and an external HD on our main TV now; however, I've been running into issues with the Roku on our 2nd TV playing the media off the network consistently – i.e., I'm not good at being consistent with the audio codecs of my home movies, and a lot of them end up not playing audio on the Roku. So now I'm thinking I need to set up a Plex server on the NAS so I don't have to worry about different media types. Unfortunately, it looks like the DS416j doesn't like Plex. Any suggestions on an alternate NAS box, or a way to ensure our Roku can play all our home movies?
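
(If the only problem is the audio track, one stopgap is to re-encode just the audio to stereo AAC, which Roku apps generally play natively, and leave the video untouched. A rough ffmpeg sketch, assuming ffmpeg is available; the filenames are just examples:)
code:
# copy the video stream as-is, re-encode only the audio to 2-channel AAC
ffmpeg -i "home movie.mkv" -c:v copy -c:a aac -ac 2 -b:a 192k "home movie (roku).mkv"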

Thanks Ants
May 21, 2004

#essereFerrari


The DS416play would fit the bill if you like everything else about the DS416j.

IOwnCalculus
Apr 2, 2003





D. Ebdrup posted:

On my 4x Samsung HD204UI drives, I got four identical lines of:
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 49377

Oh boy do I want to replace them as they're in raidz1, but I want to wait until a mini-ITX Supermicro Denverton SoCs motherboard with 8 SATA ports (preferably more, via SAS Mini HD) comes out, so I can build a brand new server and setup everything properly, then rebuild my old server as a cold-storage raidz3 backup server with 5 disks that isn't meant to run for any longer than it takes to backup stuff.

I hope your HD204UIs are more reliable than my HD154UIs were :ohdear: It might have been a batch thing though since most of mine were within 100 of each other on the serial number, so obviously from the same run of drives. I had at least three of them fail with the head dragging on the platter. This only happened as they aged, but it prompted me to do a wholesale replacement with the Reds I have now.

Braincloud
Sep 28, 2004

I forgot...how BIG...

Thanks Ants posted:

The DS416play would fit the bill if you like everything else about the DS416j.

Oh hey, how did I miss that? That seems to be what I'm looking for!

Edit: Pulled the trigger on the 416play. Anyone have recommendations on deals for the WD Red 4TB drives? Amazon is out of stock at the moment, but pricing seems to be consistently $145 each. The MyBook duos are like $320 so that's not really a deal.

Braincloud fucked around with this message at 00:33 on Mar 8, 2017

Hamelekim
Feb 25, 2006

And another thing... if global warming is real. How come it's so damn cold?
Ramrod XTreme
I finally bit the bullet and purchased a Synology 8-bay NAS.

I have a spare case, MB, CPU, and RAM, but didn't want to mess with the setup of a custom NAS system. Off the shelf is simpler, and I can afford to avoid the hassle.

Looking at the apps they have for it, I am getting excited about having more data security, plus storage space to put all my music and movies.

I've been using Google Drive for file storage, but realized that beyond personal documents I don't need it for books, music, or movies. Almost all of it is downloadable at any moment. But having local storage and remote access is super useful when you need it.

I only have a single 6TB WD Red for now, but I figure I can fill out the other 7 bays as needed and as prices drop on storage.

I'm really excited about the remote file access too. Not having to mess around with my firewall or ports and IP addresses is extremely appealing from an access perspective.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Between pre-clearing and rebuilding, these 6TB Reds I inherited from work are getting a goddamn workout. No errors tho, so I'm not complaining :unsmith:

My M1015 and Fractal R5 should be here by the weekend too :3:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

RoadCrewWorker posted:

How much of that bathtub graph applies to SSDs these days? Does the absence of moving parts that fail due to bad production or long time wear avoid this behavior, or do other parts of the drive just fill similar "roles"? Besides the natural cell write wear leveling, of course.

Paul MaudDib posted:

I've literally never had an SSD fail on the left side of the bathtub curve, N=15 including some lovely mSATA pulls from eBay. Not scientific just FWIW.

Technical explanation, they are engineered to detect hosed up flash cells and mark them as bad, and they ignore the "partition level" map in favor of their own independent sector map, so they should in theory clean up the lovely stuff pretty quickly just through the natural wear levelling process.

Don't buy the OCZ or Mushkin SSDs though, those are legit hosed up and will wreck you.

Actually, one more thing: I've been meaning to correct this and just say that I've actually never had an SSD fail, period. Apart from a set of very early SSDs (the aforementioned Mushkins/OCZs and a few others) you pretty much don't need to worry about them.

As solid-state devices (i.e. no moving parts) they are absurdly reliable, and the only thing you can really do to hurt them is absurd amounts of writes. The only catch is to watch for situations where your write amplification is high (mostly this means don't run them literally 100% full all the time). But even then, this would only drop your reliability from something like "10 years of normal workload" to "2-3 years of normal workload". Apart from that, there's nothing a consumer would be doing that's going to wear out the flash in a reasonable timeframe, unless you're using one for a homelab database server or something.

The other thing that can sometimes be problematic is NVMe devices on older/weird hardware. For example, X99 motherboards need to play games to get the memory clocks stable above 2400 MHz, normally by increasing the BCLK strap, which also increases the PCIe clocks and can screw with some NVMe devices. But you won't hurt the device; you'd just get some data corruption/crashes. If you're on SATA, though, you should always be fine.
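
For the "absurd amounts of writes" point, the back-of-envelope math is just the drive's rated TBW divided by your daily writes times your write-amplification factor. A sketch with completely made-up numbers:
code:
# rough endurance estimate -- all three inputs are hypothetical examples
tbw_rating_gb=300000        # a 300 TBW drive
host_writes_gb_day=30
write_amplification=2
echo $(( tbw_rating_gb / (host_writes_gb_day * write_amplification) ))   # ~5000 days, i.e. 13+ years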

Paul MaudDib fucked around with this message at 18:37 on Mar 8, 2017

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Am I better off putting my unRAID cache drive on my M1015 or off a motherboard SATA connector? Or does it not matter?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Matt Zerella posted:

Am I better off putting my unRAID cache drive on my M1015 or off a motherboard SATA connector? Or does it not matter?

I'd say PCH is faster than any external controller, personally.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Paul MaudDib posted:

I'd say PCH is faster than any external controller, personally.

That'll work. Thanks!

Mr Shiny Pants
Nov 12, 2012

Paul MaudDib posted:

I'd say PCH is faster than any external controller, personally.

Do you think it would even be measurable? Honest question.

Especially since some mobos have some really crappy marvell chips in them.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Mr Shiny Pants posted:

Do you think it would even be measurable? Honest question.

Especially since some mobos have some really crappy marvell chips in them.

Probably not, unless you were trying to do some super-high bandwidth/IO deal. If you're just using it as a file server/backup you'll likely never be able to tell.

Unless you get one with a really crappy Marvell deal. But it's easy to figure that out and then throw in an M1015 or whatever later.
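
Figuring that out is a two-minute job: run a quick sequential read check against whichever port the drive is hanging off and compare. Something like this, with the device name as a placeholder:
code:
# buffered sequential read timing
hdparm -t /dev/sdX

# or read through dd, bypassing the page cache (read-only, won't touch the data)
dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct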

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Mr Shiny Pants posted:

Do you think it would even be measurable? Honest question.

Especially since some mobos have some really crappy marvell chips in them.

If it's going to be measurable anywhere, it would be with a cache drive.

Correct me if I'm wrong here but wouldn't the SATA channels be provided by the PCH directly in most cases? It has a little built-in SATA controller, right?

Also, I was actually thinking more that it might help you since the cache wasn't on the same controller as the drives, so that it could keep flushing to the drives at full speed even while the cache had its own pipe... but I don't know if that's really a bottleneck in practical terms or not, or if it would help.

Paul MaudDib fucked around with this message at 04:58 on Mar 11, 2017

IOwnCalculus
Apr 2, 2003





FreeNAS 10 RC1 is pretty drat slick. The only annoying thing is that, despite my finding some documentation at one point claiming that Extended Page Tables and Unrestricted Guest are the same, they are not. My X3450 supports EPT, but not UG. So I can't actually run any non-FreeBSD VMs on it, including the Docker host. :saddowns:
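
(For anyone else wondering what their CPU exposes: on the FreeBSD base that FreeNAS runs on, the boot messages list the VT-x features the hypervisor can use, so something like the line below should show whether EPT and UG are present. Path and output format are per stock FreeBSD; adjust as needed:)
code:
# look for EPT and UG in the reported VT-x feature list
grep -i 'VT-x' /var/run/dmesg.boot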

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
From what I can see, it appears that UG requires EPT but not vice versa. The interesting thing is that I found some old info that indicated that from ~2007 on, all Intel chips were supposed to support UG. I wonder if that just got rescinded.

Holy poo poo, the GUI for 10 is gorgeous. And while I knew they were going to have Docker support, I didn't realize it was going to be so visually slick. I wonder if users can reasonably jump into the VM and do stuff themselves. I work with Docker all day and have some pretty slick containers set up the way I like them on my HTPC here at home, and would love to migrate them over to FreeNAS so I can purge that box and start fresh.

G-Prime fucked around with this message at 13:25 on Mar 14, 2017

IOwnCalculus
Apr 2, 2003





Yep, it seems I based my assumption on some poorly researched forum posts that didn't understand that EPT and UG are not the same thing. It doesn't help that, as far as I can tell, the X3400 series and the E5500 series are the only two sets of Xeons that support EPT but not UG. E5500s can be swapped out for E5600s, but there's no LGA1156 upgrade from an X3450 that supports UG.

On the plus side, E5600 boards and CPUs are cheap as gently caress.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I last tried 10 a few months ago; how responsive is the web GUI now? Back in September or whenever it was, it would often take 10-30s to get any sort of response out of a lot of the GUI panels. Are Docker containers still one-shot deals where you cannot make any changes to them whatsoever once created (particularly on the networking side--previously you couldn't even switch one between static/DHCP addressing without having to delete and recreate the container entirely)?

I'd really like to jump over from 9.x to 10, but last time I did, the performance was just so terrible that I had to downgrade. Plex should not hiccup on a wired GigE network with an E3-1225 doing literally nothing else.

Ziploc
Sep 19, 2006
MX-5
How much of a dumbass was I for grabbing this?

https://www.newegg.ca/Product/Product.aspx?Item=N82E16813182821

I had an Intel G3258 lying around, which for some reason supports ECC RAM. Newegg had 16GB ECC kits on sale as well.

Sounds like it's time to learn how to roll my own.

Ziploc fucked around with this message at 15:57 on Mar 14, 2017

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Supermicro makes decent stuff. If I weren't using unRAID, I'd go with one of their mobos.

IPMI with KVM is freaking sweet. It might be overkill for you, but you're not going to be wanting for features. You even have an internal USB port for hiding away your USB key.
