Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Pryor on Fire posted:

I love the way you old fucks treat computing like a battle scene in Star Trek. Doesn't matter how mundane the activity or device or system, you find a way to turn something easy into a soap opera for absolutely no reason.

It's loving awesome, never change.

Lock phasers on "ZFS", fire when ready.


Mr. Crow
May 22, 2008

Snap City mayor for life

Paul MaudDib posted:

Lock phasers on "ZFSvirtual machines", fire when ready.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Pryor on Fire posted:

I love the way you old fucks treat computing like a battle scene in Star Trek. Doesn't matter how mundane the activity or device or system, you find a way to turn something easy into a soap opera for absolutely no reason.

It's loving awesome, never change.

As an aside...

Isn't it also possible that some view the online interactions you consider a soap opera as just casual conversation?

I'm always surprised when people consider some online discussion or the other to be an intense battle when I view it as just a super minor disagreement.

Platystemon
Feb 13, 2012

BREADS

Thermopyle posted:

As an aside...

Isn't it also possible that some view the online interactions you consider a soap opera as just casual conversation?

I'm always surprised when people consider some online discussion or the other to be an intense battle when I view it as just a super minor disagreement.

I grew out of it as a preteen.

IOwnCalculus
Apr 2, 2003





NO, YOUR OPINION IS BAD AND YOU ARE BAD TOO :bahgawd:

I've run probably every variation on this except for jails - they look obnoxious enough that I've never tried. I'm definitely looking forward to Docker on FreeNAS 10.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless

IOwnCalculus posted:

NO, YOUR OPINION IS BAD AND YOU ARE BAD TOO :bahgawd:



Techgoons.txt

monster on a stick
Apr 29, 2013
Would something like a Synology DS216j be good enough for home NAS and making sure multiple OSs (Windows and Linux) can access the drive?

BlankSystemDaemon
Mar 13, 2009



The thing about containers is that there are a lot of different implementations, most of which don't share goals and features, but here's the short of it as I understand it:
1) Jails, as you might guess from the name of the original paper in which they were described, exist to isolate and confine root (and, as a result, everything else). This makes them very good for security - the few known methods of jailbreaking generally involve sockets, modifying the shell executable that a bad superuser will jail_attach to, or using symlinks, and all of these are easily addressable by not allowing socket access (which is the default), sshing into the jail instead of using jail_attach, and using zfs datasets. In fact, jails have a pretty good track record for security, with PHK having quipped that he's very interested in hearing from anyone who manages to jailbreak through other means, because he hasn't heard of it yet - even though he doesn't believe they're completely secure.
2) Zones are not as completely isolated as jails are (though they're very much inspired by them), and are often used to provide cross-platform compatibility through branded zones (implementing various system call stacks so that you get L(inu)x zones, cluster zones (which can be used by Manta and TRITON), and so forth). Something else zones did first was multiple network stacks (via Crossbow, which in turn inspired VIMAGE on FreeBSD).
3) Docker exists to allow developers to control the environment that their programs run in, with shared libraries and configuration being managed from the outside, making updating them very simple - but it also means that what security it offers is more of an afterthought than a design goal. The upside is excellent cross-platform portability.

Containers also have the excellent feature that they can be run on bare metal instead of in a VM, which means they run faster and don't segment resources (something I mentioned in my earlier post), but a certain subset of people have decided (probably through not knowing better, rather than as an act of defiance) that they must be run on VMs, with all the downsides that implies.
It's also worth mentioning that jails and zones predate Docker (jails were made pre-2000, zones ~2004 if I recall correctly, whereas Docker is from 2013, after it was re-branded from a failed cloud/PaaS initiative), but before that not a lot of people had really considered using containers as a way to empower developers; they were more of a tool in a sysadmin's toolbox. Both jails and zones can do what Docker does (although it requires a bit of tooling to enable it), but Docker did it first and did it so well that it changed the entire landscape.

monster on a stick posted:

Would something like a Synology DS216j be good enough for home NAS and making sure multiple OSs (Windows and Linux) can access the drive?
That's what it's made for - but take my warning, as that's kind of how I started out: it's a slippery slope if you're any kind of a nerd - eventually you'll find yourself effort-posting on subjects only very loosely related to them, because someone else insists on using them for something they're not designed for.

BlankSystemDaemon fucked around with this message at 13:16 on Feb 5, 2017

Mr Shiny Pants
Nov 12, 2012

IOwnCalculus posted:

NO, YOUR OPINION IS BAD AND YOU ARE BAD TOO :bahgawd:

I've run probably every variation on this except for jails - they look obnoxious enough that I've never tried. I'm definitely looking forward to Docker on FreeNAS 10.

All computing sucks, some less than others.

For serious: One thing I've come to admire in computing is the Quality Without A Name aspect of stuff. It's a term from architecture that describes certain properties (qualities) of things that you can't quite put your finger on, but that make them wonderful.

For me ZFS oozes this; every time I use it I get the feeling that whoever made it took great pleasure in making it, and it shines through in everything it does. Same with programming languages: F# has this, as does Erlang. The first iPhone, etc.

I just don't have the tolerance for poo poo software/hardware anymore it seems. :)

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
In the case of hardware, quality raises the cost, which is something a lot of people balk at. Honestly, every person I know who complains about quirky behavior with their PCs/laptops bought budget stuff. The few people who spent considerable money or did DIY builds with quality components have no issues at all, regardless of how good they are with computers.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

D. Ebdrup posted:

Containers also have the excellent feature that they can be run on bare metal instead of in a VM, which means they run faster and don't segment resources (something I mentioned in my earlier post), but a certain subset of people have decided (probably through not knowing better, rather than as an act of defiance) that they must be run on VMs, with all the downsides that implies.
Most of the arguments I've seen for running containers inside VMs are based around greater isolation than what containers or jails offer, or because the applications running in the containers are not quite 100% stateless and people want to be able to live-migrate all the workloads to another physical location or want CPUs-in-lock-step fault tolerance. In these situations cost and even latency issues are typically far secondary to reliability, and almost invariably whoever is running the show has concepts of software and systems reliability predating the mid-90s, with engineering practices to match (such as treating continuous delivery and deployment as worse for reliability, despite the numerous successful companies showing the contrary).

Docker came to prominence around the time that Vagrant showed up, and addressed the very real issue of developers not wanting to be sysadmins even as complicated application stacks became the norm in industry. Developer-centric tools for systems management are rather rare if you look at the landscape of tools, but then again not many developers in the 90s were concerned with bringing up stacks as complicated as today's.

BlankSystemDaemon
Mar 13, 2009



necrobobsledder posted:

Most of the arguments I've seen for running containers inside VMs are based around greater isolation than what containers or jails offer, or because the applications running in the containers are not quite 100% stateless and people want to be able to live-migrate all the workloads to another physical location or want CPUs-in-lock-step fault tolerance. In these situations cost and even latency issues are typically far secondary to reliability, and almost invariably whoever is running the show has concepts of software and systems reliability predating the mid-90s, with engineering practices to match (such as treating continuous delivery and deployment as worse for reliability, despite the numerous successful companies showing the contrary).

Docker came to prominence around the time that Vagrant showed up, and addressed the very real issue of developers not wanting to be sysadmins even as complicated application stacks became the norm in industry. Developer-centric tools for systems management are rather rare if you look at the landscape of tools, but then again not many developers in the 90s were concerned with bringing up stacks as complicated as today's.
Yeah, you're absolutely right - that is one case where virtual machines have the upper hand, but only because they have existing implementations of those features, whereas containers still lack them. I wish the German dude who was working on VPS for FreeBSD hadn't seemingly disappeared off the face of the earth, because that was one feature it offered - and I don't believe that jails can be live-migrated without fundamentally changing the way they work, thereby making them potentially less secure.

And speaking of Vagrant, there's currently work being done to enable Vagrant to use bhyve as a backend - the only known blocker to which, as far as I remember, is suspend/resume.

BlankSystemDaemon fucked around with this message at 19:03 on Feb 5, 2017

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

In the case of hardware, quality raises the cost, which is something a lot of people balk at. Honestly, every person I know who complains about quirky behavior with their PCs/laptops bought budget stuff. The few people who spent considerable money or did DIY builds with quality components have no issues at all, regardless of how good they are with computers.

One thing that made me aware of this again is that hibernation works on my machine at home, while the crappy HP machine at work just craps out all the time. I have a quality Intel board and everything works as it should; you'd think the same of an HP machine, but you would be wrong. Don't get me started on the cheap plastic power button that stays depressed when you press it to turn the machine on - because it's a cheap piece of poo poo - and then ends up turning the machine off again.

Way to ruin your brand HP.

IOwnCalculus
Apr 2, 2003





Mr Shiny Pants posted:

All computing sucks, some less than others.


For me ZFS oozes this; every time I use it I get the feeling that whoever made it took great pleasure in making it, and it shines through in everything it does. Same with programming languages: F# has this, as does Erlang. The first iPhone, etc.


You might say your OS is a piece of poo poo.

I will gladly take the downside of ZFS (poor options for expansion, in a consumer / home user context) in exchange for the data protection. The only time I've lost significant data is when I hosed up trying to do a ZFS send/receive and somehow managed to wipe the array of any visible data in the process. Definitely going to practice that a few times with some dummy arrays before I try that again.
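
Something like this is what I mean by dummy arrays, by the way - a rough sketch of a throwaway practice run on file-backed pools, assuming root and the stock zpool/zfs tools; the pool names, paths, and sizes are made up for illustration:

code:
#!/usr/bin/env python3
"""Throwaway ZFS send/receive practice run on file-backed pools.

Sketch only: needs root and the zpool/zfs tools on PATH. Everything it
creates is disposable and gets destroyed at the end.
"""
import os
import subprocess

WORKDIR = "/tmp/zfs-practice"

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

os.makedirs(WORKDIR, exist_ok=True)

# Two sparse 256 MiB files stand in for real disks.
for name in ("src.img", "dst.img"):
    with open(os.path.join(WORKDIR, name), "wb") as f:
        f.truncate(256 * 1024 * 1024)

# A source pool and a destination pool, one file vdev each.
run("zpool", "create", "practice-src", os.path.join(WORKDIR, "src.img"))
run("zpool", "create", "practice-dst", os.path.join(WORKDIR, "dst.img"))

# Put some data in a dataset and snapshot it.
run("zfs", "create", "practice-src/data")
with open("/practice-src/data/hello.txt", "w") as f:
    f.write("dry run before touching the real array\n")
run("zfs", "snapshot", "practice-src/data@snap1")

# zfs send | zfs receive, without going through a shell.
send = subprocess.Popen(["zfs", "send", "practice-src/data@snap1"],
                        stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "practice-dst/data"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()

# Verify the copy landed, then tear everything down.
run("zfs", "list", "-r", "practice-dst")
run("zpool", "destroy", "practice-src")
run("zpool", "destroy", "practice-dst")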

Pryor on Fire
May 14, 2013

they don't know all alien abduction experiences can be explained by people thinking saving private ryan was a documentary

Thermopyle posted:

As an aside...

Isn't it also possible that some view the online interactions you consider a soap opera as just casual conversation?

I'm always surprised when people consider some online discussion or the other to be an intense battle when I view it as just a super minor disagreement.

I don't really see any battle going on, that's not the angle I was going for. I was just more commenting on how neat all this discussion is from the perspective of someone who hasn't had time to dive into filesystems or building packages on BSD in like 10 years. It's a tremendous waste of time for me nowadays, but I'm glad someone out there is still doing it.

Mr Shiny Pants
Nov 12, 2012

IOwnCalculus posted:

You might say your OS is a piece of poo poo.

I will gladly take the downside of ZFS (poor options for expansion, in a consumer / home user context) in exchange for the data protection. The only time I've lost significant data is when I hosed up trying to do a ZFS send/receive and somehow managed to wipe the array of any visible data in the process. Definitely going to practice that a few times with some dummy arrays before I try that again.

It's a tradeoff, sure, but one the designers at least thought about and made clear.

BlankSystemDaemon
Mar 13, 2009



It's not even that pool expansion by adding disks to existing vdevs (other than mirrors) is impossible, it's just that doing so requires block pointer rewrite, which - as I understand it - is quite difficult (though it would also enable other features like pool defragmentation), as is evident from some of Matt Ahrens' off-the-cuff remarks on it. There isn't exactly a huge pool of software developers talented enough to work on it, but one can hope that this will change now that Solaris is basically on life support.

IOwnCalculus
Apr 2, 2003





Block pointer rewrite would really help in the production world, too. I've seen this a few times - a vdev gets full while also sustaining a heavy workload. Adding another vdev gives you more room, but doesn't do much to relieve the workload on the existing disks.

Unrelated, I've found something... odd? going on with the boot SSD on my Linux box. It dumped a bunch of "WRITE FPDMA QUEUED" errors to dmesg, which I'd normally take as a sign of a trashed drive. However, it did this about twelve hours after boot, nearly two weeks ago, and I only noticed the errors in dmesg now. The system still seems perfectly stable with writes and reads on that drive.

I should probably replace the drive anyway since it's a five-year-old Sandisk 120GB that has apparently sustained over 5 TB written to it in its lifetime, if the SMART data is accurate.
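
For what it's worth, this is roughly how I eyeball lifetime writes from SMART - a sketch only, since attribute names and units vary by vendor (many SSDs report attribute 241 as Total_LBAs_Written in 512-byte sectors, but some, including certain SanDisks, report GiB written directly, so sanity-check against the raw smartctl -A output):

code:
#!/usr/bin/env python3
"""Rough lifetime-writes estimate from smartctl output (sketch)."""
import subprocess

DEVICE = "/dev/sda"   # adjust to the suspect SSD
SECTOR_BYTES = 512    # assumption; check how your drive reports the raw value

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    fields = line.split()
    # SMART attribute rows have 10 columns; the raw value is the last one.
    if len(fields) >= 10 and fields[1] == "Total_LBAs_Written":
        lbas = int(fields[9])
        tb = lbas * SECTOR_BYTES / 1e12
        print(f"Total_LBAs_Written raw={lbas} -> ~{tb:.2f} TB "
              f"(assuming {SECTOR_BYTES}-byte sectors)")
        break
else:
    print("No Total_LBAs_Written attribute; check smartctl -A output yourself.")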

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
I seem to recall getting that message periodically with a lovely SATA cable, might be worth swapping that first just to see where you stand.

Ika
Dec 30, 2004
Pure insanity

I want to replace a 2TB drive in my Synology NAS (2-bay, JBOD) with a 3TB drive. Can I just connect both drives to my PC, clone the data, plug in the new one, and tell the NAS to use the additional capacity?

Mr Shiny Pants
Nov 12, 2012

D. Ebdrup posted:

It's not even that pool expansion by adding disks to existing vdevs (other than mirrors) is impossible, it's just that doing so requires block pointer rewrite, which - as I understand it - is quite difficult (though it would also enable other features like pool defragmentation), as is evident from some of Matt Ahrens' off-the-cuff remarks on it. There isn't exactly a huge pool of software developers talented enough to work on it, but one can hope that this will change now that Solaris is basically on life support.

So I've read this and I don't get some of the problems mentioned. He's talking about the space accounting of a block, but that doesn't change when you move a block from one vdev to another, right?

The only thing I think is really tricky is doing it on a live system without running out of space because of the translation table you need to keep. Maybe you could even use the dedupe logic for it, since it already traverses the whole system and rearranges pointers.

So maybe you would offline the whole shebang, run it offline, and keep the translation map on a secondary scratch drive.

I understand it is probably very difficult, but some things seem weird.

Mr Shiny Pants fucked around with this message at 13:26 on Feb 6, 2017

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
The reason it's a problem is that ZFS was designed for business use, and no business can afford to offline their array for days to add more capacity with BPR. Hence his statement about changing your pants while running. The scenario that they care about is making it so you can do a live resize, and that's an incredibly difficult use case.

Mr Shiny Pants
Nov 12, 2012

G-Prime posted:

The reason it's a problem is that ZFS was designed for business use, and no business can afford to offline their array for days to add more capacity with BPR. Hence his statement about changing your pants while running. The scenario that they care about is making it so you can do a live resize, and that's an incredibly difficult use case.

True, but having the option, albeit offline, is better than no option at all, I would venture.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I think the biggest problem is that blocks are being pointed to from a lot of different places, and changing them all at once atomically is a big pain in the rear end to do live.

Also, dedupe is loving terrible and memory intensive.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Combat Pretzel posted:

I think the biggest problem is that blocks are being pointed to from a lot of different places, and changing them all at once atomically is a big pain in the rear end to do live.

Also, dedupe is loving terrible and memory intensive.

The way blocks, slabs, metaslabs, and the various pointers within each one are constructed makes it a monumental pain in the dick to add BPR after the fact. Each of the various layers needs to be aware of what's going on, a ton of poo poo would have to be added to every file transaction, and making everything kosher would be very challenging. As an offline-only tool to unfuck datasets or to do things like transition RAID levels, I could see it - but most ZFS use cases are people that can afford to throw an extra shelf at the problem, do a ZFS send/receive, and blow out the datastore if things go wonky.


Yes, dedupe is literally terrible. Tried it once on my machine and ended up having to copy all my data to a 4TB external and delete the entire pool to get the cancer to go away.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
ZFS' combination of prefetch (which uses linear and stride read detection to figure out how much to read ahead) and escalator sorting should mitigate the issue of fragmentation for a lot of workloads. For those that do truly random IO, it doesn't matter two shits whether the data is fragmented or not. With OpenSolaris' default ZFS configuration, watching movies for instance, the filesystem started to prefetch in 200MB blocks pretty quickly, because it spotted continuous linear reading.

Mr Shiny Pants
Nov 12, 2012

Combat Pretzel posted:

I think the biggest problem is that blocks are being pointed to from a lot of different places, and changing them all at once atomically is a big pain in the rear end to do live.

Also, dedupe is loving terrible and memory intensive.

Well, my remark was more that dedupe already works on all the blocks and has to know where every block is, otherwise it could never detect duplicate blocks and keep the filesystem kosher.
Maybe they could leverage the same subsystem to figure out the necessary pointers when moving blocks into a new position.

NetApp runs dedupe on a schedule; maybe something like a scheduled job that does all this when the filesystem is idle would work here too.

IOwnCalculus
Apr 2, 2003





G-Prime posted:

I seem to recall getting that message periodically with a lovely SATA cable, might be worth swapping that first just to see where you stand.

I'm hoping this is the case, even though new SSDs are loving cheap. I just can't fathom why it would only go flaky after months of undisturbed operation.

BlankSystemDaemon
Mar 13, 2009



Methylethylaldehyde posted:

Yes, dedupe is literally terrible. Tried it once on my machine and ended up having to copy all my data to a 4TB external and delete the entire pool to get the cancer to go away.
No, you enabled dedup because you thought it was a magic pill that'd fix your troubles. It doesn't; it's a specific thing made for a specific use-case.
Dedup is quite simple: it's a table with an entry for each block, each taking up around 340 bytes, which as you can imagine quickly makes that table very large (in fact, the recommendation of 5GB of RAM per 1TB of disk space for dedup is not really excessive, it's pretty conservative). On Solaris, where ZFS was made, this isn't really an issue. OpenZFS, meanwhile, is trying to address it by adding a device type to store the dedup table on, so instead of keeping it in memory and then spilling it to disk when memory eventually runs out on x86 (which is what I'm positive happened in your case, because it happens for everyone unless you're running Xeon E7s), you can use an SSD.
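
To put rough numbers on that - a back-of-the-envelope sketch using the ~340 bytes per entry figure above (the real in-core size varies by platform and ZFS version, and the result hinges almost entirely on your average block size):

code:
#!/usr/bin/env python3
"""Back-of-the-envelope ZFS dedup table (DDT) sizing sketch."""
BYTES_PER_DDT_ENTRY = 340   # approximate figure from the post, not gospel
POOL_TB = 1                 # unique (deduped) data, in TB

for avg_block_kib in (128, 64, 16, 8):
    blocks = POOL_TB * 1024**4 / (avg_block_kib * 1024)
    ddt_gib = blocks * BYTES_PER_DDT_ENTRY / 1024**3
    print(f"{POOL_TB} TB of unique data @ {avg_block_kib:>3} KiB blocks "
          f"-> ~{ddt_gib:5.1f} GiB of DDT")

The 5GB-per-1TB rule of thumb lines up with largish, media-style block sizes; small-block workloads blow well past it, at least with this per-entry figure.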

ZFS is an utterly game-changing filesystem, but it has its downsides just like anything else.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Combat Pretzel posted:

the filesystem started to prefetch in 200MB blocks pretty quickly, because it spotted continuous linear reading.

Just out of curiosity, how do you see it's doing this? Seems like a useful thing to know how to do!

Hughlander
May 11, 2005

This probably isn't the right thread for it so if people want to point me elsewhere do so...

My NAS just lost an SSD that wasn't mirrored (or really used for anything - I was going to use it for L2ARC before realizing how stupid that was). At the same time, my Windows 10 desktop lost its non-system rotational drive. As I replaced it with a 2TB drive and restored from CrashPlan, I put the two together and wondered how annoying it'd be to lose the system SSD.

Is there an easy way on Windows 10 to mirror an existing SSD to a partition of a much larger rotational drive, so I could just continue to boot if the SSD died?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

fletcher posted:

Just out of curiosity, how do you see it's doing this? Seems like a useful thing to know how to do!
Back when I was using OpenSolaris, my obnoxious HDD LED still worked, and I started wondering why it only lit up solidly for a moment every once in a while when watching movies, whereas in Windows or Linux it was usually some disco light. That prompted me to check out what ZFS does.

Now I just run ZFS on my NAS. Haven't really checked what it does over there. Prefetching is probably limited, similar to write buffering: OpenSolaris buffers writes for up to 30 seconds if memory allows, while FreeNAS had it configured to 2 seconds.
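
If you want numbers instead of an LED, something like this is a rough way to watch the prefetch counters tick while playing a file - a sketch assuming ZFS on Linux, which exposes them in /proc/spl/kstat/zfs/arcstats (as far as I know FreeBSD keeps the same counters under the kstat.zfs.misc.arcstats sysctls, and field names can differ between releases):

code:
#!/usr/bin/env python3
"""Print ZFS prefetch hit/miss deltas every few seconds (Ctrl-C to stop)."""
import time

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"
FIELDS = ("prefetch_data_hits", "prefetch_data_misses")

def read_counters():
    stats = {}
    with open(ARCSTATS) as f:
        for line in f:
            parts = line.split()
            # kstat rows are "name type data"; keep only the fields we want.
            if len(parts) == 3 and parts[0] in FIELDS:
                stats[parts[0]] = int(parts[2])
    return stats

prev = read_counters()
while True:
    time.sleep(5)
    cur = read_counters()
    print("  ".join(f"{k}: +{cur[k] - prev[k]}" for k in FIELDS))
    prev = cur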

Mr Shiny Pants
Nov 12, 2012
Heads up: Intel Atoms seem to be dying because of a chip clock failure. Cisco products are affected and they also mention Synology.

https://www.theregister.co.uk/2017/02/06/cisco_intel_decline_to_link_product_warning_to_faulty_chip/

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

Mr Shiny Pants posted:

Heads up: Intel Atoms seem to be dying because of a chip clock failure. Cisco products are affected and they also mention Synology.

https://www.theregister.co.uk/2017/02/06/cisco_intel_decline_to_link_product_warning_to_faulty_chip/

drat. I planned to refresh my NAS late next year (built it about this time in 2015) but maybe I should consider making that happen this year if the prices are good. I was hoping another two generations would bring SSDs down into the range where I could use them to make a completely silent NAS.

I suppose the C2550 is still on Amazon with Prime shipping, so I can hold off panicking.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Mr Shiny Pants posted:

Heads up: Intel Atoms seem to be dying because of a chip clock failure. Cisco products are affected and they also mention Synology.

https://www.theregister.co.uk/2017/02/06/cisco_intel_decline_to_link_product_warning_to_faulty_chip/

Yeah, saw this mentioned in one of the other megathreads, with a note that it's probably gonna take a silicon change to fix them. There's some mention of a "board level workaround," but no definitive note on whether a software/firmware downloadable fix is possible. No word yet on what the crap Intel plans to do about it, especially since those Atom chips are in a looooot of random products.

Basically, if you have a B0 stepping C20xx Atom, best to assume it'll die at roughly 18 months of use.
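
If you're not sure what you've got, something like this at least surfaces what the kernel reports - Linux-only sketch, and mapping the numeric stepping to a letter revision like "B0" still means cross-referencing Intel's spec update for the C2000 series:

code:
#!/usr/bin/env python3
"""Print the CPU model name and stepping the kernel reports (Linux)."""
seen = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if key in ("model name", "stepping") and (key, value) not in seen:
            seen.add((key, value))
            print(f"{key}: {value}")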

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Desuwa posted:

was hoping another two generations would bring SSDs down into the range where I could use them to make a completely silent NAS.

Unless you have a fairly small NAS (at which point maybe you could get away with a simple mirroring arrangement on another computer you've got), or you really don't care whatsoever about prices, you're going to have to wait a good bit longer than two generations:

In 2013 you could get a 250GB SSD for ~$160, or $0.64/GB.
In 2017 you can get a 960GB SSD for ~$220, or $0.23/GB.

Assuming the same linear progress, in 2021 you should be able to get a 4TB SSD for $0.06/GB, or $230. In the meantime you could probably get a 10TB HDD in 2021 for $100.

Of course, you can always just say "gently caress you, I'm rich" and buy the 4TB SSD's already available...for a cool $1500.
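
Spelled out, that extrapolation looks something like this - a sketch, reading "same progress" as the capacity you get for roughly the same money growing by the same factor every four years, not a forecast:

code:
#!/usr/bin/env python3
"""SSD price extrapolation sketch using the data points quoted above."""
points = {2013: (250, 160), 2017: (960, 220)}   # year: (GB, price in USD)

(gb_13, usd_13), (gb_17, usd_17) = points[2013], points[2017]
growth = gb_17 / gb_13            # ~3.84x more capacity for ~the same money

gb_21 = gb_17 * growth            # ~3.7 TB
print(f"2013: ${usd_13 / gb_13:.2f}/GB   2017: ${usd_17 / gb_17:.2f}/GB")
print(f"2021 (extrapolated): ~{gb_21 / 1000:.1f} TB for ~${usd_17}, "
      f"i.e. ~${usd_17 / gb_21:.2f}/GB")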

redeyes
Sep 14, 2002

by Fluffdaddy

DrDork posted:

Yeah, saw this mentioned in one of the other megathreads, with a note that it's probably gonna take a silicon change to fix them. There's some mention of a "board level workaround," but no definitive note on whether a software/firmware downloadable fix is possible. No word yet on what the crap Intel plans to do about it, especially since those Atom chips are in a looooot of random products.

Basically, if you have a B0 stepping C20xx Atom, best to assume it'll die at roughly 18 months of use.

Jesus! What the gently caress, Intel. Those stupid things are for use in server poo poo. I guess I'm lucky I never used Atom builds for NAS stuff - never trusted them to perform well enough.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

redeyes posted:

Jesus! What the gently caress, Intel. Those stupid things are for use in server poo poo. I guess I'm lucky I never used Atom builds for NAS stuff - never trusted them to perform well enough.

Best hope your server has at least a two year warranty! :cripes:

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

DrDork posted:

Unless you have a fairly small NAS (at which point maybe you could get away with a simple mirroring arrangement on another computer you've got), or you really don't care whatsoever about prices, you're going to have to wait a good bit longer than two generations:

In 2013 you could get a 250GB SSD for ~$160, or $0.64/GB.
In 2017 you can get a 960GB SSD for ~$220, or $0.23/GB.

Assuming the same linear progress, in 2021 you should be able to get a 4TB SSD for $0.06/GB, or $230. In the meantime you could probably get a 10TB HDD in 2021 for $100.

Of course, you can always just say "gently caress you, I'm rich" and buy the 4TB SSD's already available...for a cool $1500.

I don't think we can expect the same linear progress anymore. Future HDD capacity improvements look expensive and complex enough that prices won't decrease dramatically; it might be more like we see 40TB enterprise HDDs for $600 using HAMR.

In the consumer space, it looks like there's a crossover coming where SSDs are going to be cheaper per GB than HDDs.


DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I don't think we'll see linear progress either, but in the other direction: I doubt SSD price:GB will continue to drop at the same rate it has, but rather will slow as we start needing more esoteric layering strategies to push density up. Or maybe we'll go back to 3.5" SSDs!

Super-massive Enterprise HDDs will probably happen, but 40TB for $600 is probably generous: with current drives at 10TB for $500ish, that's a poo poo-ton of improvement for "free." Maybe in 4-5 years. Maybe.

I also doubt we'll see a 40TB SSD for $600 anytime soon: Seagate's 16TB SSD launched last year with prices reported around $6,000, and Samsung's 15TB SSD was $10,000. Reports on Seagate's new 60TB monster suggest a $30-40,000 price tag. Even if you assume prices would drop by 2/3 shifting from enterprise to consumer (which is a generous drop), you're still talking $10,000 for 60TB. Prices will come down, to be sure, but that's a long way to go before you hit sub-$1k. Like 5+ years.

HDDs will likely stonewall at some point, but that point seems to be a decent bit away. 10TB consumer-grade drives are already on the shelves, and larger ones are planned for the near future. Pricing is pretty aggressive on them, too, with those 10TB drives going for $0.05/GB or less, which is still more than 6x cheaper than large-format SSDs (e.g., a 1TB 850 Evo is ~$325, or $0.325/GB).
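
For concreteness, the gap those street prices work out to (rough 2017 numbers from above, nothing authoritative):

code:
# Quick $/GB comparison using the rough prices quoted above.
hdd_usd_per_gb = 500 / 10_000   # ~$500 10TB consumer drive -> $0.05/GB
ssd_usd_per_gb = 325 / 1_000    # ~$325 1TB 850 Evo -> $0.325/GB
print(f"HDD: ${hdd_usd_per_gb:.3f}/GB   SSD: ${ssd_usd_per_gb:.3f}/GB   "
      f"ratio: {ssd_usd_per_gb / hdd_usd_per_gb:.1f}x")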

So, yeah, some day SSDs may be able to compete on a price:GB basis, but it ain't gonna be any time soon unless you're talking very small (<1TB) drives, where they're already competitive.

tl;dr SSDs will remain substantially more expensive per GB for at least the next 3-4 years.

e; On the other hand, we're already past the point where "average users" can get an SSD at a price competitive with high-performance HDDs that is sufficiently large for normal use. I.e., a 500GB/1TB SSD is enough for probably 90% of users out there, relegating HDDs to mass storage for movie pirates and NAS installations.

DrDork fucked around with this message at 16:52 on Feb 7, 2017
