Register a SA Forums Account here!
JOINING THE SA FORUMS WILL REMOVE THIS BIG AD, THE ANNOYING UNDERLINED ADS, AND STUPID INTERSTITIAL ADS!!!

You can: log in, read the tech support FAQ, or request your lost password. This dumb message (and those ads) will appear on every screen until you register! Get rid of this crap by registering your own SA Forums Account and joining roughly 150,000 Goons, for the one-time price of $9.95! We charge money because it costs us money per month for bills, and since we don't believe in showing ads to our users, we try to make the money back through forum registrations.
 
  • Locked thread
thechosenone
Mar 21, 2009

Nevvy Z posted:

Fold at home?

So like one could do the whole thing with one computer? Sounds good,


silence_kit posted:

? And your point here is?


Yes, Intel enjoys a healthy profit margin on its server chips but it does cost Intel quite a bit more to manufacture their top-of-the-line Xeons when compared to their smaller Core or Atom products. This is because Xeon die sizes are larger.

That makes me think about something: It seems like desktops have not really been getting the best out of the last few generations of processors, since they don't really benefit from the efficiency improvements like battery powered mobile units and super computer sites which can only feed a supercomputer with so many megawatts of power. Do you think Ryzen might let desktops get back in on the game a little by getting some of the benefits of intel having to fork out some more power to maintain power dominance?

Also, since they haven't quite gotten ubiquitous yet, does anyone have any speculation as to what servers and super computers could do with huge amounts of SSD storage? seems like less power and heat for more speed and reliability would be just what the makers and buyers of those computers would want.

Adbot
ADBOT LOVES YOU

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

silence_kit posted:

You are conflating two things here—the amount of money customers are willing to pay for server chips when compared to chips for consumer desktops and the production cost of an integrated circuit.

I think you are doing this either accidentally because you are looking at this in a muddled way or intentionally because you want to change the goalposts.

The physical chip costs barely anything. The price of a chip is r&d and the cost of the factory (plus the price of intel being a near monopoly that charges what it wants).

In the sci-fi story technology is over and there is no new r&d and the same factory is good forever the price of chips is going to freefall.

twodot
Aug 7, 2005

You are objectively correct that this person is dumb and has said dumb things

thechosenone posted:

Also, since they haven't quite gotten ubiquitous yet, does anyone have any speculation as to what servers and super computers could do with huge amounts of SSD storage? seems like less power and heat for more speed and reliability would be just what the makers and buyers of those computers would want.
At that scale, you just factor in hardware failure as a cost of doing business no different from power or what have you. Like SSDs might lower your costs, but these distributed systems have failures built into the design, even with perfect computers you still are only getting a certain number of 9s from your power company or what have you.

At 1000x I'm guessing you'd see certain distributed services come onto devices, things like voice recognition or route planning (why would Google bother to build servers to do this, when they can just run it in your browser?), but it'd only happen if the change was transparent to users, which means there's little benefit beyond getting cheaper (which is still good). Obviously AIs would get better, I suspect there's some research specifically into poker AIs that could use some extra power. The truly difficult problems integer factorization, cryptographic hashing, NP-complete problems, et cetera wouldn't be affected. Typically if someone is thinking "I'm using one computer right now, but if I had 1000 computers, I could do something much cooler" they just buy 1000 computers. Letting the people who currently need 100,000 computers scale down to 100 computers is nice, but I don't think there's any fundamental shift that could happen.

Morbus
May 18, 2004

Phyzzle posted:

If you mean that a 10x improvement in our ability to solve a given computation problem will now cost many more human hours than it did before, then I suppose I agree. But I'm not sure what you mean by fundamental information theoretic boundaries of a problem. There are hard fundamental limits to a jet engine's efficiency, but I don't think such a thing exists for the 'parallelizability' or 'trickability' of a problem.

Of course there is. And I'm not talking about fundamental thermodynamic limitations of computation like OOCC brought up with the Margolus–Levitin theorem because that's stupid. My point is that any computational task can be fundamentally described in terms some collection of bit manipulations. To solve a problem faster you can:

1a. Find ways to do as many steps as possible at once, and then add compute units to do things faster
2. Improve the hardware to move through the steps more quickly
3. Improve the hardware architecture so even though you arent adding compute units or increasing their intrinsic speed, you are arranging the steps in a more efficient way
4. Find a better method of solving the problem

Methods 1 and 2 rely primarily on intrinsic hardware improvements. Parallelization requires that you add more compute capacity. Raw speed requires you make the computation sequentially faster. These are your "Moore's Law" methods. It's also worth pointing out that there is a fundamental limit for "parallelizability", since any given algorithm fundamentally has only so many non-interdependent steps which can be carried out independently. In any case, realizing massive performance gains from parallel computing inherently relies on being able to throw more and more compute units as a problem, which in turn depends on the ever-increasing ability to put more computational capacity on a chip for a given cost.

Method 3 is basically a catch-all for "better architecture". With things like better branch prediction, pipelining, parallelization, chip layout, integration, or a full-out application specific integrated circuit, you can improve performance by utilizing process technology you have as efficiently as possible. But there is such a thing, in principle, as a perfect design; algorithms are abstract constructs and can be broken down into a series of steps, with varying degrees of interdependence on each other. Any given algorithm simply requires X amount of computational steps to complete, and you don't get orders-of-magnitude improvements in performance unless your previous implementation was inefficient garbage. The comparison to jet engines is apt because there is a limit.

Method 4 doesn't really have anything to do with hardware. The above 3 methods all involve using hardware to improve the speed of a particular well defined algorithm. If you improve the algorithm itself, you can obviously improve performance even on the same hardware. But for reasons that I think are obvious, we are not generally going to see massive, orders-of-magnitude improvements in performance from people suddenly realizing the algorithm they were using is 10,000x slower than some alternative. As with method 3, there is fundamentally such a thing as a perfect algorithm and in certain well definied and simple problems this is even provable.

Something important to point out with both "Better Architectures" and "Better Algorithms" is that regardless of whether you can improve the speed at which you solve an existing problem, these sorts of improvements are not capable of fundamentally altering the time or memory scaling of a problem as input size or complexity grows. A lot of the actual technological progress you see is a result of the enabling technologies getting so much better that we can solve problems literally millions of times larger than what we could decades ago. Imagine going back in time to someone in 1987, showing them something like Netflix or IBM Watson, and then telling them to implement it using just "better architectures and better algorithms" using a 700 nm process node and 500 MB of storage. There is a reason things like iPods and digital cameras became popular when they did and its not because some design engineer at Apple or Canon or whatever suddenly had an epiphany.

The point I am trying to make is that for the last 50-60 years the core enabling technologies of computation, data storage, and digital communications have all seen persistent exponential growth at double-digit rates--sometimes as high as 40-50% per year. That kind of growth, sustained over decades, has resulted in truly enormous performance gains across the board in all kinds of information technology. 20 years ago, in 1997, the largest desktop hard drive had a capacity of ~16 GB. Today it is ~16TB. That's a 1000 fold improvement in a generation, over a time period where the growth rate of underlying technology has been actually significantly slowing down. That didn't happen because a bunch of really clever engineers found a "trick" to solve a problem 1000x better than anyone before them, it happened because HDD bit sizes went from 100 nanometers wide to 30. That's it! That's all we had to do! And yes, actually doing that is a lot more complicated but the point is that it was always clear that it was possible and it was always clear that that was the path we needed to be heading down. Going from 100 to 95 to 90 all the way down to 30 is a much easier order than "geez, maybe the basic algorithms and architectures people have been working on for the last 40 years are actually 1000 times shittier than they could be, let's try and see!".

I'm not saying that the Age of Scaling is even going to end abruptly, or soon. Clearly there is still room for scaling to continue, albeit at a slower rate and with far more proximal fundamental limits. My point is really that the overwhelming improvements we have seen in all kinds of technologies has been because there is plenty of room at the bottom. But there was a LOOOOT more room at the bottom back in 1987 or 1997 than there is in 2017, and there will be even less in 2027 or 2037. This will have really important ramifications, and although they will occur gradually, it's incredibly wrong to handwave them away while mumbling about spintronics or non Von Neumann architectures or something.

thechosenone
Mar 21, 2009

twodot posted:

At that scale, you just factor in hardware failure as a cost of doing business no different from power or what have you. Like SSDs might lower your costs, but these distributed systems have failures built into the design, even with perfect computers you still are only getting a certain number of 9s from your power company or what have you.

At 1000x I'm guessing you'd see certain distributed services come onto devices, things like voice recognition or route planning (why would Google bother to build servers to do this, when they can just run it in your browser?), but it'd only happen if the change was transparent to users, which means there's little benefit beyond getting cheaper (which is still good). Obviously AIs would get better, I suspect there's some research specifically into poker AIs that could use some extra power. The truly difficult problems integer factorization, cryptographic hashing, NP-complete problems, et cetera wouldn't be affected. Typically if someone is thinking "I'm using one computer right now, but if I had 1000 computers, I could do something much cooler" they just buy 1000 computers. Letting the people who currently need 100,000 computers scale down to 100 computers is nice, but I don't think there's any fundamental shift that could happen.

I guess most of the factors of ssds probably aren't too crazy, so while they would be nice, they aren't earth shattering. I guess the only real thing to think of is that they would have much higher read and write speeds for random access of stuff in secondary storage, would that benefit any particular practices that are currently not so good for rows of HDDs?

silence_kit
Jul 14, 2011

by the sex ghost

Owlofcreamcheese posted:

The physical chip costs barely anything. The price of a chip is r&d and the cost of the factory (plus the price of intel being a near monopoly that charges what it wants).

In the sci-fi story technology is over and there is no new r&d and the same factory is good forever the price of chips is going to freefall.

As manufacturing technology no longer improves and computer chips no longer greatly improve, and become more of a commodity like steel, the prices they command will greatly drop. In this world, production cost will start to become a greater fraction of the overall sales price and "Continuing Moore's Law by increasing die area" is not really a slam dunk strategy.

silence_kit fucked around with this message at 22:17 on Sep 29, 2017

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

silence_kit posted:

As manufacturing technology no longer improves and computer chips no longer greatly improve, and become more of a commodity like steel, their profitability and prices they command also will greatly drop. In this world, production cost will start to become a greater fraction of the overall cost and "Continuing Moore's Law by increasing die area" is not really a slam dunk strategy.

moore's law is number of transistors per IC per dollar. The original point of this derail was that it's three axis so the end of transistor shrink wouldn't end moore's law and a bigger IC or a lower price also would be moore's law continuing, and only one of the three needs advanced alien technology or something to happen.


quote:

20 years ago, in 1997, the largest desktop hard drive had a capacity of ~16 GB. Today it is ~16TB. That's a 1000 fold improvement in a generation, over a time period where the growth rate of underlying technology has been actually significantly slowing down. That didn't happen because a bunch of really clever engineers found a "trick" to solve a problem 1000x better than anyone before them

hard disks are a really bad example since the size of hard disks hit their actual physical limit in the mid 2000s and almost all the growth for a long time was just cost drops and more platters and it was literally the discovery of a trick that started real growth again. they even made an extrmely stupid video about it.

https://www.youtube.com/watch?v=xb_PyKuI7II

twodot
Aug 7, 2005

You are objectively correct that this person is dumb and has said dumb things

thechosenone posted:

I guess most of the factors of ssds probably aren't too crazy, so while they would be nice, they aren't earth shattering. I guess the only real thing to think of is that they would have much higher read and write speeds for random access of stuff in secondary storage, would that benefit any particular practices that are currently not so good for rows of HDDs?
Having read Morbus posts, I'm realizing that improving voice/facial/gesture recognition could mean big changes in how we interface with devices, the future is hard to predict. Changing a Kinect from a thing you use to awkwardly slash falling fruit into a serious/normal interaction seems like a big deal.

Regarding SSDs, one time I'm curious about and hopefully someone here knows, I know any given SSD has a finite of read/writes in it, what's the lifetime of an offline SSD? Basically all digital media sucks for long term (I'm thinking > 100 years) storage, and it doesn't help that we come out with a new cord/interface to storage media every couple years. If SSDs came down in price would they replace tape drives? Future historians are going to be pissed at us for the amount of data we just pissed away while carefully maintaining Wikipedia articles about Goku. We're already running into problems with recovering film from early in the movie era, and there's a number of known to be lost films.
edit:
Did some reading it appears HDDs are significantly better than SSDs for offline lifetime.

twodot fucked around with this message at 22:27 on Sep 29, 2017

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

twodot posted:

At that scale, you just factor in hardware failure as a cost of doing business no different from power or what have you. Like SSDs might lower your costs, but these distributed systems have failures built into the design, even with perfect computers you still are only getting a certain number of 9s from your power company or what have you.

At 1000x I'm guessing you'd see certain distributed services come onto devices, things like voice recognition or route planning (why would Google bother to build servers to do this, when they can just run it in your browser?), but it'd only happen if the change was transparent to users, which means there's little benefit beyond getting cheaper (which is still good). Obviously AIs would get better, I suspect there's some research specifically into poker AIs that could use some extra power. The truly difficult problems integer factorization, cryptographic hashing, NP-complete problems, et cetera wouldn't be affected. Typically if someone is thinking "I'm using one computer right now, but if I had 1000 computers, I could do something much cooler" they just buy 1000 computers. Letting the people who currently need 100,000 computers scale down to 100 computers is nice, but I don't think there's any fundamental shift that could happen.

I think the answer of what future computers would do is to just think of any specific task an AI might do, then strip out the spooky mysticism where the machine had any mind or soul or there was any sort of breakthrough in any general "AI" or anything and assume a much faster computer or set of computers could 'fake it' by brute force or whatever to do the specific task. Then it could do it 24/7 on thousands of things. Like machine vision or audio processing or whatever.

On the other end future computers move everything down a step, so laptops will be as good as desktops and phones will be as powerful as laptops and that is pretty meh, but like microcontrollers will be as powerful as phones are now and like, simple circuits will be able to throw entre computers in as good as a microcontroller is now. Or whatever. The bottom lifts up more than the top does.

silence_kit
Jul 14, 2011

by the sex ghost
[quote="“Owlofcreamcheese”" post="“476898021”"]
moore’s law is number of transistors per IC per dollar.[/quote]

I have no idea why you keep invoking Moore's Law when it comes to increasing computer chip die sizes, because increasing die sizes does nothing to reduce cost/function.

Morbus
May 18, 2004

Owlofcreamcheese posted:

Past that there is other things, 3D chip design and spintronics and optronics and non-digital math and non-von neumann computers. And it's easy to hand wave that as sci-fi gibberjabber that only exists in labs, but like look at hard disks, they have literally hit their physics absolute limits several times and they manage to take sci-fi gibberjabber out of labs, like in the 90s when physically they made magnetic cells as small as physics allowed to exist and then they dragged their feet for a couple years and then spent the money and turned superparamagnetic effects into real hard disk technology, then later did vertically aligned magnetic cells then just moved to flash technology (and now samsung is showing magnetic hard disks that generate plasma on their read/write head).

This really isn't how it works. The absolute limits have always been more or less clear, and it has always been clear that there was still a ways to go before we reach them. What has been less clear is how to solve certain engineering issues--mostly involving materials--when a particular approach starts to crap out.

I've worked on disk drives for the last 14 years, specifically in magnetic media development, and I can tell you at no point in the 90's or at any other time did we "make the magnetic cells as small as physics allowed them to exist". While the physical grain size of the nanocrystalline alloy that comprises the recording layers started to brush up against superparamegnetic limits in the late 90's / early 2000s, the actual magnetic "cluster size" was far larger than the grain size. Because of this and other reasons, it was always clear that there was still plenty of room before we reached any fundamental physics limitations, even if the grain size stayed fixed.

Actually, this is a good illustration of how the actual slow death of scaling is actually playing out. A lot of people, like yourself apparently, think that "oh, well, we thought these were the limits, but then some boffins conjured up some magic and then those were the limits, and then some other breakthrough was made and now this is the limit. In fact, the fundamental limits have always been clear: getting to a magnetic bit size of ~5-10 nm by 5-10nm is very likely the fundamental limit of a magnetic disk drive; This corresponds to around roughly 100 terabits per square inch, ballpark. Today we are at around 1 Tb/sq. in., and when superparamagnetism first because an issue at all it was maybe around 100 Gb/sq in.. So it was equally clear that there was plenty of room.

Now, as far as getting to that limit, back in. say, 1992, there were several different avenues of progress you could pursue. Once was simply shrinking the physical grain size of the magnetic alloy. Another was better isolating the magnetic grains so they form smaller clusters. Another was improving the switching field distribution of the grains so you can write sharper, less blurry transitions. Another was texturing the substrate to better line up the grains in a single direction, which also reduces the switching field distribution (since switching field depends on angle). You could also improve the intrinsic dispersion of the crystallographic axis that determines the magnetic easy axis. And several other things.

For various engineering reasons, reducing the physical grain size was very easy, and therefore this was the most relied upon "trick" in the early to mid 90's. By the late 90's / early 2000s, this approach crapped out, as the grains had gotten as small as they could be before becoming superparamgnetically unstable without using radically new materials--around 8-10nm. But that was OK, because there was a lot of improvement to be made in better segregation chemistry to isolate the grains, better techniques to orient them, and material optimizatoins to make more uniform crystallographic properties. Soon people realized that improving grain orientation was giving the most bang for the buck, and the easiest way to get perfect orientation is to stand the grains up vertically instead of longitidinally (this was the real reason for the change to perpendicular recording, not some stupid bullshit about being able to fit more bits by standing them up somehow). Once perpendicular recording became mature by the mid 2000's, grains were all oriented within +/- 1 degree and it's hard to do much better than that, so, as with physical grain size, that avenue of improvement was now finished. As more and more paths of improvement were closed, the annual improvement rate slowed, from 40 (!!!) % in the early 90s, to 20, to 10...

By now, all of the "easy" paths to improvement are getting exhausted, and continuing requires, for example, moving away from the CoPt alloys that people have 20+ years of experience manufacturing into new exotic materials that we haven't perfected yet. This is an engineering challenge, not a physics one. And even if we do, eventually, get to 100 TB/in (I don't think we will ever get much past 10), that will still be less improvement than what we saw between 1997 and 2017. So rather than people continually hitting "fundamental" limits and the breaking through them with ingenuity and technomagic, the real picture is one of a long, long road that has always been in front of us. But over time, the road has gotten steeper, and narrower, and the once extremely distant end destination is now getting closer and closer.

Morbus
May 18, 2004

Owlofcreamcheese posted:

moore's law is number of transistors per IC per dollar. The original point of this derail was that it's three axis so the end of transistor shrink wouldn't end moore's law and a bigger IC or a lower price also would be moore's law continuing, and only one of the three needs advanced alien technology or something to happen.


hard disks are a really bad example since the size of hard disks hit their actual physical limit in the mid 2000s and almost all the growth for a long time was just cost drops and more platters and it was literally the discovery of a trick that started real growth again. they even made an extrmely stupid video about it.

https://www.youtube.com/watch?v=xb_PyKuI7II

See my post above, but no. In fact literally every sentence of your paragraph on disk drives is wrong.

-The superparamagnetic grain size limit for CoPt alloys was approached well before the mid 2000's,
-Longitudinal media nonetheless continued to experience double digit growth rates due to orientation ratio improvements and also the development of antiferroagnetically coupled media
-The growth rate since perpendicular recording was introduced is actually slower than it was during the end-days of longitudinal recording
-If anything, the share of growth due to simply increasing platter count is higher 2005-2015 than it ever was before then.
-The PMR "trick" was well known and constantly examined over a period of literally more than half a century, and even developed into actual products as early as the ~1960's. It was not "discovered" in the 2000s
-That video is super dumb and completely misconstrues the actual advantages of PMR or why it took over. Probably because the real reasons are complicated and poorly understood even by most experts.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Morbus posted:

-The PMR "trick" was well known and constantly examined over a period of literally more than half a century, and even developed into actual products as early as the ~1960's. It was not "discovered" in the 2000s

Isn't that exactly the case with CPUs though? RIght now everyone gets to say that all the spintronics and optical interconnects and graphite stuff and memristor is all sci-fi mumbo jumble nonsense because it doesn't exist in commercial products and only in labs and then in 20 years when all that stuff is common everyone can just do the "heh, it wasn't a surprise cpu technology continued, we had that stuff in labs since the 70s and it didn't surprise anyone who was serious". You can always say that every not implemented technology is just speculative magic until it happens then you can talk about the decades where it was just theoretical solutions as being proof it was so obvious all along it was real.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
I hate that I got dragged into and have contributed to a "actually technology is over now" conversation after specifically saying not to have that conversation

Morbus
May 18, 2004

Owlofcreamcheese posted:

Isn't that exactly the case with CPUs though? RIght now everyone gets to say that all the spintronics and optical interconnects and graphite stuff and memristor is all sci-fi mumbo jumble nonsense because it doesn't exist in commercial products and only in labs and then in 20 years when all that stuff is common everyone can just do the "heh, it wasn't a surprise cpu technology continued, we had that stuff in labs since the 70s and it didn't surprise anyone who was serious". You can always say that every not implemented technology is just speculative magic until it happens then you can talk about the decades where it was just theoretical solutions as being proof it was so obvious all along it was real.

Yes, sort of. A lot of speculative stuff is speculative mainly because it doesn't get aggressively funded or pursued, and it's only when conventional methods run out of steam that these things get pulled off the bench. Which is why there always seems to be some just-in-time breakthrough. My point with respect toPMR is that the story is not one of a gradually stagnating technology that was saved by a new discovery; The PMR option was being constantly evaluated, and a confluence of reasons in the early 2000's finally made it cost effective and advantageous to pursue.

The other, greater point I'm trying to make is that PMR or no, the underlying technology is and has been on the same scaling path with the same fundamental limitations since the 1950's (or at least the late 1980's when thin-film alloy media and GMR heads were a thing).

Forgetting completely about any actual technology, speculative or otherwise, making a disk drive with a bit size smaller than say 5nm by 5nm is probably not possible; once you get to those sizes you really need some kind of molecular technology to take over, and doing that will require a new technological paradigm as significant as the original invention of the integrated circuit. In any case, that's a hard limit with order of magnitude 100 Tb/sq. in. In the 1980's that limit was seven orders of magnitude away. Today it's two orders of magnitude away. Even if I'm wrong about the 100 Tb/sq in. limit [I]there is definitely no 7 orders of magnitude of room like there was 30 years ago!]/i] That would require bit sizes 100 times smaller than an atom!

That's really the key point I'm trying to make. Forget about any specific technology, there just isn't the huge amount of room for growth there used to be--we are running out of space for improvement on a fundamental level, slowly but surely. The truly massive improvements in data storage, computation, etc. that we have seen over most of our lifetimes is a one time thing that will not repeat itself, unless a TRULY radical new technological paradigm is introduced. Like not graphene or memristors but some kind of giant molecular computer.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

quote:

Yes, sort of. A lot of speculative stuff is speculative mainly because it doesn't get aggressively funded or pursued, and it's only when conventional methods run out of steam that these things get pulled off the bench. Which is why there always seems to be some just-in-time breakthrough.

Isn't the same true of CPUs?

Morbus posted:

That's really the key point I'm trying to make. Forget about any specific technology, there just isn't the huge amount of room for growth there used to be--we are running out of space for improvement on a fundamental level, slowly but surely. The truly massive improvements in data storage, computation, etc. that we have seen over most of our lifetimes is a one time thing that will not repeat itself, unless a TRULY radical new technological paradigm is introduced. Like not graphene or memristors but some kind of giant molecular computer.

What the heck is a giant molecular computer?

Also how are memristors not a radically new technological paradigm shift? Like every single thing about them was some long heralded mystical impossible device that someone invented by making a chart then drawing a bunch of lines till one set of lines was missing and declaring "that must be a thing" until 40 years later someone discovered it was a thing pretty much by surprise when no one was really looking for them and there is like 50 low hanging fruit ideas that it's useful for.

qkkl
Jul 1, 2013

by FactsAreUseless
If there is an undiscovered way to get more scaling, it WILL be discovered if an investor with some spare cash thinks it will make them money so they pay some scientists to do some research.

R. Guyovich
Dec 25, 1991

Blue Star posted:

Why are you interested in technology? What does it matter if our devices get thinner or slightly longer battery life or slightly better screen resolution? Is this something that will really change people's lives? Is it important? Is it even interesting?

*wanders into the science and technology megathread* Lol. Wow. Like technology much?

silence_kit
Jul 14, 2011

by the sex ghost

Owlofcreamcheese posted:

Is that why you think they cost more? An entire wafer costs less than one low end chip

I missed your edit. First of all that's not true--Xeons don't start at a price higher than the cost of a 14nm processed wafer, and there are a tonne of chips for lower-cost applications where chip production cost currently is a sizable portion of the chip sales price. For example, Qualcomm is selling a new 14nm $10 chip, the Snapdragon 450, for low cost cell phones which probably costs a couple of bucks per unit to manufacture, assuming a very small chip size of 50 mm^2. Doubling die area and doubling production cost to increase functionality is not really a great option for these types of products.


I have heard that the hard drive industry is betting the farm on Heat-Assisted Magnetic Recording (HAMR) and it is kind of their last hope to continued improvement of storage density. Do you agree and/or have any thoughts on the subject you would be willing to share?

silence_kit fucked around with this message at 03:04 on Sep 30, 2017

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

silence_kit posted:

I missed your edit. First of all that's not true--Xeons don't start at a price higher than the cost of a 14nm processed wafer, and there are a tonne of chips for lower-cost applications where chip production cost currently is a sizable portion of the chip sales price. For example, Qualcomm is selling a new 14nm $10 chip, the Snapdragon 450, for low cost cell phones which probably costs a couple of bucks per unit to manufacture, assuming a very small chip size of 50 mm^2. Doubling die area and doubling production cost to increase functionality is not really a great option for these types of products.

I think the double the size conversation has basically run it's course, The original point I was making is that moore's law isn't "transistors shrink" it's "the number of components on an integrated circuit goes up as the price decreases and doubles every two years" which is a gameable metric. SImply making a bigger chip or lowering the cost of an existing chip continues moore's law exactly as much as some sci-fi techno breakthrough delivered by singularity aliens.

thechosenone
Mar 21, 2009
You know what Owl, whatever nonsense your on about with doubling chip area without using twice as much of the wafer, keep doing it. Because it seems to be making goons crawl out of the woodwork to talk about why your wrong and lets me ask them cool rear end questions about actual upcoming advancements.

So, what about wireless charging? Is it reasonable to be able to put your phone on a platform to charge it without using huge gobs of energy compared to just using a normal charger? It would be nice for charging where you don't want to risk losing your normal charger.

Morbus
May 18, 2004

Owlofcreamcheese posted:

Isn't the same true of CPUs?


What the heck is a giant molecular computer?

Also how are memristors not a radically new technological paradigm shift? Like every single thing about them was some long heralded mystical impossible device that someone invented by making a chart then drawing a bunch of lines till one set of lines was missing and declaring "that must be a thing" until 40 years later someone discovered it was a thing pretty much by surprise when no one was really looking for them and there is like 50 low hanging fruit ideas that it's useful for.

Suppose someone invents a robust, manufacturable memristor based memory tomorrow. It's going to be implemented using the same basic VLSI technology that everything else uses; the fundamental feature sizes are still going to be dictated by your lithography, resist, doping, etch, and deposition processes. The cell size for a memristor based memory might be smaller than the cell size for e.g. NAND flash, but how much smaller? Twice as small? 10 times as small?

No matter WHAT kind of circuit or gimmick you are using to save information, whether it's with memristors, or trapping charge in floating gate transistors, or with bits of magnetized material, or by changing the phase of an chalcogenide glass, you can, for obvious reasons, NEVER make a bit cell smaller than the minimum linewidth of your process squared (F^2). Flash is already at ~10F^2 for the most common architectures, so the room for improvement is clearly bounded and not gigantic.

Let's say they find a way to scale VLSI manufacturing down to 5nm. That would be truly incredible. And lets say you design a "perfect" memory technology using unicorn dicks and pixie dust where the bit size is F^2 = 25 square nm--the best possible for a 5nm process. You are now about 1-2 orders of magnitude better than anything that exists today, which is great! 10 TB micro SD cards here we come! But it's also a lot less than the 4-5 orders of magnitude improvement that has occurred since you or I were in middle school. A problem that is computationally intractable today by a factor of several orders of magnitude (like say global weather simulation with a 100 meter grid resolution), will remain intractable with these kinds of improvements. Information storage tasks that are impractical with today's technology (like a complete atomic resolution model of a living cell), will remain impractical.

A "truly radical" technological paradigm shift would be one that allows us to escape these limitations, and which opens up (hopefully) several orders of magnitude of potential growth. There are really only three ways to do this:

1. Continue scaling lateral dimensions using molecular or atmoic scale technology. Once we get into the low-nanometer ranges, continued lateral scaling will only be possible by moving away from devices which use piles and piles of atoms and molecules heaped into layers and lumps and lines, and instead which operate on a molecular level. Nobody has any real clue how to do this.

2. Open up the third dimension. If there were a way to implement something similar to our current mostly-planar nanotechnologies, but scale it in the depth direction not 10x or 100x but virtually indefinitely--i.e, if we could pack as many bit cells in a centimeter vertically as we can along a centimeter of substrate, that would allow for huge improvements and continued scaling for at least a few orders of magnitude. Doing this with anything at all resembling current or foreseeable technology runs into showstopping thermal management, interconnect, and throughput/cost problems, among others. Things like 3D Flash don't even remotely count, by the way, neither does "hey we will make a disk drive with 82 platters!".

3. MMMMMMMMMLC. If there was a way to store (way) more than two levels onto feature sizes comparable to our current technologies--not 3, or 10, but a hundred thousand or a million. That obviously would enable continued scaling. Nothing like this is known or foreseeable, though.

Absent some revolution in one of those 3 areas, you can't really escape the fact that our current bag of tricks isn't going to keep giving us the exponential gains we have become accustomed to for much longer. In a nutshell, if you want things to keep scaling, we need fundamental improvements not at the device, logic, or system level but revolutions in the underlying fabrication and material technologies.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Morbus posted:

Suppose someone invents a robust, manufacturable memristor based memory tomorrow. It's going to be implemented using the same basic VLSI technology that everything else uses; the fundamental feature sizes are still going to be dictated by your lithography, resist, doping, etch, and deposition processes. The cell size for a memristor based memory might be smaller than the cell size for e.g. NAND flash, but how much smaller? Twice as small? 10 times as small?

No matter WHAT kind of circuit or gimmick you are using to save information, whether it's with memristors, or trapping charge in floating gate transistors, or with bits of magnetized material, or by changing the phase of an chalcogenide glass, you can, for obvious reasons, NEVER make a bit cell smaller than the minimum linewidth of your process squared (F^2). Flash is already at ~10F^2 for the most common architectures, so the room for improvement is clearly bounded and not gigantic.


The point of memristors isn't just that they are another incremental shrink, they are the first thing in a while that allows a new type of circuit instead of just the same circuit but smaller since they allow non volatile memory and cpu processing to be made using the same process on the same chip. (and because they can do analog processing which probably won't matter much but might let someone do some signal processing or something niche)

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

thechosenone posted:

You know what Owl, whatever nonsense your on about with doubling chip area without using twice as much of the wafer, keep doing it. Because it seems to be making goons crawl out of the woodwork to talk about why your wrong and lets me ask them cool rear end questions about actual upcoming advancements.

So, what about wireless charging? Is it reasonable to be able to put your phone on a platform to charge it without using huge gobs of energy compared to just using a normal charger? It would be nice for charging where you don't want to risk losing your normal charger.

wireless charging is an upcoming technology from like 8 years ago.

thechosenone
Mar 21, 2009

Owlofcreamcheese posted:

wireless charging is an upcoming technology from like 8 years ago.

Ah, so its not interesting because it is actually here and something we can talk about. I'd like to hear everyone elses opinions on the technology as it exists, and how it could exist.

Morbus
May 18, 2004

silence_kit posted:

I missed your edit. First of all that's not true--Xeons don't start at a price higher than the cost of a 14nm processed wafer, and there are a tonne of chips for lower-cost applications where chip production cost currently is a sizable portion of the chip sales price. For example, Qualcomm is selling a new 14nm $10 chip, the Snapdragon 450, for low cost cell phones which probably costs a couple of bucks per unit to manufacture, assuming a very small chip size of 50 mm^2. Doubling die area and doubling production cost to increase functionality is not really a great option for these types of products.


I have heard that the hard drive industry is betting the farm on Heat-Assisted Magnetic Recording (HAMR) and it is kind of their last hope to continued improvement of storage density. Do you agree and/or have any thoughts on the subject you would be willing to share?

So, back around 2005-2010, the disk drive industry started to seriously look for the Next Thing that would allow continued scaling. They essentially had two ideas, 1.) bit-patterned media, and 2.) write assist. The former was scrapped at least for now, for several mostly good reasons, if you want to hear more about that I can address it in another post. So HAMR is really the only other thing left.

The initial idea behind HAMR was simple, and dumb and wrong, but simple:

1. We need to reduce the physical grain size of the media to store more data
2. We can't because if we make the CoPt based alloys we use today much smaller than their present ~8nm they become thermally unstable and randomly flip their magnetization
3. There are more stable materials, like L10-FePt we can use, but they have higher switching fields than the max ~2.5 Tesla a write head can generate
4. If we use some kind of write-assist strategy, we can make small grains of a high-energy thermally stable material and store loads of data, hooray!

HAMR is an aggressive write-assist strategy where the media is quickly and briefly heated by a near-field infrared transducer to above its Curie temperature during writing. This allows the magnetic field from a more or less conventional write head to set the magnetic orientation of the cooling grains as it flies over them immediately afterward. Since we no longer need to rely on the magnetic field strength of the write pole, we can use high energy materials that are very thermally stable but which would otherwise be impossible to write. In the case of the chemically ordered L10 phase FePt alloy that is universally used for HAMR, you can have down to a ~4nm grain size before things become thermally unstable, vs. the current 8nm. So HAMR should allow up to a ~4x improvement in data density by that logic.

There are a number of problems with this picture, and even though this is more or less what continues to be reported even in the scientific literature, it gives a misleading idea both of the benefits and limitations of HAMR:

1. You can totally reduce the grain size of current CoPt conventional PMR media and not have thermal stability problems. Not down to 4nm, no way, but certainly it is possible to reduce things from ~8 to ~6nm without having things fall apart in terms of thermal stability. But doing so in fact confers no advantage and actuality makes the media worse! People have tried and tried over the last 5-10 years to make a "small grained" conventional PMR media that demonstrates any kind of performance advantage--even under extremely relaxed or non-existent thermal stability requirements, and nobody has succeeded. There are various ideas as to why this is, and it isn't completely understood, but by now we have a pretty good idea:

The grains in a modern disk are magnetically decoupled from each other by oxide grain boundaries. So for example your alloy will be something like CoPt-SiO2, with sort of cylindrical ~8nm diameter CoPt metallic grain cores with a ~1nm thick SiO2 grain boundary that wraps around them and provides a sort of intergranular buffer to magnetically isolate one grain from the next. This segregated microstructure is critical, else the individual 8nm grains magnetically aggregate into much larger magnetic clusters that act as your effective "grains". If you just scale things down proportionately, the grain boundaries get smaller with the grains. Since the magnetic exchange coupling between grains has an exponential dependence on grain boundary thickness, at some point the boundaries become too thin and your effective cluster size starts to become bigger even though your grains are becoming smaller. Additionally, irregularities in grain boundary thickness have exaggerated effects as the overall thickness shrinks, due to the aformentioned exponential dependence. The net effect is it becomes hard to effectively and uniformly decouple the grains as the boundary thickness gets low.

OK, no problem, we just reduce the grain size but increase the grain boundary thickness. There are two problems with this. One, adding more oxide into the alloy tends to cause crystallographic defects in the grain core which end up degrading performance. There is a maximum amount of oxide the alloy can tolerate before it starts doing more harm than good. Due to surface:volume scaling and some interface effects, this critical amount is lower for small grains. At the same time, certain other interface effects require that the amount of grain boundary needed to effectively decouple small grains is actually higher than what is required for larger grains. The net effect of all of this is, without some significant improvement in the underlying materials technology, reducing the grain size simply stops being beneficial due to the increasing difficulties in maintaining independent, well isolated grains. Lots of people have worked many years trying to get around this and it is incredibly difficult.

Which brings us back to HAMR. If making smaller grains isn't beneficial for conventional granular media, why is it suddenly going to help for HAMR media? The problem HAMR allegedly solves, the superparamagnetic limit, isn't the limit that is holding us back. And the limits we are encountering don't care how thermally stable your material is or how you write to it.

2. HAMR grains aren't smaller anyway. The material of choice for HAMR is L10 phase FePt. The "L10" (it's L1-naught but I'm too lazy for subscript) designates that this is a chemically ordered phase. Which means its not just X% of iron and (1-X)% of platinum mixed together randomly, but particular Fe and Pt atoms need to be at particular lattice sites. Non-ordered FePt is worth exactly gently caress-all as a magnetic recording material, so the ordering quality needs to be very good indeed or you end up with chunks of material that are useless garbage and screw up everything around them. Ordered FePt is a bitch. First, it must be grown at high temperature (500-700 C). Second, due to a combination of the high growth temperature and some chemical reasons, it is pretty incompatible with conventional oxide based segregants. Instead, something like carbon, carbides, or nitrides mus be used, with carbon seeming to be the best. But these are all lovely at magnetically decoupling grains compared to oxides, which makes our whole small grain problem even worse since segregation is the major issue. Apart from that, it's simply very hard to engineer a proper seed material that will form a good, small grained template at the high temperatures involved. Grain sizes of ~5-6nm are comparable to the diffusion lengths of many of these metals at the temperatures being used. And even if you solve all those problems, various interface effects (which again, become more pronounced at small sizes due to surface:volume scaling) tend to cause all kinds of problems at very small grain sizes at 5nm. In particular the Curie temperature distribution becomes very wide at small grain sizes, which is a huge problem for HAMR media. So far today, all of the best HAMR technology demonstrations have grain sizes the same as conventional PMR media

3. Integration challenges. This is really the big one. There are so many issues here I don't even know where to start. Bear in mind that in a disk drive, the recording head, which is attached to a macroscopic air bearing on a macroscopic armature flying over a macroscopic disk, is doing so at a height of ~5 nanometers. Obviously, this requires an insanely precise and meticulously consistent head-disk interface. Even sub-picogram quantities of certain contaminants can cause catastrophic failures if present on the disk surface. With HAMR, you are constantly zapping this surface with a high power laser, thermally cycling it to several hundred degrees C and back down to room temperature at a frequency of a few gigahertz. Nobody really understands what this does in terms of the head-disk interface chemistry. There are undoubtedly going to be major issues that get uncovered slowly as they only reveal themselves with time. For example it has recently become apparent that the protective carbon overcoat, which is a critical top layer used to protect media from corrosion and offer a good tribological interface, can be significantly damaged by the laser in HAMR write heads.

4. Head blows up. Nobody has made a HAMR optical transducer that doesn't basically melt itself after way too few write cycles to be useful. Significant improvement has been made from transducers that last seconds to ones that sometimes last hours, but obviously there is still a huge improvement needed.

5. The real improvement from HAMR has not much to do with it's originally proposed concept. Despite having grain sizes comparable to conventional PMR media, there have been HAMR technology demonstrations at densities roughly 30-50% higher than the best PMR disks. Essentially, current PMR media is not "grain size limited". Instead, the limiting factor in how narrow we can write tracks and how sharp we can write transitions is a combination of switching field distributions, write field gradients, and lateral exchange coupling. Basically each grain switches approximately independent of it's neighbors, at some intrinsic switching field, and there is some variability in what that switcihng field is. One grain may flip when you apply a field of 6000 Oe, another at 5800, and another at 6200, for example. At the same time, the write field is not a perfectly sharp step function from 0 to 20,000 Oe, but has some finite extent in space. The combination of a finite switching field distribution (SFD)and finite write field gradient means there is going to be some inherent blurriness in the transitions you can write even if your grain sizes were infinitely small. As a result, if you can reduce the SFD even at the expense of grain size, it can be a net gain. In conventional PMR media, each 8nm physical grain is deliberately partially magnetically coupled to 1-2 of it's neighbors by introducing a special layer that introduces exchange coupling in a controlled and uniform way. This creates an effective magnetic cluster size closer to 15nm, but dramatically reduces the SFD, for a substantial net gain overall, by sort of "averaging" together the individual switching fields of each grain in a cluster. This is a big oversimplification but it's essentially how it works.

In HAMR media, the effective switching field distribution is the Curie temperature distribution of the media. The effective write field gradient is a convolution of both the thermal gradient created by the infrared transducer and the magnetic field from the writing pole tip. It turns out that Curie temperature distributions tend to be equal or substantially better than magnetic switching field distributions. As a result, even with no exchange averaging, 8nm HAMR media grains have effective SFDs similar to the 15nm clusters in conventional media. So even though the grain sizes are similar, the effective cluster size of HAMR media is smaller since no lateral exchange coupling needs to be introduced to bring down the already narrow SFD. At the same time, the thermal gradient that can be created by a HAMR head is very sharp, and the intersection of this with the magnetic write field gradient produces an extremely sharp effective write fiend gradient. The result is that HAMR media can write sharper transitions and narrower tracks even at the same grain size, for a ~50% overall performance increase. With further improvements, it could get as high as 100%, resulting in ~2 Tb/sq. in. media before the media would become grain size limited.

So the 10 billion dollar question is, if all HAMR is doing is allowing us to attain the grain-size limited performance of 8nm grains, by improving effective write field gradients and SFD...is there a way we can just do that with conventional PMR media? Nobody initially proposed HAMR for reasons of SFD or write field gradients, they proposed it to have smaller grains. And if HAMR ultimately runs into the same problems scaling down physical grain size below 8nm as conventional PMR media, is it really worth all the cost and expense just to take us from ~1Tb/in2 to 2 Tb/in2?

My personal feeling is that the challenges to HAMR are enormous, and that nobody has demonstrated any way to realize gains from smaller grain size media. At the same time, I believe there are strategies which can be explored to significantly reduce the SFD and improve the effective write field gradient of conventional PMR. By the time HAMR becomes ready for a real product, which will be years away, conventional PMR media will have likely improved at least 25% from where it is today, at the same 8nm grain pitch, further reducing the benefit/cost ratio for HAMR unless it can demonstrate some strategy to achieving small grained media. At the same time, if anyone figures out a way to make grains smaller than 8nm "work", it seems likely that those techniques could be easily applied to conventional media as well, since those materials are much easier to grow and control in a high quality way.

In the end, it will really depend on whether or not HAMR can acheive a real product before someone else figures out a better way of generating sharper SFDs and write field gradients. Maybe no such alternative will be found, and HAMR will accidentally turn out to be the best solution for a problem it was never invented for (this is actually quite common). Or maybe it will take a decade to figure out all the integration and manufacturability problems and by then any performance advantage will have eroded. Or maybe someone will find the Secret To Small Grains and for some reason it will only work well in FePt-C media. Maybe SSD will totally displace disk drives before any of this is an issue. I'm not sure. The only thing I can guarantee is that we are a long way off from seeing any HAMR product, and it may never come.

Morbus
May 18, 2004

Owlofcreamcheese posted:

The point of memristors isn't just that they are another incremental shrink, they are the first thing in a while that allows a new type of circuit instead of just the same circuit but smaller since they allow non volatile memory and cpu processing to be made using the same process on the same chip. (and because they can do analog processing which probably won't matter much but might let someone do some signal processing or something niche)

Dude, "just another incremental shrink" is better than any other possible thing. That's what you don't get. The ability to "just incrementally shrink" things, again and again and again, and again, is almost exclusively responsible for all the things we can do today that we couldn't do 20 or 30 years ago. When you literally cannot do better than F^2, and there are things in WalMart today at 4F^2, shrinking F is a lot more important than dicking around in the space between 1F and 10F cell sizes just so you can put more poo poo on the same die and play with analog signals.

I mean you can already put non-volatile memory and CPU on the same process. Woohoo. NAND Flash, DRAM, and GPUs are all made using literally the exact same process that CPUs use. You can totally put 8000 GB of non-volatile memory right in the very beating heart of your CPU if you want to. Today. Go hog wild. What does this have to do with anything? And how are memristors supposed to make this easier instead of harder???
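
To put the F^2 argument in rough numbers, here is a quick sketch (the cell sizes and feature sizes below are made up purely to show the scaling, not taken from any real process):

code:
# Rough sketch of why shrinking F beats clever cell layouts.
# Cell area = k * F^2, so bits per unit area ~ 1 / (k * F^2).
# k and F values below are illustrative, not real process numbers.

def density(k, F_nm):
    """Relative areal density for a k*F^2 cell at feature size F (in nm)."""
    return 1.0 / (k * F_nm ** 2)

baseline = density(10, 20)   # a roomy 10F^2 cell at F = 20 nm
layout   = density(4, 20)    # squeeze the layout down to 4F^2, same F
shrink   = density(10, 10)   # keep the 10F^2 layout but halve F to 10 nm

print(f"4F^2 layout win: {layout / baseline:.1f}x")   # 2.5x, and that's the ceiling
print(f"halving F win:   {shrink / baseline:.1f}x")   # 4x, and you can keep doing it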

Xarn
Jun 26, 2015
It's almost as if dairy products made bad posters.

(USER WAS PUT ON PROBATION FOR THIS POST)

Phyzzle
Jan 26, 2008

Morbus posted:

Of course there is. And I'm not talking about fundamental thermodynamic limitations of computation ...

It's also worth pointing out that there is a fundamental limit for "parallelizability", since any given algorithm fundamentally has only so many non-interdependent steps which can be carried out independently.

Has it been shown that there is a fundamental limit for "parallelizability"? An algorithm's interdependent steps can still be computed in parallel. Maybe the oldest way of doing parallel computing is a Message Passing Interface, where separate, interdependent processes are constantly talking to each other.

Morbus posted:

In any case, realizing massive performance gains from parallel computing inherently relies on being able to throw more and more compute units as a problem, which in turn depends on the ever-increasing ability to put more computational capacity on a chip for a given cost.

Of course, parallel computing doesn't rely on putting more capacity on one chip; using many chips is an option. Performance gains can come from shrinking the circuitry 100-fold or from making 100 chips. The second option does not appear to improve the computational power per cost. But is there a fundamental limit to how low the cost of a chip can go, and are we approaching that limit now?

Xarn
Jun 26, 2015

Phyzzle posted:

Has it been shown that there is a fundamental limit for "parallelizability"? An algorithm's interdependent steps can still be computed in parallel. Maybe the oldest way of doing parallel computing is a Message Passing Interface, where separate, interdependent processes are constantly talking to each other.


Yes and no, depending on whether you prefer Amdahl's law or Gustafson's.

Amdahl posits that there is a rather sharp limit on the speed-up achieved via parallel computation, imposed by the serial parts of the computation.

Gustafson posits that the computation limit is the time we are willing to wait for results and thus, as our ability to perform parallel computation increases, the problem size we want to solve also increases. This would then mean that there is no limit to "parallelizability", but that doesn't mean it necessarily speeds up the computation as a whole.

Obviously, which one applies depends on your problem domain and it is usually some mixture of the two.
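
If it helps, here is a minimal sketch of the two formulas side by side (s is the serial fraction of the work, which I'm just assuming to be 5% for illustration):

code:
# Amdahl vs. Gustafson, side by side. s = serial fraction of the work.

def amdahl(s, n):
    """Speed-up of a fixed-size problem spread over n processors."""
    return 1.0 / (s + (1.0 - s) / n)

def gustafson(s, n):
    """Scaled speed-up when the parallel part grows with n (fixed wall-clock time)."""
    return s + (1.0 - s) * n

for n in (10, 100, 1000):
    print(f"n={n:5d}  Amdahl: {amdahl(0.05, n):7.1f}x   Gustafson: {gustafson(0.05, n):7.1f}x")
# With a 5% serial fraction, Amdahl caps out just under 20x no matter how many
# processors you throw at it, while the Gustafson number keeps climbing because
# the problem itself is allowed to grow.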

---edit---

To expand a bit further, I'll take three CS fields I've worked in as an example.

1) Control engineering, or making sure that your insulin pumps pump insulin properly, your motor doesn't just decide to accelerate randomly (hi Toyota) and so on. In general we do not touch threads, because lol non-determinism. Even when we do, they are usually there to handle non-critical parts and provide concurrency, guarded by a strict scheduler that absolutely prioritizes the thread that matters. Neither Amdahl nor Gustafson really applies here.

2) Computer Vision, or trying to figure out wtf is in a picture. Practical applications fall very strictly under Gustafson's law, as most of our work is easy to parallelize and is strictly timeboxed -- a good example is finding faces in a picture you are about to take. There is a fairly strict time limit on your work (the detection should be perceived as near-instant by the user) and you have to fit within it. As computational power got cheaper (both in $$ and in Watts sucked from the battery), this feature got better: now your camera can wait until all the people it sees in the picture have their eyes open (or are smiling, etc). At some point we might find ourselves with so much computing power we cannot add features anymore, but that won't happen any time soon.

3) Solving NP-complete problems, or my current job (that is under long-rear end NDA, so no more details). It doesn't really parallelize well, and even if it did, there is not enough parallelism on Earth to be really useful. Computational complexity scales* exponentially with input size, so any amount of computational power is consumed by a trivial increase in input size.
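
As a toy illustration of that last point, assume a brute-force solver that needs on the order of 2^n operations for an input of size n (the machine counts and ops/sec below are made-up round numbers):

code:
# Toy illustration: for a 2^n brute-force solver, multiplying the compute
# budget only *adds* a handful of items to the largest feasible input.
import math

ops_per_sec = 1e9        # assumed budget for a single machine
seconds     = 3600.0     # give it an hour

for machines in (1, 1_000, 1_000_000):
    budget = ops_per_sec * seconds * machines
    n_max = int(math.log2(budget))
    print(f"{machines:>9} machines -> largest feasible n ~ {n_max}")
# Going from 1 machine to a million buys roughly 20 extra items of input size.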



Of course, you can often restate a problem to enable parallelism where there previously wasn't any (ie Merkle trees for cryptographic hashes), but that doesn't always work either.


* At least it is assumed to

Xarn fucked around with this message at 12:51 on Sep 30, 2017

silence_kit
Jul 14, 2011

by the sex ghost

Owlofcreamcheese posted:

wireless charging is an upcoming technology from like 8 years ago.

No, it is more like 100 years old. Tesla and others demonstrated powering devices by inductive coupling in the early 20th century.

Edit: Oh yeah, I was totally right earlier--blue LEDs can be said to be 100 years old as well, although at that time they weren't very bright or efficient. See the link below for a recreation of H.J. Round's experiment where he observed blue electroluminescence from a crude form of silicon carbide:

https://www.popsci.com/diy/article/2010-02/gray-matter-light-mystery

silence_kit fucked around with this message at 14:20 on Sep 30, 2017

silence_kit
Jul 14, 2011

by the sex ghost

Thanks for your post. I had heard about items 3 & 4 in your list (basically, when people talk about skin effect loss in conductors at radio & microwave frequencies it is just a nuisance, but at optical/near-IR frequencies skin effect loss creates incredible attenuation, and therefore heat, at high input power densities in the metal optical antenna/transducer), but did not know anything about 1, 2, & 5, which actually get more into the heart of hard drive technology.
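
For a rough feel of the frequency dependence, here is a sketch using the classical low-frequency skin-depth formula with gold's room-temperature resistivity. At optical/near-IR frequencies the metal's actual (Drude-like) response takes over, so treat the second number as indicative of the trend only, not a real design value:

code:
# Rough sketch: classical skin depth vs. frequency for a gold conductor.
# delta = sqrt(2 * rho / (omega * mu0)); only indicative at optical frequencies.
import math

rho = 2.44e-8               # ohm*m, gold resistivity at room temperature
mu0 = 4 * math.pi * 1e-7    # H/m, vacuum permeability

def skin_depth_nm(freq_hz):
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * rho / (omega * mu0)) * 1e9

print(f"1 GHz (microwave): ~{skin_depth_nm(1e9):,.0f} nm")   # ~2500 nm
print(f"200 THz (near-IR): ~{skin_depth_nm(2e14):.1f} nm")   # a few nm
# The current gets squeezed into a vanishingly thin skin at near-IR frequencies,
# so resistive loss (and heating) per unit of delivered power goes way up.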

Morbus posted:

So, back around 2005-2010, the disk drive industry started to seriously look for the Next Thing that would allow continued scaling. They essentially had two ideas, 1.) bit-patterned media

Could you write about this? How could you do nano-patterning at low cost on hard drive platters? I'm assuming that the lithography used in integrated circuits is ruled out for being too slow and too expensive to be used over the large area of a hard drive platter. But you don't need arbitrary shapes like in integrated circuits, just a grid (right?), so maybe there is another technique that could work. And of course there is bottom-up patterning, but AFAIK those strategies traditionally haven't been consistent enough to be relied upon in real electronics products, where essentially perfect accuracy and fidelity of the shapes is demanded.

silence_kit fucked around with this message at 18:16 on Sep 30, 2017

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Morbus posted:

Dude, "just another incremental shrink" is better than any other possible thing. That's what you don't get. The ability to "just incrementally shrink" things, again and again and again, and again, is almost exclusively responsible for all the things we can do today that we couldn't do 20 or 30 years ago. When you literally cannot do better than F^2, and there are things in WalMart today at 4F^2, shrinking F is a lot more important than dicking around in the space between 1F and 10F cell sizes just so you can put more poo poo on the same die and play with analog signals.

The biggest improvement to hard disks in the last decade wasn't shrinking them; it was throwing them away and getting SSDs that had less storage but were much, much faster and worked differently. You could shrink hard disk cells to one atom and no one would put them back in their laptop, because devices have more than one metric. Shrinking isn't the one true progress.

silence_kit
Jul 14, 2011

by the sex ghost

Owlofcreamcheese posted:

Shrinking isn't the one true progress.

The point is that it is by far the biggest contributor to decreasing cost/function in electronics, and when it slows down and eventually grinds to a halt, it is a huge deal.

If the semiconductor industry in the 70's had ignored Moore's Law (reduce transistor & wire sizes for greater functionality/cost), and instead obeyed Owlofcreamcheese's law (double die areas to get greater functionality, at the expense of rising chip production prices every generation), we would be living in a much different world.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

silence_kit posted:

The point is that it is by far the biggest contributor to decreasing cost/function in electronics, and when it slows down and eventually grinds to a halt, it is a huge deal.

You could shrink a hard disk to the size of a pea and people would still keep using SSDs with 1/10th the storage because they load Doom faster.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Owlofcreamcheese posted:

You could shrink a hard disk to the size of a pea and people would still keep using SSDs with 1/10th the storage because they load Doom faster.

This. I don't want 60TB spinning rust, I want SSDs at lower $/GB. The only imaginable reason to buy 60TB HDDs would be if SSDs still haven't reached a lower (or at least remotely comparable, say 1/3 to 1/2 more expensive) $/GB than HDDs by the time HDDs hit 60TB.
At some point optimisation will top out, and further improvements will require a different underlying technology (e.g. solid state storage instead of spinning rust, or, as happened previously, different ways of making spinning rust) rather than just using better materials and more precise manufacturing to shrink the existing technology.

suck my woke dick fucked around with this message at 16:32 on Sep 30, 2017

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

blowfish posted:

This. I don't want 60TB spinning rust, I want SSDs at lower $/GB.
At some point optimisation will top out, and further improvements will require a different underlying technology (e.g. solid state storage instead of spinning rust, or, as happened previously, different ways of making spinning rust) rather than just using better materials and more precise manufacturing to shrink the existing technology.

Yeah, exactly. A 1GB hard disk is way better than a 500MB hard disk, and a 1TB hard disk is better than a 500GB hard disk. But the usefulness of a 10TB hard disk over a 5TB hard disk is pretty marginal. It's better, sure; it's twice as many TB and that is better. But that is of pretty marginal usefulness to many people. While a 128GB SSD actually does things for them that no size of hard disk ever would.

Like, SSDs reversed a ton of the growth in disk size; a 3TB HDD costs 89 bucks while a 3TB SSD costs 17,000. But it turns out just shrinking the same technology forever to improve one and only one metric doesn't matter if something else can improve on other metrics.

That is why a technology that changes the overall design of a CPU is a bigger deal than just another shrink. Even if it moves us backwards in some statistic, if it moves us way forward in some other, that is fine.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
Well, that entirely depends on what you're doing. For normies, absolutely, a small SSD is the best possible solution because they just need Facebook to launch quickly. But unless SSDs can catch up in terms of total capacity and capacity/$, they're not going to solve everything. We're generating more and more data continuously, and maybe you don't need it, but Google certainly does to store the bajillion terabytes of cat videos everyone uploads to YouTube.

That said, it's a very important point that even if one specific technology isn't improving as rapidly as it did in the past (personal computers), massive progress can take place in another related area, leading to very different usage scenarios, such as with smartphones. Maybe regular CPUs don't get much faster than they are now, but specialized neural network processors could allow a lot of cool AI poo poo.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

mobby_6kl posted:

That said, it's a very important point that even if one specific technology isn't improving as rapidly as it did in the past (personal computers), massive progress can take place in another related area, leading to very different usage scenarios, such as with smartphones. Maybe regular CPUs don't get much faster than they are now, but specialized neural network processors could allow a lot of cool AI poo poo.

Even in the same area: the 90s had a long stretch where we clearly wanted 3D graphics and CPUs just couldn't deliver. Shrinking the CPUs wouldn't have done it either. So people started putting GPUs in computers, using the same physical chip technology but different designs, and got matrix math performance that was orders of magnitude better.

A new design using the same technology did more for graphics than further CPU die shrinks would have, and die shrinking since then has benefited both massively. But both things matter: design and physical technology. GPUs weren't even any sort of shocking new idea or huge breakthrough.

Owlofcreamcheese fucked around with this message at 17:18 on Sep 30, 2017

silence_kit
Jul 14, 2011

by the sex ghost

Owlofcreamcheese posted:

Like SSDs reversed a ton of the growth in disksize, a 3tb HHD cost 89 bucks while a 3tb SSD costs 17,000, but it turns out just shrinking the same technology forever to improve one and only one metric doesn't matter if something else can improve on other metrics.

Ahh, this is really frustrating, because the main reason flash memory was able to drop in cost/byte so rapidly, to the point of being somewhat in the same ballpark as magnetic storage, was flash memory cell size reduction! The end of scaling means that reductions in flash memory cost/GB will slow as well.

If the non-volatile solid state memory/storage industry in the 70's had ignored Moore's Law (reduce transistor & wire sizes for greater functionality/cost), and instead obeyed Owlofcreamcheese's law (double die areas to get greater functionality, at the expense of rising chip production prices every generation), non-volatile solid state memory cost/GB now would not be that different from what it was back then, and no one would be paying the astronomical cost/GB difference to get faster storage.

You are randomly jumping from topic to topic to avoid having to admit that size scaling was and still is massively important for information technology, and that its grinding to a halt has huge implications.

silence_kit fucked around with this message at 17:53 on Sep 30, 2017
