Owlofcreamcheese
May 22, 2005

VideoGameVet posted:

Yeah my son's finishing his senior year at UNH CompSci and they are doing some robot vision project.

When I did my grad school work there (early 1980's) I was coding a 6502 to read some sort of DAC to do a flying-dot scanner with a CRT tube. Yeah, things change.

I think any job with computers is a pact that you will always be out of date, but god do I feel bad for people who work on hardware. In computers, at least, stuff comes from major companies with marketing and announcements; in low-level hardware, things just suddenly appear out of nowhere. Like how in the last three years self-contained microwave motion sensors dropped in price from barely existing, to 30 bucks, to 99 cents for no particular reason: http://www.ebay.com/itm/Motion-Sensor-Control-Microwave-Radar-Detector-Module-Home-Light-Switch-Board-/132285034133 Some Chinese company just got a good cheap design and suddenly drove the price down to basically nothing. I'm sure you could buy them in bulk for a penny, if for some reason you needed to detect motion through a wall.


Blue Star
Feb 18, 2013

Moore's law is ending, though. All of the computer progress we have seen in the past few decades has been because of Moore's law, which means we can fit more transistors onto a chip and processing power keeps increasing. But we've hit physical limits and can't keep up the pace of improvement. Computers today are way more advanced than they were in the 80s or even the 90s, but I doubt computers in the 2040s will be much better than today. Our personal computers, tablets, cellphones, gadgets, video games, and all our other electronic computer technologies have probably become about as good as they're going to get. At least the rate of improvement is going to slow down. These techs in 2050 may be slightly better. Slightly.

Better batteries, better solar energy, fusion, self-driving cars, robots, cyborg arms and hands, stem cells: it's all science fiction at this point. We've heard about this stuff for decades and it's still just something that MAY happen SOME time in the future, but it never comes. :shrug:

Owlofcreamcheese
May 22, 2005
That is silly. Moore's law ending and humans finding the one true eternal lithography technique would be the start of the golden age of computers, not the end. Everything about your Dell desktop is a huge pile of compromises made to support building it around a 300 dollar CPU that will be obsolete in 6 months. Very little about how we design consumer computers would stay the same if we confidently knew that 50 years from now the same fabs would be running the same process, and technology would not freeze in place if making 5nm chips just kept getting cheaper and cheaper because it had become a commodity.

Like, who even needs better batteries if we can build everything as a system on a chip because that costs 5 dollars to make instead of 2000? Who would design anything the way we design things now if lithography were a solved problem and lithography fabs weren't some ultra-limited resource everything competes over?

VideoGameVet
May 14, 2005


Owlofcreamcheese posted:

I think any job with computers is a pact that you will always be out of date, but god do I feel bad for people who work on hardware. In computers, at least, stuff comes from major companies with marketing and announcements; in low-level hardware, things just suddenly appear out of nowhere. Like how in the last three years self-contained microwave motion sensors dropped in price from barely existing, to 30 bucks, to 99 cents for no particular reason: http://www.ebay.com/itm/Motion-Sensor-Control-Microwave-Radar-Detector-Module-Home-Light-Switch-Board-/132285034133 Some Chinese company just got a good cheap design and suddenly drove the price down to basically nothing. I'm sure you could buy them in bulk for a penny, if for some reason you needed to detect motion through a wall.

Moved into management and design in the 1990's. For that reason.

Absurd Alhazred
Mar 27, 2010


Owlofcreamcheese posted:

That is silly. Moore's law ending and humans finding the one true eternal lithography technique would be the start of the golden age of computers, not the end. Everything about your Dell desktop is a huge pile of compromises made to support building it around a 300 dollar CPU that will be obsolete in 6 months. Very little about how we design consumer computers would stay the same if we confidently knew that 50 years from now the same fabs would be running the same process, and technology would not freeze in place if making 5nm chips just kept getting cheaper and cheaper because it had become a commodity.

Like, who even needs better batteries if we can build everything as a system on a chip because that costs 5 dollars to make instead of 2000? Who would design anything the way we design things now if lithography were a solved problem and lithography fabs weren't some ultra-limited resource everything competes over?

Yeah. One reason consoles used to have really good bang for the buck was that a stable set of hardware gave you more time to learn how to use it efficiently. One example I've read about recently is how Crash Bandicoot: Warped was technologically way ahead of the original Crash Bandicoot, on the same hardware. They just learned how to get more out of the PlayStation, knowing they'd still have it around.

Owlofcreamcheese
May 22, 2005

Absurd Alhazred posted:

Yeah. One reason consoles used to have really good bang for the buck was that a stable set of hardware gave you more time to learn how to use it efficiently. One example I've read about recently is how Crash Bandicoot: Warped was technologically way ahead of the original Crash Bandicoot, on the same hardware. They just learned how to get more out of the PlayStation, knowing they'd still have it around.

That, but more than that. Like, if we get to 3 nanometer transistors and it turns out that is just the end, it's not as if the 2020 edition of the Core i9 would be the last chip.

Like, Moore's law has been a super cool thing and a big deal, but it is also a real yoke around the neck of the CPU industry, and we have not even scratched the surface of the possibilities of CPU or computer design because of the limitations it has put on us. Think about how much chips are designed around the fact that yield rates drop to comically low numbers, like 10% or less, with every new node roughly every 18 months, then slowly recover back up to 50+% before doing it all again. That forces the entire design of every single CPU to be built around binning and the idea that, since every feature of the CPU might be broken, every chip needs to be able to fail down into some other chip. Almost no one owns a non-broken CPU, but we just live with that, and we live with never designing any feature whose loss wouldn't simply make a cheaper chip. Or the way the fast development of chips means we have to design everything modularly, so no one wants to deal with serious systems-on-a-chip or with non-von Neumann architectures that are known to be better but don't lend themselves to the CPU being a simple modular piece you just replace in a socket.

It would be sad to see Moore's law end, but having perfected transistor production would absolutely be the start of the real design phase of computers, not the end of it. There are so many aspects of home desktop computers that are just huge compromises, and die-shrink R&D (for good reason) dominates everything at the expense of everything else. When we get to some forever technology, people will start doing all the crazy stuff that is just a curiosity now, like the weird analog tricks that evolved FPGA circuits exploit in college research departments. It would be pointless to design around those now, because we would just change the chip technology in 18 months and they wouldn't work the same on the next chip.

Owlofcreamcheese
May 22, 2005
Time to jerk off over military technology:

https://www.youtube.com/watch?v=zBpJ00AdW5s

Blue Star
Feb 18, 2013

Why are you interested in technology? What does it matter if our devices get thinner, or gain slightly longer battery life, or slightly better screen resolution? Is this something that will really change people's lives? Is it important? Is it even interesting?

(USER WAS PUT ON PROBATION FOR THIS POST)

Morbus
May 18, 2004

Owlofcreamcheese posted:

That, but more than that. Like, if we get to 3 nanometer transistors and it turns out that is just the end, it's not as if the 2020 edition of the Core i9 would be the last chip.

Like, Moore's law has been a super cool thing and a big deal, but it is also a real yoke around the neck of the CPU industry, and we have not even scratched the surface of the possibilities of CPU or computer design because of the limitations it has put on us. Think about how much chips are designed around the fact that yield rates drop to comically low numbers, like 10% or less, with every new node roughly every 18 months, then slowly recover back up to 50+% before doing it all again. That forces the entire design of every single CPU to be built around binning and the idea that, since every feature of the CPU might be broken, every chip needs to be able to fail down into some other chip. Almost no one owns a non-broken CPU, but we just live with that, and we live with never designing any feature whose loss wouldn't simply make a cheaper chip. Or the way the fast development of chips means we have to design everything modularly, so no one wants to deal with serious systems-on-a-chip or with non-von Neumann architectures that are known to be better but don't lend themselves to the CPU being a simple modular piece you just replace in a socket.

It would be sad to see Moore's law end, but having perfected transistor production would absolutely be the start of the real design phase of computers, not the end of it. There are so many aspects of home desktop computers that are just huge compromises, and die-shrink R&D (for good reason) dominates everything at the expense of everything else. When we get to some forever technology, people will start doing all the crazy stuff that is just a curiosity now, like the weird analog tricks that evolved FPGA circuits exploit in college research departments. It would be pointless to design around those now, because we would just change the chip technology in 18 months and they wouldn't work the same on the next chip.

I don't think you appreciate just how overwhelmingly responsible the fundamentally scalable nature of VLSI and related technologies has been for the information revolution, and how limp-dicked and futile the design improvements you allude to would be in comparison.

I mean, you're right: if someone blew the whistle and said "that's it, semiconductor manufacturing technology is done, this is as good as it gets", you would definitely see more effort placed into e.g. specialized architectures, asynchronous computing, etc. And a lot of that work can definitely be expected to produce significant performance gains for particular applications all else being equal. But design improvements and clever architectures are tricks you can only play once, and they are constrained by the fundamental information theoretic boundaries of a problem. If you want another increase in performance, you need to invent a new trick, and eventually you run out of tricks. And while there are plenty of clever engineers and plenty of interesting improvements to make, the bottom line is there is fundamentally not a lot of room for improvement--at least not like the many orders of magnitude improvements that have been plainly attainable for the last several decades. Sure, there will be some rapid initial gains from low hanging fruit (a la what we've seen with GPUs being used for massively parallel computing in the last decade or so). But you can't design your way into the 10,000 fold performance increases that Moore's Law has gotten us accustomed to, certainly not across all problem domains at once all driven by the same underlying technological progress.

The essential issue is that the explosive growth in computation and information technologies owes itself really to one thing and one thing alone, and that is that the underlying technologies were fundamentally scalable to incredibly greater capabilities than anything we were building. You could look at the manufacturing process being used in 1992, and immediately see that, in principle, there was nothing to stop you from just doing more or less the same thing just way the gently caress smaller. And if you did that, everything just automatically gets a million times better, literally. Actually figuring out how the gently caress to do that required massive amounts of work, but it was always clear that it was possible and that that's where we needed to go.

The same doesn't apply to just designing chips better. For any given problem, you fundamentally need to manipulate so many bits in so many ways and store them in so many places, and once you've done that in the most efficient way possible, you're done. At that point the only way to make things better is to add more compute capacity. So yes, it's true we are only scratching the surface of what can be done with chip design, but unfortunately the space beneath that surface is pitifully shallow compared to the plenty of room we used to have at the bottom. Switching from exponential growth driven by extremely distant fundamental constraints (i.e. computation, data storage, data transmission over the last 60+ years), to incremental growth towards relatively close fundamental limits (improvements in jet engine efficiency over the same time period) is really significant, and the end of Moore's law will so radically retard the overall rate of progress in information technologies that no amount of heroically clever design can even remotely compensate for it.

Bates
Jun 15, 2006

Blue Star posted:

Why are you interested in technology? What does it matter if our devices get thinner, or gain slightly longer battery life, or slightly better screen resolution? Is this something that will really change people's lives? Is it important? Is it even interesting?

Making computers more portable has improved people's lives, but beyond that a thinner phone is in itself just a symptom of technological progress that has applications across a wide variety of fields.

Owlofcreamcheese
May 22, 2005

Morbus posted:

I don't think you appreciate just how overwhelmingly responsible the fundamentally scalable nature of VLSI and related technologies has been for the information revolution, and how limp-dicked and futile the design improvements you allude to would be in comparison.

Lithography shrinks are important, but they are just one easily marketable number. Moore's law would continue for many years after they blow the whistle and say "this is as small as transistors get". Like, the biggest 22nm Haswell chip had 5 billion transistors while the 14nm Broadwell had 7 billion, and that math doesn't work out at all. That's because the physical chip size has shrunk, not for any physics reason, but because Intel needed to get chips per wafer up, and that has been the deal for a while. Chip sizes are nowhere near the physical limit, so we'd get quite a few more cycles of Moore's law just out of "make chips bigger", and you'd still be able to double the number of transistors per chip per dollar a couple more times. (And Moore's law could go on indefinitely past that, because it's really transistors per chip per dollar and the price could shrink forever, but people tend to discount that as not a fair way to count progress.)
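A quick back-of-envelope check of those figures (a sketch using the post's own transistor counts, not official die data): an ideal 22 nm to 14 nm shrink would give roughly (22/14)^2 ≈ 2.5x the density, so a transistor count that only grew 1.4x implies the die itself got smaller, which is the point being made.

```python
# Back-of-envelope check of the numbers above. The transistor counts are the
# ones quoted in the post, treated as illustrative rather than official specs.
haswell_transistors = 5e9      # claimed: biggest 22 nm Haswell chip
broadwell_transistors = 7e9    # claimed: 14 nm Broadwell chip

ideal_density_gain = (22 / 14) ** 2                               # ~2.47x if area scaled perfectly
actual_count_gain = broadwell_transistors / haswell_transistors   # 1.4x

# If density really improved ~2.5x but the count only grew 1.4x,
# the die must have shrunk to roughly 1.4 / 2.47 ~= 57% of its old area.
implied_die_area_ratio = actual_count_gain / ideal_density_gain

print(f"ideal density gain:     {ideal_density_gain:.2f}x")
print(f"actual transistor gain: {actual_count_gain:.2f}x")
print(f"implied die area ratio: {implied_die_area_ratio:.2f}")
```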

Past that you can raise clock speed, probably up to around 10 GHz with non-sci-fi technology.

Past that there are other things: 3D chip design, spintronics, optronics, non-digital math, and non-von Neumann computers. It's easy to hand-wave all that away as sci-fi gibberjabber that only exists in labs, but look at hard disks. They have hit what looked like absolute physical limits several times and kept taking sci-fi gibberjabber out of the lab: in the 90s they made magnetic cells about as small as the physics allowed, dragged their feet for a couple of years, then spent the money and turned the superparamagnetic problem into real, shipping hard disk technology; later they did perpendicular (vertically aligned) recording, then largely moved to flash (and now Seagate is showing heat-assisted drives whose read/write heads use a tiny plasmonic transducer).

And you can take all this stuff and say "yeah, but after 2025, when Moore's law ends, and after they make chips a couple of multiples bigger, and get them uniform enough that everything wins the silicon lottery and runs at 9 GHz air-cooled, and add exotic spin-based transistors and memristors and redesign the architecture to do some processing in memory, THEN progress will stop" and, sure, maybe computer speed only has about 10 more doublings left past die shrinks and all the other stuff. Maybe computers will *ONLY* be 1000 times better than they are now and then only slowly improve past that. That still puts us, today, with computers that are 0.1% as good as they will eventually be.

Owlofcreamcheese
May 22, 2005
Like, even if literally all post-Moore's-law technology ends up being a conspiracy of snake oil where researchers just faked results for grant money and modern technology is all there is, Intel's product roadmap still runs to 2025 before there's any question of the 5nm-to-3nm jump slowing down. That is still more than 7 years of regular old progress under regular old Moore's law before one slower step to 3nm and then a need for new techniques. Even without anything changing (I don't even want to say anything "speculative", because spintronics, optronics, memristors, bigger die sizes, higher clock speeds, and non-von Neumann architectures are all things that already exist and just aren't commercialized), we are still basically getting 5 or more doublings out of what remains of Moore's law as is.

thechosenone
Mar 21, 2009
Also, aren't the names for these process nodes just marketing, and aren't a lot of computer parts still significantly bigger than 10 or 12 or 14 nanometers? So even if Moore's law reaches its end for transistors, it won't necessarily do so for other parts yet, right?

silence_kit
Jul 14, 2011


Owlofcreamcheese posted:

Lithography shrinks are important, but they are just one easily marketable number. Moore's law would continue for many years after they blow the whistle and say "this is as small as transistors get". Like, the biggest 22nm Haswell chip had 5 billion transistors while the 14nm Broadwell had 7 billion, and that math doesn't work out at all. That's because the physical chip size has shrunk, not for any physics reason, but because Intel needed to get chips per wafer up, and that has been the deal for a while. Chip sizes are nowhere near the physical limit, so we'd get quite a few more cycles of Moore's law just out of "make chips bigger", and you'd still be able to double the number of transistors per chip per dollar a couple more times. (And Moore's law could go on indefinitely past that, because it's really transistors per chip per dollar and the price could shrink forever, but people tend to discount that as not a fair way to count progress.)

Increasing die size is NOT the inevitable, guaranteed way to lower cost per function on a computer chip that you are claiming here. Famously, the marginal production cost of a computer chip is AT LEAST proportional to area, and can be super-linear in it.

Cost improvements to manufacturing technology are needed to lower cost/area. IIRC, the past few computer chip manufacturing process nodes actually have higher cost/area than earlier ones—it is just that more digital functionality can fit per area, and so the net cost/function goes down.
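To make that concrete, here is a small sketch of why cost per good die tends to grow faster than area. It uses the classic Poisson defect-yield approximation with made-up numbers (wafer cost, wafer size, defect density); the model choice and the figures are my illustration, not anything silence_kit cited.

```python
import math

def cost_per_good_die(die_area_mm2, wafer_cost=5000.0,
                      wafer_diameter_mm=300.0, defects_per_mm2=0.001):
    """Poisson yield sketch: all parameters are illustrative, not real fab data."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = wafer_area / die_area_mm2                 # ignores edge loss
    yield_fraction = math.exp(-defects_per_mm2 * die_area_mm2)
    return wafer_cost / (gross_dies * yield_fraction)

for area in (100, 200, 400, 800):
    print(f"{area:4d} mm^2 -> ${cost_per_good_die(area):8.2f} per good die")
```

Doubling the die area halves the gross die count and also cuts yield, so the cost per working die more than doubles at each step, which is the super-linear behaviour being described.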

Owlofcreamcheese posted:

Past that you can raise clock speed, probably up to around 10 GHz with non-sci-fi technology.

Where are you pulling this number from?

Owlofcreamcheese posted:

Past that there are other things, . . . spintronics and optronics

Neither of these things is really a replacement for the normal transistors in computer chips. In some ways they can be better than normal transistors, but in many ways they are worse. They are not anywhere close to being as much of a slam-dunk technical option as shrinking the normal transistor.

Owlofcreamcheese
May 22, 2005

thechosenone posted:

Also, aren't the names for these process nodes just marketing,


Yeah, it's like megahertz in the 90s.

Around the 22nm node Intel changed the shape of its transistors so the gate wraps around a fin, and that ruined the measurements, so past that point the nanometer figures they use as marketing names only very tangentially relate to the actual size of anything. A 10nm transistor is smaller than a 12nm transistor, but there is no part you can point to and say "a ruler would measure 10nm here". It's a weird "theoretically, if this were the old planar design, these would be equivalent to 10nm".

Owlofcreamcheese
May 22, 2005

silence_kit posted:

Increasing die size is NOT the inevitable, guaranteed way to lower cost per function on a computer chip that you are claiming here. Famously, the marginal production cost of a computer chip is AT LEAST proportional to area, and can be super-linear in it.

Yeah, but we know for a fact that chips are smaller than they could be. I'm not arguing that there are infinitely many doublings left after transistors stop shrinking, or that in 50 years we will have 10-foot chips, but we know for a fact that chips can grow in size a few times, since we've made chips that big before, and since chips are small right now mostly for the economic reason that Intel needed higher yields.


silence_kit posted:

Where are you pulling this number from?

Past ~10 GHz, the speed of light means a signal can't get to the edges of a reasonably sized chip within one cycle. You can design around that, but technology that hasn't been invented yet doesn't count. For modern chips you can air-cool a good one up to about 5 GHz right now, and extreme cooling gets overclocks to about 8 GHz. So if we hit some eternal lithography technique, it seems reasonable that chip production could get to few enough flaws that we'd end up around there.
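A rough sanity check on where a number like ~10 GHz comes from (a sketch with assumed values: a ~20 mm die edge and signals moving at roughly half the speed of light in on-chip wiring; real chips pipeline and distribute the clock, so this is a ballpark, not a hard wall):

```python
# Rough clock-propagation estimate. Die size and signal speed are assumptions.
c = 3.0e8                   # speed of light in vacuum, m/s
signal_speed = 0.5 * c      # assumed effective signal speed in on-chip wiring
die_edge = 20e-3            # assumed die edge length, metres (~20 mm)

crossing_time = die_edge / signal_speed    # ~1.3e-10 s (~130 ps)
max_clock = 1 / crossing_time              # frequency if a signal must cross the die each cycle

print(f"one die crossing: ~{crossing_time * 1e12:.0f} ps")
print(f"=> roughly {max_clock / 1e9:.1f} GHz if a signal has to cross the die every cycle")
```

With these assumed numbers the answer lands around 7-8 GHz, the same ballpark as the figure in the post.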


silence_kit posted:

Neither of these things is really a replacement for the normal transistors in computer chips. In some ways they can be better than normal transistors, but in many ways they are worse. They are not anywhere close to being as much of a slam-dunk technical option as shrinking the normal transistor.

They wouldn't replace transistors anyway. Intel wants to use them for cache in their chips, because cache is something like 60% of the area of a chip and uses a crazy share of the energy.

Phyzzle
Jan 26, 2008

Morbus posted:

But design improvements and clever architectures are tricks you can only play once, and they are constrained by the fundamental information theoretic boundaries of a problem. If you want another increase in performance, you need to invent a new trick, and eventually you run out of tricks.
...
Switching from exponential growth driven by extremely distant fundamental constraints (i.e. computation, data storage, data transmission over the last 60+ years), to incremental growth towards relatively close fundamental limits (improvements in jet engine efficiency over the same time period) is really significant, and the end of Moore's law will so radically retard the overall rate of progress in information technologies that no amount of heroically clever design can even remotely compensate for it.

If you mean that a 10x improvement in our ability to solve a given computation problem will now cost many more human hours than it did before, then I suppose I agree. But I'm not sure what you mean by fundamental information theoretic boundaries of a problem. There are hard fundamental limits to a jet engine's efficiency, but I don't think such a thing exists for the 'parallelizability' or 'trickability' of a problem.

Owlofcreamcheese
May 22, 2005

Phyzzle posted:

But I'm not sure what you mean by fundamental information theoretic boundaries of a problem.

Looks like the actual physics limit of computation is: 6 × 10^33 operations per second per joule of energy

https://en.wikipedia.org/wiki/Margolus%E2%80%93Levitin_theorem

While an actual real-life CPU manages on the order of 10^9 operations per second and burns something like 100 joules each second doing it.
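For scale, here is a hedged comparison of that bound with a present-day chip, assuming roughly 3×10^9 operations per second at about 100 W (my round numbers, not anything measured):

```python
# Margolus-Levitin bound vs. an assumed present-day CPU (~3 GHz, ~100 W).
ml_bound_ops_per_joule = 6e33       # theoretical limit, operations per joule
cpu_ops_per_second = 3e9            # assumed: ~3 GHz, one "operation" per cycle
cpu_power_watts = 100.0             # assumed power draw

cpu_ops_per_joule = cpu_ops_per_second / cpu_power_watts   # ~3e7
headroom = ml_bound_ops_per_joule / cpu_ops_per_joule      # ~2e26

print(f"CPU:   ~{cpu_ops_per_joule:.0e} ops/J")
print(f"bound:  {ml_bound_ops_per_joule:.0e} ops/J")
print(f"gap:   ~{headroom:.0e}x (roughly 26 orders of magnitude, in principle)")
```

That bound is a thermodynamic ceiling rather than anything a silicon process could approach, which is roughly the caveat thechosenone raises below.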

silence_kit
Jul 14, 2011


Owlofcreamcheese posted:

Yeah, but we know for a fact that chips are smaller than they could be. I'm not arguing that there are infinitely many doublings left after transistors stop shrinking, or that in 50 years we will have 10-foot chips, but we know for a fact that chips can grow in size a few times, since we've made chips that big before, and since chips are small right now mostly for the economic reason that Intel needed higher yields.

Your point makes no sense. Obviously they can make computer chips larger, but the larger chips are not less expensive per function unless the manufacturing technology greatly lowers marginal cost per area.

IIRC, new manufacturing process nodes have the same or even slightly higher marginal cost/area than the old ones, so your claim that there will be a great cost/area reduction that makes bigger chips much cheaper than the older, smaller ones, without the benefit of transistor/wire size scaling, is not obvious to me.


Owlofcreamcheese
May 22, 2005

silence_kit posted:

Your point makes no sense. Obviously they can make computer chips larger, but the larger chips are not less expensive per function unless the manufacturing technology greatly lowers marginal cost per area.

Moore's law is actually about the number of transistors per integrated circuit per dollar.

So, hitting the end of physics would still mean that just making bigger chips works for a few steps, and after that, if you lowered the cost of the 2028 i9 from 500 dollars to 250 in 2030, you would still have managed not to end Moore's law.

So hitting the smallest possible transistor is not a good marker for the end of Moore's law; you could get a couple of steps past that just with non-breakthrough manufacturing stuff. It wouldn't end right there.

Blue Star
Feb 18, 2013

This can't go on forever, and magical quantum computers or nanotechnology isn't going to save it. We have definitely hit the limit of what computers can do. Don't expect much.

Owlofcreamcheese
May 22, 2005

Blue Star posted:

This can't go on forever, and magical quantum computers or nanotechnology isn't going to save it. We have definitely hit the limit of what computers can do. Don't expect much.

I mean, even if you declare that every single scientist working on every single computer technology is just a scammer faking research for grant money, Intel still has 3 announced doublings left on its roadmap before the point where Moore's law is supposed to slip. Meaning that even in the worst case, a brand-new computer today is only 12.5% as complex as the final computer that will ever exist when it gets released in 2025.


silence_kit
Jul 14, 2011


Owlofcreamcheese posted:

Moore's law is actually about the number of transistors per integrated circuit per dollar.

So, hitting the end of physics would still mean that just making bigger chips works

You aren't understanding my point. If scaling has ended, and you add 2x the number of transistors on a computer chip and make the chip 2x bigger in area, all other things being equal, its marginal cost goes up by 2x or maybe more. The cost/function or cost/transistor in the best case stays the same or is likely worse, since it is harder to produce the larger chip consistently.

To greatly reduce the cost per function, you need to rely on the cost/area of, e.g., a 14 nm manufacturing process greatly reducing as it ages. I don't know if there is a whole lot of room for improvement there.


Xarn
Jun 26, 2015
Y'all realize that most of a current CPU is turned OFF even when it's running at full tilt, because we can't deliver the required power to all of it without burning everything down, right?

And that things like the SIMD pipelines actually force the CPU to underclock to run :v:

silence_kit
Jul 14, 2011


Xarn posted:

Y'all realize that most of a current CPU is turned OFF even when it's running at full tilt, because we can't deliver the required power to all of it without burning everything down, right?

And that things like the SIMD pipelines actually force the CPU to underclock to run :v:

The idea that functionality is directly proportional to transistor count is pretty idealized. You have pointed out one of the other constraints on "functionality", put in quotes for being a vague term.

thechosenone
Mar 21, 2009

Owlofcreamcheese posted:

I mean, even if you declare that every single scientist working on every single computer technology is just a scammer faking research for grant money, Intel still has 3 announced doublings left on its roadmap before the point where Moore's law is supposed to slip. Meaning that even in the worst case, a brand-new computer today is only 12.5% as complex as the final computer that will ever exist when it gets released in 2025.

I would rather not put words in people's mouths. I am also really hopeful about what technological advancement could bring us, probably unreasonably so. Doing some (conjectural) paper-napkin estimation of what computational power we can reasonably access, I figure we have anywhere from five factors of two to five factors of ten left. Now, 32 to 100,000 is a massive range, but I'm still not sure it is even [i]accurate[/i], much less precise.

So, looking at that computational-limit stuff owl brought up about operations per second: we have already used up about eight of those 33 potential powers of ten. You could argue we'll get more than I'm allowing, but I can't imagine us getting more than a fifth of the remaining powers of ten (and you sure as heck aren't getting all or even almost all of them, since a lot of other things limit us far more tightly than these numbers do), which leaves us with five of them. The "five factors of two" figure is the low-ball version, replacing those tens with twos.

Now, if you take the Sunway TaihuLight's 93-petaflop speed and multiply it by those two factors to estimate the potential strength of the best computer we will ever get, you land somewhere in the range of 3 exaflops to 9.3 zettaflops. That would be the strongest computer we would ever have, and it would probably take as much power as the TaihuLight to run, if not more. This is meant to account for every other trick you could use to increase computational power, short of a sickeningly huge jump from something nobody here can possibly predict. The upper end of that range is still only around the bottom of zetta-scale computing.
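Spelling out that multiplication (a sketch using the post's own assumed bounds of 2^5 and 10^5 on top of TaihuLight's ~93 petaflops):

```python
# The post's estimate, written out. The growth factors are the poster's assumptions.
taihulight_flops = 93e15        # Sunway TaihuLight, ~93 petaflops

low_factor = 2 ** 5             # pessimistic case: five more doublings   -> 32x
high_factor = 10 ** 5           # optimistic case: five more powers of 10 -> 100,000x

low_estimate = taihulight_flops * low_factor     # ~3.0e18 FLOPS (~3 exaflops)
high_estimate = taihulight_flops * high_factor   # ~9.3e21 FLOPS (~9.3 zettaflops)

print(f"low:  {low_estimate:.1e} FLOPS")
print(f"high: {high_estimate:.1e} FLOPS")
```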

'So we aren't likely to get too far beyond zetta-scale computing,' you might say, 'but that is still a ridiculous amount of computational power to gain, so we are going to revolutionize everything still-' no, we probably aren't. I bring up zetta-scale computing because of something that has been listed as requiring it in order to work: accurate 7-day weather forecasts. Just knowing whether it is going to rain next week requires something like 10,000 times as much power as our best computer has before we can even start thinking we could do it. Just because we gain huge orders of magnitude of computing doesn't mean we get orders-of-magnitude better results, because we can't really use that power to its full potential, and the power we can use isn't even guaranteed to be well suited to the problem at hand.

Even if I'm off by a factor of 1000, we aren't going to get anything obscene out of it. You get a few more nodes in the traveling salesman problem, a few more orders of magnitude of digits of pi, maybe even some really good simulations of molecular interactions you couldn't quite manage before, but nothing like the science-fiction future we want. This isn't to say don't do more computer technology research, because not only is that out of my hands and yours, it wouldn't even be true. You get linear benefits from exponential advances, and you should still consider that impressive.

Owlofcreamcheese
May 22, 2005

silence_kit posted:

You aren't understanding my point. If scaling has ended, and you add 2x the number of transistors on a computer chip and make the chip 2x bigger in area, all other things being equal, its marginal cost goes up by 2x or maybe more. The cost/function or cost/transistor in the best case stays the same or is likely worse, since it is harder to produce the larger chip consistently.

To greatly reduce the cost per function, you need to rely on the cost/area of, e.g., a 14 nm manufacturing process greatly reducing as it ages. I don't know if there is a whole lot of room for improvement there.

Like, a 1080 Ti GPU die is 470 mm^2 and a Broadwell CPU die is 80 mm^2. "Make a bigger chip" is a pretty non-fantastical, non-sci-fi technology compared to the stuff Intel has been doing to get more transistors per chip.

Harold Fjord
Jan 3, 2004
But you aren't getting Moore's-law improvements if the twice-as-big, twice-as-fast chip costs twice as much to make. And you haven't really proposed any explanation for how these new chips get cheaper; you just assumed they would? This is my reading of silence_kit's point, which I don't think your response really addresses. But I don't know this circuitry stuff.


silence_kit
Jul 14, 2011


Owlofcreamcheese posted:

Like, a 1080 Ti GPU die is 470 mm^2 and a Broadwell CPU die is 80 mm^2. "Make a bigger chip" is a pretty non-fantastical, non-sci-fi technology compared to the stuff Intel has been doing to get more transistors per chip.

Ahhh you are clearly not getting it! The larger chips are in the best case the same cost per transistor, and in the more usual case more expensive per transistor. "Continuing Moore's Law by increasing die size" is a totally nonsensical concept.

thechosenone
Mar 21, 2009
Yields would also have to be pretty drat good to allow for these sorts of chips anyway, and since they would be at the forefront of current manufacturing processes, I'm doubtful of even that. Maybe you could decrease costs by increasing the size of the silicon wafer, giving you more throughput? Sounds like something they are probably already looking into, though, and it certainly isn't going to provide monstrous improvements that let us bash our heads harder against difficult problems. A 450mm-diameter wafer is about as big as it gets right now, and at 470 square mm per die, you only get 338 1080 Tis out of that even at 100% utilization, which is impossible to begin with since chips are rectangular and wafers are circles. Even assuming you get 300 of them, if you quintuple the die size you get at best 60, assuming not a single defect occurs. Even if you get five times the power out of each one, which is highly unlikely, you end up with five times the cost; at best you have multiplied your power-to-cost ratio by one.
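The dies-per-wafer arithmetic in that post, written out (using the post's own 450 mm wafer and 470 mm^2 die figures, and ignoring edge loss and defects entirely):

```python
import math

# Figures from the post: 450 mm diameter wafer, 470 mm^2 die (1080 Ti class).
wafer_diameter_mm = 450.0
die_area_mm2 = 470.0

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2   # ~159,000 mm^2
ideal_dies = wafer_area_mm2 / die_area_mm2                # ~338, the number quoted

# Quintuple the die area and the count collapses, before counting any defects.
big_die_area_mm2 = 5 * die_area_mm2
ideal_big_dies = wafer_area_mm2 / big_die_area_mm2        # ~68

print(f"ideal dies at {die_area_mm2:.0f} mm^2:  {ideal_dies:.0f}")
print(f"ideal dies at {big_die_area_mm2:.0f} mm^2: {ideal_big_dies:.0f}")
```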

thechosenone
Mar 21, 2009

silence_kit posted:

Ahhh you are clearly not getting it! The larger chips are in the best case the same cost per transistor, and in the more usual case more expensive per transistor. "Continuing Moore's Law by increasing die size" is a totally nonsensical concept.

I get it, it would make computers stupidly expensive, without any increase in performance per dollar (likely with a steep loss in that regard actually). As it is, it seems like you get qualitatively linear increases out of quantitative increases of factors of 1000. Probably at best we can squeeze one of those out of every single trick we can think of, and then we are gonna need some good luck if we want to get past that.

Owlofcreamcheese
May 22, 2005

silence_kit posted:

Ahhh you are clearly not getting it! The larger chips are in the best case the same cost per transistor, and in the more usual case more expensive per transistor. "Continuing Moore's Law by increasing die size" is a totally nonsensical concept.

You don't buy chips by the transistor.

Graphics cards from 2012 have more transistors than a top end modern CPU, because just having a lot of transistors is simple and the design of a GPU is just repeating patterns. When a CPU is 1700 dollars it's not because you bought so much precious silicon.

qkkl
Jul 1, 2013

Single processors have a size limit due to the speed of electrical signals: if the processor is too big, one clock pulse won't propagate to all the transistors before the next cycle starts.

Chip prices are largely artificial, since it doesn't actually cost much more to make a $1500 top-end chip than it does to make a $200 entry-level one.

GPU chips are a lot bigger than CPU chips because having more than about eight cores on a CPU doesn't increase performance for typical CPU tasks, which are mostly single-threaded, while GPUs can make use of thousands of small processors.

thechosenone
Mar 21, 2009
Also, to defend the owl, I will note that CPU-Z does have clock speeds of over 8 GHz recorded for a few processors, so 10 GHz might be a plausible hard cap; so far 5 GHz seems to be the soft cap, though. Based on the idea that a 1000x quantitative improvement buys you a qualitative improvement of somewhere between n and 2^n (with n = 3 orders of magnitude here), we will probably see a qualitative improvement in computational power by a factor of roughly 3 to 8.

Does anyone else here like doing estimations? I've found this strangely enjoyable.


Owlofcreamcheese
May 22, 2005

qkkl posted:

Chip prices are largely artificial, since it doesn't actually cost much more to make a $1500 top-end chip than it does to make a $200 entry-level one.

GPU chips are a lot bigger than CPU chips because having more than about eight cores on a CPU doesn't increase performance for typical CPU tasks, which are mostly single-threaded, while GPUs can make use of thousands of small processors.

GPUs are bigger than CPUs because GPUs are always a generation behind on fabrication technology, running on mature processes, while Intel is always finishing its factories a few days before it has to start manufacturing CPUs in them, and often starts a generation with yields of around 10% but ends it with yields of around 70%.

It's the same reason that, early in a generation, a bunch of the cheaper Intel chips are just more expensive chips with broken parts, and then later on Intel starts laser-cutting parts off, because the process improves to the point that they don't have enough naturally broken ones to sell and have to break chips intentionally to make the lower-end parts.

thechosenone
Mar 21, 2009
Do any compsci goons know of anything that would trivially benefit from 1000 times more computing power? I'm suddenly really interested, since I just realized there are probably more things than weather simulation out there that could do something with that.

Harold Fjord
Jan 3, 2004

thechosenone posted:

Do any compsci goons know of anything that would trivially benefit from 1000 times more computing power? I'm suddenly really interested, since I just realized there are probably more things than weather simulation out there that could do something with that.

Folding@home?

silence_kit
Jul 14, 2011


Owlofcreamcheese posted:

You don’t buy chips by the transistor

? And your point here is?

Owlofcreamcheese posted:

When a CPU is 1700 dollars it’s not because you bought so much precious silicon.

Yes, Intel enjoys a healthy profit margin on its server chips but it does cost Intel quite a bit more to manufacture their top-of-the-line Xeons when compared to their smaller Core or Atom products. This is because Xeon die sizes are larger.

Owlofcreamcheese
May 22, 2005

silence_kit posted:

Yes, Intel enjoys a healthy profit margin on its server chips but it does cost Intel quite a bit more to manufacture their top-of-the-line Xeons when compared to their smaller Core or Atom products. This is because Xeon die sizes are larger.

Is that why you think they cost more? An entire wafer costs less than one low-end chip.



silence_kit
Jul 14, 2011


Owlofcreamcheese posted:

Is that why you think they cost more?

You are conflating two things here: the amount of money customers are willing to pay for server chips compared with chips for consumer desktops, and the production cost of an integrated circuit.

I think you are doing this either accidentally, because you are looking at this in a muddled way, or intentionally, because you want to move the goalposts.
