Condiv
May 7, 2008

Sorry to undo the effort of paying a domestic abuser $10 to own this poster, but I am going to lose my dang mind if I keep seeing multiple posters who appear to be Baloogan.

With love,
a mod


Dead Reckoning posted:

TBH, most humans can't create meaningful art or coherently talk about ethical philosophy, so we're probably closer than we think. Creating a robot Mozart or Einstein might be hard, but beating the intelligence of the average human is shockingly easy.

the shockingly beatable intelligence of the average human is actually beyond our most advanced computers at the moment and we're not even close to being able to simulate such intelligence. also, the AIs that are "creating" art really aren't, they are simulating "art" based on their creators' notions and preconceptions about what art is and what is worthwhile art.


Dolash
Oct 23, 2008

aNYWAY,
tHAT'S REALLY ALL THERE IS,
tO REPORT ON THE SUBJECT,
oF ME GETTING HURT,


The answer to the Chinese Room Puzzle and the Turing Test stuff, to me, is that it's irrelevant if what's going on is "true" intelligence or if it has the "spark" of life or is "self-aware" or "human-level" or any of the other terms we try to use for the ineffable difference between us and the toaster.

We'll have intelligence when we build something that we're not comfortable treating as not intelligent. If you want to get Cartesian you don't know for sure that anyone's intelligent except maybe yourself, you extend recognition to others because you have to and because it's hard not to when they demand it. If machines are made that are sufficiently autonomous and convincing then we might as well recognize them once it becomes socially awkward not to and leave the fine detail work to the philosophers. To that end, we'll make bigger gains in public perception with things that aren't even part of the core intelligence question, like improved natural language skills and better-engineered body-language.

This is also not a bar limited by what's achievable in the finicky hypothetical-technical sense that people normally talk about, since humans, if left alone, will invest themselves in imaginary friends or pet rocks or a volleyball with a face drawn on it. Sooner or later someone will make a Siri that's a little too good, and we'll start having this conversation more seriously even if under the hood it's nothing new.

Those are my two AM two cents.

Cingulate
Oct 23, 2012

by Fluffdaddy

Dolash posted:

The answer to the Chinese Room Puzzle and the Turing Test stuff, to me, is that it's irrelevant if what's going on is "true" intelligence or if it has the "spark" of life or is "self-aware" or "human-level" or any of the other terms we try to use for the ineffable difference between us and the toaster.

We'll have intelligence when we build something that we're not comfortable treating as not intelligent. If you want to get Cartesian you don't know for sure that anyone's intelligent except maybe yourself, you extend recognition to others because you have to and because it's hard not to when they demand it. If machines are made that are sufficiently autonomous and convincing then we might as well recognize them once it becomes socially awkward not to and leave the fine detail work to the philosophers. To that end, we'll make bigger gains in public perception with things that aren't even part of the core intelligence question, like improved natural language skills and better-engineered body-language.
Qualia isn't equal to information processing.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Rush Limbo posted:

Consider that AI requires so much computational power to tackle the problem of, say, stairs, and even then fucks it up regularly enough to be practically useless unless, for example, you devote 100% of its effort to tackling the problem incredibly slowly.

Meanwhile, even your average human intelligence has mastered the art of walking down the stairs, thinking of where they're going, and talking at the same time on an instinctual level to the point where it's not even consciously thought about.

How advanced are the state of the art stair climbing robots anyway (basically how many hundred sensors and negative feedback loops depending on those sensors)? Biologically speaking, climbing stairs isn't just a stereotyped motion but additionally needs to constantly adjust for minor imbalances in the inherently unstable human body as well as minor changes in the flatness and elasticity of the ground and poo poo. Anything that can measure only things like the angle between the femur and tibia and the overall body tilt, but not the forces acting on each segment of the leg and foot would be expected to fail miserably. I won't say no amount of AI can control a robot with dumb legs, but I definitely expect that putting a literal bucketload of cheap stretch sensors and poo poo into the leg makes the job orders of magnitude easier.
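The sensor-plus-feedback-loop picture sketches out to something very small in code. Here's a toy version; the gains, the one-line "plant", and the load sensor are all made up for illustration and nothing like a real controller:

```python
# Toy negative-feedback loop: one robot knee joint tracking a target angle.
# The plant model, gains, and sensor behaviour are all invented for illustration.

def control_step(angle, load, target, kp=0.5, k_load=0.1):
    """Return a torque correction from the joint angle plus stretch-sensor load."""
    error = target - angle               # how far the joint is from where we want it
    return kp * error - k_load * load    # load sensing damps the correction

angle, load = 0.0, 0.0
for _ in range(50):                      # iterate the loop: the joint settles on target
    torque = control_step(angle, load, target=30.0)
    angle += torque                      # crude plant: applied torque moves the joint
    load = 0.2 * abs(torque)             # more torque -> more measured leg strain

print(round(angle, 1))                   # settles at the 30-degree target
```

The extra load term is the whole point being made above: the more the leg itself can report, the less cleverness the controller needs.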

suck my woke dick fucked around with this message at 13:55 on Nov 28, 2016

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Reveilled posted:

I can't say I've ever met anybody like this to tell them. Do they really exist? The mainstream of AI ethics right now seems to be debating how big of a threat to humanity a general intelligence could be, rather than the ethical implications of loving one.

Eh. I expect they're probably not actually representative of all AI ethics as a field, but much like the well known loudmouths in other fields they're very busy writing terrible guest articles and being interviewed for news website sci/tech and culture sections to the point where they drown out everything else.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
Stairs are more a problem for sensors and mechanical components than for the AI, IMO. And it's not like it's an unsolvable problem:

https://www.youtube.com/watch?v=tf7IEVTDjng&t=89s

Senor Tron
May 26, 2006


KOTEX GOD OF BLOOD posted:

Let's first define what you mean by "human level AI," and by this I think you mean AI that causes a mind in the same way that a human brain causes a mind. I am going by this definition because no AI could truly be said to be "human-level" if it was incapable of semantic understanding.

What do I mean by this? John Searle's Chinese Room example is highly controversial these days but I think it still holds water. In part:


Note, however, that Searle is dealing with the method by which we program AI today: with formal syntactic rules in a programming language. But within these bounds, I do agree that it seems impossible to create a "strong AI."

This does leave open the potential for another process to create a strong AI: for instance, some method for simulating the processes of a human brain on a silicon chip. We are far, far away from this possibility, both in terms of processing power and in terms of our understanding of the human brain, and how it causes a mind.

But even if we get to this point, there is no real way to know if what we have created truly is a mind in the same way that we experience it. Everyone knows Descartes' famous line: "I think, therefore I am"; in other words, the only thing we can truly trust is our own mind as we experience it. This is in part because we experience our minds in a fundamentally different way from everything around us. If you buy Descartes' analysis, we don't even know if our brains really do cause our minds: we can look at a brain scan and see areas of the brain light up as we think and feel different things, but from a formal, epistemological perspective, that doesn't necessarily tell us that our brains cause our minds, our consciousness, our "souls." Similarly, we may create some things that emulate or simulate humanity, but we may never know whether we have truly created a "mind" in the same way that we experience it, precisely because we can't experience other "minds," even those we presume to be genuine other minds like those of the people around us.

For further reading, start with the Wikipedia article on Hubert Dreyfus's critique of AI. He focuses more on the failed promises of AI in the past and some of the false assumptions strong-AI proponents make about minds. The caveat here is that, like Searle, he is really talking about our contemporary conception of AI, as syntactic programs being run on silicon chips, and not really talking about potential developments in the future that really could duplicate the internal processes of a brain, were we to ever understand those processes. But this is all very, very far away from where we are now and would still operate on some quite tenuous assumptions about brains and minds, which is my answer to your original question.

The follow-on from that is to imagine systems where a group of humans simulates a living creature's brain.

For example a house cat is estimated to have 760,000,000 neurons.

So let's imagine we got the entire population of Earth to simulate a cat's brain. We have developed some amazing future scanning technology that allows us to take a snapshot, and we build a perfect model using 760,000,000 people who have access to some kind of email/pager system that they use to receive and send signals to their other connections. The remaining 5+ billion people act as error checkers, fixing holes in the network as they arise, bringing food and water to the participants and so on.

It seems widely accepted that many other animals have consciousness. In this situation, where our 760,000,000 people are replicating the behaviour of a cat's brain, does that replica have consciousness? If so, where does it reside?
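The pager network maps almost line-for-line onto a toy message-passing simulation. Here's a three-person version, with made-up wiring, weights and thresholds standing in for the 760,000,000:

```python
# Toy version of the pager-network cat brain: each "person" is an
# integrate-and-fire unit who pages their downstream connections on firing.
# Three people in a chain; weights and thresholds are invented for illustration.

from collections import defaultdict

connections = {0: [1], 1: [2], 2: []}  # who pages whom
potential = defaultdict(float)          # each person's running tally
THRESHOLD = 1.0

def deliver(messages, fired):
    """One round of pager traffic: accumulate signal, fire, forward."""
    outbox = []
    for target, weight in messages:
        potential[target] += weight
        if potential[target] >= THRESHOLD:   # person "fires"...
            potential[target] = 0.0          # ...resets their tally...
            fired.append(target)             # ...and pages everyone downstream
            outbox += [(nxt, 1.0) for nxt in connections[target]]
    return outbox

fired = []
msgs = [(0, 1.2)]        # an external stimulus pages person 0
while msgs:
    msgs = deliver(msgs, fired)
print(fired)             # the spike propagates down the chain
```

Each "person" only follows local rules (accumulate, compare, page), which is exactly what makes the question of where the consciousness would live so uncomfortable.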

KOTEX GOD OF BLOOD
Jul 7, 2012

Dolash posted:

The answer to the Chinese Room Puzzle and the Turing Test stuff, to me, is that it's irrelevant if what's going on is "true" intelligence or if it has the "spark" of life or is "self-aware" or "human-level" or any of the other terms we try to use for the ineffable difference between us and the toaster.

We'll have intelligence when we build something that we're not comfortable treating as not intelligent. If you want to get Cartesian you don't know for sure that anyone's intelligent except maybe yourself, you extend recognition to others because you have to and because it's hard not to when they demand it. If machines are made that are sufficiently autonomous and convincing then we might as well recognize them once it becomes socially awkward not to and leave the fine detail work to the philosophers. To that end, we'll make bigger gains in public perception with things that aren't even part of the core intelligence question, like improved natural language skills and better-engineered body-language.
It's relevant because the OP asked about human-level AI, not a Turing-safe simulation that approximates human responses. Even then, because of the inability of any AI technology we know of right now to understand and interpret meaning, past a certain point you are going to have a really hard time making something capable of convincingly simulating a system that understands. Passing a Turing test is a lot different from building a system capable of organically being a celebrated poet or artist in a way that isn't just mathematically miming others' art but is a real expression of feeling – or even just simulating this.

Senor Tron posted:

The follow-on from that is to imagine systems where a group of humans simulates a living creature's brain.

For example a house cat is estimated to have 760,000,000 neurons.

So let's imagine we got the entire population of Earth to simulate a cat's brain. We have developed some amazing future scanning technology that allows us to take a snapshot, and we build a perfect model using 760,000,000 people who have access to some kind of email/pager system that they use to receive and send signals to their other connections. The remaining 5+ billion people act as error checkers, fixing holes in the network as they arise, bringing food and water to the participants and so on.

It seems widely accepted that many other animals have consciousness. In this situation, where our 760,000,000 people are replicating the behaviour of a cat's brain, does that replica have consciousness? If so, where does it reside?
You are making several of the bad assumptions here which Dreyfus describes, or at minimum the biological and psychological assumptions.

Senor Tron
May 26, 2006


KOTEX GOD OF BLOOD posted:

You are making several of the bad assumptions here which Dreyfus describes, or at minimum the biological and psychological assumptions.

In engineering terms I'm sure it's extremely difficult/practically impossible, but is there any theoretical reason why, assuming we could scan the current state of an animal's brain exactly, we couldn't simulate it in such a way?

KOTEX GOD OF BLOOD
Jul 7, 2012

Again, that relies on several assumptions about things we don't understand yet with regard to how brains cause minds. Think about the kind of knowledge that would be required for this simulation and you begin to see what I mean.

Dolash
Oct 23, 2008

aNYWAY,
tHAT'S REALLY ALL THERE IS,
tO REPORT ON THE SUBJECT,
oF ME GETTING HURT,


Cingulate posted:

Qualia isn't equal to information processing.

Qualia isn't real / isn't important / isn't provable in anyone outside yourself / can be done without for the sake of social graces if the inert thing in front of you insists on its autonomy. Take your pick.

KOTEX GOD OF BLOOD posted:

It's relevant because the OP asked about human-level AI, not a Turing-safe simulation that approximates human responses. Even then, because of the inability of any AI technology we know of right now to understand and interpret meaning, past a certain point you are going to have a really hard time making something capable of convincingly simulating a system that understands. Passing a Turing test is a lot different from building a system capable of organically being a celebrated poet or artist in a way that isn't just mathematically miming others' art but is a real expression of feeling – or even just simulating this.

And I'm saying it's an illusion to go after the mythic "human-level AI", because in practice we'll settle for something less or different than ourselves. We don't test everyone we meet to see if they have "true understanding" by getting them to create some original poetry, we just see they appear to be humans and figure their experience must therefore be close to ours, so we extend a courtesy to them. A machine that speaks, travels, asserts its autonomy and protects its existence - whether it does these things by 'simulating' rather than 'truly understanding' - is from an outside perspective about as human as the man on the street.

It's like the "what is knowledge?" debate, where people keep insisting that knowledge has a special status beyond justified true belief. It's reliant on some unverifiable, internal but also universal properties that we can never seem to measure, yet somehow we get by day-to-day without a bullet-proof definition of knowledge because those justified true beliefs are close enough.

Sundae
Dec 1, 2005
For funsies' sake, is anyone up for defining clear, measurable criteria by which you will say "Yes, this AI is human-level" to even tell if you've accomplished the task in the end? :v:

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





Senor Tron posted:

In engineering terms I'm sure it's extremely difficult/practically impossible, but is there any theoretical reason why, assuming we could scan the current state of an animal's brain exactly, we couldn't simulate it in such a way?

just replicating a cat brain isn't ai, it's biology. if you can simulate a cat brain in another medium then you're onto something

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





Sundae posted:

For funsies' sake, is anyone up for defining clear, measurable criteria by which you will say "Yes, this AI is human-level" to even tell if you've accomplished the task in the end? :v:

a lot of philosophy says there are no possible criteria, so probably not

Cingulate
Oct 23, 2012

by Fluffdaddy

Dolash posted:

Qualia isn't real / isn't important / isn't provable in anyone outside yourself / can be done without for the sake of social graces if the inert thing in front of you insists on its autonomy. Take your pick.
None of the picks, and the main point was that your post was confused about the difference between qualia and information processing.

KOTEX GOD OF BLOOD posted:

You are making several of the bad assumptions here which Dreyfus describes, or at minimum the biological and psychological assumptions.
Alternatively,
http://faculty.ucr.edu/~eschwitz/SchwitzAbs/USAconscious.htm

Sethex
Jun 2, 2008

by FactsAreUseless
I'm gonna say decades or sooner.

The 'go' playing AI was a general intelligence and not really programmed with much structure.

It viewed a series of go games, played a bunch of games against itself, then smurfed a bunch of humans.
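As a hedged illustration of what "played against itself a bunch" means mechanically, here is self-play value learning on a toy game (Nim with 5 stones, take 1 or 2, last stone wins). The numbers are arbitrary and this is nowhere near AlphaGo's actual architecture:

```python
# Toy self-play learner: the agent plays both sides of Nim(5) against itself
# and learns a value table for "how good is it to be the player to move here".
# Learning rate, exploration rate, and game count are invented for illustration.

import random

random.seed(0)
V = {0: -1.0}                # 0 stones left on your turn = you already lost
for n in range(1, 6):
    V[n] = 0.0               # unknown positions start neutral

def best_move(n):            # leave the opponent in the worst position you can
    moves = [m for m in (1, 2) if m <= n]
    return min(moves, key=lambda m: V[n - m])

for _ in range(2000):        # self-play with a little random exploration
    n, history = 5, []
    while n > 0:
        m = random.choice((1, 2)) if random.random() < 0.3 else best_move(n)
        m = min(m, n)
        history.append(n)    # remember whose-turn states along the way
        n -= m
    result = 1.0             # the player who moved last took the last stone
    for state in reversed(history):
        V[state] += 0.1 * (result - V[state])  # nudge value toward the outcome
        result = -result     # alternate perspective each ply

print(best_move(5))          # learned opening move from 5 stones
```

After a couple thousand games the table alone tells it to take 2 from the opening position, leaving the opponent on the losing count of 3, with no Nim strategy ever programmed in.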

Recursive learning, although narrow atm, is already here.

I also don't doubt that ai will be a thing.

High resolution brain emulation could be a thing in a few decades, giving your ghost a path to immortality.
It opens the possibility of a bunch of things, like having a philosophical discussion on combating entropy, or having a reflection of yourself enjoying 24/7 VR sex orgy minigames.

Idk go read the control problem if you want these ideas fleshed out better with more effort.

Sethex
Jun 2, 2008

by FactsAreUseless

Condiv posted:

the shockingly beatable intelligence of the average human is actually beyond our most advanced computers at the moment and we're not even close to being able to simulate such intelligence. also, the AIs that are "creating" art really aren't, they are simulating "art" based on their creators' notions and preconceptions about what art is and what is worthwhile art.

UR RIGHT, every human that picks up an instrument/brush was doing so without a prior influence or cognitive sample that they were emulating

Dolash
Oct 23, 2008

aNYWAY,
tHAT'S REALLY ALL THERE IS,
tO REPORT ON THE SUBJECT,
oF ME GETTING HURT,


Cingulate posted:

None of the picks, and the main point was that your post was confused about the difference between qualia and information processing.

If the question is "how do we get human-level qualia for an artificial intelligence?", the answer is "you can't, it doesn't matter". You can't even prove that other human beings are experiencing qualia distinct from information processing, so it's a pointless goal for AI.

Rush Limbo
Sep 5, 2005

its with a full house

blowfish posted:

How advanced are the state of the art stair climbing robots anyway (basically how many hundred sensors and negative feedback loops depending on those sensors)? Biologically speaking, climbing stairs isn't just a stereotyped motion but additionally needs to constantly adjust for minor imbalances in the inherently unstable human body as well as minor changes in the flatness and elasticity of the ground and poo poo. Anything that can measure only things like the angle between the femur and tibia and the overall body tilt, but not the forces acting on each segment of the leg and foot would be expected to fail miserably. I won't say no amount of AI can control a robot with dumb legs, but I definitely expect that putting a literal bucketload of cheap stretch sensors and poo poo into the leg makes the job orders of magnitude easier.

It's also the case in, say, video games and other AI simulations that they handle even fairly basic things like stairs incredibly badly, to the point where various workarounds are created, including just plain ignoring such changes in elevation from a purely AI standpoint and simply moving the characters while letting animation take over.

A Wizard of Goatse
Dec 14, 2014

Rush Limbo posted:

It's also the case in, say, video games and other AI simulations that they handle even fairly basic things like stairs incredibly badly, to the point where various workarounds are created, including just plain ignoring such changes in elevation from a purely AI standpoint and simply moving the characters while letting animation take over.

Videogame AI is meant to be the cheapest possible thing that can move a lot of pixel terrorists from the spawn point to in front of your ray gun without clipping through the walls, though; the people programming it are working towards completely unrelated priorities from the guy making an accurate stair-climbing simulator, or the guy making a robot that can efficiently climb actual stairs. If you want a robot that can go up stairs, but it doesn't need to do it from 50 different angles simultaneously and also render a simulated chainsaw bloodspray in 1080p with dynamic lighting effects on all the droplets, a $5 Arduino, some gyroscopes, and two or three hundred lines of code will do the trick. If you need it to climb stairs and also do everything else a human pair of legs might possibly successfully do in a lifetime, and on-the-fly generate and execute a reasonable response to leg emergencies the manufacturer never planned for, you're better off just getting on OKCupid and making a fresh human with its own built-in set of legs.

A Wizard of Goatse fucked around with this message at 23:41 on Nov 28, 2016

Slow News Day
Jul 4, 2007

What we think the future will be like is probably about as accurate as what people in the 1950s thought the future would be like: grossly, sometimes comically inaccurate.

We can reasonably predict what 2021 will look like. Beyond that, it's anybody's guess.

Slow News Day fucked around with this message at 23:40 on Nov 28, 2016

Cingulate
Oct 23, 2012

by Fluffdaddy

Dolash posted:

If the question is "how do we get human-level qualia for an artificial intelligence?"
It's not.

Xae
Jan 19, 2005

Rush Limbo posted:

It's also the case in, say, video games and other AI simulations that they handle even fairly basic things like stairs incredibly badly, to the point where various workarounds are created, including just plain ignoring such changes in elevation from a purely AI standpoint and simply moving the characters while letting animation take over.

Video game "AI" is its own can of worms and has no relation to academic or industrial AI. For one thing, the goal in video game AI isn't to win. Creating a bot that beats the poo poo out of humans is trivial. The hard part is creating one that "acts" human without kicking the human's rear end.



The thing to keep in mind about AI is that we keep shifting the goalposts.

By the standards of early computer scientists, Google, Wolfram Alpha, Alexa and a half dozen other things are AI.

I expect this will continue indefinitely. We will always view "AI" as something to come, never what we have now.

Blue Star
Feb 18, 2013

by FactsAreUseless

enraged_camel posted:

What we think the future will be like is probably about as accurate as what people in the 1950s thought the future would be like: grossly, sometimes comically inaccurate.

We can reasonably predict what 2021 will look like. Beyond that, it's anybody's guess.

I don't think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000: there's way less progress. Yeah, computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And... that's about it.

The only really significant progress has been in computer chips. But now even that is ending, since Moore's Law will stop soon if it hasn't already. All the heady progress in computers that has been made over the past few decades will now stop. Computers in 30 years will probably be barely any better than computers today. Video game graphics probably aren't going to get any better. We're probably never going to be able to emulate a mammalian brain, even that of a mouse, let alone a human. And other fields, such as medicine, will be even slower. Drug development has slowed down dramatically.

2016 is basically 1986 except we got tablets and cellphones and social media. I think 2046 will be like 2016 almost exactly, at least technology-wise.

Sethex
Jun 2, 2008

by FactsAreUseless

Rush Limbo posted:

It's also the case in, say, video games and other AI simulations that they handle even fairly basic things like stairs incredibly badly, to the point where various workarounds are created, including just plain ignoring such changes in elevation from a purely AI standpoint and simply moving the characters while letting animation take over.



Video game AI is a bad metric for measuring where we are regarding AI.

Development costs constrain stair AI. So does making it run as a side process on $400 consumer platforms.

Cingulate
Oct 23, 2012

by Fluffdaddy

Blue Star posted:

I don't think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000: there's way less progress. Yeah, computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And... that's about it.

The only really significant progress has been in computer chips. But now even that is ending, since Moore's Law will stop soon if it hasn't already. All the heady progress in computers that has been made over the past few decades will now stop. Computers in 30 years will probably be barely any better than computers today. Video game graphics probably aren't going to get any better. We're probably never going to be able to emulate a mammalian brain, even that of a mouse, let alone a human. And other fields, such as medicine, will be even slower. Drug development has slowed down dramatically.

2016 is basically 1986 except we got tablets and cellphones and social media. I think 2046 will be like 2016 almost exactly, at least technology-wise.
Literally everything changed within the decade since the introduction of the iPhone, much like literally everything changed in the decade after the internet went mainstream in the 90s.
Next: an election run via twitter, self-driving cars, computers that talk to you, the end of coal in the West, ...

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Blue Star posted:

I don't think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000: there's way less progress. Yeah, computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And... that's about it.

Is this a joke post? Did you go out of your way to list mostly things from the 1800s as being from 1900 to 1950s?

Unormal
Nov 16, 2004

Mod sass? This evening?! But the cakes aren't ready! THE CAKES!
Fun Shoe
High level AI is a zillion years away.

With no irony 5 minutes later opens up Google and types "that movie with the thing" and gets what he was looking for.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




The answer to this entire discussion is 'quit listening to the poo poo waterfall that pours out of Big Yud'.

Human-intelligence AI will likely exist eventually as an accidental byproduct of something else more useful. However, our current technology and understanding of sentience are such that we can't even predict what we don't know with any accuracy at this point.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Unormal posted:

High level AI is a zillion years away.

With no irony 5 minutes later opens up Google and types "that movie with the thing" and gets what he was looking for.

I honestly think "AI" is more of a looks thing than anything meaningful. If you had a truly aware creature but it responded via database queries, no one would ever rank it as AI, even if it could do literally every single thing a human could. But I bet if you stuck a 1990s chatbot in a realistic-looking robot with a nice voice, you'd have people arguing it deserves rights.

Senor Tron
May 26, 2006


Blue Star posted:

2016 is basically 1986 except we got tablets and cellphones and social media.

Industrial-scale cloning is becoming a thing, genetically engineered crops are a fact of life rather than a terrifying novelty, and we have had a continuously manned outpost in orbit for fifteen years. While manned spaceflight has regressed beyond the space station, probe technology has made a leap forward. Rather than short-lived landers there are mobile robotic rovers on Mars, one of which has been operating continuously for over a decade. Dismissing tablets and cellphones like that is a mistake: in the first world almost every individual has easy and cheap access to the sum total of human knowledge.

Communication is an order of magnitude easier. Twenty years ago international calls were expensive; I remember my mum having to ration herself to one Sunday evening call home to her sisters each week. Now you can contact someone on the other side of the planet as much as you want for no more than the cost of your monthly internet bill, or just use one of the widespread free wifi spots.

Remember science fiction with videophones on the wall? They are now a mundane fact of life.

3D printing is rapidly becoming a serious technology; school students have access to industrial prototyping tools that would have been unimaginable thirty years ago, and when they want to do something with that work they can buy, with pocket change, something as powerful as a 1980s supercomputer to be the brains for it.

Electric vehicles have become big business. In many areas, house roofs are covered with cheap solar panels.

While cancer hasn't been cured, treatment and survival rates have drastically improved. If caught early, HIV can be controlled to the point where it isn't the death sentence it once was.

Virtual reality is now a consumer technology. Never mind the high end: cheap smartphone headsets give an experience better than that available at any cost in the 1980s. Those same supercomputers in our pockets have AR capabilities unimaginable in the 1980s. Ever used Google Translate? You can point your device at text in almost any language and see it instantly changed to the language of your choice. Not just translated, actually visually changed to become something you can read.

While living through the changes it's easy to overlook everything that is happening in the world, but I would argue the 1986-to-2016 difference is much larger than the 1956-to-1986 one. I mean, what changed for the average person between 1956 and 1986 in terms of new technology? Televisions got better, audio players got smaller, phones got a bit better, and computers shrank to where people could have them in their homes for only a few hundred dollars. But if you are talking about day-to-day life, that period was more of an evolution, whereas the information age has been a revolution.

Slow News Day
Jul 4, 2007

Blue Star posted:

I don't think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000: there's way less progress. Yeah, computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And... that's about it.

The only really significant progress has been in computer chips. But now even that is ending, since Moore's Law will stop soon if it hasn't already. All the heady progress in computers that has been made over the past few decades will now stop. Computers in 30 years will probably be barely any better than computers today. Video game graphics probably aren't going to get any better. We're probably never going to be able to emulate a mammalian brain, even that of a mouse, let alone a human. And other fields, such as medicine, will be even slower. Drug development has slowed down dramatically.

2016 is basically 1986 except we got tablets and cellphones and social media. I think 2046 will be like 2016 almost exactly, at least technology-wise.

You're wrong. Take a look at this. Yes, it's from "singularityhub dot com" but it makes a lot of good points and has charts and stuff.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Owlofcreamcheese posted:

I honestly think "AI" is more of a looks thing than anything meaningful. If you had a truly aware creature but it responded via database queries no one would ever rank it as AI even if it could do literally every single thing a human could but I bet if you stuck a 1990s chatbot in a realistic looking robot with a nice voice you'd have people arguing it deserves rights.

People already are.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Senor Tron posted:

Industrial-scale cloning is becoming a thing, genetically engineered crops are a fact of life rather than a terrifying novelty, and we have had a continuously manned outpost in orbit for fifteen years. While manned spaceflight beyond the space station has regressed, probe technology has leapt forward: rather than short-lived landers, there are mobile robotic rovers on Mars, one of which has been operating continuously for over a decade. Dismissing tablets and cellphones like that is a mistake; in the first world almost every individual has easy and cheap access to the sum total of human knowledge.

Communication is an order of magnitude easier. Twenty years ago international calls were expensive; I remember my mum having to ration herself to one Sunday evening call home to her sisters each week. Now you can contact someone on the other side of the planet as much as you want for no more than the cost of your monthly internet bill, or just use one of the widespread free wifi spots.

Remember science fiction with videophones on the wall? They are now a mundane fact of life.

3D printing is rapidly becoming a serious technology. School students have access to industrial prototyping tools that would have been unimaginable thirty years ago, and when they want to do something with that work they can buy, for pocket change, something as powerful as a 1980s supercomputer to be the brains for it.

Electric vehicles have become big business. In many areas, house roofs are covered with cheap solar panels.

While cancer hasn't been cured, treatment and survival rates have drastically improved. If caught early, HIV can be controlled to the point where it isn't the death sentence it once was.

Virtual reality is now a consumer technology. Never mind the high end: cheap smartphone headsets give an experience better than anything available at any cost in the 1980s. Those same supercomputers in our pockets have AR capabilities unimaginable back then. Ever used Google Translate? You can point your device at text in almost any language and see it instantly changed to the language of your choice. Not just translated, but actually visually changed into something you can read.

While living through the changes it's easy to overlook everything that is happening in the world, but I would argue the 1986-to-2016 difference is much larger than the 1956-to-1986 one. I mean, what changed for the average person between 1956 and 1986 in terms of new technology? Televisions got better, audio players got smaller, phones got a bit better, and computers shrank to where people could have them in their homes for only a few hundred dollars. But if you are talking about day-to-day life, that period was more of an evolution, whereas the information age has been a revolution.

However, because the western middle class is becoming poorer relative to the western upper class, people feel like they are getting the same poo poo as in 1980, except slightly smaller/faster and made in China.

A Wizard of Goatse
Dec 14, 2014

Blue Star posted:

I don't think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000: there's way less progress. Yeah, computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And... that's about it.

The only really significant progress has been in computer chips. But now even that is ending, since Moore's Law will stop soon if it hasn't already. All the heady progress in computers that has been made over the past few decades will now stop. Computers in 30 years will probably be barely any better than computers today. Video game graphics probably aren't going to get any better. We're probably never going to be able to emulate a mammalian brain, even that of a mouse, let alone a human. And other fields, such as medicine, will be even slower. Drug development has slowed down dramatically.

2016 is basically 1986 except we got tablets and cellphones and social media. I think 2046 will be like 2016 almost exactly, at least technology-wise.

how old are you?

INH5
Dec 17, 2012
Error: file not found.

Senor Tron posted:

While living through the changes it's easy to overlook everything that is happening in the world, but I would argue the 1986-to-2016 difference is much larger than the 1956-to-1986 one. I mean, what changed for the average person between 1956 and 1986 in terms of new technology? Televisions got better, audio players got smaller, phones got a bit better, and computers shrank to where people could have them in their homes for only a few hundred dollars. But if you are talking about day-to-day life, that period was more of an evolution, whereas the information age has been a revolution.

Just from memory + 15 minutes of research on Wikipedia:

Unless I missed something in my research, jet airliners basically didn't exist in 1956 but were very common in 1986.

Office workers still mostly used typewriters in 1956. By 1986, typewriters had largely been replaced by computers and dedicated electronic word processors.

In terms of communication technology, while I can't say for sure because I didn't exist back then, pagers and answering machines (the latter existed in 1956, but became much more common by 1986) must have had an impact at least comparable to social media. Also, from what I gather, telegraph technologies greatly improved over the period. My mother is a computer programmer who started working in the 1970s, and she told me once that back then big companies had access to Telex systems that were pretty much equivalent to email.

Portable audio recorders and players were expensive and pretty much only used by professional reporters in 1956. By 1986, portable cassette players/recorders were extremely common. I've used both an iPod and an actual 1980s Walkman, so I can say from experience that the difference between them in terms of user experience isn't all that great.

Cable TV, VCRs, and video rental stores all came into existence between 1956 and 1986. Again, I didn't exist back then, but the impact on personal entertainment must have been at least comparable to TiVo, Netflix, and Amazon Instant.

Plus there's this.

I'm not saying that I agree with Blue Star re: technological stagnation in general (I'd have to do a lot more research before I'd feel comfortable making a verdict), but 1956-1986 saw the introduction or at least widespread adoption of plenty of revolutionary technologies.

INH5 fucked around with this message at 04:25 on Nov 29, 2016

Condiv
May 7, 2008

Sorry to undo the effort of paying a domestic abuser $10 to own this poster, but I am going to lose my dang mind if I keep seeing multiple posters who appear to be Baloogan.

With love,
a mod


Sethex posted:

UR RIGHT every human that picks up an instrument/brush was doing so without a prior influence or cognitive sample that they were emulating from

All of them came to their own conclusions about what art is and pursued it, as opposed to a neural net, which can only reflect the biases of its creators.

Dog Jones
Nov 4, 2005

by FactsAreUseless

Condiv posted:

All of them came to their own conclusions about what art is and pursued it, as opposed to a neural net, which can only reflect the biases of its creators.

Actually, most artists don't know what the gently caress they are doing or why.

Also, a neural network doesn't necessarily reflect the biases of its creators either. A neural network is trained using a method you might imagine as similar to Pavlovian conditioning: the net is exposed to a set of training data, and its weights are reinforced if it produces a desirable answer and adjusted if it produces an undesirable one. Eventually the network is arranged in a way that transforms the input into the corresponding desirable output. It is an uninteresting (in my opinion) numerical technique which results in a system that happens to be correct to within a given error. It has more in common with something like a linear regression than with human intelligence, despite its name.
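
[A minimal sketch, not from the thread, of the kind of training loop described above: a tiny feed-forward network whose weights are nudged toward whatever the trainer labels the "desirable answer" (here, the XOR function). The layer sizes, learning rate, and iteration count are illustrative assumptions, not anyone's actual setup.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs, plus the trainer-chosen "desirable" outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units; weights start random.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: input -> hidden layer -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Gradient descent on the error plays the role of
    # "reinforce if desirable, adjust if undesirable".
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# After training, the network's outputs reflect the labels it was fed:
# change the "desirable" answers in y and it learns a different function.
preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())
```

Note the asymmetry both posters are circling: the fitting procedure is generic, but the choice of what counts as a "desirable answer" (the `y` array) is entirely the trainer's.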

Condiv
May 7, 2008

Sorry to undo the effort of paying a domestic abuser $10 to own this poster, but I am going to lose my dang mind if I keep seeing multiple posters who appear to be Baloogan.

With love,
a mod


Dog Jones posted:

Actually, most artists don't know what the gently caress they are doing or why.

Also, a neural network doesn't necessarily reflect the biases of its creators either. A neural network is trained using a method you might imagine as similar to Pavlovian conditioning: the net is exposed to a set of training data, and its weights are reinforced if it produces a desirable answer and adjusted if it produces an undesirable one. Eventually the network is arranged in a way that transforms the input into the corresponding desirable output. It is an uninteresting (in my opinion) numerical technique which results in a system that happens to be correct to within a given error. It has more in common with something like a linear regression than with human intelligence, despite its name.

Yes, I'm aware of how neural networks work; I've created one of my own. The desirable/undesirable answer is where the creators' biases are introduced, and why the network ends up reflecting those who train it.

As for your "artists don't know what they're doing" argument: they don't need to for cognition to be there. Not all cognition is conscious.


TheNakedFantastic
Sep 22, 2006

LITERAL WHITE SUPREMACIST

INH5 posted:

Just from memory + 15 minutes of research on Wikipedia:

Unless I missed something in my research, jet airliners basically didn't exist in 1956 but were very common in 1986.

Office workers still mostly used typewriters in 1956. By 1986, typewriters had largely been replaced by computers and dedicated electronic word processors.

In terms of communication technology, while I can't say for sure because I didn't exist back then, pagers and answering machines (the latter existed in 1956, but became much more common by 1986) must have had an impact at least comparable to social media. Also, from what I gather, telegraph technologies greatly improved over the period. My mother is a computer programmer who started working in the 1970s, and she told me once that back then big companies had access to Telex systems that were pretty much equivalent to email.

Portable audio recorders and players were expensive and pretty much only used by professional reporters in 1956. By 1986, portable cassette players/recorders were extremely common. I've used both an iPod and an actual 1980s Walkman, so I can say from experience that the difference between them in terms of user experience isn't all that great.

Cable TV, VCRs, and video rental stores all came into existence between 1956 and 1986. Again, I didn't exist back then, but the impact on personal entertainment must have been at least comparable to TiVo, Netflix, and Amazon Instant.

Plus there's this.

I'm not saying that I agree with Blue Star re: technological stagnation in general (I'd have to do a lot more research before I'd feel comfortable making a verdict), but 1956-1986 saw the introduction or at least widespread adoption of plenty of revolutionary technologies.

In general, the most profound changes of the last couple of decades revolve around the internet and shifts that are less tangibly material. We're living through one of the largest social and economic upheavals in human history, but these changes are more subtle than a new electronic media player you can hold in your hand.
