Cingulate
Oct 23, 2012

by Fluffdaddy

blowfish posted:

What do you mean by "massive gains"? How do you quantify how close an AI is to becoming superhuman (as you pointed out, giving the AI an arbitrary number of extra processors doesn't make it superhuman by itself)? How do you define superhuman?
I don't, really. What I mean by massive gains is, we've started improving our cognition with education and writing and organized science, and suddenly we're on the loving moon.

We are superhuman, if you will. AIs are, as has been repeatedly pointed out in here, in some areas already superhuman, but nobody can really see anything scalable that is as good at general, non-specialized cognition as humans are.

Edit: argh, I thought you were responding to a different post of mine.

By massive gains I basically mean that within a few years, beating humans at a bunch of well-defined tasks has gone from a pipe dream to a Google Engineer's weekend job. I think a big symbol is the 2012 ILSVRC win by a deep conv net. Since then, everything has been deep learning everywhere, and now nobody is really surprised anymore by AlphaGo and self-driving cars.

And as I said, the interesting thing would be something that is as general and as capable as humans on a scalable architecture. If you have a massive supercomputer that is just as smart as a human being and whose performance grows logarithmically with added nodes, that doesn't really change much, but if you have something you can easily grow linearly, then things will very rapidly begin to change.

Now look at exponential growth and we have guaranteed Skynet by Tuesday, which is what I think a bunch of AI fanatics are talking about, but how realistic is superlinear growth given the limits of physics (which also matter for information processing)?
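
To make the scaling point concrete, here's a minimal sketch in plain Python. The three capability curves are made-up illustrative functions, not measurements of any real system; they just show how differently things play out if capability grows logarithmically, linearly, or exponentially with added nodes.

code:

# Toy comparison of how "capability" might grow with added nodes under
# three hypothetical scaling regimes. The curves are assumptions for
# illustration only, not properties of any real architecture.
import math

def capability(nodes, regime):
    if regime == "logarithmic":
        return math.log2(nodes + 1)   # diminishing returns per extra node
    if regime == "linear":
        return float(nodes)           # every node adds the same amount
    if regime == "exponential":
        return 2.0 ** (nodes / 10.0)  # self-reinforcing growth
    raise ValueError(regime)

for nodes in (10, 100, 1000):
    print(nodes, {r: round(capability(nodes, r), 2)
                  for r in ("logarithmic", "linear", "exponential")})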

Cingulate fucked around with this message at 23:07 on Nov 27, 2016

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

Owlofcreamcheese posted:

Yeah, of course. We designed it that way. It's the same way we set up all the olympic events so none of them require flight or staying underwater for hours or anything. It's not that humans have a general physical ability, it's that it'd be a waste of time to have olympic events absolutely no one could compete in. Human intelligence isn't actually general, we just don't even bother to do the tests for the stuff that people clearly can't do.

That example actually reveals why this isn't a useful way of looking at it: you're saying that only God has traits that could be described as "general".

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Thug Lessons posted:

That example actually reveals why this isn't a useful way of looking at it: you're saying that only God has traits that could be described as "general".

That seems extremely reasonable. No one would expect there to be a physical device that can do every physical task. It seems perfectly fine to claim different designs for information processing are better or worse at different things. And no design is usable for everything.

Cingulate
Oct 23, 2012

by Fluffdaddy

Owlofcreamcheese posted:

That seems extremely reasonable. No one would expect there to be a physical device that can do every physical task. It seems perfectly fine to claim different designs for information processing are better or worse at different things. And no design is usable for everything.
Human cognition is reasonably described as general. See Kahneman's Thinking, Fast and Slow, and specifically System 2 vs. System 1, as a fairly prominent discussion. The contrast is particularly striking when comparing it to AI, which is a bunch of systems all of which excel at one task each, with most of these tasks falling under the umbrella of System 1.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Owlofcreamcheese posted:

It seems perfectly fine to claim different designs for information processing are better or worse at different things. And no design is usable for everything.

Well, the idea is to have a general and very versatile AI that can do everything somewhat worse than specialised AI and then just have it run on a higher-clocked CPU to make it work.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Cingulate posted:

Human cognition is reasonably described as general. See Kahneman's Thinking, Fast and Slow, and specifically System 2 vs. System 1, as a fairly prominent discussion. The contrast is particularly striking when comparing it to AI, which is a bunch of systems all of which excel at one task each, with most of these tasks falling under the umbrella of System 1.

Yeah, but what if I told you your brain was just a bunch of systems that excel at one task each and are terrible at anything else.

Cingulate
Oct 23, 2012

by Fluffdaddy

Owlofcreamcheese posted:

Yeah, but what if I told you your brain was just a bunch of systems that excel at one task each and are terrible at anything else.
Well then I would first tell you to actually read the book I told you to read before making grand claims about the brain, and then I'd say that this might be true on a local level, in the same sense that a pocket calculator is just a device to shuffle electrons around, but it's wrong as a non-trivial description of human cognition, where we find very clear differences between quasi-modular, highly specialized systems and a general, cross-modal global workspace.

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

Owlofcreamcheese posted:

Yeah, but what if I told you your brain was just a bunch of systems that excel at one task each and are terrible at anything else.

There are psychologists who believe that, primarily a subset of evolutionary psychologists, but it's not even close to an accepted paradigm. The psychology of intelligence in particular is great evidence against it.

Cingulate
Oct 23, 2012

by Fluffdaddy

Thug Lessons posted:

There are psychologists who believe that, primarily a subset of evolutionary psychologists, but it's certainly not an accepted paradigm. The psychology of intelligence in particular is great evidence against it.
The overlap between g-focused IQ researchers and evolutionary psychologists is pretty substantial though :D

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

Cingulate posted:

The overlap between g-focused IQ researchers and evolutionary psychologists is pretty substantial though :D

Yeah but most evolutionary psychologists don't foolishly hinge their work on modularity. That's a small but vocal minority, mostly from UC Santa Barbara.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
To restate the actual point I'm making:

we are never going to hit "human level AI" because human isn't a level, it's a specific implementation.

Owlofcreamcheese fucked around with this message at 00:03 on Nov 28, 2016

Reveilled
Apr 19, 2007

Take up your rifles

Owlofcreamcheese posted:

To restate the actual point I'm making:

AI will never happen because AI is a silly concept based on a hand wavy and fanciful concept of minds.

Someday we could make a cool robot arm, but it won't ever be flesh and blood. Someday we can probably manufacture cool flesh and blood arms but it won't be a robot arm.

You either simulate a human brain to the point it's not AI anymore or you make computer programs and they aren't very much like human minds and don't do the same stuff.

I guess the issue is I fundamentally don't understand where you are getting your definition of AI from; it doesn't look like any definition of AI I've seen outside of science fiction novels. An AI is a constructed entity which can acquire and apply knowledge and skills. Suggesting it's a thing that will never happen is ridiculous because it's something which already exists. The difference between the AIs we have now and the ones being referred to as "human level AI" or "general intelligence" is that they aren't capable of doing certain tasks which have allowed humans to develop beyond a purely animalistic stage. "Human-level AI" does not mean "behaves like a human"; it means "can do productive tasks most humans can do". There's no reason to suppose that a "human-level AI" will have emotions or opinions or empathy or ethics. It could have motivations, but those motivations could be utterly alien to us.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Reveilled posted:

There's no reason to suppose that a "human-level AI" will have emotions or opinions or empathy or ethics. It could have motivations, but those motivations could be utterly alien to us.

:ssh: please don't tell this to robutt ethicists who are already calling for the protection of glorified realdolls humanlike androids against rape by neckbearded basement dwellers. Their heads might explode when they realise they're not actually doing something useful.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

Owlofcreamcheese posted:

To restate the actual point I'm making:

we are never going to hit "human level AI" because human isn't a level, it's a specific implementation.

So, you're basically claiming that in order to have a thing that makes plans, introspects, and reasons, it is a prerequisite to have an organic human brain, made of brain cells, and be raised in a human culture? Pardon me if this seems like rather a broad claim to make.

EDIT: That is to say, how do you know that the only thing that could do those tasks is a flesh and blood human?

DrSunshine fucked around with this message at 00:29 on Nov 28, 2016

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

Owlofcreamcheese posted:

To restate the actual point I'm making:

we are never going to hit "human level AI" because human isn't a level, it's a specific implementation.

In a certain sense there is no such thing as "bear-level strength", but I'm sorry to say that this won't do you any good when you're facing down a bear.

Reveilled
Apr 19, 2007

Take up your rifles

blowfish posted:

:ssh: please don't tell this to robutt ethicists who are already calling for the protection of glorified realdolls humanlike androids against rape by neckbearded basement dwellers. Their heads might explode when they realise they're not actually doing something useful.

I can't say I've ever met anybody like this to tell them. Do they really exist? The mainstream of AI ethics right now seems to be debating how big of a threat to humanity a general intelligence could be, rather than the ethical implications of loving one.

Cingulate
Oct 23, 2012

by Fluffdaddy

Owlofcreamcheese posted:

To restate the actual point I'm making:

we are never going to hit "human level AI" because human isn't a level, it's a specific implementation.
We can imagine an AI that reaches or surpasses humans on every measurable dimension.

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.
It seems likely to me that future AI will eventually resemble humans because a) individuality is extremely adaptive, which is why we're individuals instead of the sea of LCL fluid from Evangelion, and b) humans would prefer it that way.

Willie Tomg
Feb 2, 2006

Owlofcreamcheese posted:

Your brain couldn't render a single frame even if you spent your whole life trying. Human visual processing is pathetic compared to a computer.

The visual sensory organs of living creatures--of which humans are a middling sample--are extraordinarily acute, and to this day the only metric on which technology has reasonably approximated them is resolution, which is a function of display development and not computational development. It has actually actively regressed in terms of ability to display/capture color information (video has a latitude of roughly 3.5 f-stops in either direction, with black being uncorrectably black and white being uncorrectably white), while silver halide records volumes of information through mechanical/chemical processes of which only 20% is actually perceptible by the eye without further processing to bring it into the visible color range. If human visual processing is so deficient, why is it such a bastard of a hurdle when making robots that respond to an array of visual stimuli? Of the five senses, you literally could not have chosen one in which humans have more of an advantage.

Owlofcreamcheese posted:

we are never going to hit "human level AI" because human isn't a level, it's a specific implementation.

you've completely hosed up describing the capacities and practical implementation of any average human eyeball, i am so stoked to hear your opinions on the practical implementation of The Human Being, In General

Reveilled
Apr 19, 2007

Take up your rifles

Thug Lessons posted:

It seems likely to me that future AI will eventually resemble humans because a) individuality is extremely adaptive, which is why we're individuals instead of the sea of LCL fluid from Evangelion, and b) humans would prefer it that way.

If you're interested in this sort of stuff, I'd highly recommend the book Superintelligence by Nick Bostrom. It makes a pretty persuasive case that unless we're super careful about how we carry out AI research, we'll probably only ever make one human-level intelligence (before it kills us). It's really dry at the start as it spends multiple chapters laying out the case for why human-level AI is likely to be developed at some point, but very interesting once it starts discussing the various scenarios that could play out depending on when, where, and how the first AIs are developed, and how different security precautions would or would not work.

Cingulate
Oct 23, 2012

by Fluffdaddy

Willie Tomg posted:

The visual sensory organs of living creatures--of which humans are a middling sample--are extraordinarily acute, and to this day the only metric on which technology has reasonably approximated them is resolution, which is a function of display development and not computational development. It has actually actively regressed in terms of ability to display/capture color information (video has a latitude of roughly 3.5 f-stops in either direction, with black being uncorrectably black and white being uncorrectably white), while silver halide records volumes of information through mechanical/chemical processes of which only 20% is actually perceptible by the eye without further processing to bring it into the visible color range. If human visual processing is so deficient, why is it such a bastard of a hurdle when making robots that respond to an array of visual stimuli? Of the five senses, you literally could not have chosen one in which humans have more of an advantage.


you've completely hosed up describing the capacities and practical implementation of any average human eyeball, i am so stoked to hear your opinions on the practical implementation of The Human Being, In General

Though of course, on hyperspecialized tasks (like rating pixelated images on a monitor), AIs match human beings. (That's not to dispute the actual point. No machine matches the occipital lobe on teraflops/watt, not even close.)

Reveilled posted:

If you're interested in this sort of stuff, I'd highly recommend the book Superintelligence by Nick Bostrom. It makes a pretty persuasive case that unless we're super careful about how we carry out AI research, we'll probably only ever make one human-level intelligence (before it kills us). It's really dry at the start as it spends multiple chapters laying out the case for why human-level AI is likely to be developed at some point, but very interesting once it starts discussing the various scenarios that could play out depending on when, where, and how the first AIs are developed, and how different security precautions would or would not work.
Does Bostrom respond to what I brought up in here? Specifically, that his claims depend, from what I can tell, on linear or better scalability.

Cingulate fucked around with this message at 01:11 on Nov 28, 2016

Willie Tomg
Feb 2, 2006

Owlofcreamcheese posted:

Yeah, but what if I told you your brain was just a bunch of systems that excel at one task each and are terrible at anything else.

Those systems accumulate into the creation of a plastic experience and understanding of reality which, barring the understated and hugely difficult challenge of clinical depression, is incredibly dope, and the fact that you're blowing that part off as NBD is one of those things that says more about your personal values at the moment than about the premise.

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

Willie Tomg posted:

The visual sensory organs of living creatures--of which humans are a middling sample--are extraordinarily acute, and to this day the only metric on which technology has reasonably approximated them is resolution, which is a function of display development and not computational development. It has actually actively regressed in terms of ability to display/capture color information (video has a latitude of roughly 3.5 f-stops in either direction, with black being uncorrectably black and white being uncorrectably white), while silver halide records volumes of information through mechanical/chemical processes of which only 20% is actually perceptible by the eye without further processing to bring it into the visible color range. If human visual processing is so deficient, why is it such a bastard of a hurdle when making robots that respond to an array of visual stimuli? Of the five senses, you literally could not have chosen one in which humans have more of an advantage.

Thanks, I was going to say something like this but I don't understand visual acuity well enough. I think he might mean that it's impossible since we don't have Windows Media Player installed in our brain and a USB slot in our skull, which is of course completely missing the point.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

DrSunshine posted:

So, you're basically claiming that in order to have a thing that makes plans, introspects, and reasons, it is a prerequisite to have an organic human brain, made of brain cells, and be raised in a human culture? Pardon me if this seems like rather a broad claim to make.

I am claiming that humans are not a universal implementation of "planning, reasoning or introspection" and are just a bundle of a fairly arbitrary tool set. And even if you mimicked the skillset, humans are also a very, very arbitrary bundle of "personality" stuff that is even more tied to biology, so you still wouldn't have anything resembling Data from Star Trek.

So basically there isn't one INT score we need to jack up to get a computer to be good at both facial recognition and speaking Spanish, but even if we got a computer that does both by separate means, it'd still be a sucky android, because it still wouldn't 'act' meaningfully human: it wouldn't have our dumb arbitrary biological biases, motivations and parameters, and probably never would unless you just simulated the human brain.

Willie Tomg
Feb 2, 2006

Owlofcreamcheese posted:

So basically there isn't one INT score we need to jack up to get a computer to be good at both facial recognition and speaking Spanish, but even if we got a computer that does both by separate means, it'd still be a sucky android, because it still wouldn't 'act' meaningfully human: it wouldn't have our dumb arbitrary biological biases, motivations and parameters, and probably never would unless you just simulated the human brain.

This concluding statement, in isolation, I agree with. An AI would probably simulate a human brain because yeah, of course it would, assuming it would be built by humans. We're humans; anthropomorphism is kind of our thing.

quote:

I am claiming that humans are not a universal implementation of "planning, reasoning or introspection" and are just a bundle of a fairly arbitrary tool set. And even if you mimicked the skillset, humans are also a very, very arbitrary bundle of "personality" stuff that is even more tied to biology, so you still wouldn't have anything resembling Data from Star Trek.

This rhetorical statement is dumb as poo poo though, which makes me think your problem isn't necessarily your point by itself but that to get to your ultimate point you talk a lot about things you don't know much about, which gets vaguely embarrassing for everyone involved when you make vast declamations about the rigid limits of the human condition.

Reveilled
Apr 19, 2007

Take up your rifles

Cingulate posted:

Does Bostrom respond to what I brought up in here? Specifically, that his claims depend, from what I can tell, on linear or better scalability.

There's discussion of the process of developing a superintelligence, of how it's impossible right now to tell how close a human-level AI might be, and of what obstacles might exist that could stall things indefinitely at a sub-human level. In terms of the AI improving itself, he discusses different kinds of "take-off", which depend on how easily a superintelligent AI can scale itself up, but makes the point that an AI does not necessarily need to be astronomically more intelligent than humans to pose a threat, depending on what other skills it possesses. Much of the book does deal with the fast takeoff scenario, but that's understandable given that the book's central thesis is "prepare for the worst".

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.
It's highly likely that the reason human brains that are good at facial recognition also tend to be good at speaking Spanish isn't an accident of evolution but something much more fundamental about intelligence.

Cingulate
Oct 23, 2012

by Fluffdaddy

Reveilled posted:

There's discussion of the process of developing a superintelligence, of how it's impossible right now to tell how close a human-level AI might be, and of what obstacles might exist that could stall things indefinitely at a sub-human level. In terms of the AI improving itself, he discusses different kinds of "take-off", which depend on how easily a superintelligent AI can scale itself up, but makes the point that an AI does not necessarily need to be astronomically more intelligent than humans to pose a threat, depending on what other skills it possesses. Much of the book does deal with the fast takeoff scenario, but that's understandable given that the book's central thesis is "prepare for the worst".
Does he ever show any indication of knowing there are hard physical limits on information processing?

Willie Tomg
Feb 2, 2006
*lives in a society all day*

Ahhhh, time to sit down and develop thoughts with my consciousness about how humans are just a set of limited rules, convert those thoughts to text using my fingers upon a machine with a physical interface whose operation I learned after birth, and then send that text out onto an informational network to be interpreted in turn by a set of other living creatures who respond in ways that elicit feelings that cannot be expressed in any of the physical senses. My only regret is there is nothing more whatsoever to this life defined by an arbitrary set of biological qualia!

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

Willie Tomg posted:

*lives in a society all day*

Ahhhh, time to sit down and develop thoughts with my consciousness about how humans are just a set of limited rules, convert those thoughts to text using my fingers upon a machine with a physical interface whose operation I learned after birth, and then send that text out onto an informational network to be interpreted in turn by a set of other living creatures who respond in ways that elicit feelings that cannot be expressed in any of the physical senses. My only regret is there is nothing more whatsoever to this life defined by an arbitrary set of biological qualia!

Good post but demerits for misusing "qualia".

Cingulate
Oct 23, 2012

by Fluffdaddy

Thug Lessons posted:

It's highly likely that the reason human brains that are good at facial recognition also tend to be good at speaking Spanish isn't an accident of evolution but something much more fundamental about intelligence.
AIs are, incidentally, also pretty good at Spanish and face recognition.

The interesting question is still about the stuff computers are really bad at.

Reveilled
Apr 19, 2007

Take up your rifles

Cingulate posted:

Does he ever show any indication of knowing there are hard physical limits on information processing?

I'd say it's not really overly relevant to his point. Humans don't come anywhere near those limits, so you don't need to be able to process information faster than the speed of light to process information faster than the speed of human.
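
For a rough sense of how much headroom there is, here's a back-of-envelope sketch in plain Python using Landauer's limit, one hard thermodynamic bound on irreversible computation. The brain figures (about 20 W of power, very roughly 10^15 synaptic events per second) are commonly cited ballpark estimates, not precise measurements.

code:

# Back-of-envelope: how far a ~20 W human brain sits below the Landauer
# bound on irreversible bit operations. Brain numbers are rough,
# commonly cited ballparks, used only for orders of magnitude.
import math

K_B = 1.380649e-23                          # Boltzmann constant, J/K
T = 310.0                                   # body temperature, K
landauer_j_per_bit = K_B * T * math.log(2)  # ~3e-21 J per erased bit

brain_power_watts = 20.0                    # rough brain power draw
ceiling_bit_ops_per_sec = brain_power_watts / landauer_j_per_bit

synaptic_events_per_sec = 1e14 * 10         # ~1e14 synapses at ~10 Hz

print(f"Landauer limit at 310 K: {landauer_j_per_bit:.2e} J/bit")
print(f"Thermodynamic ceiling at 20 W: {ceiling_bit_ops_per_sec:.2e} ops/s")
print(f"Rough synaptic event rate: {synaptic_events_per_sec:.2e} events/s")
print(f"Headroom: roughly {ceiling_bit_ops_per_sec / synaptic_events_per_sec:.0e}x")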

Blue Star
Feb 18, 2013

by FactsAreUseless
Seriously, human-like AI isn't going to happen in our lifetimes. Even today's newborns aren't going to see it in their lifetimes. This is a silly discussion. Technology isn't accelerating, it's actually slowing down. The AIs that exist today are the sort of thing that makes Google and Amazon slightly easier to use. That's pretty much it. Even more modest stuff like self-driving cars is hyped-up bullshit. None of that poo poo is going to happen.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Willie Tomg posted:

This rhetorical statement is dumb as poo poo though, which makes me think your problem isn't necessarily your point by itself but that to get to your ultimate point you talk a lot about things you don't know much about, which gets vaguely embarrassing for everyone involved when you make vast declamations about the rigid limits of the human condition.

Your response is a good example. You do not like my 'flawed' reasoning, so you respond to it by describing your emotional state and by making some vague threat that I need to stop having that reasoning because it should affect my emotional state in that negative way. That is not a thing that a computer is going to level up and then just download from somewhere. That is a wicked human response that a computer could not have without a bunch of really, really weird programming that is unlikely to be feasible and probably not even desirable.

Like, I'm not even trying to do the sci-fi "COMPUTER NO HAVE EMOTION" thing, but you provided this example by responding this way: describing your emotion as a counter to my possibly imperfect "reasoning". Humans aren't just a generic implementation of pure thought. Pure thought does not exist. It's all through a lens.

Cingulate
Oct 23, 2012

by Fluffdaddy

Reveilled posted:

I'd say it's not really overly relevant to his point. Humans don't come anywhere near those limits, so you don't need to be able to process information faster than the speed of light to process information faster than the speed of human.
That's not the problem. The problem is his fears rely on linear or even superlinear scaling, and that's simply not what we're currently seeing.


Owlofcreamcheese posted:

Your response is a good example. You do not like my 'flawed' reasoning, so you respond to it by describing your emotional state and by making some vague threat that I need to stop having that reasoning because it should affect my emotional state in that negative way. That is not a thing that a computer is going to level up and then just download from somewhere. That is a wicked human response that a computer could not have without a bunch of really, really weird programming that is unlikely to be feasible and probably not even desirable.
Not sure what you're trying to say, but Willie pointed out you don't even know what human brains are capable of, so what business do you have talking about human-level AI until you do?

Thug Lessons
Dec 14, 2006


I lust in my heart for as many dead refugees as possible.

Cingulate posted:

That's not the problem. The problem is his fears rely on linear or even superlinear scaling, and that's simply not what we're currently seeing.

Well yeah, we're right on the cusp of the end of development for integrated circuits. You'll have an answer to your question in 3-5 years.

Cingulate
Oct 23, 2012

by Fluffdaddy

Thug Lessons posted:

Well yeah, we're right on the cusp of the end of development for integrated circuits. You'll have an answer to your question in 3-5 years.
No. Even while Moore's law holds, performance doesn't scale superlinearly. Of course, it's only gonna get worse, but Bostrom's fears depend on scalability.
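
One standard way to see why speedup saturates instead of compounding is Amdahl's law; here's a minimal sketch in plain Python, assuming an arbitrary example workload that is 95% parallelisable.

code:

# Amdahl's law: if any fraction of the work is serial, adding processors
# gives sub-linear speedup that flattens out entirely. The 95% parallel
# fraction below is an arbitrary example value, not a measured figure.
def amdahl_speedup(n_processors, parallel_fraction):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

p = 0.95  # assumed fraction of the workload that parallelises perfectly
for n in (1, 10, 100, 1000, 10000):
    print(f"{n:>5} processors -> {amdahl_speedup(n, p):6.2f}x speedup")
# The speedup can never exceed 1 / (1 - p) = 20x, no matter how many
# nodes you add.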

KOTEX GOD OF BLOOD
Jul 7, 2012

Let's first define what you mean by "human level AI," and by this I think you mean AI that causes a mind in the same way that a human brain causes a mind. I am going by this definition because no AI could truly be said to be "human-level" if it was incapable of semantic understanding.

What do I mean by this? John Searle's Chinese Room example is highly controversial these days but I think it still holds water. In part:

quote:

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?[5][c] Searle calls the first position "strong AI" and the latter "weak AI".[d]

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. ("I don't speak a word of Chinese,"[8] he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false.

Note, however, that Searle is dealing with the method by which we program AI today: with formal syntactic rules in a programming language. But within these bounds, I do agree that it seems impossible to create a "strong AI."

This does leave open the potential for another process to create a strong AI: for instance, some method for simulating the processes of a human brain on a silicon chip. We are far, far away from this possibility, both in terms of processing power and in terms of our understanding of the human brain, and how it causes a mind.

But even if we get to this point, there is no real way to know if what we have created truly is a mind in the same way that we experience it. Everyone knows Descartes' famous line, "I think, therefore I am": in other words, the only thing we can truly trust is our own mind as we experience it. This is in part because we experience our minds in a fundamentally different way from everything around us. If you buy Descartes' analysis, we don't even know if our brains really do cause our minds: we can look at a CT scan and see areas of the brain light up as we think and feel different things, but from a formal, epistemological perspective, that doesn't necessarily tell us that our brains cause our minds, our consciousness, our "souls." Similarly, we may create some things that emulate or simulate humanity, but we may never know whether we have truly created a "mind" in the same way that we experience it, precisely because we can't experience other "minds," even those we presume to be genuine other minds like those of the people around us.

For further reading, start with the Wikipedia article on Hubert Dreyfus's critique of AI. He focuses more on the failed promises of AI in the past and some of the false assumptions strong AI proponents make about minds. The caveat here is that, like Searle, he is really talking about our contemporary conception of AI, as syntactic programs being run on silicon chips, and not really talking about potential developments in the future that really could duplicate the internal processes of a brain, were we ever to understand those processes. But this is all very, very far away from where we are now and would still rest on some quite tenuous assumptions about brains and minds, which is my answer to your original question.

KOTEX GOD OF BLOOD fucked around with this message at 07:34 on Nov 28, 2016

Reveilled
Apr 19, 2007

Take up your rifles

Cingulate posted:

That's not the problem. The problem is his fears rely on linear or even superlinear scaling, and that's simply not what we're currently seeing.

Can you explain how they rely on such scaling? Given that you asked me about the content of the book when I recommended it, I take it you haven't actually read it, and it seems odd that you'd make such declarations about his argument if you haven't read it. How do you think scaling would make Bostrom's collective form of superintelligence impossible?

Rush Limbo
Sep 5, 2005

its with a full house

Dead Reckoning posted:

TBH, most humans can't create meaningful art or coherently talk about ethical philosophy, so we're probably closer than we think. Creating a robot Mozart or Einstein might be hard, but beating the intelligence of the average human is shockingly easy.

Consider that AI requires so much computational power to tackle the problem of, say, stairs, and even then fucks it up regularly enough to be practically useless unless, for example, you devote 100% of its effort to tackling that one problem incredibly slowly.

Meanwhile, even your average human has mastered the art of walking down the stairs, thinking about where they're going, and talking, all at the same time, on an instinctual level to the point where it's not even consciously thought about.
