KOTEX GOD OF BLOOD
Jul 7, 2012

Let's first define what you mean by "human-level AI." By this I think you mean AI that causes a mind in the same way that a human brain causes a mind. I am going by this definition because no AI could truly be said to be "human-level" if it were incapable of semantic understanding.

What do I mean by this? John Searle's Chinese Room example is highly controversial these days but I think it still holds water. In part:

quote:

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?[5][c] Searle calls the first position "strong AI" and the latter "weak AI".[d]

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. ("I don't speak a word of Chinese,"[8] he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false.

Note, however, that Searle is dealing with the way we actually program AI today: with formal syntactic rules in a programming language. Within those bounds, I do agree that it seems impossible to create a "strong AI."
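
To make the "formal syntactic rules" point concrete, here is a minimal sketch, entirely my own construction and not anything Searle wrote, of what the room reduces to in code: a lookup table from input symbols to output symbols that the operator applies without knowing what any symbol means. The phrases in the table are invented for illustration.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" is just a mapping from input strings to output strings;
# neither the table nor the person applying it attaches any meaning to
# the characters. The entries are invented purely for illustration.

RULE_BOOK = {
    "你好吗": "我很好，谢谢",        # scripted reply to "how are you"
    "你叫什么名字": "我叫小明",      # scripted reply to "what is your name"
}

def chinese_room(input_symbols: str) -> str:
    """Apply the rule book mechanically: match the incoming symbols and
    emit the prescribed response. No step requires understanding Chinese."""
    return RULE_BOOK.get(input_symbols, "请再说一遍")  # fallback: "please say that again"

if __name__ == "__main__":
    print(chinese_room("你好吗"))  # looks like conversation; it is only lookup
```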

This does leave open the potential for another process to create a strong AI: for instance, some method for simulating the processes of a human brain on a silicon chip. We are far, far away from this possibility, both in terms of processing power and in terms of our understanding of the human brain, and how it causes a mind.

But even if we get to this point, there is no real way to know whether what we have created truly is a mind in the same way that we experience it. Everyone knows Descartes' famous line, "I think, therefore I am" (it comes from the Discourse on the Method rather than the Meditations, though the Meditations make the same move): the only thing we can truly trust is our own mind as we experience it. This is in part because we experience our minds in a fundamentally different way from everything around us. If you buy Descartes' analysis, we don't even know whether our brains really do cause our minds: we can look at an fMRI and see areas of the brain light up as we think and feel different things, but from a formal, epistemological perspective, that doesn't necessarily tell us that our brains cause our minds, our consciousness, our "souls." Similarly, we may create things that emulate or simulate humanity, but we may never know whether we have truly created a "mind" in the same way that we experience it, precisely because we can't experience other "minds," even those we presume to be genuine other minds, like those of the people around us.

For further reading, start with the Wikipedia article on Hubert Dreyfus's critique of AI. He focuses more on the failed promises of AI in the past and on some of the false assumptions strong AI proponents make about minds. The caveat is that, like Searle, he is really talking about our contemporary conception of AI, syntactic programs running on silicon chips, and not about potential future developments that really could duplicate the internal processes of a brain, were we ever to understand those processes. But all of that is very, very far from where we are now, and it would still rest on some quite tenuous assumptions about brains and minds. That is my answer to your original question.

KOTEX GOD OF BLOOD fucked around with this message at 07:34 on Nov 28, 2016


KOTEX GOD OF BLOOD
Jul 7, 2012

Dolash posted:

The answer to the Chinese Room Puzzle and the Turing Test stuff, to me, is that it's irrelevant if what's going on is "true" intelligence or if it has the "spark" of life or is "self-aware" or "human-level" or any of the other terms we try to use for the ineffable difference between us and the toaster.

We'll have intelligence when we build something that we're not comfortable treating as not intelligent. If you want to get Cartesian you don't know for sure that anyone's intelligent except maybe yourself, you extend recognition to others because you have to and because it's hard not to when they demand it. If machines are made that are sufficiently autonomous and convincing then we might as well recognize them once it becomes socially awkward not to and leave the fine detail work to the philosophers. To that end, we'll make bigger gains in public perception with things that aren't even part of the core intelligence question, like improved natural language skills and better-engineered body-language.
It's relevant because the OP asked about human-level AI, not a simulation that passes a Turing test by approximating human responses. Even then, because no AI technology we know of right now can understand and interpret meaning, past a certain point you are going to have a really hard time making something capable of convincingly simulating a system that does understand it. Passing a Turing test is a long way from a system capable of organically being a celebrated poet or artist in a way that isn't just mathematically miming others' art but is a real expression of feeling, or even from just simulating that.

Senor Tron posted:

The follow-on from that is to imagine systems where a group of humans simulates a living creature's brain.

For example, a house cat is estimated to have 760,000,000 neurons.

So let's imagine we got the entire population of Earth to simulate a cat's brain. We have developed some amazing future scanning technology that allows us to take a snapshot, and we build a perfect model using 760,000,000 people who each have access to some kind of email/pager system that they use to receive and send signals to their connections. The remaining 5+ billion people act as error checkers, fixing holes in the network as they arise, bringing food and water to the participants, and so on.

It seems to be widely accepted that many other animals have consciousness. In this situation, where our 760,000,000 people are replicating the behaviour of a cat's brain, does that replica have consciousness? If so, where does it reside?
You are making several of the bad assumptions Dreyfus describes here, or at minimum the biological and psychological assumptions.

KOTEX GOD OF BLOOD
Jul 7, 2012

Again, that relies on several assumptions about things we don't yet understand about how brains cause minds. Think about the kind of knowledge that would be required for this simulation and you begin to see what I mean.
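
To get a feel for how much knowledge that presupposes, here is a rough sketch, my own toy construction rather than anything from the post above, of the bare minimum the "people as neurons" scheme would need: a complete wiring diagram, a weight for every connection, and an update rule for each participant to follow. The threshold rule and all the numbers are stand-ins, not real cat data.

```python
import random

# A toy sketch of the "people as neurons" scheme: it presupposes a full
# wiring diagram, a weight for every connection, and a rule each participant
# follows when their "pager" goes off. Everything below is invented for
# illustration; real neurons are not simple threshold units.

NUM_NEURONS = 1000   # stand-in for the ~760,000,000 neurons in a cat brain
THRESHOLD = 0.2      # made-up firing threshold

random.seed(0)
# connectome[i]: the (target, synaptic_weight) pairs that neuron i signals.
connectome = {
    i: [(random.randrange(NUM_NEURONS), random.uniform(-0.5, 0.5)) for _ in range(10)]
    for i in range(NUM_NEURONS)
}

def step(active: set) -> set:
    """One round of messages: every active participant sends its weighted
    signal to its targets; anyone whose summed input crosses the threshold
    fires on the next round."""
    inbox = {}
    for neuron in active:
        for target, weight in connectome[neuron]:
            inbox[target] = inbox.get(target, 0.0) + weight
    return {n for n, total in inbox.items() if total >= THRESHOLD}

if __name__ == "__main__":
    state = set(random.sample(range(NUM_NEURONS), 50))  # an arbitrary starting pattern
    for _ in range(5):
        state = step(state)
        print(len(state), "neurons firing")
```

Even this cartoon needs a complete connectome, per-connection weights, and a known update rule. Scaling it to a real cat means actually having all of that knowledge, which is exactly the knowledge we don't have.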

KOTEX GOD OF BLOOD
Jul 7, 2012

blowfish posted:

Does a screwdriver have meaning?
The right question is whether a screwdriver can have meaning, and it can.

KOTEX GOD OF BLOOD
Jul 7, 2012

Sure?

KOTEX GOD OF BLOOD
Jul 7, 2012

I want to answer this question in the context of Westworld, which I just finished the first season of. It unintentionally demonstrates some of the errors people make when thinking about the feasibility of AI.

Westworld's notion (the story is super convoluted and I was pretty baked for most of it, so forgive me if I get this wrong) is that the hosts transition from syntactic function to an understanding of semantic meaning as a consequence of the "reveries" and a sudden capacity for memory retention: their improvisation (self-modifying code) based on memory eventually becomes "consciousness," whatever that means, which we can take as a capacity to appreciate and operate on meaning rather than on the formal rules outlined by their programmers.

The problem is that there is no functional reason why an AI's increasingly powerful capacity to improvise behavior on top of its preexisting code means it would transcend that code and its underlying operation would transition to a semantic "mind." Based on their behavior, you would have pretty good reason to believe the hosts do begin to understand meaning: they act increasingly as humans do, appear to form meaningful relationships, and so on. The error is in believing that an increasingly high-quality emulation of human behavior, complete with a simulated understanding of semantic meaning, is equivalent to actually understanding that meaning internally.

The show really wants you to make this unfounded leap by making you feel sorry for robots who are getting raped and killed in perpetuity and who, as a result of the reveries, begin to resemble real people in their actions and reactions rather than fancy moving dolls. But it is just that: a resemblance. If the show didn't make it clear that the hosts really do gain conscious minds and semantic understanding, evidently through magic, emergent qualities that don't make any sense in reality, the guests would be correct to continue their slaughter and pillage without regard for the hosts' feelings, because, again, a really high-quality emulation of human behavior is not equivalent to generating a mind, or to anything more substantive than a machine following an ordered set of rules, even if it improvises new rules on its own.
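
As a toy illustration of why "improvisation based on memory" doesn't get you past rule-following, here is a sketch of my own (nothing from the show): a host that rewrites its own response table from remembered interactions. However much the table changes, every reply is still the mechanical application of whatever rules currently exist.

```python
# A toy "host": it remembers interactions and rewrites its own response
# table from them. This is my own illustrative construction, not anything
# from the show. The point: however much the rules change, each reply is
# still the mechanical application of whatever rules currently exist.

class Host:
    def __init__(self):
        self.rules = {"hello": "Welcome to the park."}  # initial scripted line
        self.memories = []                              # the "reveries"

    def respond(self, prompt: str) -> str:
        reply = self.rules.get(prompt, "I don't understand.")
        self.memories.append((prompt, reply))
        self._improvise()
        return reply

    def _improvise(self):
        """'Self-modifying code': derive a new rule from the latest memory.
        Still just a syntactic transformation of stored strings."""
        prompt, reply = self.memories[-1]
        self.rules[reply.lower().rstrip(".")] = "You said that to me before: " + prompt

if __name__ == "__main__":
    host = Host()
    print(host.respond("hello"))
    print(host.respond("welcome to the park"))  # a rule the host wrote for itself
```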

KOTEX GOD OF BLOOD
Jul 7, 2012

A Wizard of Goatse posted:

The predictions of specific people with a background in the field, who can show their work and make a plausible case for why they're predicting what they are, are more relevant and promising than goobers burbling about how some guy was probably dismissive about electricity in 1707, therefore they're visionaries ahead of their time instead of fantasists in I loving Love Science T-shirts; and those are barely more relevant than the predictions of the abstract, mostly-mythical strawman "they" they handwave at.

There is basically no overlap between the expectations of the keyboard metaphysicists navel-gazing about what is intelligence, really and speculating that robot people will be the majority vote of 2028, and the expectations of people who actually work with computer technology. The latter guys are the ones expected to actually build the robot people, so I'll tend to weigh what they have to say a bit higher, especially if they can source their case in their work instead of Star Wars.
On the other hand, Hubert Dreyfus, a philosopher with no training in technology whatsoever, who probably has trouble setting up a projector for a class session, said back in the 70s that all the "guys who build the robot people" promising strong AI in the near term were full of poo poo, and he turned out to be right. So if anything it's more prudent to take what AI researchers say with a heaping mound of salt, given their utter inability to deliver on any of the huge promises they have been making for decades.

KOTEX GOD OF BLOOD
Jul 7, 2012

Pochoclo posted:

I mean seriously, you can buy an aircraft carrier or a nuclear submarine too, technically. Not quite the same thing as them being available for the average Joe.
Or useful. I mean, that's the main thing with flying cars: based on movies, people expect a regular car that can take off and hover thanks to some magic inertialess drive. The "flying cars" we have now are more like planes with folding wings that can be driven at up to 60 mph on little taxiing wheels or w/e.

Except for the Moller Skycar, which is fake bullshit.

KOTEX GOD OF BLOOD
Jul 7, 2012

Cingulate posted:

And on the other other hand, Hubert Dreyfus was mostly skeptical about GOFAI and was much more optimistic about neural networks (he wrote a book about how learning works with his brother, who later made important contributions to neural nets).

And what is today's AI like? Well, it's not Good Old Fashioned AI. It's neural nets.
So the Dreyfus argument can actually be used both ways.
OK, but neural nets are not a good enough reason to be any less skeptical about AI given its past, and given the tendency of even its present proponents to say poo poo like this:

Cingulate posted:

Future AI will be to what we imagine AI to be right now as Tesla and Uber are to flying cars.
without any basis in anything other than masturbatory sci-fi dreams.


KOTEX GOD OF BLOOD
Jul 7, 2012

Cingulate posted:

But you do understand this is just the opposite of Dreyfus' (the guy you brought up) point from the 70s?
No, you don't understand Dreyfus's original point, because neural nets still fail to overcome several of the faulty assumptions he identified.
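
For what it's worth, here's why I don't think "it's neural nets now" escapes the objection: strip away the biological vocabulary and a net is still a fixed formal procedure, numbers multiplied, summed, and squashed according to rules. The toy network below is a generic sketch of my own, not any particular system.

```python
import math
import random

# A toy feed-forward net, written out longhand to make the point explicit:
# "neural" or not, what actually happens is multiplication, addition, and a
# squashing function applied according to fixed formal rules. This is a
# generic illustration, not any particular system.

random.seed(0)
W_HIDDEN = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # 3 inputs -> 4 hidden units
W_OUT = [random.uniform(-1, 1) for _ in range(4)]                         # 4 hidden units -> 1 output

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs: list) -> float:
    """One forward pass: weighted sums and nonlinearities, i.e. syntax all the way down."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in W_HIDDEN]
    return sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)))

if __name__ == "__main__":
    print(forward([0.2, 0.7, -0.1]))  # a number comes out; no meaning went in or out
```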

I am genuinely interested in whether Dreyfus sees neural nets as a potential route to strong AI, but Google turned up nothing.
