|
Let's first define what you mean by "human-level AI," and by this I take you to mean AI that causes a mind in the same way that a human brain causes a mind. I go by this definition because no AI could truly be called "human-level" if it were incapable of semantic understanding. What do I mean by that? John Searle's Chinese Room argument is highly controversial these days, but I think it still holds water. In part:

quote:Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

Note, however, that Searle is addressing the method by which we program AI today: formal syntactic rules in a programming language. Within those bounds, I agree that it seems impossible to create a "strong AI." That does leave open the potential for another process to create a strong AI: for instance, some method of simulating the processes of a human brain on a silicon chip. We are far, far away from that possibility, both in terms of processing power and in terms of our understanding of the human brain and how it causes a mind. But even if we got to that point, there would be no real way to know whether what we had created truly is a mind in the way that we experience ours.
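The purely syntactic manipulation Searle describes can be sketched in a few lines. This is a toy, hypothetical rulebook (the strings and names are mine, purely for illustration): the "room" maps input symbols to output symbols by lookup alone, and nothing in the mechanism involves understanding what any symbol means.

```python
# A minimal sketch of Searle's point: symbol-in, symbol-out, by rule lookup.
# The rule table is hypothetical and trivially small; a real chatbot's rules
# are vastly more complex, but the mechanism is equally syntactic.

RULES = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我叫小明.",      # "What's your name?" -> "I'm Xiao Ming."
}

def room(symbols: str) -> str:
    # The "person in the room" just matches shapes against the rulebook.
    return RULES.get(symbols, "请再说一遍.")  # default: "Please say that again."

print(room("你好吗?"))  # prints: 我很好, 谢谢.
```

A fluent Chinese speaker on the outside might judge these replies appropriate, but the lookup itself attaches no meaning to any of the characters, which is the whole of Searle's point.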
Everyone knows Descartes' famous line, "I think, therefore I am" (it appears in the Discourse on the Method; the Meditations phrase it as "I am, I exist"): in other words, the only thing we can truly trust is our own mind as we experience it. This is in part because we experience our minds in a fundamentally different way from everything around us. If you buy Descartes' analysis, we don't even know whether our brains really do cause our minds: we can watch areas of the brain light up on an fMRI scan as we think and feel different things, but from a formal, epistemological perspective, that doesn't necessarily tell us that our brains cause our minds, our consciousness, our "souls." Similarly, we may create things that emulate or simulate humanity, but we may never know whether we have truly created a "mind" in the way that we experience ours, precisely because we can't experience other minds, even those we presume to be genuine, like the minds of the people around us.

For further reading, start with the Wikipedia article on Hubert Dreyfus's critique of AI. He focuses more on AI's past failed promises and on some of the false assumptions strong-AI proponents make about minds. The caveat is that, like Searle, he is really talking about our contemporary conception of AI (syntactic programs run on silicon chips), not about potential future developments that really could duplicate the internal processes of a brain, were we ever to understand those processes. But that is all very, very far from where we are now and would still rest on some quite tenuous assumptions about brains and minds, which is my answer to your original question.

KOTEX GOD OF BLOOD fucked around with this message at 07:34 on Nov 28, 2016 |
# ¿ Nov 28, 2016 05:25 |
|
Dolash posted:The answer to the Chinese Room Puzzle and the Turing Test stuff, to me, is that it's irrelevant if what's going on is "true" intelligence or if it has the "spark" of life or is "self-aware" or "human-level" or any of the other terms we try to use for the ineffable difference between us and the toaster.

Senor Tron posted:The follow on from that is to imagine systems where a group of humans simulates a living creature's brain.
|
# ¿ Nov 28, 2016 16:14 |
|
Again, that relies on several assumptions about things we don't understand yet with regard to how brains cause minds. Think about the kind of knowledge that would be required for this simulation and you begin to see what I mean.
|
# ¿ Nov 28, 2016 16:58 |
|
blowfish posted:Does a screwdriver have meaning?
|
# ¿ Nov 30, 2016 19:50 |
|
Sure?
|
# ¿ Nov 30, 2016 21:10 |
|
I want to answer this question in the context of Westworld, whose first season I just finished. It unintentionally demonstrates some of the errors people make when thinking about the feasibility of AI.

Westworld's notion (the story is super convoluted and I was pretty baked for most of it, so forgive me if I get this wrong) is that the hosts transition from syntactic function to an understanding of semantic meaning as a consequence of the "reveries" and a sudden capacity for memory retention: their improvisation (self-modifying code) based on memory eventually becomes "consciousness," whatever that means, which we can take as a capacity to appreciate and operate on meaning rather than on the formal rules laid out by their programmers. The problem is that there is no functional reason why an AI's increasingly powerful capacity to improvise behavior based on preexisting code should mean it transcends that code, so that its underlying operation transitions to a semantic "mind."

Based on their behavior, you would have pretty good reason to believe that the hosts do begin to understand meaning: they act increasingly as humans do, appear to form meaningful relationships, and so on. The error is in believing that an increasingly high-quality emulation of human behavior, complete with a simulated understanding of semantic meaning, is equivalent to actually understanding that meaning internally. The show really wants you to make this unfounded leap by making you feel sorry for these robots, who are getting raped and killed in perpetuity and who, as a result of the reveries, begin to resemble real people in their actions and reactions rather than fancy moving dolls. But it is just that: a resemblance.

If the show didn't make it clear that the hosts really do gain conscious minds and semantic understanding (evidently via magic emergent qualities that make no sense in reality), the guests would be correct to continue their slaughter and pillage without regard for the hosts' feelings, because, again, a really high-quality emulation of human behavior is not equivalent to generating a mind, or to anything more substantive than a machine following an ordered set of rules, even if it improvises new rules on its own.
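To make the "improvises new rules on its own" point concrete, here is a toy sketch (every name and string is mine, invented for illustration, not from the show or any real system): a rule-follower that records past inputs and synthesizes new stimulus-response rules from them. Its repertoire grows, but the mechanism never changes, because it is dictionary lookups over strings from start to finish.

```python
# A self-modifying rule-follower: it derives new rules from its "memories,"
# yet remains purely syntactic. Growing the rulebook is not the same thing
# as understanding what any rule means.

class Host:
    def __init__(self):
        self.rules = {"greeting": "Welcome to Sweetwater."}
        self.memory = []  # retained past inputs (the "reveries")

    def respond(self, stimulus: str) -> str:
        self.memory.append(stimulus)
        # "Improvisation": synthesize a new rule from a remembered input.
        if len(self.memory) >= 2:
            prev = self.memory[-2]
            self.rules.setdefault(prev, f"You mentioned '{prev}' before.")
        return self.rules.get(stimulus, "These violent delights...")

h = Host()
h.respond("greeting")   # canned rule fires; nothing remembered yet
h.respond("weather")    # no rule for this -> default reply
reply = h.respond("weather")
print(reply)  # prints: You mentioned 'weather' before.
```

By the third call a rule the programmer never wrote now fires, which looks like learning from the outside; internally, it is the same lookup process it always was.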
|
# ¿ Dec 24, 2016 18:15 |
|
A Wizard of Goatse posted:The predictions of specific people with a background in the field who can show their work and make a plausible case for why they're predicting what they are are more relevant and promising than goobers burbling about how some guy was probably dismissive about electricity in 1707 therefore they're visionaries ahead of their time instead of fantasists in I loving Love Science T-shirts; which are barely more relevant than the predictions of the abstract, mostly-mythical strawman 'they' they handwave to.
|
# ¿ Dec 25, 2016 02:58 |
|
Pochoclo posted:I mean seriously, you can buy an aircraft carrier or a nuclear submarine too, technically. Not quite the same thing as them being available for the average Joe.

Except for the Moller Skycar, which is fake bullshit.
|
# ¿ Dec 25, 2016 03:17 |
|
Cingulate posted:And on the other other hand, Hubert Dreyfus was mostly skeptical about GOFAI, and was much more optimistic about neural networks (writing a book about how learning works with his brother, who later made important contributions to neural nets).

Cingulate posted:Future AI will be to what we imagine AI to be right now as Tesla and Uber are to flying cars.
|
# ¿ Dec 25, 2016 03:58 |
|
Cingulate posted:But you do understand this is just the opposite of Dreyfus' (the guy you brought up) point from the 70s?

I am genuinely interested in whether Dreyfus supports neural nets as potential strong AI, but Google turned up nothing.
|
# ¿ Dec 25, 2016 05:36 |