Owlofcreamcheese posted:To put it another way: a "human mind" is not some pure abstract, it's just a really dumb and arbitrary set of functions we just happened to get from evolution that we fashioned into tools for problem solving. No other intelligence will ever just happen to fall into the exact same mold even if it has the same end capabilities. If we meet an alien or a computer or whatever it's always just going to have rolled the dice and come from a different design space with different emotions and biases and stuff to the point of being very inhuman. I think this is overly pedantic: when people talk about "human-level AI" they're not talking about a robot which expresses emotions, they're talking about AIs with a general intelligence on par with a human's, meaning they can perform the same broad base of productive tasks a human can, to the same degree of competency. Pretty much all the AI we have today is specific to an extremely small number of tasks, and it's usually purpose-built with a particular task in mind. Right now we're not sure how to build a general intelligence, but do you think it's impossible to build one?

Nov 27, 2016 18:09

Owlofcreamcheese posted:It's not pedantic because the idea humans have "general intelligence" is hilariously wrong. Humans have a small set of tasks we are good at. We just toot our own horn by pretending that it's the important set. If humans can only perform an extremely small number of tasks, it should be no trouble, then, for you to provide a comprehensive list? Or at the very least, to give a ballpark estimate of the number of tasks a human can perform?

Nov 27, 2016 19:43

Owlofcreamcheese posted:If computers are only able to perform a small number of tasks why don't you list all of them? Okay, fair enough. But still, humans do have a general intelligence. This is true pretty much by definition, as AI research usually defines "general intelligence" to mean "able to complete any task a human can do". Do you believe it's impossible to create an AI able to complete any task a human can do? EDIT: Actually, I'm going to walk back that concession, as you moved the goalposts. I never said computers are only able to perform a small number of tasks, I said that AIs are only able to perform a small number of tasks. There's a huge number of tasks which can be performed by AIs, but each AI we have designed is only able to perform a small number of those tasks, and I'd absolutely say that those tasks are quantifiable. I think that someone familiar with any particular AI could comprehensively list the tasks that AI is able to perform, which I would contend is not the case with humans. Reveilled fucked around with this message at 20:02 on Nov 27, 2016 |
Nov 27, 2016 19:53

Owlofcreamcheese posted:Yeah, of course. We designed it that way. It's the same way we set up all the olympic events so none of them require flight or staying underwater for hours or anything. It's not that humans have a general physical ability, it's that it'd be a waste of time to have olympic events absolutely no one could compete in. Human intelligence isn't actually general, we just don't even bother to do the tests for the stuff that people clearly can't do. But humans can reverse audio samples with computers, and the ability to do so requires a sort of intelligence. We identify that a task needs to be done, we build a tool that performs the task, then use the tool to complete the task. Even when the tool has already been invented and merely needs to be operated (removing the need for the "build a tool" step), we can direct a human to carry out the task, and a sufficiently intelligent human could perform it without any additional instruction. That's not something I could ask Siri to do, or AlphaGo to do. Human-level AI isn't "a dude", it's being able to perform tasks like "create a plan to achieve an arbitrary end-goal", "identify that a task exists which you don't know how to do", "learn to perform the task", "if no known way to perform the task exists, invent one".
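To make the audio example concrete: once a human has framed "reverse this clip" as a task, the computation itself is trivial. A minimal sketch (the function name and sample values are invented for illustration, using only Python's standard library):

```python
# Reversing an audio clip: trivial for a computer once a human has
# specified the task, but not something a narrow assistant AI that was
# never built for it can take on. Illustrative sketch only.
from array import array

def reverse_audio(samples: array) -> array:
    """Return the samples in reverse playback order."""
    return array(samples.typecode, reversed(samples))

clip = array('h', [0, 100, 200, 300])  # four 16-bit PCM samples
print(reverse_audio(clip))  # array('h', [300, 200, 100, 0])
```

The hard part is everything around that one line: noticing the task exists, deciding it's worth doing, and picking or building the tool.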

Nov 27, 2016 21:54

Owlofcreamcheese posted:To restate the actual point I'm making: I guess the issue is I fundamentally don't understand where you are getting your definition of AI from; it doesn't look like any definition of AI I've seen outside of science fiction novels. An AI is a constructed entity which can acquire and apply knowledge and skills. Suggesting it's a thing that will never happen is ridiculous, because it's something which already exists. The difference between the AIs we have now and the ones being referred to as "human-level AI" or "general intelligence" is that they aren't capable of doing certain tasks which have allowed humans to develop beyond a purely animalistic stage. "Human-level AI" does not mean "behaves like a human", it means "can do productive tasks most humans can do". There's no reason to suppose that a "human-level AI" will have emotions or opinions or empathy or ethics. It could have motivations, but those motivations could be utterly alien to us.
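For what it's worth, "a constructed entity which can acquire and apply knowledge and skills" already describes even the most modest machine learning systems. A toy sketch (everything here is invented for illustration): a perceptron that acquires exactly one skill, the logical AND of two inputs, from labelled examples, and can do nothing else:

```python
# A minimal "AI" in the narrow, already-existing sense: a perceptron
# that learns the logical AND of two inputs from labelled examples.
# Toy sketch for illustration only.
def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred  # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1]
```

The gap being debated is between this kind of single-task skill acquisition and an entity that can pick up arbitrary new tasks the way a human can.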

Nov 28, 2016 00:15

blowfish posted:please don't tell this to robutt ethicists who are already calling for the protection of I can't say I've ever met anybody like this to tell them. Do they really exist? The mainstream of AI ethics right now seems to be debating how big of a threat to humanity a general intelligence could be, rather than the ethical implications of loving one.

Nov 28, 2016 00:33

Thug Lessons posted:It seems likely to me that future AI will eventually resemble humans because a) individuality is extremely adaptive which is why we're individuals instead of the sea of LCL fluid from Evangelion and b) humans would prefer it that way. If you're interested in this sort of stuff, I'd highly recommend the book Superintelligence by Nick Bostrom. It makes a pretty persuasive case that unless we're super careful about how we carry out AI research, we'll probably only ever make one human-level intelligence (before it kills us). It's really dry at the start, as it spends multiple chapters laying out the case for why human-level AI is likely to be developed at some point, but very interesting once it starts discussing the various scenarios that could play out depending on when, where, and how the first AIs are developed, and how different security precautions would or would not work.

Nov 28, 2016 01:08

Cingulate posted:Does Bostrom respond to what I brought up in here? Specifically that his claims depend from what I can tell on linear or better scalability. He discusses the process of developing a superintelligence, noting that right now it's impossible to tell how close a human-level AI might be, and what obstacles might exist that could stall things indefinitely at a sub-human level. In terms of the AI improving itself, he discusses different kinds of "take-off", which depend on how easily a superintelligent AI can scale itself up, but makes the point that an AI does not necessarily need to be astronomically more intelligent than humans to pose a threat, depending on what other skills it possesses. Much of the book does deal with the fast takeoff scenario, but that's understandable, because the book's central thesis is "prepare for the worst".

Nov 28, 2016 01:31

Cingulate posted:Does he ever show any indication of knowing there are hard physical limits on information processing? I'd say it's not really relevant to his point. Humans don't come anywhere near those limits, so you don't need to be able to process information faster than the speed of light to process information faster than the speed of human.
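As a rough back-of-the-envelope illustration of the gap: the Landauer limit gives the minimum energy to erase one bit at a given temperature, and a brain-like 20 W power budget sits several orders of magnitude below the bit-operation rate that limit permits. The brain's operations-per-second figure below is a commonly cited rough estimate, not a measured fact:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roomish temperature, K
power_watts = 20.0   # approximate power draw of a human brain

# Landauer limit: minimum energy to erase one bit at temperature T.
landauer_j_per_bit = k_B * T * math.log(2)  # ~2.87e-21 J

# Ceiling on irreversible bit operations per second for a 20 W device.
max_bit_ops_per_sec = power_watts / landauer_j_per_bit  # ~7e21

# Commonly cited rough estimate of the brain's processing rate.
brain_ops_per_sec = 1e16

headroom = max_bit_ops_per_sec / brain_ops_per_sec
print(f"{headroom:.0e}")  # ~7e+05
```

Even on this crude accounting, thermodynamics leaves five or six orders of magnitude of headroom above human-level processing before hard physical limits bite.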

Nov 28, 2016 01:44

Cingulate posted:That's not the problem. The problem is his fears rely on linear or better superlinear scaling, and that's simply not what we're currently seeing. Can you explain how they rely on such scaling? Given that you asked me about the content of the book when I recommended it, I take it you haven't actually read it, and it seems odd to make such declarations about the content of an argument you haven't read. How do you think scaling limits would make Bostrom's collective form of superintelligence impossible?

Nov 28, 2016 08:57