Reveilled
Apr 19, 2007

Take up your rifles

Owlofcreamcheese posted:

To put it another way: a "human mind" is not some pure abstract; it's just a really dumb and arbitrary set of functions we happened to get from evolution and fashioned into tools for problem solving. No other intelligence will ever happen to fall into the exact same mold, even if it has the same end capabilities. If we meet an alien or a computer or whatever, it will always have rolled the dice and come from a different design space, with different emotions and biases and so on, to the point of being very inhuman.

And even if you say "well, if we design the robot to be exactly like a human it could be like a human!", you could simulate the brain cell by cell, along with all our hormones and so on, and that would make something human-seeming, but people generally view that sort of simulation as a different sort of thing than an AI.

I think this is over-pedantic. When people talk about "human-level AI" they're not talking about a robot which expresses emotions; they're talking about AIs with a general intelligence on par with a human's, meaning they can perform the same broad base of productive tasks a human can, to the same degree of competency. Pretty much all AI we have today is specific to an extremely small number of tasks, and it is usually purpose-built with a particular task in mind. Right now we're not sure how to build a general intelligence, but do you think building one is impossible?

Reveilled
Apr 19, 2007

Take up your rifles

Owlofcreamcheese posted:

It's not pedantic, because the idea that humans have "general intelligence" is hilariously wrong. Humans have a small set of tasks we are good at. We just toot our own horn by pretending it's the important set.

Humans are an intelligence that can perform an extremely small number of tasks, and we are purpose-built with a specific task in mind: in our case, the task we biologically evolved for is to generally survive as a mammal on planet Earth. But by total non-coincidence, the types of task mastery we happen to have are the ones we just totally happen to find to be the big important ones.

If humans can only perform an extremely small number of tasks, it should be no trouble for you to provide a comprehensive list, then?

Or at the very least, give a ballpark estimate of the number of tasks a human can perform?

Reveilled
Apr 19, 2007

Take up your rifles

Owlofcreamcheese posted:

If computers are only able to perform a small number of tasks, why don't you list all of them?

Okay, fair enough. But still, humans do have a general intelligence. This is true pretty much definitionally, as AI research usually defines "general intelligence" to mean "able to complete any task a human can do". Do you believe it's impossible to create an AI able to complete any task a human can do?

EDIT: Actually, I'm going to walk back that concession, since you moved the goalposts. I never said computers are only able to perform a small number of tasks; I said that AIs are. There's a huge number of tasks which can be performed by AIs, but each AI we have designed can perform only a small number of them, and I'd absolutely say those tasks are quantifiable. Someone familiar with any particular AI could comprehensively list the tasks that AI is able to perform, which I would contend is not the case with humans.

Reveilled fucked around with this message at 20:02 on Nov 27, 2016

Reveilled
Apr 19, 2007

Take up your rifles

Owlofcreamcheese posted:

Yeah, of course. We designed it that way. It's the same way we set up all the Olympic events so none of them require flight or staying underwater for hours or anything. It's not that humans have a general physical ability; it's that it'd be a waste of time to have Olympic events absolutely no one could compete in. Human intelligence isn't actually general; we just don't bother to test for the stuff that people clearly can't do.

It's not like the SATs are gonna have a last page that asks you to reverse an audio sample. Nobody on Earth can mentally do that. Every single copy of the test would come back with that question skipped. We can mentally reverse a visual signal, though: rotating objects is a question on IQ tests, and just flipping an object wouldn't even be a question, because everyone can do that trivially.

But humans can reverse audio samples with computers, and the ability to do so requires a sort of intelligence. We identify that a task needs to be done, build a tool that performs the task, then use the tool to complete it. Even when the tool has already been invented and merely needs to be operated (removing the need for the "build a tool" step), we can direct a human to carry out the task, and a sufficiently intelligent human could do so without any additional instruction. That's not something I could ask Siri or AlphaGo to do. Human-level AI isn't "a dude"; it's being able to perform tasks like "create a plan to achieve an arbitrary end-goal", "identify that a task exists which you don't know how to do", "learn to perform the task", and "if no known way to perform the task exists, invent one".
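
As an aside, the "build a tool" step really is trivial for this example. Here's a minimal sketch using Python's standard wave module that reverses an audio sample; it assumes an uncompressed PCM WAV file, and the filenames are hypothetical:

    import wave

    # Read the original sample ("sample.wav" is a hypothetical filename).
    with wave.open("sample.wav", "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())

    # A frame is one sample per channel; compute its width in bytes,
    # then stitch the frames back together in reverse order.
    width = params.sampwidth * params.nchannels
    reversed_frames = b"".join(
        frames[i:i + width] for i in range(len(frames) - width, -1, -width)
    )

    # Write the reversed audio out with the same format parameters.
    with wave.open("sample_reversed.wav", "wb") as dst:
        dst.setparams(params)
        dst.writeframes(reversed_frames)

The reversal itself is a dozen lines of plumbing; the intelligent part of the exercise is noticing that the task exists and deciding a tool is worth writing, which is exactly the step no current AI performs on its own.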

Reveilled
Apr 19, 2007

Take up your rifles

Owlofcreamcheese posted:

To restate the actual point I'm making:

AI will never happen, because AI is a silly concept based on a hand-wavy and fanciful notion of minds.

Someday we could make a cool robot arm, but it won't ever be flesh and blood. Someday we could probably manufacture cool flesh-and-blood arms, but they won't be robot arms.

You either simulate a human brain to the point that it's not AI anymore, or you make computer programs that aren't very much like human minds and don't do the same stuff.

I guess the issue is that I fundamentally don't understand where you're getting your definition of AI from; it doesn't look like any definition of AI I've seen outside of science fiction novels. An AI is a constructed entity which can acquire and apply knowledge and skills. Suggesting it's a thing that will never happen is ridiculous, because it's something which already exists. The difference between the AIs we have now and the ones referred to as "human-level AI" or "general intelligence" is that ours aren't capable of doing certain tasks which have allowed humans to develop beyond a purely animalistic stage. "Human-level AI" does not mean "behaves like a human"; it means "can do the productive tasks most humans can do". There's no reason to suppose that a human-level AI will have emotions or opinions or empathy or ethics. It could have motivations, but those motivations could be utterly alien to us.

Reveilled
Apr 19, 2007

Take up your rifles

blowfish posted:

:ssh: please don't tell this to robutt ethicists, who are already calling for the protection of glorified realdolls, er, humanlike androids against rape by neckbearded basement dwellers. Their heads might explode when they realise they're not actually doing something useful.

I can't say I've ever met anybody like this to tell them. Do they really exist? The mainstream of AI ethics right now seems to be debating how big of a threat to humanity a general intelligence could be, rather than the ethical implications of loving one.

Reveilled
Apr 19, 2007

Take up your rifles

Thug Lessons posted:

It seems likely to me that future AI will eventually resemble humans because a) individuality is extremely adaptive, which is why we're individuals instead of the sea of LCL fluid from Evangelion, and b) humans would prefer it that way.

If you're interested in this sort of stuff, I'd highly recommend the book Superintelligence by Nick Bostrom. It makes a pretty persuasive case that unless we're extremely careful about how we carry out AI research, we'll probably only ever make one human-level intelligence (before it kills us). It's really dry at the start, as it spends multiple chapters laying out the case for why human-level AI is likely to be developed at some point, but it gets very interesting once it starts discussing the various scenarios that could play out depending on when, where, and how the first AIs are developed, and how different security precautions would or would not work.

Reveilled
Apr 19, 2007

Take up your rifles

Cingulate posted:

Does Bostrom respond to what I brought up in here? Specifically, that his claims depend, from what I can tell, on linear-or-better scalability.

There's discussion of the process of developing a superintelligence, and of how right now it's impossible to tell how close a human-level AI might be or what obstacles might stall things indefinitely at a sub-human level. On the AI improving itself, he discusses different kinds of "take-off", which depend on how easily a superintelligent AI can scale itself up, but he makes the point that an AI does not necessarily need to be astronomically more intelligent than humans to pose a threat, depending on what other skills it possesses. Much of the book does deal with the fast take-off scenario, but that's understandable given that the book's central thesis is "prepare for the worst".

Reveilled
Apr 19, 2007

Take up your rifles

Cingulate posted:

Does he ever show any indication of knowing there are hard physical limits on information processing?

I'd say it's not really relevant to his point. Humans don't come anywhere near those limits, so you don't need to be able to process information faster than the speed of light to process information faster than the speed of human.

Reveilled
Apr 19, 2007

Take up your rifles

Cingulate posted:

That's not the problem. The problem is that his fears rely on linear or even superlinear scaling, and that's simply not what we're currently seeing.

Can you explain how they rely on such scaling? Given that you asked me about the content of the book when I recommended it, I take it you haven't actually read it, and it seems odd that you'd make such declarations about Bostrom's argument without having read it. How do you think scaling limits would rule out Bostrom's collective form of superintelligence?
