Char
Jan 5, 2013

twodot posted:

We don't need to know how a bee flies to build planes.

This implies there's something we can do that can't be expressed in mathematics, which I've seen no evidence of. But more than that, you've chosen an abstraction layer for computers that just doesn't apply to humans, and it's causing you to come to weird conclusions. What computers do is shunt electrical signals around in fixed patterns; for the convenience of humans we've organized those shunts into logic gates, those logic gates into adders, registers, and such, and those components into a CPU and memory. The result is a thing that takes instructions like "add this to that" or "go run that instruction", which looks sort of like mathematics, but there's absolutely no reason a computer needs to be organized like that (other than that humans would have a hard time understanding how to use it). Neural networks are essentially an expression of that fact, but implemented in software.
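
Just to make the "gates into adders" layering concrete - this is purely a toy sketch of my own in Python, nothing to do with how any real CPU is wired - you can build an adder out of nothing but logic-gate functions:

code:
# Toy illustration: an adder built purely out of logic-gate functions.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """Add two bits plus a carry; return (sum_bit, carry_out)."""
    partial = XOR(a, b)
    sum_bit = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return sum_bit, carry_out

def add_4bit(x, y):
    """Chain four full adders to add two 4-bit numbers, least significant bit first."""
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

assert add_4bit(5, 6) == 11

The same electrical "shunts", organized differently, would compute something else entirely - which is the point about the organization being for our convenience.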

The fact that it's even possible to run dynamic shunting software on static shunting hardware is because, as near as we can tell, computers are general-purpose problem solvers: they're Turing complete, and we haven't encountered a solvable problem that isn't decidable by a Turing machine (even if a particular machine is inefficient compared to other machines). Humans weren't designed, so it's harder to crack open a skull and say "Ah ha! Here's the adder" (though people certainly try), which means we can't apply your computer abstraction level to humans, but the two look pretty similar at the electrical/chemical shunting-machine level.
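
And since Turing completeness keeps coming up, here's a minimal Turing machine simulator - again a toy of my own in plain Python, assuming nothing beyond the standard library. The fixed transition table plays the role of the static shunting hardware; swap the table and the same loop computes something else:

code:
# Minimal Turing machine simulator: a fixed table of (state, symbol) rules
# driving a read/write head over an unbounded tape.
def run_turing_machine(tape, transitions, state="start", blank="_"):
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# (state, symbol read) -> (symbol to write, head move, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip_bits))  # prints 0100_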

I overall agree with your post, but I wanted to add that any kind of advanced contemporary prosthesis works because we've been successful at reverse-engineering parts of our nervous system. It's not designed, but mathematics is a functional abstraction layer for the phenomenon. "Giving touch back to amputees" means we've somehow managed to crack the Nerve-to-Nerve Communication Protocol. It may turn out that trying to frame the nervous system in a mathematical paradigm won't make any sense - there are plenty of apparently senseless design choices in nature. But there has to be some kind of protocol.

Back on topic, anyway: are we assuming that "human-level intelligence" basically means "the ability to learn through experience and to think abstractly"? Is there a good consensus on such a definition?

Char
Jan 5, 2013

A Wizard of Goatse posted:

The ability to problem-solve and independently learn and perform complex tasks in an uncontrolled environment, without supervision.

I'd add, "given the limits of the tools available to interact with reality", which could be obvious, but maybe isn't. I don't think an hypothetical intelligent machine connected to a photocamera and a 3D printer could ever achieve intelligence.

Liquid Communism posted:

Can you meaningfully distinguish the two?

An artificial chimpanzee would be an artificial intelligence without being an artificial person.

Char
Jan 5, 2013

Cingulate posted:

Neither of these are uncontroversial.

I don't care much for the artificial intelligence/artificial people argument. I think they're different things, but I don't see how discussing it is beneficial to the topic.

But why do you think there's no correlation between the quality and quantity of data an entity manages to gather about its surroundings, the complexity of the interactions that entity can have with its surroundings, and its potential for intelligence?
A blind man still has an incredibly refined set of receptors and actuators for interacting with reality - the focus of his intelligence shifts to the remaining receptors.

Char
Jan 5, 2013

Cingulate posted:

I guess there's a pretty decent correlation, but where does that take us?

Back to

quote:

The ability to problem-solve and independently learn and perform complex tasks in an uncontrolled environment, without supervision.

So, is an octopus intelligent? I mean, I think it is. An octopus-level AI would be pretty impressive. What's "human level", then?

I think that "intelligence" is achieved by animals as well, given that type of interpretation. I just added the part regarding context because our hypothetical artificial machine would be created in a vacuum, so it should match a level of intelligence comparable to what its capabilities to influence the surroundings are - something that animals managed to obtain with thousands of millennia of evolution.

But I think we're really asking something more like "How far away are we from an AI that could develop into a civilization, be it a single AI or a society of AIs?"
Mind that I'm putting a rather anthropocentric spin on this question by using the term "civilization".

So... why is there no octopus civilization? What are they lacking?

Bear with me, as I don't think I'm educated enough to put this whole concept into words, but I'll try nonetheless: to achieve what we'd call "problem-solving intelligence", you need to be able, as an entity, to handle a specific threshold of complexity in the reality surrounding you.
To achieve "civilization-level intelligence", what conditions did we humans meet that other intelligent animals didn't? It cannot be sheer brain size: whales have a huge brain compared to us and they still aren't civilization-capable. It cannot be the opposable thumb: monkeys have that as well.

It has to be something related to our advanced abstract thinking, which allows us, among other things, to write, read and talk. So the question moves: what is abstract thinking made of? Why is ours so much better than other animals'?

So, where does that take us? To this: "testing for intelligence" is a question that still needs development, but basically, if you can prove something can think in an abstract manner at least as well as a human does, it's civilization-intelligent. How do you prove that?
Can you build abstract thinking out of neural networks?
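
Since I'm asking that question, here's the smallest sketch I can make of what a neural network actually is at the bottom - a toy of my own, assuming numpy is available, and obviously a million miles from "abstract thinking". It learns XOR, a mapping no single linear layer can represent:

code:
# Tiny two-layer network trained by gradient descent to learn XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer, 4 units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient descent step
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(np.round(out, 2))  # typically ends up close to [[0], [1], [1], [0]]

Whether stacking enough of this kind of thing ever adds up to abstract thinking is exactly what I'm unsure about.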

Char fucked around with this message at 17:03 on Jan 11, 2017

Char
Jan 5, 2013

Cingulate posted:

And by AIs.

AIs match octopuses (octopi? ...), and actually humans too, on multiple cognitive dimensions, but not on others.

I mostly agree with this. I've been wondering whether there was anything wrong with what you were expressing, and then it struck me. Something that screws with my thinking is looking at the purpose of the learning software we're developing: AlphaGo is meant to play Go and only Go; its purpose is limited. What about living organisms? Aren't they, basically, machines built to propagate and mix information? That purpose is not so limited - it's narrow but general.
Once again, I'm trying to draw comparisons: AlphaGo is extremely good at performing a very limited set of tasks (only one, actually), and it uses one specific tool: mathematical analysis. Since we're developing it, it's never going to use anything other than what we allow.
Is there anything in nature that has similar limitations? Nature used every chemical and physical tool it could manage, from carbon-oxygen reactions to electrical impulses to bioluminescence to adaptation to extreme ecosystems. An endless set of tools to fulfill that purpose.
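
To make the "one purpose, one tool" contrast concrete, here's a toy of my own - emphatically not how AlphaGo is actually trained - where the objective is hard-coded by the designer and the only tool the learner has is numerical optimization of that single objective:

code:
# A "limited purpose" learner: it can only ever get better at the one goal
# its designer wrote down.
import random

def reward(x):
    # Fixed by the designer; the learner cannot decide to care about anything else.
    return -(x - 3.0) ** 2

def hill_climb(steps=1000, step_size=0.1):
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if reward(candidate) > reward(x):   # keep only moves toward the fixed goal
            x = candidate
    return x

print(hill_climb())  # wanders up to roughly 3.0, and only ever toward 3.0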

So, would I be wrong in thinking that AlphaGo's problem solving is subconscious? Would that place AlphaGo somewhere between plants and insects? Spiders aren't social and have developed an extremely refined method for hunting - but how much of that process is conscious? And how much can they adapt their webbing to huge ecosystem alterations? Only natural mutations managed to differentiate the problem solving of organisms without a capable nervous system - which allowed them to use, instead, a wider set of tools to fulfill their purpose.

We're designing our learning algorithms without the possibility of mutation, so they'll never adapt to different ecosystems (or different problems, different points of view) by themselves. We're forcing their hand: we need them to fulfill the tasks we're designing these intelligences for, and those tasks are way more specific than what nature gives to living organisms.

Limited toolsets, specific purposes: unlike nature, we're deliberately cherry-picking their cognitive dimensions, avoiding spontaneous alterations, so to speak.
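
For contrast, here's an equally toy sketch of my own of the thing we're *not* giving them: a population that adapts through mutation and selection, so when its "ecosystem" (the fitness function) changes, it re-adapts without being redesigned:

code:
# Mutation + selection: the population tracks whatever the environment rewards.
import random

def evolve(fitness, population, generations=300, mutation=0.2):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)          # score everyone
        survivors = population[: len(population) // 2]      # keep the better half
        children = [x + random.gauss(0, mutation) for x in survivors]  # mutated copies
        population = survivors + children
    return max(population, key=fitness)

population = [random.uniform(-10, 10) for _ in range(20)]
best = evolve(lambda x: -(x - 3.0) ** 2, population)    # old ecosystem: optimum near 3
best = evolve(lambda x: -(x + 5.0) ** 2, [best] * 20)   # ecosystem shifts: optimum near -5
print(best)  # ends up near -5 without anyone redesigning the learner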

quote:

I guess as a linguist, I have to go with language - specifically, communicating near-arbitrary intentions and propositions. It's not sufficient to create human civilization as-is, but it seems to be the key difference between the cognitive (rather than material) situations we find ourselves in.

Of course, machines are pretty decent at generating powerful syntax and general pattern recognition, but they have problems with intentionality and meaning. They're bad at generating a mental representation of where the other person is coming from and what they're going for.

I don't think that's in itself necessary for AI to be True AI (you could imagine our robot overlords taking over without ever really figuring us out), nor am I very confident its antecedents (whatever makes us able to have, represent, and en- and decode intentions and propositions) are.

I agree completely. First, language allows us to trick our biology and keep huge amounts of knowledge across generations - knowledge that would otherwise be lost.
Second, I don't think any true AI has to exactly match our "features": we need to communicate because our inherent weaknesses, compared to reality, force us to cooperate - social behaviour, given our ecosystem and physiology, offers one of the best compromises.
I can foresee an intelligent entity not needing social behaviour, but I cannot foresee one having no ability to understand the unknown. I think any social function an AI could have would be developed on the basis of a hardcoded need. The human point of view on communicating is extremely biased, given our nature - our hardcoded needs.

And now I'm having a hard time imagining an entity, or entities living across iterative generations, achieving intelligence without sharing a basic trait with terran life: having a very non-specific purpose, and an endless array of tools to fulfill it.

So... I think civilization-level intelligence could be achieved by man-made machines that satisfy all the conditions described so far: being able to handle huge amounts of data, being able to adapt their known frames to the unknown, and having an endless array of tools with which to attempt to fulfill a narrow but generalist purpose.

Char fucked around with this message at 13:18 on Jan 12, 2017
