|
twodot posted:We don't need to know how a bee flies to build planes. I overall agree with your post, but I wanted to add that any kind of advanced contemporary prosthesis works because we've been successful at reverse-engineering parts of our nervous system. It's not designed, but mathematics is a functional abstraction layer over phenomena. "Giving touch back to amputees" means we've somehow managed to break the Nerve-to-Nerve Communication Protocol - what might happen is that trying to frame the nervous system in a mathematical paradigm won't make any sense - there are plenty of apparently senseless design choices in nature. But there has to be some kind of protocol. Back on topic, anyway: are we assuming that "human-level intelligence" means, basically, "the ability to learn through experience and abstract thinking"? What's a good consensus on such a definition?
|
# ¿ Jan 10, 2017 18:32 |
|
A Wizard of Goatse posted:The ability to problem-solve and independently learn and perform complex tasks in an uncontrolled environment, without supervision. I'd add, "given the limits of the tools available to interact with reality", which might be obvious, but maybe isn't. I don't think a hypothetical machine connected only to a camera and a 3D printer could ever achieve intelligence. Liquid Communism posted:Can you meaningfully distinguish the two? An artificial chimpanzee would be an artificial intelligence without being an artificial person.
|
# ¿ Jan 11, 2017 10:06 |
|
Cingulate posted:Neither of these are uncontroversial. I don't care much for the artificial intelligence/artificial people argument. I think they're different things, but I don't see how discussing it is beneficial to the topic. But why do you think there's no correlation between the quality and quantity of data an entity manages to gather about its surroundings, the complexity of the interactions that entity can have with its surroundings, and its potential for intelligence? A blind man still has an incredibly refined set of receptors and actuators to interact with reality - the focus of his intelligence shifts to the receptors he has left.
|
# ¿ Jan 11, 2017 13:05 |
|
Cingulate posted:I guess there's a pretty decent correlation, but where does that take us? Back to quote:The ability to problem-solve and independently learn and perform complex tasks in an uncontrolled environment, without supervision. So, is an octopus intelligent? I mean, I think it is. An octopus-level AI would be pretty impressive. What's "human level", then? I think that "intelligence" is achieved by animals as well, under that interpretation. I just added the part regarding context because our hypothetical artificial machine would be created in a vacuum, so it should reach a level of intelligence comparable to its capability to influence its surroundings - something animals obtained through thousands of millennia of evolution.

But I think we're really asking something more like "How far away are we from an AI that could develop into a civilization? Be it a single AI or a society of AIs?" Mind that I'm putting a quite anthropocentric spin on this question by using the term "civilization". So... why is there no octopus civilization? What are they lacking? Bear with me, as I don't think I'm educated enough to put this whole concept into words, but I'll try nonetheless: to achieve what we'd call "problem-solving intelligence", you need to be able, as an entity, to handle a specific threshold of complexity in the reality surrounding you. To achieve "civilization-level intelligence", what conditions did we humans meet that other intelligent animals didn't? It cannot be sheer brain size: whales have a huge brain compared to ours, and they still aren't civilization-capable. It cannot be the opposable thumb: monkeys have that as well. It has to be something related to our advanced abstract thinking, which allows us, among other things, to write, read and talk. Now the question moves. What is abstract thinking made of? Why is ours so much better than other animals'?

So, where does that take us? That currently, "testing for intelligence" is a question that still needs development, but basically, if you can prove something thinks in an abstract manner at least as well as a human does, it's civilization-intelligent. How do you prove that? Can you build abstract thinking out of neural networks? Char fucked around with this message at 17:03 on Jan 11, 2017 |
# ¿ Jan 11, 2017 16:57 |
|
Cingulate posted:And by AIs. I mostly agree with this. I've been wondering whether there was anything wrong in what you were expressing, and then it struck me. Something that screws with my thinking is looking into the purpose of the learning software we're developing: AlphaGo is meant to play Go and only Go; its purpose is limited. What about living organisms? Aren't they, basically, machines built to propagate and mix information? Which is not so limited - it's narrow but general. Once again, I'm trying to draw comparisons - AlphaGo is extremely good at performing a very limited set of tasks (only one, actually), and it uses a specific tool: mathematical analysis. Since we're developing it, it's never going to use anything other than what we allow. Is there anything in nature with similar limitations? Nature used every chemical and physical tool it could manage, from carbon-oxygen reactions to electrical impulses to bioluminescence to adaptation to extreme ecosystems. An endless set of tools to fulfill that purpose. So, would I be wrong in thinking that AlphaGo's problem solving is subconscious? Would that place AlphaGo somewhere between plants and insects? Spiders aren't social and have developed an extremely refined method for hunting. But how much of this process is conscious? And how much can they adapt their webbing to huge ecosystem alterations? Only natural mutations managed to differentiate the problem solving of organisms without a capable nervous system - which allowed them to use, instead, a wider set of tools to fulfill their purpose. We're designing our learning algorithms with no possibility of mutation, so they'll never adapt to different ecosystems (or different problems, different points of view) by themselves. We're forcing our hand onto them: we need them to fulfill the tasks we're designing these intelligences to do, and those tasks are far more specific than what nature gives living organisms.
Limited toolsets, specific purposes: unlike nature, we're deliberately cherry-picking their cognitive dimensions, avoiding spontaneous alterations, so to speak. quote:I guess as a linguist, I have to go with language - specifically, communicating near-arbitrary intentions and propositions. It's not sufficient to create human civilization as-is, but it seems to be the key difference between the cognitive (rather than material) situations we find ourselves in. I agree completely. First, language allows us to trick our biology and keep huge amounts of knowledge across generations - knowledge that would otherwise be lost. Second, I don't think any true AI has to exactly match our "features": we need to communicate because our inherent weaknesses, compared to reality, force us to cooperate - social behaviour, given our ecosystem and physiology, offers one of the best compromises. I can foresee an intelligent entity not needing social behaviour, but I cannot foresee one having no ability to understand the unknown. I think any social function an AI could have would be developed on the basis of hardcoded need. The human point of view on communicating is extremely biased, given our nature - our hardcoded needs. And now, I'm having a hard time imagining an entity, or entities living across iterative generations, achieving intelligence without sharing a basic trait with terran life: having a very unspecific purpose, and an endless array of tools to fulfill it. So... I think civilization-level intelligence could be achieved by man-made machines that satisfy all the conditions described so far: being able to handle huge amounts of data, being able to adapt known frames to the unknown, and having an endless array of tools to attempt to fulfill a narrow but generalist purpose. Char fucked around with this message at 13:18 on Jan 12, 2017 |
# ¿ Jan 12, 2017 12:56 |
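The mutation point above can be made concrete with a toy sketch. Nothing here models AlphaGo or real biology; `fitness`, `evolve`, and the "environments" are invented for illustration. The idea is just that a mutate-and-select loop carries no task-specific design: the same blind loop adapts to whatever environment it is dropped into, whereas a hand-built learner's single task is fixed at design time.

```python
import random

def fitness(genome, environment):
    # Toy fitness: how closely the genome's traits match the environment's demands.
    return -sum(abs(g - e) for g, e in zip(genome, environment))

def evolve(environment, generations=200, seed=0):
    """Minimal mutate-and-select loop: random mutation plus selection,
    with no objective beyond 'survive the current environment'."""
    rng = random.Random(seed)
    genome = [0.0] * len(environment)
    for _ in range(generations):
        # Mutate one trait at random...
        candidate = genome[:]
        i = rng.randrange(len(candidate))
        candidate[i] += rng.gauss(0, 0.5)
        # ...and keep the change only if it doesn't hurt here and now.
        if fitness(candidate, environment) >= fitness(genome, environment):
            genome = candidate
    return genome

# The identical loop, pointed at two different "ecosystems": no redesign needed.
warm = evolve([1.0, 2.0, 3.0])
cold = evolve([-3.0, 0.0, 5.0])
```

The contrast with a designed learner is in where the adaptivity lives: here nothing about the loop encodes the environment, so a change of environment costs nothing, while a system built for one task has its "ecosystem" baked in by its designers.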