|
So far away that it makes predicting how far away we are practically impossible. Decades or centuries.
|
# ¿ Nov 27, 2016 17:25 |
|
TheNakedFantastic posted:I don't think the centuries answer is very useful, it's a non answer that should just be "we don't know" instead of a timeline. This. Technology forecasts for stuff dozens or hundreds of years in the future are pretty much useless unless it's something like fusion reactors where the development plan is mapped out decades in advance.
|
# ¿ Nov 27, 2016 18:21 |
|
DrSunshine posted:However, the creation of a human-level AI would require a truly solidified theory of mind, one that could be codified in mathematics, or replicated in analog with circuitry, which is something that we lack at the moment. I don't see any reason to believe this is true. History is replete with technologies invented by people who didn't understand how they worked on a foundational level, or who even had a completely incorrect understanding. When vaccines were invented every educated person believed that disease was caused by miasma ("bad air") and germ theory was some kooky pseudoscience cooked up by paranoid peasants.
|
# ¿ Nov 27, 2016 18:33 |
|
Owlofcreamcheese posted:It's not pedantic because the idea humans have "general intelligence" is hilariously wrong. Humans have a small set of tasks we are good at. We just toot our own horn by pretending that it's the important set. You're confused as to what general intelligence is. It's not a philosophical statement about the nature of intelligence, it's just a best-fit line that represents covariance of the results of cognitive tasks, and so far we've found only two tasks that g fails to predict reliably: athletic and 'musical' tasks.
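To make the "best-fit line" part concrete, here's a minimal sketch of what g is as a statistical object. The test names, loadings, and scores below are completely made up for illustration, not real data: simulate a battery of correlated cognitive tests, take their correlation matrix, and the first principal component is your general factor.

```python
# Toy illustration of g as the first factor of a correlated test battery.
# All numbers are invented; this is not a real test battery.
import numpy as np

rng = np.random.default_rng(0)
n_people = 1000

# Hypothetical latent general ability plus test-specific noise.
g_latent = rng.normal(size=n_people)
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])  # assumed loadings per test
tests = ["vocab", "matrices", "digit_span", "arithmetic", "spatial"]

scores = np.column_stack([
    w * g_latent + np.sqrt(1 - w**2) * rng.normal(size=n_people)
    for w in loadings
])

# Correlation matrix of the battery: every test correlates positively
# with every other (the "positive manifold").
R = np.corrcoef(scores, rowvar=False)

# First principal component of R = the best-fit line through that covariance.
eigvals, eigvecs = np.linalg.eigh(R)
first_pc = eigvecs[:, -1]
if first_pc.sum() < 0:          # fix the arbitrary sign
    first_pc = -first_pc

print("variance explained by first factor:",
      round(eigvals[-1] / eigvals.sum(), 2))
for name, loading in zip(tests, first_pc):
    print(f"{name:>10}: loading {loading:+.2f}")
```

The only point of the toy example is that every test loads positively on that first factor, which is the empirical regularity the g literature is built on, not some philosophical claim about the nature of intelligence.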
|
# ¿ Nov 27, 2016 19:55 |
|
twodot posted:How well does g predict the ability to render H264 video or the ability to store a trillion bytes of information in long term memory? Comparing human tasks to computer tasks just doesn't make any sense. If there's some reason we need more than 7 billion humans (we don't), we already know how to build more. How well does g predict visual awareness and memory? Very well.
|
# ¿ Nov 27, 2016 20:20 |
|
Owlofcreamcheese posted:Yeah but we literally define which information processing tasks are "cognitive" and which aren't by what human brains have a natural ability to do. It's not a "general" intelligence in some universal sense. It just doesn't measure things that would be silly to have on the list because no one can do it. It's general insofar as it predicts, and almost certainly affects, proficiency at all or most tasks humans perform.
|
# ¿ Nov 27, 2016 21:21 |
|
Owlofcreamcheese posted:Yeah, of course. We designed it that way. It's the same way we set up all the olympic events so none of them require flight or staying underwater for hours or anything. It's not that humans have a general physical ability, it's that it'd be a waste of time to have olympic events absolutely no one could compete in. Human intelligence isn't actually general, we just don't even bother to do the tests for the stuff that people clearly can't do. That example actually reveals why this isn't a useful way of looking at it: you're saying that only God has traits that could be described as "general".
|
# ¿ Nov 27, 2016 23:04 |
|
Owlofcreamcheese posted:Yeah, but what if I told you your brain was just a bunch of systems that excel at one task each and are terrible at anything else. There are psychologists who believe that, primarily a subset of evolutionary psychologists, but it's not even close to an accepted paradigm. The psychology of intelligence in particular is great evidence against it.
|
# ¿ Nov 27, 2016 23:29 |
|
Cingulate posted:The overlap between g-focused IQ researchers and evolutionary psychologists is pretty substantial though Yeah but most evolutionary psychologists don't foolishly hinge their work on modularity. That's a small but vocal minority, mostly from UC Santa Barbara.
|
# ¿ Nov 27, 2016 23:33 |
|
Owlofcreamcheese posted:To restate the actual point I'm making: In a certain sense there is no such thing as "bear-level strength", but I'm sorry to say that this won't do you any good when you're facing down a bear.
|
# ¿ Nov 28, 2016 00:32 |
|
It seems likely to me that future AI will eventually resemble humans because a) individuality is extremely adaptive which is why we're individuals instead of the sea of LCL fluid from Evangelion and b) humans would prefer it that way.
|
# ¿ Nov 28, 2016 00:46 |
|
Willie Tomg posted:The visual sensory organs of living creatures--of which humans are a middling sample--are extraordinarily acute, and to this day the only metric in which technology has more reasonably approximated them has been resolution, which is a function of display development and not computational development. It has actually actively regressed in its ability to display/capture color information (video has a latitude of roughly 3.5 f-stops in either direction, with black being uncorrectably black and white being uncorrectably white), whereas silver halide records volumes of information through mechanical/chemical processes of which only 20% is actually perceptible by the eye without further processing to bring it into the visible color range. If human visual processing is so deficient, why is it such a bastard of a hurdle when making robots that respond to an array of visual stimuli? Of the five senses you could literally have not chosen one in which humans have more of an advantage. Thanks, I was going to say something like this but I don't understand visual acuity well enough. I think he might mean that it's impossible since we don't have Windows Media Player installed in our brain and a USB slot in our skull, which is of course completely missing the point.
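For a rough sense of scale on the f-stop point in the quote: each stop is a doubling of light, so N stops of latitude works out to a contrast ratio of about 2^N to 1. The ~3.5 stops either side figure is from the quoted post; the eye figures below are commonly cited ballpark values, not measurements from anywhere in this thread.

```python
# Back-of-the-envelope arithmetic for the f-stop comparison above.
# Eye figures are ballpark assumptions, not measured values.
def stops_to_contrast(stops: float) -> float:
    # One stop = a doubling of light, so N stops = 2**N contrast ratio.
    return 2 ** stops

video_latitude_stops = 3.5 * 2   # ~3.5 stops either side of middle grey (per the quote)
eye_static_stops = 14            # rough static range of the eye (assumed ballpark)
eye_adapted_stops = 20           # rough range with adaptation over time (assumed ballpark)

for label, stops in [("video", video_latitude_stops),
                     ("eye (static)", eye_static_stops),
                     ("eye (adapted)", eye_adapted_stops)]:
    print(f"{label:>14}: {stops:4.1f} stops ~ {stops_to_contrast(stops):,.0f}:1")
```

Even with those rough numbers, the gap between roughly 128:1 for video latitude and tens of thousands to one for the eye is the point being made.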
|
# ¿ Nov 28, 2016 01:11 |
|
It's highly likely that the reason human brains which are good at facial recognition also tend to be good at speaking Spanish isn't an accident of evolution but something much more fundamental about intelligence.
|
# ¿ Nov 28, 2016 01:34 |
|
Willie Tomg posted:*lives in a society all day* Good post but demerits for misusing "qualia".
|
# ¿ Nov 28, 2016 01:37 |
|
Cingulate posted:That's not the problem. The problem is his fears rely on linear or better superlinear scaling, and that's simply not what we're currently seeing. Well yeah, we're right on the cusp of the end of development for integrated circuits. You'll have an answer to your question in 3-5 years.
|
# ¿ Nov 28, 2016 03:05 |