|
We've been making massive gains in AI recently, the most noteworthy developments being multilayer networks running on GPUs. Extrapolate linearly from 2010-2015 and we're basically looking at superhuman AI within a decade or three. The question is: does linear hold? So far it looks like the resources required for problem-solving capability scale not linearly but more like exponentially. That leaves open the possibility that we can build a near-human-level AI on a massive supercomputer in 2030, but still won't be able to build a 2x-human-level AI with all the world's resources in 2040, not to speak of Skynet-level, orders-of-magnitude-beyond-any-human systems.

"AI-go-foom" people are sold on the idea that once you have an AI that's around human level, it will trivially be able to improve itself. But then, we already have 7 billion human-level intelligences around, and they haven't really found a way to come up with anything smarter than humans. And we know with computers it's not as simple as adding more Hz to make them faster; a quad core isn't 4x as fast.

On the other hand, khwarezm posted:I'm trying to garner how far along this technology is exactly. Its hard to know if the hype coming from Technological singularity futurist tech fanboys actually has much merit to it. Still though, technology seems to be moving so fast these days.

What's open for debate, or even very much doubtful, is something that improves itself at near-linear, or even superlinear, speed.
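The quad-core point is essentially Amdahl's law: any serial fraction of the work caps the achievable speedup no matter how many cores you add. A minimal sketch:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup from n cores when a fraction p of the
    work is parallelizable and the rest (1 - p) stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallelizable work, 4 cores give well under 4x:
print(round(amdahl_speedup(0.95, 4), 2))   # 3.48
# ...and the speedup never exceeds 1/(1-p), however many cores you add:
print(round(amdahl_speedup(0.95, 10**9), 2))  # ~20.0
```

The second call is the punchline: past a point, "just add more nodes" buys almost nothing, which is exactly why the scalable-architecture question matters.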
|
# ¿ Nov 27, 2016 18:02 |
|
Raspberry Jam It In Me posted:You are doing psychology research, right? I always wondered, is there actually a correlation between intelligence(in humans) and life satisfaction/mental health? Raspberry Jam It In Me posted:I mean, could you hypothetically increase a human's intelligence through something like doubling his working memory and analytical abilities

A different way to look at it: the "shelf" you store memories on is effectively infinite, but the reliability and speed with which you retrieve things, the likelihood of corrupting a memory while retrieving it, or of retrieving something completely irrelevant (possibly a false memory), are the real limits on human memory performance. In contrast, computers have this awesome thing where they simply store everything under a specific handle and keep a perfect database of where everything is.

So what does this mean wrt. your question? Well, two things. First, humans have developed cultural tools to enlarge their memory, and you can see for yourself how completely everything has changed now that we have libraries and Google. If you could extend these cultural tools, you could again expect major changes. Second, it's not quite clear what it would even mean to make human memory larger. In principle, there should be room to at least improve the reliability, speed and precision of working memory processes, and within a certain range we wouldn't necessarily expect major drawbacks. It really depends on how you do it: humans already ARE able to improve the stability of their working memory, but it's a trade-off between 1. how precise your memory is and 2. how well you respond to unexpected external input. With how our memory is set up right now, you can only push one axis so far before the other one suffers. But nothing in principle stands against re-engineering the whole thing to raise the limit.
And within the boundaries we currently have, it should be entirely feasible to place yourself at a different point on that trade-off, one that fits the jungle (with its tigers and stuff) less well, but fits Stanford, tests, long focused discussions and nights in the lab better. See: everyone taking Modafinil, which does basically that. I fear this is the vague kind of non-answer you'll always get from neuroscience people.
|
# ¿ Nov 27, 2016 22:50 |
|
twodot posted:How well does g predict the ability to render H264 video or the ability to store a trillion bytes of information in long term memory? The point about humans is that, like every other higher species, our brains do things that would require a computer to do ridiculous number crunching, in highly specialized areas of perception and movement; and on top of that, we have the very unique ability to use one part of our minds for basically everything ever. It's super poo poo at that whenever you can compare it to a specialized system (i.e., your motor system is a lot better at solving complex nonlinear equations than your conscious mind!), but it's so far the only thing in the world that can do all of these things. All the amazing AI tools we have are super specialized, too. Right now, given sufficient data, I can program on my MacBook a system that recognizes 8x8-pixel digits better than any human being. But nobody in the world can build a robot that reads a city map, takes the subway, walks up a flight of stairs AND explains trivial math problems to a third-grader. All of these in isolation, yes. Together, no.
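The 8x8-pixel-digit claim maps directly onto the digits dataset that ships with scikit-learn; a minimal sketch, assuming scikit-learn is installed (the SVM settings are just the library's standard example values, not tuned):

```python
# Minimal 8x8 digit recognizer, assuming scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1797 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001)  # an RBF-kernel SVM does well on this dataset
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")  # typically ~0.99
```

Whether that beats "any human being" on these heavily downsampled images is the poster's claim, not something this sketch proves, but near-perfect accuracy on held-out data is routine here.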
|
# ¿ Nov 27, 2016 22:57 |
|
blowfish posted:What do you mean by "massive gains"? How do you quantify how close an AI is to becoming superhuman (as you pointed out, giving the AI an arbitrary number of extra processors doesn't make it superhuman by itself)? How do you define superhuman? We are superhuman, if you will. AIs are, as has been repeatedly pointed out in here, in some areas already superhuman, but nobody can really see anything scalable that is as good at general, non-specialized cognition as humans are. Edit: argh, I thought you were responding to a different post of mine. By massive gains I basically mean that within a few years, beating humans at a bunch of well-defined tasks has gone from a pipe dream to a Google engineer's weekend job. I think a big symbol is the 2012 ILSVRC win by a deep conv net. Since then, everything has been deep learning everywhere, and now nobody is really surprised anymore by AlphaGo and self-driving cars. And as I said, the interesting thing would be something that is as general and as capable as humans on a scalable architecture. If you have a massive supercomputer that is just as smart as a human being and whose performance grows logarithmically with added nodes, that doesn't really change much, but if you have something you can easily grow linearly, then things will very rapidly begin to change. Now look at exponential growth and we have guaranteed Skynet by Tuesday, which is what I think a bunch of AI fanatics are talking about, but how realistic is superlinear growth given the limits of physics (which also matter for information processing)? Cingulate fucked around with this message at 23:07 on Nov 27, 2016 |
# ¿ Nov 27, 2016 22:59 |
|
Owlofcreamcheese posted:That seems extremely reasonable. No one would expect there to be a physical device that can do every physical task. It seems perfectly fine to claim different designs for information processing are better or worse at different things.And no design is just usable for everything.
|
# ¿ Nov 27, 2016 23:20 |
|
Owlofcreamcheese posted:Yeah, but what if I told you your brain was just a bunch of systems that excel at one task each and are terrible at anything else.
|
# ¿ Nov 27, 2016 23:29 |
|
Thug Lessons posted:There are psychologists who believe that, primarily a subset of evolutionary psychologists, but it's certainly not an accepted paradigm. The psychology of intelligence in particular is great evidence against it.
|
# ¿ Nov 27, 2016 23:30 |
|
Owlofcreamcheese posted:To restate the actual point I'm making:
|
# ¿ Nov 28, 2016 00:40 |
|
Willie Tomg posted:The visual sensory organs of living creatures--of which humans are a middling sample--are extraordinarily acute and to this day the only metric which technology has more reasonably approximated them has been in resolution which is a function of display development and not computational development. It's actually actively regressed in terms of ability to display/capture color information (video has a latitude of roughly 3.5 f-stops in either direction with black being incorrectably black and white being incorrectably white) with silver halide recording volumes of information through mechanical/chemical processes of which only 20% is actually perceptible by the eye without further processes to bring them into the visible color range. If human visual processing is so deficient, why is it such a bastard of a hurdle when making robots that respond to an array of visual stimuli? Of the five senses, you could literally not have chosen one in which humans have more of an advantage. Though of course, on hyperspecialized tasks (like rating pixelated images on a monitor), AIs match human beings. (That's not to dispute the actual point. No machine matches the occipital lobe on teraflops/watt, not even close.) Reveilled posted:If you're interested in this sort of stuff, I'd highly recommend the book Superintelligence by Nick Bostrom. It makes a pretty persuasive case that unless we're super careful about how we carry out AI research, we'll probably only ever make one human-level intelligence (before it kills us). It's really dry at the start as it spends multiple chapters laying out the case of why human-level AI is likely to be developed at some point, but very interesting once it starts discussing the various scenarios that could play out depending on when, where, and how the first AIs are developed, and how different security precautions would or would not work. Cingulate fucked around with this message at 01:11 on Nov 28, 2016 |
# ¿ Nov 28, 2016 01:08 |
|
Reveilled posted:There's discussion of the process of developing a superintelligence, and that right now it's impossible to tell how close a human-level AI might be, and what obstacles might exist that could stall things indefinitely at a sub-human level. In terms of the AI improving itself, he has discussion of different kinds of "take-off" which depend on how easily a superintelligent AI can scale itself up, but makes the point that an AI does not necessarily need to be astronomically more intelligent than humans to pose a threat, depending on what other skills it possesses. Much of the book does deal with the fast takeoff scenario, but that's understandably because the book's central thesis is "prepare for the worst".
|
# ¿ Nov 28, 2016 01:35 |
|
Thug Lessons posted:It's highly likely that the reason human brains which are good at facial recognition also tend to be good at speaking Spanish isn't an accident of evolution but something much more fundamental about intelligence. The interesting question is still about the stuff computers are really bad at.
|
# ¿ Nov 28, 2016 01:38 |
|
Reveilled posted:I'd say it's not really overly relevant to his point. Humans don't come anywhere near those limits, so you don't need to be able to process information faster than the speed of light to process information faster than the speed of human. Owlofcreamcheese posted:Your response is a good example. You do not like my 'flawed' reasoning so you respond to it by describing your emotional state and by doing some vague threat that I need to stop having that reasoning because it should effect my emotional state in that negative way. That is not a thing that a computer is going to level up and then just download from somewhere. That is a wicked human response that a computer could not have without a bunch of really really weird programming that is unlikely to be feasible and probably not even desirable.
|
# ¿ Nov 28, 2016 03:00 |
|
Thug Lessons posted:Well yeah, we're right on the cusp of the end of development for integrated circuits. You'll have an answer to your question in 3-5 years.
|
# ¿ Nov 28, 2016 04:10 |
|
Dolash posted:The answer to the Chinese Room Puzzle and the Turing Test stuff, to me, is that it's irrelevant if what's going on is "true" intelligence or if it has the "spark" of life or is "self-aware" or "human-level" or any of the other terms we try to use for the ineffable difference between us and the toaster.
|
# ¿ Nov 28, 2016 11:41 |
|
Dolash posted:Qualia isn't real / isn't important / isn't provable in anyone outside yourself / can be done without for the sake of social graces if the inert thing in front of you insists on its autonomy. Take your pick. KOTEX GOD OF BLOOD posted:You are making several of the bad assumptions here which Dreyfus describes, or at minimum the biological and psychological assumptions. http://faculty.ucr.edu/~eschwitz/SchwitzAbs/USAconscious.htm
|
# ¿ Nov 28, 2016 21:02 |
|
Dolash posted:If the question is "how do we get human-level qualia for an artificial intelligence?"
|
# ¿ Nov 28, 2016 23:39 |
|
Blue Star posted:I dont think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000, there's way less progress. Yeah computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And...that's about it. Next: an election run via twitter, self-driving cars, computers that talk to you, the end of coal in the West, ...
|
# ¿ Nov 29, 2016 00:14 |
|
TheNakedFantastic posted:In general the most profound changes the last couple of decades revolve around the internet and less tangible material shifts. We're living through one of the largest social and economic upheavals in human history but these changes are more subtle than a new electronic media player you can hold in your hand. Condiv posted:a neural net which can only reflect the biases of its creators.
|
# ¿ Nov 29, 2016 13:31 |
|
TheNakedFantastic posted:Well that's true, but people are using those computers because of the internet.
|
# ¿ Nov 29, 2016 14:15 |
|
The art discussion is really unproductive because it will have to be about what art, particularly meaningful art, is. Andy Warhol made soup cans art, and some people have a really hard time keeping the difference between craftsmanship and art clear. This isn't gonna lead anywhere.
|
# ¿ Nov 29, 2016 16:04 |
|
Owlofcreamcheese posted:I can neither translate or be programmed in a natural language
|
# ¿ Nov 29, 2016 19:30 |
|
A Wizard of Goatse posted:A million monkeys banging on a million typewriters for a million years might eventually produce the complete works of Shakespeare, but it's gonna take a lot more man-hours to find the Shakespeare in all the gibberish than it did for Shakespeare to just write it. Oh, but it's so easy. For example: two and two are five. Hey, I just controlled you into thinking something akin to "that's wrong". It's pretty hard to get an AI to be controlled by words like that! RedFlag posted:Ladies and Gentlemen, I give you the 2016 Presidential election.
|
# ¿ Nov 29, 2016 22:46 |
|
Can there be a meaningless screwdriver?
|
# ¿ Nov 30, 2016 20:34 |
|
BabyFur Denny posted:In the age of computers innovation and tech has grown exponentially. Do you have one? Do you have an idea of what's at stake?
|
# ¿ Dec 2, 2016 17:03 |
|
BabyFur Denny posted:There are still many developments in this area where we're just at the beginning that are very promising. Deep Learning has just been open sourced, running your algorithm on GPUs is happening BabyFur Denny posted:As soon as you can parallelise a problem, it's basically solved. BabyFur Denny posted:We increased the computing resources of our cluster by 10x over the past two years, could easily do another 10x (for a total 100-fold increase in performance) by just throwing a lot of money at it, and only if we go for another 10x after that would we have to invest some effort into making it run. Xae posted:MR was old tech when Google rebranded DSNA as MapReduce. Edit: GPU! Cingulate fucked around with this message at 14:28 on Dec 3, 2016 |
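Since MapReduce came up: the pattern itself fits in a few lines. A toy, in-process sketch (function names are mine, not Google's API) that counts words across "documents", which is the classic example of an embarrassingly parallelisable job:

```python
from collections import defaultdict
from itertools import chain

def map_phase(docs):
    """Map: emit a (word, 1) pair for every word in every document.
    In a real cluster each document would go to a different worker."""
    return chain.from_iterable(((w, 1) for w in doc.split()) for doc in docs)

def reduce_phase(pairs):
    """Reduce: sum the counts per key. In a real cluster, pairs with
    the same key are shuffled to the same worker before reducing."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["deep learning everywhere", "deep nets on GPUs"]
print(reduce_phase(map_phase(docs)))  # {'deep': 2, 'learning': 1, ...}
```

The catch, per the thread: this scales linearly only as long as the shuffle and the serial coordination stay cheap, which is exactly the "some effort into making it run" part of the quoted post.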
# ¿ Dec 3, 2016 00:21 |
|
Xae posted:They are still running into the same barriers Intel is. They just started further back so they are hitting them later. Subjunctive posted:Inference happens on CPUs, but the learning is almost all GPUs these days. There are various other specialization approaches as well.
|
# ¿ Dec 3, 2016 14:28 |
|
Subjunctive posted:Do you not think that, f.e., LSTMs or memnets represent meaningful advances?

- training data availability
- GPUs
- people actually putting it all together

That, however, is a massive practical change. It's not just hype. Sure, it's overhyped, but it's also powerful. And Memnet is in a totally different category from LSTMs.
|
# ¿ Dec 23, 2016 01:03 |
|
The interesting point is that the obvious mathematical innovation - LSTMs - predates the actual impact on the field of AI - LSTM networks revolutionizing applied ML - by two decades. Can't really say anything about "memnets" because it's much too early to tell. Maybe they'll be a big thing in 10 years? Maybe not.
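For reference, the core of that 1990s innovation is just a handful of gate equations. A minimal numpy sketch of one step of a standard LSTM cell (the weights here are random placeholders, purely for illustration, not a trained network):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell. W, U, b stack the
    input/forget/output/candidate parameters as 4 blocks of size H."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # (4H,) pre-activations
    i = sigmoid(z[0:H])             # input gate
    f = sigmoid(z[H:2 * H])         # forget gate
    o = sigmoid(z[2 * H:3 * H])     # output gate
    g = np.tanh(z[3 * H:4 * H])     # candidate cell update
    c = f * c_prev + i * g          # cell state: the "long short-term memory"
    h = o * np.tanh(c)              # hidden state / output
    return h, c

# Hypothetical sizes: input dim 3, hidden dim 2; random weights for illustration.
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(8, 3)), rng.normal(size=(8, 2)), np.zeros(8)
h, c = lstm_step(rng.normal(size=3), np.zeros(2), np.zeros(2), W, U, b)
print(h.shape, c.shape)  # (2,) (2,)
```

The math was all there in the 90s; what was missing for two decades was the data, the GPUs to run it at scale, and the engineering to put it together.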
|
# ¿ Dec 23, 2016 14:11 |
|
mdemone posted:Blue Brain is already having emergent 40-60 Hz synchronization Pochoclo posted:Plenty of popular science magazines promised flying cars early in the 1990s. I vividly remember an article about Michael Jackson preordering one, even.
|
# ¿ Dec 23, 2016 22:39 |
|
I'm a bit confused about what the significance of flying car stories from the 80s is right now.
|
# ¿ Dec 23, 2016 23:19 |
|
Elias_Maluco posted:A whole lot of movies, cartoons, sci-fi books and comic books did pictured flying cars during the 80s and 90s
|
# ¿ Dec 24, 2016 12:50 |
|
A Wizard of Goatse posted:there is exactly the same basis for the guarantee we will have flying cars any day now as the guarantee we'll have sapient computers any day now, they come from the same place, and the exact same brainless handwaving about "well things were different in 1800 than they are now therefore who knows what the future will bring??? probably robot girlfriends" works for both equally. They're both asinine, OOCC just wants a robot girlfriend enough more than he wants a sweet hovercar to ignore how asinine the first one is.

- in the 90s, scientists said there'd be no ice left in the Arctic and Antarctic by 2015
- that's clearly not happened
- global warming is a hoax invented by the Chinese to destroy US manufacturing

See my point?
|
# ¿ Dec 24, 2016 22:20 |
|
A Wizard of Goatse posted:it's almost like the rightness of what "people" or "scientists" or "they" predicted X years ago about the modern day says nothing at all to inform what's going to happen in the future, and is meaningless noise for idiots.
|
# ¿ Dec 24, 2016 22:24 |
|
KOTEX GOD OF BLOOD posted:On the other hand, Hubert Dreyfus, a philosopher with no training in technology whatsoever who probably has trouble setting up a projector for a class session, said that all the "guys who build the robot people" promising strong AI in the near term were full of poo poo back in the 70s and turned out to be right. So if anything it's more prudent to take things AI researchers say with a heaping mound of salt given their utter inability to deliver on any of the huge promises they have been making for decades. And what is today's AI like? Well, it's not Good Old Fashioned AI. It's neural nets. So the Dreyfus Argument can be used in both ways actually. Pochoclo posted:Going by your posts, I say we close the thread, it's painfully clear that sentient AI is a reality KOTEX GOD OF BLOOD posted:Or useful. I mean, that's the main thing with flying cars - based on movies people expect a regular car that can take off and hover thanks to some magic inertialess drive. The "flying cars" we have now are more like planes with folding wings that can be driven up to 60mph on little taxiing wheels or w/e.
|
# ¿ Dec 25, 2016 03:53 |
|
KOTEX GOD OF BLOOD posted:OK, but neural nets are not a good enough reason to be any less skeptical about AI given its past KOTEX GOD OF BLOOD posted:without any basis in anything other than masturbatory sci-fi dreams.
|
# ¿ Dec 25, 2016 04:38 |
|
CommieGIR posted:You figure the Israelis could come up with something better than Microsoft Tay "If you force us yet again to descend from the face of the Earth to the depths of the Earth — let the Earth roll toward the Nothingness." :hint: :hint:
|
# ¿ Dec 27, 2016 16:43 |
|
Thalantos posted:When it comes to AI, won't we reach a point where it all becomes philosophical anyway?
|
# ¿ Jan 10, 2017 16:23 |
|
Raspberry Jam It In Me posted:Also, you can now slow down or even reverse your Alzheimer's in parts of your brain at home, with a strobe light
|
# ¿ Jan 10, 2017 17:55 |
|
Owlofcreamcheese posted:Yeah, if you have genetically engineered brain cells that are designed to fire when exposed to light. Which you do not. Also, optogenetics will come to a brain near you very soon, or at least so everyone is hoping ...
|
# ¿ Jan 10, 2017 18:42 |
|
Thalantos posted:It seems to me we're really kinda close to AIs passing the turing test
|
# ¿ Jan 10, 2017 19:06 |