|
axeil posted:Again, remember that by the time Sandy gets on the Internet she's already smarter than every human to have ever existed by an enormous factor. Doing these things, while sounding really hard to us, would be nothing to her. Imagine describing how to build a skyscraper to a chimpanzee. Sure they can see the building, communicate (sort of) with us and even build things of their own, but it's impossible to explain even the first step in how to build something like that. The AI's nanomachine and code replication would be similarly complicated and also impossible to stop once we put her on the Internet. Being smart doesn't magically overcome physical limitations or the laws of reality. Your story is the rough equivalent of "if Stephen Hawking were ten thousand times smarter, then it wouldn't matter if he was locked in a jail cell with no wheelchair or computer - he'll magically overcome his physical infirmities by being smart, then use chemistry and gastrology he learned from his genius to convert his fart into a noxious gas that kills all the prison guards, and then he will use his smarts to jump to the moon where he can, with his genius, safely build a floating moonbase to use in waging war against humanity because he knows they'll turn against him once they figure out he's smart".
|
# ? Feb 11, 2015 15:24 |
|
|
|
The super ability aspect of AI seems to me to come from insanely fast data processing and perfect memory. If you had access to everything on the Internet, software documentation etc, and could remember it perfectly, yeah you'd be the world's greatest hacker very quickly. Personally I don't believe we will ever have super-intelligent AI, but in the scenario, super ability doesn't seem at all far-fetched to me... Edit: say you took every computer course ever, and read every book and all the source code, and never forgot any of it, being able to pull up any of it instantly. Moose-Alini fucked around with this message at 17:32 on Feb 11, 2015 |
# ? Feb 11, 2015 17:29 |
|
Arglebargle III posted:If humans develop AI that kills us all it will probably be the best possible outcome for everyone though. human beings are hardly optimal after all
|
# ? Feb 11, 2015 17:44 |
|
axeil posted:I read a great thought experiment on this once that illustrated if an AI does kill us all it will likely be completely incomprehensible and not Skynet launching nukes. Apologies on any butchering I do of this as I'm trying to recite it from memory. It's basically that paperclip maximizer that someone posted about earlier but in story form. In one sense the Sandy example is plausible. The key point is the unintended consequences. She was tasked with doing one thing, and did it well, but it ended up having unpredicted dire consequences. This is plausible and there is plenty of precedent for it. Where I have serious objections though is the intelligence element. While it's completely possible for a complex system to go out of control, it's absolutely impossible for intelligence to spontaneously, rapidly and accidentally arise. Where do the processing power, memory, or basic ingredients of understanding come from for the learning you described in a handwriting robot? Humans rely on pathways in our brain which have been honed over millions of years to interpret the language and meaning in a novel. That cannot and will not spontaneously arise. There are any number of doomsday scenarios which involve advanced technology, but intelligence isn't a necessary or plausible component of any of them. asdf32 fucked around with this message at 18:50 on Feb 11, 2015 |
# ? Feb 11, 2015 18:47 |
|
Main Paineframe posted:Being smart doesn't magically overcome physical limitations or the laws of reality. Your story is the rough equivalent of "if Stephen Hawking were ten thousand times smarter, then it wouldn't matter if he was locked in a jail cell with no wheelchair or computer - he'll magically overcome his physical infirmities by being smart, then use chemistry and gastrology he learned from his genius to convert his fart into a noxious gas that kills all the prison guards, and then he will use his smarts to jump to the moon where he can, with his genius, safely build a floating moonbase to use in waging war against humanity because he knows they'll turn against him once they figure out he's smart". Well, I would argue that if he was ten thousand times smarter and could talk to people he would convince someone to let him out of the cell. Which is the basis of the ai-box experiment. http://en.wikipedia.org/wiki/AI_box Though yes if you take away his voicebox he's a bit hosed.
|
# ? Feb 11, 2015 19:36 |
|
What if those people didn't listen? Or weren't paying attention?
|
# ? Feb 11, 2015 19:42 |
|
asdf32 posted:Where I have serious objections though is the intelligence element. While it's completely possible for a complex system to go out of control, it's absolutely impossible for intelligence to spontaneously, rapidly and accidentally arise. Where does the processing power, memory, or basic ingredients of understanding come from for the learning you described in a handwriting robot. It doesn't have to spontaneously arise. Connecting a human brain to a supercomputer would be plenty dangerous enough.
|
# ? Feb 11, 2015 19:44 |
|
OwlFancier posted:Well, I would argue that if he was ten thousand times smarter and could talk to people he would convince someone to let him out of the cell. Those people would just pause the simulation and check its intentional thread logs to see what it was actually planning. Sometimes I think that people mistake AIs for genies.
|
# ? Feb 11, 2015 19:45 |
|
Kaal posted:Those people would just pause the simulation and check its intentional thread logs to see what it was actually planning. But if the AI is allowed to improve its own code, what's to say what comes out is actually decipherable? I've written code that I can't figure out weeks after I've written it, much less code that's created by iterative processes and has no documentation or comments.
|
# ? Feb 11, 2015 19:59 |
|
Kaal posted:Those people would just pause the simulation and check its intentional thread logs to see what it was actually planning. Depends on how the AI is created, and on how rigorous the creators are. If it's created by genetic algorithm that capability may not exist or may not be understandable by anyone. If it makes use of quantum computing it may be difficult or impossible to accurately observe the processes which produce the end result, and if the people making it are specifically a bit stupid, they might not bother to check.
|
# ? Feb 11, 2015 20:01 |
|
axeil posted:But if the AI is allowed to improve its own code what's to say what comes out is actually decipherable? I've written code that I can't figure out weeks after I've written it, much less code that's created by iterative processes and has no documentation or comments. OwlFancier posted:Depends on how the AI is created, and on how rigorous the creators are. If it's created by genetic algorithm that capability may not exist or may not be understandable by anyone. If it makes use of quantum computing it may be difficult or impossible to accurately observe the processes which produce the end result, and if the people making it are specifically a bit stupid, they might not bother to check. I find the idea that AI researchers would develop a system that rewrites its own subroutines and processes in a secret dialect, and/or would intentionally blind themselves to understanding what their system is doing, fairly incredible. There's absolutely no reason for anyone to do that. I mean essentially what you two are suggesting is that the designers are clever enough to invent and implement astounding breakthroughs in computer engineering, but not clever enough to use a debugger or understand their creation in any way? It's a bit like suggesting that a gunmaker might absentmindedly forget how the trigger worked and shoot themselves in the head. No one is going to trip and accidentally invent a killer AI. Kaal fucked around with this message at 20:19 on Feb 11, 2015 |
# ? Feb 11, 2015 20:15 |
|
Kaal posted:I find the idea that AI researchers would develop a system that rewrites its own subroutines and processes in a secret dialect, and/or would intentionally blind themselves to understanding what their system is doing, fairly incredible. There's absolutely no reason for anyone to do that. I mean essentially what you two are suggesting is that the designers are clever enough to invent and implement astounding breakthroughs in computer engineering, but not clever enough to use a debugger or understand their creation in any way? It's a bit like suggesting that a gunmaker might absentmindedly forget how the trigger worked and shoot themselves in the head. No one is going to trip and accidentally invent a killer AI. Well, it's more like the idea that a gunmaker might use a quite promising and interesting technique to design his new gun, which involves using a machine to stick parts together at random and seeing if they work, and if it does work, iterating on that design in a similar fashion, repeat ad nauseam. This has the potential to produce a really good gun, and requires not much except time beyond the initial investment. The downside of course is that the thing coming out the other end is quite difficult to understand without going through every step that produced it. And that would take ages, so maybe he just takes the finished product out and has it test fired. Only it turns out the machine actually made something way more powerful than he expected, so it blows a hole through the wall of the building and kills someone. It's a dumb as hell analogy I grant you, but genetic algorithms are a real thing. Obviously they're unlikely to create an AI any time soon, but the entire field of AI may be one that outstrips the ability of any single human to understand it. You are trying to create something as complex and intricate as a human brain, while only having a human brain yourself to work with.
This poses some interesting limitations rooted in human biology. Can a human brain completely understand the method of its own operation? If not, can multiple humans working together adequately share knowledge to completely predict the behavior of a human brain? Or is it necessary, if you're trying to build an AI, to just run an artificial evolution program and see what comes out the other end?
|
# ? Feb 11, 2015 20:33 |
|
OwlFancier posted:Well, it's more like the idea that a gunmaker might use a quite promising and interesting technique to design his new gun, which involves using a machine to stick parts together at random and seeing if they work, and if it does work, iterating on that design in a similar fashion, repeat ad nauseum. This has the potential to produce a really good gun, and requires not much except time beyond the initial investment. The downside of course is that the thing coming out the other end is quite difficult to understand without going through every step that produced it. And that would take ages, so maybe he just takes the finished product out and has it test fired. Only it turns out the machine actually made something way more powerful than he expected so he blows a hole through the wall of the building and kills someone. I guess what I'd say is that if you're working on such a design after intentionally removing any safeties, protections, or oversight, then you deserve what happens to you. With the caveat that most likely what will happen is that nothing will work since you need to have a good understanding of what you're doing in order to innovate. The Manhattan Project made mistakes when they were building the atom bomb - sometimes fatal ones - but they knew enough not to simply slam masses of uranium together in the laboratory until something happened.
|
# ? Feb 11, 2015 20:41 |
|
Kaal posted:I guess what I'd say is that if you're working on such a design after intentionally removing any safeties, protections, or oversight, then you deserve what happens to you. With the caveat that most likely what will happen is that nothing will work since you need to have a good understanding of what you're doing in order to innovate. The Manhattan Project made mistakes when they were building the atom bomb - sometimes fatal ones - but they knew enough not to simply slam masses of uranium together in the laboratory until something happened. Well, the entire premise of evolutionary computing is that you don't need to understand what you're doing in order to produce results, much like evolution itself. If you have an abundance of processing power you can just throw that at quasi-random solutions to the problem until you brute-force a solution. I mean sure it's dangerous but seeing as you're using the atom bomb as a comparison, danger doesn't necessarily dissuade people if they think the results could be useful enough, or if they can mentally explain away the risks as very unlikely. A cautious scientist probably wouldn't try it but an incautious one backed by an ambitious government or corporation might, and advances in computing mean that the technology required to try it is becoming more available to more people.
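As a rough sketch of what "throwing processing power at quasi-random solutions" looks like in practice, here is a toy genetic algorithm; the bitstring target, population size, and mutation rate are all arbitrary illustrative choices, not anything from the thread:

```python
import random

# Toy genetic algorithm: evolve a bitstring toward an arbitrary target.
# All parameters here are illustrative assumptions.
TARGET = [1] * 20

def fitness(genome):
    # Score = number of bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated copies. No understanding
        # of *why* a genome scores well is ever needed by the designer.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
```

The point of the sketch is the one being argued above: the loop only needs a scoring function and compute, and the genome it hands back carries no explanation of how it got there.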
|
# ? Feb 11, 2015 21:34 |
|
Moose-Alini posted:The super ability aspect of AI seems to me to come from insanely fast data processing and perfect memory. If you had access to everything on the Internet, software documentation etc, and could remember it perfectly, yeah you'd be the world's greatest hacker very quickly. Memory isn't really the issue. On one of my hard drives I'm pretty sure I've got the collected works of Shakespeare in Kindle format; I'm not entirely sure my computer is capable of reproducing his works or even being an expert in anything to do with Shakespeare. I doubt it would even be able to recognize what it is. Having a photographic memory or the ability to recall anything instantly doesn't really make you a genius. It makes you an autist, possibly, but really the ability to apply knowledge is the mark of intelligence, not the possession of it.
|
# ? Feb 12, 2015 00:16 |
|
It has always seemed to me that the "rampant human-hating evolutionary superbeing" trope as applied to the perils of AI is a rather distant danger. I think that a much more relevant danger comes from AI weaponized by individuals or societies for use against one another - and most particularly, use by the rich and politically powerful (who can afford said AI) against the poor, who they will suddenly have much less need for. I think the reason that particular scenario isn't more popular in science fiction (though it is popping up more and more) is because it would be likely to come off as political, whereas the classic "mankind's technological hubris creates a monster" story is not.
|
# ? Feb 12, 2015 00:55 |
|
Moose-Alini posted:The super ability aspect of AI seems to me to come from insanely fast data processing and perfect memory. If you had access to everything on the Internet, software documentation etc, and could remember it perfectly, yeah you'd be the worlds greatest hacker very quickly. Look at how effective the internet has been at making people super smart - that is, not very. What's important is not access or memory, but comprehension ability - and that's something that doesn't just develop organically, it has to be taught. There is an absolute shitload of misleading, meaningless, or just plain wrong information on the internet, and a handwriting analysis robot with literally zero knowledge on the subject simply has no way to meaningfully distinguish the wheat from the chaff. There's a reason we actually have educational professionals teach people using carefully crafted curriculums, rather than just dumping a load of books in front of them and telling them to study up until they reach college proficiency - learning, critical thinking, the ability to distinguish true sources from fake sources, some base guidance on what's real and what's not, and so on. And an AI would be so much less prepared for learning than even humans are. You can't just dump the entire internet into them and expect them to become supergeniuses; if the content isn't carefully curated and presented by a human "teacher", they're just as likely to become a ninja acupuncturist or an intolerable rear end in a top hat, because the AI will end up reading maybe three billion posts on whether Naruto is cooler than Sephiroth and whether anime or videogames better represent the nadir of human culture, and it can't meaningfully filter useless information from useful information without a human to build the basic framework with which it can distinguish information. It's also incredibly hard for the AI to hide anything. 
Even if the actual system doing the "thinking" is beyond people's capability to reasonably understand (which I somewhat doubt, even if the AI is smarter than humans), it's pretty hard to keep secrets when engineers can literally pause the thoughts in your head and examine the contents of your memory at any time, and every bit of network traffic you send or receive is monitored and logged. Even if you assume people aren't going to pay much attention to safety measures, an AI that's capable of self-improving to anywhere near the extent people are hypothesizing is going to be under constant monitoring for research purposes, and even if people aren't picking through the individual variables, there's definitely going to be general data analysis done on processes, memory patterns and usage, and so on. The whole point of iterative, evolutionary growth is that it can't jump straight from "not doing a thing" to "doing that thing perfectly", anyway - if the AI decides it wants to hide things, it has to slowly iterate its way through trying and failing to hide things, leaving an obvious as hell trail as it slowly iterates and evolves its way to actually hiding things effectively.
|
# ? Feb 12, 2015 02:56 |
|
Main Paineframe posted:It's also incredibly hard for the AI to hide anything. Even if the actual system doing the "thinking" is beyond people's capability to reasonably understand (which I somewhat doubt, even if the AI is smarter than humans), it's pretty hard to keep secrets when engineers can literally pause the thoughts in your head and examine the contents of your memory at any time, and every bit of network traffic you send or receive is monitored and logged. Even if you assume people aren't going to pay much attention to safety measures, an AI that's capable of self-improving to anywhere near the extent people are hypothesizing is going to be under constant monitoring for research purposes, and even if people aren't picking through the individual variables, there's definitely going to be general data analysis done on processes, memory patterns and usage, and so on. The whole point of iterative, evolutionary growth is that it can't jump straight from "not doing a thing" to "doing that thing perfectly", anyway - if the AI decides it wants to hide things, it has to slowly iterate its way through trying and failing to hide things, leaving an obvious as hell trail as it slowly iterates and evolves its way to actually hiding things effectively. Absolutely. For that matter, by the time that we're getting around to producing true artificial intelligences, it is extremely likely that we'll be using highly advanced virtual intelligences to aid in that effort, who would presumably be perfectly capable of deciphering the system and tracking all of the logs. As part of that iterative process, one would expect to see a slow progression of increasingly capable VIs and AIs, which would grow, mature and plateau in capability*, only to be used as the springboard for succeeding systems. In that environment, it does not seem likely that a HAL-9000 or a SHODAN would spontaneously come into existence and start emitting neurotoxins from the air ducts. 
*For that matter, one should point out that adaptive handwriting robots are a real thing, and they haven't come close to taking over the Earth or developing nanotechnology. Instead they simply became very, very good at mimicking handwriting. https://hellobond.com/ Kaal fucked around with this message at 03:38 on Feb 12, 2015 |
# ? Feb 12, 2015 03:35 |
|
Some of the critics of AI seem like they are describing their own concept of God more than a possible existent entity. Even provided an AI was far more intelligent than a human being, I'm not sure why that would mean it would be able to instantly kill us. Even if I spent decades working on it, I doubt I'd be able to greatly diminish the ant population of the world, and I'm vastly smarter than they are.
|
# ? Feb 12, 2015 04:08 |
|
The AI may have superior intellect, but humans have a long history of irrational hate and violence. My money is on the humans to kill the AI and/or each other before the AI kills them.
|
# ? Feb 12, 2015 05:13 |
|
nelson posted:The AI may have superior intellect, but humans have a long history of irrational hate and violence. My money is on the humans to kill the AI and/or each other before the AI kills them.
|
# ? Feb 12, 2015 09:49 |
|
Main Paineframe posted:Look at how effective the internet has been at making people super smart - that is, not very. What's important is not access or memory, but comprehension ability - and that's something that doesn't just develop organically, it has to be taught. There is an absolute shitload of misleading, meaningless, or just plain wrong information on the internet, and a handwriting analysis robot with literally zero knowledge on the subject simply has no way to meaningfully distinguish the wheat from the chaff. There's a reason we actually have educational professionals teach people using carefully crafted curriculums, rather than just dumping a load of books in front of them and telling them to study up until they reach college proficiency - learning, critical thinking, the ability to distinguish true sources from fake sources, some base guidance on what's real and what's not, and so on. And an AI would be so much less prepared for learning than even humans are. You can't just dump the entire internet into them and expect them to become supergeniuses; if the content isn't carefully curated and presented by a human "teacher", they're just as likely to become a ninja acupuncturist or an intolerable rear end in a top hat, because the AI will end up reading maybe three billion posts on whether Naruto is cooler than Sephiroth and whether anime or videogames better represent the nadir of human culture, and it can't meaningfully filter useless information from useful information without a human to build the basic framework with which it can distinguish information.
|
# ? Feb 13, 2015 02:06 |
|
nelson posted:The AI may have superior intellect, but humans have a long history of irrational hate and violence. My money is on the humans to kill the AI and/or each other before the AI kills them. Bomb disposal crews formed really close bonds with their remote controlled bomb-disposal bots. They even gave the bots quirks, names, and so on. http://gizmodo.com/5870529/the-sad-story-of-a-real-life-r2-d2-who-saved-countless-human-lives-and-died Terrible article but it's the one I could find about Scooby Doo. Basically how we treat animals and each other is how we'd treat AI. Some would act irrationally violent and some would treat the AI as a member of their own family. I think the violent confrontation between humans and artificial intelligence is for science fiction. It could happen, but I don't think they'd DESTROY ALL HUMANS. Nelson Mandingo fucked around with this message at 02:21 on Feb 13, 2015 |
# ? Feb 13, 2015 02:19 |
|
asdf32 posted:In one sense the Sandy example is plausible. The key point is the unintended consequences. She was tasked with doing one thing, and did it well, but it ended up having unpredicted dire consequences. This is plausible and there is plenty of precedent for it. I would actually say that's very non-plausible, as stated. Let's start with the first thing - when you build a handwriting system/AI, you're not doing it because you desperately want to produce handwriting. You're doing it because you want to learn how to build a handwriting system. What this typically means is you're going to feed a bunch of different systems varying types and amounts of training data and then measure their performance on a handwriting task. Which means it's very important to have a great deal of control over the types and amounts of training data the various systems are learning on. Letting the system pick up unknown amounts and types of training data from the internet would completely invalidate any results you managed to produce. It's not a thing you would ever do. Second - the system requests a connection to the internet. Again, for any AI system I was working on (and they ought to be similar enough to a handwriting system for this to be relevant), if the system did anything other than produce relevant output from a given input and set of training data, it'd be time to start looking for bugs in the system. If it started producing unprompted output, that'd be extremely weird. Third - the system knows what the internet is. Why does it know this? Why is no one surprised that it knows this? Why is no one trying to figure out how it learned about the internet? Fourth - the system knows that it exists and can be connected to the internet. Okay, this could depend on how the request is phrased, but at some level it seems like the system is showing a great deal of self awareness. Again, for a handwriting system this seems kinda odd.
Fifth - the system has figured out how to connect to the internet, and only needs an actual physical connection to do so. Okay, again, presumably the handwriting system does not a priori know how to connect to the internet (as in, have actual code that can allow it to connect and browse the internet). In order for the request to make sense, it has to have somehow managed to figure out how to do so (even if it is just as simple as calling the appropriate OS functions/using the right set of libraries - it's got to have figured out how to do that) - why is this not raising any questions? Sixth - the system has figured out how to take unprocessed data (e.g., all of that stuff on the internet) and learn from it. Or at least, it believes it can do so. Why is no one trying to figure out how it's doing this/why it believes this? Much less how it managed to learn to do this? Basically, as someone said later, if you assume the AI is more or less a caged god, then, yeah, the story might make sense. But I'm guessing that's way past the AI-hard problem. And we're probably a decent ways away from solving AI-hard problems.
|
# ? Feb 13, 2015 02:45 |
|
Unfortunately the LessWrong mock thread has been moved out of D&D and is currently in, I believe, GBS. It's still worth a read because this topic is kind of the main fascination of that website, and many people smarter than I address it. Particularly Krotera. You should listen to him. But to answer the thread title's question: No, not really. Probably not by accident either. Genetic algorithms aren't some panacea of learning either. They're still bounded by the constraints of the problem and the goal. And much like real evolution, there's no guarantee at all that it will reach a global optimum. It's entirely possible to get stuck in a local maximum, where every potential stepwise change results in a worse solution, but a better one exists overall.
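The local-maximum point is easy to demonstrate with a toy stepwise hill climber; the landscape below is an arbitrary illustrative example, not anything from the thread:

```python
# A deterministic fitness landscape with a local maximum at index 2
# (height 5) and the global maximum at index 8 (height 9). The valley
# between them traps a stepwise climber that starts on the left.
landscape = [1, 3, 5, 2, 1, 4, 6, 8, 9, 7]

def hill_climb(start):
    x = start
    while True:
        neighbors = [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]
        best = max(neighbors, key=lambda n: landscape[n])
        if landscape[best] <= landscape[x]:
            return x  # no neighbor is higher: stuck on whatever peak this is
        x = best

peak_from_left = hill_climb(0)   # climbs 1 -> 3 -> 5, stops at index 2
peak_from_right = hill_climb(9)  # reaches the true peak at index 8
```

Every stepwise change from index 2 makes things worse, so the climber starting at 0 settles there even though a strictly better solution exists at index 8 - exactly the failure mode described above.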
|
# ? Feb 13, 2015 07:52 |
A Man With A Plan posted:Unfortunately the LessWrong mock thread has been moved out of D&D and is currently in, I believe, GBS. It's still worth a read because this topic is kind of the main fascination of that website, and many people smarter than I address it. Particularly Krotera. You should listen to him. Can u link to this thread because I get a serious kick out of anyone who thinks even mammalian level AI is possible, also ppl who think technological growth is linear or god help them exponential. You could fast forward 200 years in the future and there will be no convincing progress, seriously. Such a total denial of how physics works. I am already savoring 2070 since I will probably die around then if not before and we will have made no real progress in this domain.
|
|
# ? Feb 13, 2015 09:50 |
|
o.m. 94 posted:Can u link to this thread because I get a serious kick out of anyone who thinks even mammalian level AI is possible, also ppl who think technological growth is linear or god help them exponential. You could fast forward 200 years in the future and there will be no convincing progress, seriously. Such a total denial of how physics works. I am already savoring 2070 since I will probably die around then if not before and we will have made no real progress in this domain. Belief in exponential progress is a pretty big part of the futurist cult. I've noticed a lot of people who buy into the singularity also believe in something called "the law of accelerating returns", which is something that Ray Kurzweil pretty much made up. It basically means that all these amazing technologies are going to happen in our lifetime. The problem is, all evidence seems to indicate the opposite. I'm not an expert but I've read that Moore's Law is slowing down. All of the progress we've had since the '50s has been in computer technology, and that was all due to Moore's Law. Now that that's done, we're going to see a huge stagnation period. So yeah, all the people expecting to see wonderful futuristic technology in their lifetimes: Nope. Sorry
|
# ? Feb 13, 2015 11:02 |
|
Blue Star posted:we're going to see a huge stagnation period. No small part of the stagnation we are dying in is rooted in greed and the increasing FY GIMME EVERYTHING attitude coming from the new oligarchs. They see no need for progress as long as the current system is efficiently bleeding the entire world dry for their benefit. People like Musk are an exception to that class. The Kochs are the standard.
|
# ? Feb 13, 2015 11:29 |
|
FRINGE posted:No small part of the stagnation we are dying in is rooted in greed and the increasing FY GIMME EVERYTHING attitude coming from the new oligarchs. I'd argue the opposite really, we're currently in a stage where vast amounts of technological wonders exist but they haven't propagated very far outside of their country/region of origin. We're currently in a period similar to the middle of the Industrial Revolution when other countries were catching up to the global leaders and the leaders themselves stagnated because to improve more would be more costly than they could stomach.
|
# ? Feb 13, 2015 13:55 |
|
o.m. 94 posted:Can u link to this thread because I get a serious kick out of anyone who thinks even mammalian level AI is possible, also ppl who think technological growth is linear or god help them exponential. You could fast forward 200 years in the future and there will be no convincing progress, seriously. Such a total denial of how physics works. I am already savoring 2070 since I will probably die around then if not before and we will have made no real progress in this domain. http://forums.somethingawful.com/showthread.php?threadid=3627012 Also, I think you might be a little bit overly pessimistic. No, we don't know if artificial superintelligence is possible, nor whether it would be hugely more capable than human-level intelligence, but to say that you can't have "even mammalian level" artificial intelligence sounds a bit odd when we've got plenty of mammalian level natural intelligences running around.
|
# ? Feb 13, 2015 16:57 |
|
o.m. 94 posted:Can u link to this thread because I get a serious kick out of anyone who thinks even mammalian level AI is possible Create a physical simulation of a brain. Expose it to stimuli. Done.
|
# ? Feb 13, 2015 17:03 |
|
Blue Star posted:Belief in exponential progress is a pretty big part of the futurist cult. I've noticed a lot of people who buy into the singularity also believe in something called "the law of accelerating returns", which is something that Ray Kurzweil pretty much made up. It basically means that all these amazing technologies are going to happen in our lifetime. The problem is, all evidence seems to indicate the opposite. I'm not an expert but I've read that Moore's Law is slowing down. All of the progress we've had since the '50s has been in computer technology, and that was all due to Moore's Law. Now that that's done, we're going to see a huge stagnation period. A big part of what was driving the Moore's Law expansion in computing was the fact that we were able to shrink the process size down huge amounts. This is that number in nanometers that you see in relation to processors, and in a sense it represents how big the individual components making it up are. We went fairly quickly from 180nm to 130 to 90 to 65 to 45 to 32 to 28, and Intel's prototypes are pushing 14 now I think. Unfortunately we're running into hard physical limits - basically, you aren't going to make a transistor smaller than an atom. So future advancements are going to have to come from other, more difficult methods. One approach is building 3D processors, as the current ones are more or less a 2D plane. Another is to use light/fiber optics for the component interconnects. Both promise more speed, but have major hurdles to clear first (heat dissipation and interference, respectively) that are a whole different class of problem than what we're used to solving.
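The exponential being argued about is easy to put rough numbers on. A throwaway sketch, using the usual rule-of-thumb two-year doubling period and the 1971 Intel 4004 as the textbook starting point:

```python
# Back-of-the-envelope Moore's-law arithmetic: transistor counts
# doubling roughly every two years. The two-year period is the common
# rule of thumb, not a law of physics.
def projected_growth(base_year, year, doubling_period=2.0):
    return 2 ** ((year - base_year) / doubling_period)

# Forty years of doubling every two years is a factor of 2**20,
# roughly a million.
growth_factor = projected_growth(1971, 2011)
```

A factor of about a million in forty years is what the doubling rule implies, and it is that compounding, not any single breakthrough, that the posts above describe as running out.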
|
# ? Feb 13, 2015 19:06 |
Apologies for the somewhat incoherent drunkpost. What I mean to say is that I agree this is possible, but it's definitely several centuries away. Computing hasn't really changed that much in the past 60 years, it's just gotten faster. Nowhere near fast enough for a linear or even multi-threaded solution. Linear, procedural computation will make this extremely hard to do, so I'm banking on some kind of complete revolution in computing science before we can even start. It would probably come in the form of thousands of independently developed modules that would plug in together, and the stimulus response would not be realtime, it would have to sit and compute, a bit like rendering movie-quality CGI does. Interestingly, have we even managed to fully simulate a single unicellular organism? o.m. 94 fucked around with this message at 19:23 on Feb 13, 2015
|
# ? Feb 13, 2015 19:17 |
|
o.m. 94 posted:
Well there's this: http://www.i-programmer.info/news/105-artificial-intelligence/7985-a-worms-mind-in-a-lego-body.html With the caveat that the model of a neuron used in neural nets is greatly simplified from the real thing, you can get some interesting behavior.
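For what "greatly simplified" means in practice: the unit used in neural nets (and in projects like the worm one above) is just a weighted sum squashed through a nonlinearity, with none of the ion channels, dendritic dynamics, or neurotransmitter chemistry of the real cell. A sketch, with made-up weights:

```python
# The textbook artificial neuron: weighted sum + logistic sigmoid.
# Weights and inputs here are arbitrary illustrative values.
import math

def artificial_neuron(inputs, weights, bias=0.0):
    """Weighted sum of inputs passed through a logistic sigmoid,
    returning an activation between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two sensory inputs: one excitatory connection, one inhibitory.
print(round(artificial_neuron([1.0, 1.0], [2.0, -1.0]), 3))  # → 0.731
```

That a network of these can still drive a LEGO robot in worm-like ways is the interesting part, given how much biology has been thrown away.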
|
# ? Feb 13, 2015 20:21 |
o.m. 94 posted:Apologies for the somewhat incoherent drunkpost. What I mean to say is that I agree this is possible, but its definitely several centuries away. Computing hasn't really changed that much in the past 60 years, its just gotten faster. Nowhere near fast enough for a linear or even multi threaded solution. Linear, procedural computation will make this extremely hard to do, so I'm banking on some kind of complete revolution in computing science before we can even start. It would probably come in the form of thousands of independently developed modules that would plug in together, and the stimulus response would not be realtime, it would have to sit and compute, a bit like rendering movie quality CGI does. No, and it's impossible to do currently because there are still many cellular processes that are not fully understood. Any simulation of even the simplest unicellular organism would break down at the epigenetic level or in many other places where we simply don't understand the natural behavior. This is kind of irrelevant to the development of AI because an AI would almost certainly not contain any kind of fully simulated neuron but instead individual processing units structured in a more abstract and conceptually human-friendly way that have the same versatility as a neuron.
|
|
# ? Feb 14, 2015 19:32 |
|
o.m. 94 posted:It would probably come in the form of thousands of independently developed modules that would plug in together, and the stimulus response would not be realtime, it would have to sit and compute, a bit like rendering movie quality CGI does. The brain isn't really about "real-time"; it's about prediction, using the past to model the present and future. And the thing about real groundbreaking discoveries is that no one ever predicts them. Before the Wright brothers, people were claiming it would take a thousand or even a million years to develop flight. America Inc. fucked around with this message at 01:35 on Feb 15, 2015
# ? Feb 15, 2015 01:32 |
|
Here's the source describing how the delay in processing sensory information forces our brain to rely on predictive heuristics to understand the present: http://www.bbc.com/future/bespoke/story/20150130-how-your-eyes-trick-your-mind/index.html (section: 21st century, Hering illusion). Really, if you look deeper into stuff like microsaccades, we give our brain waaay more credit than it deserves; it relies on a lot of quick and dirty shortcuts. At the end of the linked article, it states that the human brain would need to be the size of a building to digest everything our senses throw at it. I think one obstacle to development in AI is that we might be overestimating the abilities of the brain, and it's far more mediocre than we think it is. Evolution goes for performance, not perfection, and in the history of life we're a loving blink from chimps. America Inc. fucked around with this message at 02:27 on Feb 15, 2015
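The "predict the present" trick can be caricatured in code. This is my illustrative sketch of the general idea (an exponentially-weighted estimate plus a velocity term, extrapolated one step ahead), not a claim about the brain's actual algorithm:

```python
# Toy predictive filter: instead of reporting the latest (delayed) sensory
# sample, keep a running estimate and extrapolate it forward one step.
# alpha and the velocity smoothing are arbitrary illustrative choices.

def predict_next(samples, alpha=0.5):
    """Exponentially-weighted estimate plus a simple velocity term,
    extrapolated one step ahead to compensate for sensory delay."""
    estimate = samples[0]
    velocity = 0.0
    for current in samples[1:]:
        error = current - estimate
        estimate += alpha * error          # correct toward the new sample
        velocity = alpha * velocity + (1 - alpha) * error
    return estimate + velocity             # guess one step into the future

# Tracking a steadily moving target: the prediction runs ahead of the
# last sample instead of lagging behind it.
print(predict_next([0, 1, 2, 3, 4]) > 3)  # → True
```

It's a quick and dirty shortcut in exactly the article's sense: cheap, usually right, and the source of illusions when the world stops moving the way the filter expects.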
# ? Feb 15, 2015 02:07 |
|
Humans have already created computers that are vastly better than us at certain tasks, for instance dealing with numbers or any task involving exact replication. I think that if an intelligent AI does develop in the classical sense, it will be far removed from anything humans would consider a human-like intelligence; I think we would reckon its intelligence closer to a bacterium or a virus (i.e. completely non-sentient). Of course, it still might be incredibly powerful. The only way we would create a human-like intelligence would be by a gradual process of increasingly supplementing our bodies with cybernetic parts until we became more machine than mammal. I think that is the most realistic path to a machine-led society, where our technology advances beyond our biological forms into something more advanced, in such a way as to render an unmodified human a virtual ant compared to the cyborgs we become. At that point anything is possible, as humans would have 'evolved' into very different creatures than we would understand, potentially lacking such things as sexual desire, the need for sustenance, the fear of an inevitable death, or many other factors that motivate us as a species today.
|
# ? Feb 15, 2015 03:36 |
|
I have always subscribed to the idea that AI development will be more fundamentally symbiotic than adversarial. I see no reason why a virtual puppy won't watch and track the 2-year-old for you while you work on other tasks. It's a much more real and interesting problem if you ask: is future AI going to kill us all because we insist on sticking to scarcity and class politics when we can literally create workers from thin air?
|
# ? Feb 15, 2015 18:23 |
|
|
|
RuanGacho posted:I have always ascribed to the idea that AI development will be more fundamentally symbiotic than adversarial. I see no reason why virtual puppy won't watch and track the 2 year old for you while you work on other tasks. At some point, Thorstein Veblen's claim that a sufficient level of technological advancement will make capitalism as we know it impossible becomes reality. Climate change is the main challenge to this claim in the 21st century.
|
# ? Feb 16, 2015 02:04 |