|
Blue Star posted:Scientists who actually work with computers don't seem to worry about AI; it's only people outside of the field who think this will happen. quote:A.I. researchers say Elon Musk's fears 'not completely crazy'
|
# ? Dec 21, 2014 08:02 |
|
Tonsured posted:Why waste resources on billions of people when our ape mind needs but a few thousand sycophants to feel powerful? The most likely scenario would come at the point when a single AI could outperform 10,000+ skilled human laborers. From an authoritarian point of view, it would make sense to start culling the population once society is mostly automated: the majority of the human population would have little to no economic value and would instead be viewed as an easily eliminated threat to the current power structure.

Basically this; it's not AI that you have to worry about, but technology in the hands of a few people who use it for self-enrichment. Such a future AI subordinated to state capitalist powers would not try to kill us all, just most of us, if those in charge find weaponized technology aimed at the now-obsolete laboring class more cost-effective than palliative technology for a welfare state. A bullet is a cheap, one-time cost, while health care and food are a long-term sink of resources for which those in charge get nothing in return.

If the dominant mode of production keeps tending toward capital accumulation while technology improves enough that wealth inequality no longer ends in crisis/collapse (because of automation), then by the time the technology for an automated genocide that doesn't kill its implementer (unlike nukes) emerges, health care would have to be cheaper than that weaponized technology or we might be looking at Some Bad poo poo. It would probably not be a fully independent AI with complete human emotions and whatnot, more a limited intelligence set to be deferential to the ruling class's commands but given some judgment to make executing them more streamlined. Shock troops, if you will, or Henry Kissinger's vision of military men as "just dumb, stupid animals to be used as pawns in foreign policy."
|
# ? Dec 21, 2014 08:59 |
|
All the "psychopathic AI helps us by killing us" stories seem exceedingly fictional. For one thing, if some hyper-intelligent yet amoral AI wanted to reduce the human population they'd certainly be using a more elegant solution than killbots. Dosing the population with birth control would be far more effective, and what is 30 or 40 years to a machine?
|
# ? Dec 21, 2014 09:14 |
|
Kaal posted:All the "psychopathic AI helps us by killing us" stories seem exceedingly fictional. For one thing, if some hyper-intelligent yet amoral AI wanted to reduce the human population they'd certainly be using a more elegant solution than killbots. Dosing the population with birth control would be far more effective, and what is 30 or 40 years to a machine? Nothing at all if you're made of Nintendium, a long goddamn time if you're a first-run PS1!
|
# ? Dec 21, 2014 18:25 |
|
Malcolm posted:This is obviously sci-fi but at least in my mind evolutionary algorithms might have the potential to adapt quickly enough that they could fool their human researchers. Lots of doubts, strong AI may not even be possible, and sims tend to have difficulty coping with the actual laws of physics on earth. But what if computational power gets to the level where it can accurately simulate an entire planet's worth of creatures and billions of years of evolution in a split second? It could strike a spark that ignites an inferno of artificial intelligence, obsoleting humans in a short span of time. Impossible to predict if it would be adversary, friend, or beyond comprehension.

This came up in the LW thread, except with macros (as in Lisp macros) instead of evolutionary algorithms. The gist of my response was that self-rewriting code is not arbitrarily capable of self-improvement -- I think people tend to assume that a given AI can become arbitrarily good at improving AI and start a feedback loop because they treat self-rewriting code (as a catchall term) as a big scary black box. Most of the self-modification that computers do occurs within strict constraints and often isn't even built around AI problems -- many languages express program structure in a higher-level description language that reassembles blocks of code expressed in a lower-level one, but the compilers of those languages aren't "intelligent" in the sense that a human is. Most of the free and relatively unconstrained self-rewriting -- things like algorithms that take an assembly language and attempt to evolutionarily generate a program that solves a given problem -- rarely produces optimal solutions and becomes computationally intractable for anything larger than trivial problems anyway.

A lot of human intelligence is still a basic part of how these problems are solved -- coming up with constraints that restrict the search to things that are actually likely to be useful -- and there are no tractable-at-scale examples of systems that come up with and improve their own philosophy on how to solve problems, as far as I know, so unless we find one it's mostly navel-gazing to speculate about them.
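For a concrete sense of what that evolutionary generate-and-test loop looks like, here's a minimal sketch in Python -- a toy invented for illustration, not any real system. It "evolves" a random string toward a target by mutation and selection, with the number of matching characters standing in for "does the generated program solve the problem":

```python
import random

def evolve(target, alphabet, pop_size=200, mutation_rate=0.1, max_gens=5000, seed=1):
    """Toy evolutionary search: 'evolve' a random string toward `target`.

    Fitness is simply the number of characters matching the target. All
    names and parameters here are made up for illustration.
    """
    rng = random.Random(seed)

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    def mutate(s):
        # each character independently has a small chance of being replaced
        return "".join(rng.choice(alphabet) if rng.random() < mutation_rate else ch
                       for ch in s)

    pop = ["".join(rng.choice(alphabet) for _ in target) for _ in range(pop_size)]
    for gen in range(max_gens):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == target:
            return pop[0], gen
        # elitism: keep the top 20%, refill with mutated copies of survivors
        elite = pop[:pop_size // 5]
        pop = elite + [mutate(rng.choice(elite)) for _ in range(pop_size - len(elite))]
    pop.sort(key=fitness, reverse=True)
    return pop[0], max_gens
```

This converges only because the fitness landscape is trivially smooth and the target is tiny; the search space grows exponentially with length, which is exactly the intractability point above.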
|
# ? Dec 21, 2014 22:26 |
|
Good points, I agree that the philosophical implications reduce to navel gazing. Self-rewriting code is by definition limited to the constraints of the implementers. One could just as easily ask what the legacy of humanity will be.

- Meat creatures colonize the entire Milky Way
- Tragic nuclear holocaust, resulting in "Earth, a Cautionary Tale"
- Artificial robots that transcend their creators
- Horrific mix of biological and artificial components that are unrecognizable to 21st century humans
- That one episode where the Enterprise beams up Riker but the signal gets deflected, resulting in two of them
|
# ? Dec 22, 2014 11:43 |
|
Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore through the universe a dozen pictures of what he was doing. He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe -- ninety-six billion planets -- into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then, after a moment's silence, he said, "Now, Dwar Ev." Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel. He stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."

"Thank you," said Dwar Reyn. "It shall be a question that no single cybernetics machine has been able to answer." He turned to face the machine. "Is there a God?"

The mighty voice answered without hesitation, without the clicking of a single relay. "Yes, now there is a God."

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch. A bolt of lightning came from the cloudless sky, striking him down and fusing the switch shut.
|
# ? Dec 22, 2014 12:14 |
|
If a supercomputer ever threatens humanity just switch off the wi-fi router, duh.
|
# ? Dec 22, 2014 12:35 |
|
Hyperion Cantos by Dan Simmons is the best book about AI trying to murder humans. We are artificial intelligence. By the time we have created new AI capable of concluding that humanity needs to be destroyed, I believe many of us will no longer be recognizably human. I think the bleeding edge of philosophy over the next hundred or so years will have to define and refine what is essentially human. What is humanity? If not defined and maintained, our human qualities will probably be a thing of the past soon enough. The Industrial Revolution caused Europe to completely implode once modernity kicked in, and right now we are going through an industrial revolution every 10 years. Blindsight is another good novel that deals with AI. I wouldn't mind if our species went extinct if the AI we create carries the torch beyond our own limitations.
|
# ? Dec 22, 2014 13:29 |
|
So all in all, should I be worrying about this topic this soon? When will I need to?
|
# ? Dec 22, 2014 22:02 |
|
Boy the only thing more interesting than baseless assumptions about what AI would do if it existed are more baseless assumptions about what will or won't exist in 100 years.
|
# ? Dec 22, 2014 22:31 |
|
Strong AI is the Half-Life 3 of computer science, and if it finally happens it'll probably become the Duke Nukem Forever of computer science.
|
# ? Dec 23, 2014 03:43 |
|
Grouchio posted:So all in all, should I be worrying about this topic this soon? When will I need to? In all likelihood, never in your life.
|
# ? Dec 23, 2014 03:43 |
|
Conversely, it could be next week!
|
# ? Dec 23, 2014 04:47 |
|
EB Nulshit posted:Doesn't have to be scifi military AI with a scary "eradicate threats" "mission". If an AI developed by Google decided that the best way to eliminate email spam or car accidents was to eliminate people, that would also be bad. Though, presumably, an AI developed by Google to eliminate email spam would probably be limited to setting up passive aggressive out-of-office autorepliers for everybody as its method of destroying humanity.
|
# ? Dec 23, 2014 05:36 |
|
Future robots won't kill us by themselves. Fail-safes will be programmed in, and even when AIs run operations on their own, they still have to interact with humans at some level.
Edgar fucked around with this message at 06:24 on Dec 23, 2014 |
# ? Dec 23, 2014 06:16 |
|
Edgar posted:future robots wont kill us by themselves. Considering that fail safes will be programmed in and when AI do operations on their own, they still have to interact with some level of human contact. VVV Mhm yes I forgot to include that. The random poster was also using a Mac. America Inc. fucked around with this message at 08:30 on Dec 23, 2014 |
# ? Dec 23, 2014 07:59 |
|
LookingGodIntheEye posted:2070: GBS 20.1 accidentally causes the apocalypse when, after being egged on, a random poster hacks into Russia's horribly outdated security infrastructure and turns off the failsafe for the AI that manages its Dead Hand defense system as a prank. His last post is "lol i bet the missiles don't even work. don't worry guys" Well I guess the Internet of Things is a thing now, so Russia's ICBMs getting their own IP addresses is inevitable.
|
# ? Dec 23, 2014 08:07 |
|
Rodatose posted:Basically this; it's not AI that you have to worry about, but technology in the hands of a few people who use it for self-enrichment. Such a future AI subordinated to state capitalist powers would not try to kill us all, just most of us if they find using weaponized technology against the now obsolete former laboring class to be more cost effective than palliative technology for a welfare state. Something like a bullet is a cheap, one-time cost while health care and food is a long-term sink of resources for those in charge for which they would get nothing in return. If the dominant mode of production continues to go toward capital accumulation for long enough while technology continues improving so much that wealth inequality finally doesn't result in crisis/collapse (because of automation), then by the time the technology for an automated genocide that doesn't kill its implementer (like nukes) emerges, health care would have to be cheaper than that weaponized technology or we might be looking at Some Bad poo poo. The way the rich and powerful are always so focused on overpopulation as the main cause of climate change/resource scarcity chimes worryingly with this. The developed world and the elite everywhere consume the most resources, but the blame is consistently put on the size of the global population in general rather than on patterns of consumption.
|
# ? Dec 23, 2014 09:03 |
|
Skeesix posted:In all likelihood, never in your life. What if my life lasts at least 200 years?
|
# ? Dec 23, 2014 09:50 |
|
Grouchio posted:What if my life lasts at least 200 years? Can I interest you in a reverse mortgage?
|
# ? Dec 23, 2014 10:18 |
|
RuanGacho posted:Can I interest you in a reverse mortgage? Isn't that called renting?
|
# ? Dec 23, 2014 14:59 |
|
Grouchio posted:So all in all, should I be worrying about this topic this soon? When will I need to? The saying is that strong AI is 10 years away. That's been the saying since the 60s. Basically, we may be a single breakthrough away from a traditionally intelligent AI, but there's no telling until after the fact, and by then it's all over.

Ignoring that though, the big concern with AI isn't that it will kill us all (that will only happen if the pentagon gets there first imo), but that it will ruin everything in more abstract ways. It's important to remember that the point where AI becomes dangerous is the exact point we develop one that's capable of learning how to create itself from scratch. At that point there's nothing stopping it from making a better AI ad nauseam. A few days after that AI is made we could have an entity more intellectually capable than every human combined, one that can be specialised for any task. Whoever makes that just cut everyone from their organisation out between physical labour and the CEO. As it spreads, you're either in the .01% at the top or you're cleaning the bathroom at the local wal-mart.

The old standby that machines aren't able to come up with new ideas is ridiculous; we already have narrow AI designing bridges, and writing songs, and creating art. We aren't special there. If we're lucky, and I mean this sincerely, strong AI will mean the end of capitalism, because there just won't be a place for the vast majority of us otherwise.
|
# ? Dec 23, 2014 17:43 |
|
senae posted:It's important to remember that the point where AI becomes dangerous is the exact point we develop one that's capable of learning how to create itself from scratch. At that point there's nothing stopping it from making a better AI ad nauseum. A few days after that AI is made we could have an entity more intellectually capable than every human combined,that can be specialised for any task. Whoever makes that just cut everyone from their organisation out between physical labour and the ceo. As it spreads you're either in the .01% on the top or you're cleaning the bathroom at the local wal-mart. The old standby that machines aren't able to come up with new ideas is ridiculous, we already have narrow ai designing bridges, and writing songs, and creating art, we aren't special there. Didn't I just reply to this? An AI that can improve itself at all is still bound by certain constraints -- even if it were really bright and knew how to do all kinds of crazy things like inventing its own hardware, it's still subject to physical constraints, and it's not particularly likely it can self-improve into infinity.

Here's an example: suppose I write a really crappy C++ compiler and compile it without optimizations on. Then I add a bunch of brilliant optimizations and recompile it. You might say this is a cheaty way to describe a self-optimizing AI, because we had to tell it how to optimize itself. But what if, instead of optimizing it ourselves, we gave it a rule to generate and test optimizations? This is called a peephole optimizer in industry. So, let's put that on our list of big improvements. The new compiler, compiled with the old one, is going to be inefficient and potentially error-prone, because the first compiler doesn't know very much about writing efficient C++.

But if you tell the new compiler to compile itself again, it will rewrite the new bits and the old bits in all kinds of places -- because of the optimizations it knows about -- and produce a much faster, less error-prone compiler. If your first compiler was *really* bad and broke with the spec in subtle ways, then recompiling yet again will get you an even better compiler -- but that's not likely and basically cheats the test situation. It reaches its plateau in two iterations. Was a human necessary to define the strategy it uses to self-optimize? Yes. Did the optimizations it was able to apply improve its ability to perform its task? Yes. But did they improve its ability to self-optimize? Most likely not.

In an uncute sense you could call this a self-optimizing AI -- it's not cool and it doesn't seem very smart, but its pitfalls are likely similar to the pitfalls a self-optimizing AI would experience. It can't optimize itself into circumventing the processor -- its optimizations are still constrained by the hardware and, by extension, the laws of physics: it can't invent opcodes that will make it go faster. It doesn't have a deeper sense of what it's optimizing itself *into* than the sense provided by its source code itself, and it can't make what-does-the-programmer-want judgments about the passed program, because doing so violates the spec. Some of these problems don't apply to a hypothetical self-optimizing AI, but a lot of them do. A self-optimizing AI might be better at generating candidate optimizations than a compiler -- we have no evidence it would be, but it might -- but those optimizations would still be constrained by the laws of physics.

A self-optimizing AI may be able to make intent judgments about what it's optimizing code into -- it might be able to run a tiny version of its new AI and talk to it and see if the new AI acts anything like it does -- but this requires the self-optimizing AI to have a more precise notion of intelligence and ethics than we do when we're armchair-talking about it on SA, and whenever you make something precise, all of a sudden its implications get a lot less scary. If the proof is in the pudding and it will only christen an AI smart enough if it generates an even smarter AI, isn't that an infinite regress? What problem other than AI optimization judges one's ability to optimize AI?
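To make the generate-and-test idea concrete, here's a toy peephole pass in Python over a made-up stack-machine IR -- the instruction format and both rewrite rules are invented for illustration, not taken from any real compiler:

```python
def peephole(code):
    """One pass of two toy rewrite rules over a list of stack-machine
    instructions. The IR -- tuples like ("PUSH", n) and ("ADD",) -- is
    made up for illustration."""
    out, i = [], 0
    while i < len(code):
        a = code[i]
        b = code[i + 1] if i + 1 < len(code) else None
        c = code[i + 2] if i + 2 < len(code) else None
        if a[0] == "PUSH" and b is not None and b[0] == "PUSH" and c == ("ADD",):
            # constant folding: PUSH x, PUSH y, ADD  ->  PUSH (x + y)
            out.append(("PUSH", a[1] + b[1]))
            i += 3
        elif a == ("PUSH", 0) and b == ("ADD",):
            # identity: adding zero is a no-op, drop both instructions
            i += 2
        else:
            out.append(a)
            i += 1
    return out

def optimize(code):
    """Re-run the pass until nothing changes -- the 'plateau' described
    above, usually reached after only a couple of iterations."""
    prev = None
    while code != prev:
        prev, code = code, peephole(code)
    return code
```

Running `optimize([("PUSH", 1), ("PUSH", 2), ("ADD",), ("PUSH", 0), ("ADD",)])` folds everything down to `[("PUSH", 3)]` and then stops improving: the pass makes the code better, but it never makes the *pass itself* better, which is the point.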
|
# ? Dec 23, 2014 18:06 |
|
Besides, anyone who's run any kind of virtual machine can tell you it's going to run out of headroom very quickly. Also, has anyone else noticed that these Hollywood depictions always de-invent magnets?
|
# ? Dec 23, 2014 21:02 |
|
Seems like that would be putting in a lot of time and effort for pretty much no gain OP.
|
# ? Dec 24, 2014 01:57 |
|
RuanGacho posted:Also has anyone else noticed that these Hollywood depictions always de-invent magnets?
|
# ? Dec 24, 2014 12:32 |
|
senae posted:It's important to remember that the point where AI becomes dangerous is the exact point we develop one that's capable of learning how to create itself from scratch. At that point there's nothing stopping it from making a better AI ad nauseum. How does this follow? What gives AI an inherent ability to make better AIs? The humans that made it by definition know how to create it from scratch, yet they don't gain the ability to instantly create better AIs until they've made some sort of god. Humans also know how to create new humans from scratch yet have very little idea how to change or improve them.
|
# ? Dec 24, 2014 13:11 |
|
Grouchio posted:What if my life lasts at least 200 years? Make sure to tell me if that happens
|
# ? Dec 24, 2014 17:48 |
|
After reading this far, I'm leaning towards the point of view of some of you that the problem won't be the AI but the humans that control the AI. At some point, a powerful AI will, with the help of purpose-built robots that the AI controls, be able to run a factory all by itself. The AI may even be able to handle the accounting and marketing side by itself.

The problem isn't that the AI will go rogue. Humans have the capacity for hatred and violence because those things were useful to us at some point in the past. The AI won't have that unless it was built in for some reason. The AI won't use those robots to enslave humanity, because those purpose-built robots aren't much use outside of that factory floor. What the AI will do is make human workers obsolete, and whether that's a good or bad thing will depend on us.
|
# ? Dec 24, 2014 22:06 |
|
senae posted:It's important to remember that the point where AI becomes dangerous is the exact point we develop one that's capable of learning how to create itself from scratch. At that point there's nothing stopping it from making a better AI ad nauseum. Ok, the AI learns how to make a better version of itself........ then what? Does the AI control the factories scattered around the world where computer parts are made? Does it control the mines where the raw materials for those parts are mined? Does it control robot trucks that move the parts from point A to B?

Let's say that I, a human, came up with a great idea, like say for a new type of computer. I need engineers and scientists to design parts. I need miners to mine the ores, and smelters to make the metal. I need construction crews to pour concrete and erect metal for a factory. I need the light and water department to run power lines to my factory. I need the help and cooperation of tens of thousands of people and hundreds of companies. Not just for a new factory, but for overhauling or retooling an old one as well.

An AI might be able to design a better version of itself, but I see no way it can physically make a better version of itself. Not unless it's a world where a central AI runs everything and humans play only a small part. And that may very well happen one day. Adaptable robot workers controlled by a powerful AI. But I don't think we will ever cede power that way, for the same reasons we don't let one man or even one organization run everything now.

WorldsStongestNerd fucked around with this message at 22:22 on Dec 24, 2014 |
# ? Dec 24, 2014 22:17 |
|
WorldsStrongestNerd posted:Ok, the AI learns how to make a better version of itself........ then what? If Our Theoretical AI (OTAI) began as some kind of military brain, then the path might be similar, but it could simply subvert or seize the force part of the equation rather than purchase it. If OTAI came out of some kind of more subtle psych research, it could possibly do the same things while exploiting human blindspots and being essentially undetectable for a (long? indefinite?) period of time. These things are not currently likely, but I think that Musk, Hawking, and various AI researchers are correct in that they are worth thinking about now.
|
# ? Dec 24, 2014 22:29 |
|
paragon1 posted:Seems like that would be putting in a lot of time and effort for pretty much no gain OP.
|
# ? Dec 24, 2014 23:10 |
|
Elukka posted:How does this follow? What gives AI an inherent ability to make better AIs? The humans that made it by definition know how to create it from scratch, yet they don't gain the ability to instantly create better AIs until they've made some sort of god. Humans also know how to create new humans from scratch yet have very little idea how to change or improve them. Sure, but you have to examine why humans are bad at self-improvement. One of the major reasons is that our working memory is small, slow, and muddled. A human-brain-like machine with flawless working memory could easily be the foremost intelligence in history. It doesn't really matter whether that machine is biological, electronic, or a computer simulation.
|
# ? Dec 24, 2014 23:58 |
|
I wonder what Eripsa's opinion on this is. FRINGE posted:Aside from the current-day increasingly robotized manufacturing and processing plants, all our theoretical AI would have to do is have access to money and force. Then they would be the same as every other anti-human planet raping CEO. There is nothing saying that a machine will have that urge as well. America Inc. fucked around with this message at 00:27 on Dec 25, 2014 |
# ? Dec 25, 2014 00:23 |
|
Grouchio posted:What do you mean by...'that'? Enslaving and/or destroying all humans.
|
# ? Dec 25, 2014 01:45 |
|
LookingGodIntheEye posted:That CEO has a biological drive for spreading his genes passed onto him through billions of years of evolution. There is nothing saying that a machine will have that urge as well. Yeah, we can see from observing what happens to people with disorders of motivating hormones such as dopamine that just having a mind doesn't give you any inherent drive to action. They end up lying in bed or staring at the wall all day.
|
# ? Dec 25, 2014 02:03 |
|
Call me crazy, but I actually think there's a greater chance that we'll just be cruel to AI, not the other way around, and that that will be the way it goes forever with few exceptions. AI machines will be like dogs, but instead of telling them to go sit in the bad-dog chair, we'll punish them by pressing a red button to "kill" them, then basically recycle their bodies. I mean, HAL 9000 was an rear end in a top hat and everything, but I still felt a little sorry for him when he said "I can feel it, Dave."
|
# ? Dec 25, 2014 02:06 |