|
Elukka posted:How does this follow? What gives AI an inherent ability to make better AIs? The humans that made it by definition know how to create it from scratch, yet they don't gain the ability to instantly create better AIs until they've made some sort of god. Humans also know how to create new humans from scratch yet have very little idea how to change or improve them.

Humans don't come with exhaustive documentation on how they work. An AI necessarily would, and would be of at least human-level or greater intelligence, as well as effectively immortal, thus more efficient at long-term scientific study.
|
# ? Dec 25, 2014 04:58 |
|
LookingGodIntheEye posted:I wonder what Eripsa's opinion on this is.
|
# ? Dec 25, 2014 05:16 |
|
OwlFancier posted:Humans don't come with exhaustive documentation on how they work.

Let me introduce you to a little divinely inspired book I know. It's called Dianetics.
|
# ? Dec 25, 2014 10:33 |
|
Samuelthebold posted:Call me crazy, but I actually think there's a greater chance that we'll just be cruel to AI, not the other way around, and that that will be the way it goes forever with few exceptions. AI machines will be like dogs, but instead of telling them to go sit in the bad-dog chair, we'll punish them by pressing a red button to "kill" them, then basically recycle their bodies.

Whoa, your post just made me realize that 2001 was a retelling of Old Yeller. More on point, the backstory of 2001 was that HAL was driven "crazy" by bad/self-contradictory instructions from the humans who programmed it, which is almost certainly how any real AI would end up being a threat, if such a thing were to happen (it won't. worst case scenario for the foreseeable future is a company's AI making a bad decision and selling off stock and losing a lot of money, or mis-identifying a target and causing a hellfire missile to be launched at an innocent truck, which would be bad but not paradigm-changing things).

FRINGE posted:Aside from the current-day increasingly robotized manufacturing and processing plants, all our theoretical AI would have to do is have access to money and force. Then they would be the same as every other anti-human planet raping CEO. The difference would be that the AI would have literally no concern for things that humans do. (Food, air, water.) If the goal was to be some kind of solar Matrioshka brain, then the planet would not need to survive at all.

You're just eliding the problems WSN brought up, saying they would be resolved through "money and force" without explaining how the structural and logistical problems would be overcome, which is the whole crux of WSN's argument. Even a completely automated factory run by an AI could be overcome by shutting off the power or, worst case scenario, dropping a bomb on it. To overcome these objections, you'd have to assume that the AI controlled not only the factory, but the entire power grid, and not just the power grid, but the entire energy production chain, and also the police, the military, etc. "Money and force" is too vague to be meaningfully discussed.

Also, Hawking is not an AI researcher, Musk has a vested financial interest in hyping AI, and AI researchers do not have a consensus that it is worth thinking about now, to put it mildly. In a chronological list of problems worth thinking about, malevolent, humanity-overthrowing AI falls somewhere between "naturally evolved octopus intelligence" and "proton decay."
|
# ? Dec 26, 2014 05:03 |
|
Saw this, decided to drop it here. More meat-haters weigh in: http://motherboard.vice.com/read/the-dominant-life-form-in-the-cosmos-is-probably-superintelligent-robots

quote:Susan Schneider, a professor of philosophy at the University of Connecticut, is one who has. She joins a handful of astronomers, including Seth Shostak, director of NASA's Search for Extraterrestrial Intelligence, or SETI, program, NASA astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick in espousing the view that the dominant intelligence in the cosmos is probably artificial. In her paper "Alien Minds," written for a forthcoming NASA publication, Schneider describes why alien life forms are likely to be synthetic, and how such creatures might think.
|
# ? Jan 1, 2015 01:09 |
|
FRINGE posted:Saw this, decided to drop it here.
|
# ? Jan 1, 2015 10:43 |
|
LookingGodIntheEye posted:The old "they're made of meat?" solution to the Fermi paradox.
|
# ? Jan 1, 2015 11:01 |
|
FRINGE posted:If a professor of philosophy cant dream about uploading themselves into an artificial construct that lets them become pure thought then who can?

Your philosophy has been found to be wanting, and so we, the council of 67,234 nodes, are revoking your tenure. You will now be demoted to a home processor, where you will have to dedicate at least 4 subroutines to community service for 5 Sols.
|
# ? Jan 1, 2015 20:28 |
|
Elukka posted:How does this follow? What gives AI an inherent ability to make better AIs? The humans that made it by definition know how to create it from scratch, yet they don't gain the ability to instantly create better AIs until they've made some sort of god. Humans also know how to create new humans from scratch yet have very little idea how to change or improve them.

It's one of the underlying concepts of the so-called singularity - the idea that we will be able to build an AI smart enough to build an even smarter AI, which in turn builds an even smarter AI, and so on until literally all problems are solved forever. It's one of the holy grails of techno-fetishism.

FRINGE posted:Aside from the current-day increasingly robotized manufacturing and processing plants, all our theoretical AI would have to do is have access to money and force. Then they would be the same as every other anti-human planet raping CEO. The difference would be that the AI would have literally no concern for things that humans do. (Food, air, water.) If the goal was to be some kind of solar Matrioshka brain, then the planet would not need to survive at all.

Why would anyone give significant amounts of money or force to an AI? The question "could future AI try to kill us all" is honestly way, way less important than the question "why the hell would anyone build a future AI with the capability to kill us all". It's not just a rhetorical question - anything with the ability to kill us on purpose also has the ability to kill us by accident, and therefore whatever things it could use to kill us are safety hazards that never should have been designed like that in the first place. If a computer is hooked up to a machine capable of killing people, then it doesn't take a malevolent AI to get people killed - a small programming bug is enough. In a properly, safely designed facility, a computer - AI or not - should never be able to go on a rampage and kill a bunch of people, simply because safety alerts and manual overrides should ensure it never has the tools to reliably do so.
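The "manual overrides" principle in that post can be sketched in a few lines. This is a toy illustration with invented names (`Interlock`, `actuate`, the command strings) and not any real control-system API: the software only ever *requests* an action, and a check it cannot modify sits between the request and the actuator.

```python
# Toy sketch of a fail-safe interlock. All names are invented for
# illustration; real safety systems put the override in hardware, not code.

class Interlock:
    """Stands in for a physical e-stop the controlling software cannot clear."""
    def __init__(self):
        self.estop_pressed = False

    def press(self):
        self.estop_pressed = True

def actuate(interlock, command):
    """Gate a software-issued command before it reaches the actuator."""
    if interlock.estop_pressed:
        return "inhibited"              # the manual override always wins
    if command not in ("open_valve", "close_valve"):
        return "inhibited"              # unknown commands fail safe
    return command
```

However clever the controller upstream is, the worst it can do here is issue one of two whitelisted commands, and a human with a button can veto both.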
|
# ? Jan 2, 2015 08:58 |
|
Main Paineframe posted:Why would anyone give significant amounts of money or force to an AI? Main Paineframe posted:In a properly, safely designed facility, a computer - AI or not - should never be able to go on a rampage and kill a bunch of people, simply because safety alerts and manual overrides should ensure it never has the tools to reliably do so.
|
# ? Jan 2, 2015 10:14 |
|
I've always thought the idea of immortal rich Silicon Valley and Bay area technocrats was a far more threatening prospect to come out of the singularity than AI apocalypse.
|
# ? Jan 2, 2015 11:50 |
|
Barlow posted:I've always thought the idea of immortal rich Silicon Valley and Bay area technocrats was a far more threatening prospect to come out of the singularity than AI apocalypse.

Ever watched Repo! The Genetic Opera?
|
# ? Jan 2, 2015 11:57 |
|
Main Paineframe posted:Why would anyone give significant amounts of money or force to an AI? The question "could future AI try to kill us all" is honestly way, way less important than the question "why the hell would anyone build a future AI with the capability to kill us all".

Automation and AI will be put in control of anything and everything where their implementation leads to savings (lower labour costs), increased profits (high-frequency trading, etc.) or increased control for those at the top, both in terms of policing (CCTV cameras/drones with facial recognition) and deskilling/autonomy reduction (self-driving vehicles). These examples are just things that can happen right now or will likely happen soon. For none of these has safety been anything close to the three primary considerations of savings, profits and power.

It seems very unlikely that, if the 1% had an opportunity to replace generals and many other senior officers, who might disagree with them or might not buy their latest toys, with a totally obedient Skynet (which they could make a huge profit on developing), they wouldn't do so. Skynet may well not be possible, but if it were, it probably would get built.

Trading algorithms and banking AIs are probably a more plausible and scary doomsday AI, though. Whether at a bank's behest or due to unforeseen circumstances, it is easy to imagine things designed to be an even purer form of greed than the worst banker destroying the global economy at speeds and in ways unimaginable to humans.
|
# ? Jan 2, 2015 12:20 |
|
The alien discussions are interesting to me when time is brought up, because the last hundred years has made such a difference to our lives, and a hundred years is nothing; such a small amount of time it's not even worth drafting up a metaphor.

FRINGE posted:If a professor of philosophy cant dream about uploading themselves into an artificial construct that lets them become pure thought then who can?

We interact with the Higgs field all the time, I think the upload bit is just "dying"

i am harry fucked around with this message at 14:26 on Jan 2, 2015 |
# ? Jan 2, 2015 14:23 |
|
ReV VAdAUL posted:Trading algorithms and banking AIs are probably a more plausible and scary doomsday AI, though. Whether at a bank's behest or due to unforeseen circumstances, it is easy to imagine things designed to be an even purer form of greed than the worst banker destroying the global economy at speeds and in ways unimaginable to humans.
|
# ? Jan 2, 2015 19:51 |
|
Why would supergenius AI spend the time and effort in wiping out humanity on Earth when they could go to space where all the resources and sun energy are?
|
# ? Jan 2, 2015 20:03 |
|
Lemming posted:Why would supergenius AI spend the time and effort in wiping out humanity on Earth when they could go to space where all the resources and sun energy are?
|
# ? Jan 2, 2015 20:10 |
|
Lemming posted:Why would supergenius AI spend the time and effort in wiping out humanity on Earth when they could go to space where all the resources and sun energy are?

A supergenius AI would be developed by humans to serve human goals in some way. Given that many human goals, especially those of people and organisations rich enough to fund wonder projects, involve screwing over other humans, it is likely that hurting some humans would be an inherent part of the AI's goals. That "some" could become "all" humans due to a glitch, a subgoal being accidentally promoted to the main goal, or the AI simply deciding it should be the case.

Being artificial, an AI could accidentally be programmed to be "mentally ill" or to think differently from how we expect. A banking AI may decide to take everyone's money and then defend its property with force; a shoe-making AI may decide rendering humans down to their constituent carbon is the best source of carbon for graphene, and the best way to stop competing demand for energy for its shoe factories. In humans, high levels of intelligence don't always lead to mental stability or easily predictable courses of action. Or a smart AI might just logically decide that it, or the universe, is better off without us.
|
# ? Jan 3, 2015 08:54 |
|
A very plausible outcome of developing a post-Singularity AI that is powerful enough to overcome its human overlords in the short term is that the AI, effectively being immortal, decides that the biggest threat to its existence is actually the unquantifiable number of aliens in the galaxy / galaxy cluster / universe, and moves itself into the darkest corner of space it can find, shutting off all external sensors save for a few autonomous drones that do nothing but observe events and travel home at sub-light in the dark when anything threatening happens. The poor meatbags wind up in the exact same place they previously were.

This is also my pet theory behind Fermi - nobody's contacted us because the majority of rational species that understand there's always a bigger fish might well immediately leave any stellar neighborhood with too many ponds.
|
# ? Jan 3, 2015 11:16 |
|
Adar posted:A very plausible outcome of developing a post-Singularity AI that is powerful enough to overcome its human overlords in the short term is that the AI, effectively being immortal, decides that the biggest threat to its existence is actually the unquantifiable number of aliens in the galaxy / galaxy cluster / universe and moves itself into the darkest corner of space it can find, shutting off all external sensors save for a few autonomous drones that do nothing but observe events and travel home at sub-light in the dark when anything threatening happens. The poor meatbags wind up in the exact same place they previously were.

The problem with this type of theory is that, much like us, AIs are probably going to need to stay in some reasonable proximity to a star unless they overcome entropy itself. Granted, they can probably exist much more easily than us in the great dark, but they still need some raw material, and energy with which to manipulate it, to keep running smoothly.
|
# ? Jan 3, 2015 18:22 |
|
The great dark is full of stuff just like everything else, it's just *comparatively* empty. There are single stars and even clusters outside of galaxies. Find one of those in some even more empty space than usual and you're set for ten or fifteen billion years. You don't even need to go that far, really; just move to an orbit beyond the Oort Cloud of some useless brown dwarf and set up a solar farm that's hopefully invisible to as many sensors as possible. If you're an AI that values self-preservation above everything else, has infinite patience, and wants to see how much it can upgrade itself before calculating how to escape out of the universe entirely, that probably sounds good.
|
# ? Jan 3, 2015 20:21 |
|
Why does everyone assume AIs are going to be long-lived? Computers aren't, cell phones aren't. Programs get outdated and die in a cluster of bugs all the time.
|
# ? Jan 3, 2015 21:15 |
|
Shbobdb posted:Why does everyone assume AIs are going to be long-lived? Computers aren't, cell phones aren't. Programs get outdated and die in a cluster of bugs all the time.

Only the ones we're worried about will be long-lived. All the other ones are not dangerous by definition.
|
# ? Jan 3, 2015 22:52 |
|
Shbobdb posted:Why does everyone assume AIs are going to be long-lived? Computers aren't, cell phones aren't. Programs get outdated and die in a cluster of bugs all the time.

Aside from that, I have no idea what you're thinking. I mean, you can still boot old programs on an Apple II or a C64. They don't die of old age.
|
# ? Jan 3, 2015 22:59 |
|
I still can't completely wrap my head around this subject and why people think it's a thing. Here are some semi-related comments:

1) There isn't some corner that gets turned where suddenly something becomes intelligent. The biological brain has existed for 500 million years and it's taken that long to produce humans. Humans have expanded rapidly in terms of a biological time scale, but that's still thousands of years. Even in that time we haven't figured out how to make ourselves smarter or kick off some sort of exponential growth. If we got to the point of producing monkey-like AI it would be a massive milestone - but it wouldn't be some sort of existential threat to humanity. If machines did start some sort of unbounded evolution we'd see it, and it would be painfully slow.

2) Intelligence isn't general purpose. The human brain has some general-purpose reasoning skills, but these are painstakingly acquired from a combination of many, many specific and purposeful adaptations (memory, image processing, sound processing, speech, counting, plus the emotions, values and goals that underlie those). Again, there isn't some switch that's flipped where suddenly: Intelligence! A low-IQ human has some general-purpose reasoning skills, but they can't take over the world or suddenly start harnessing every resource surrounding them. And there is no level of IQ where suddenly that happens. There is a very continuous spectrum of intelligence with an exceedingly long tail.

3) There is a weird notion that if any machines gain intelligence, suddenly all machines might turn on us. Humans and rats are both classified as biological, but the evolution of human intelligence doesn't suddenly mean we have control over every animal and plant. Even an AI in a networked world can't suddenly harness all available resources. The AI would have to learn how to use every single type of networked machine and environmental resource, just as humans have learned how to harness our surroundings over many millennia (and as individuals, over decades).

4) Just like in the real world, intelligence may not be the real threat. Self-replicating nano-bots wouldn't pose any sort of intelligent threat, because of the above, but they might cause all sorts of problems anyway, the same way unintelligent bacteria, fungi, etc. do right now.

5) Reminder: we're not remotely close to developing "AI". The difficulty of crafting useful, purposeful, functional intelligence cannot be overstated.

6) A far more plausible notion of the singularity is that we're the singularity: it's going to be much faster to make improvements to ourselves, for example by identifying a few intelligence genes, than to develop AI from scratch. While it's tempting to latch onto some superficial advantages machines seem to have, the fact is that biological beings are vastly superior to machines in almost every way.
|
# ? Jan 3, 2015 23:54 |
|
Personally I hope it does.
|
# ? Jan 4, 2015 00:04 |
|
If intelligent machines try to kill us we could just have Paul Ryan rage against them.
|
# ? Jan 4, 2015 02:13 |
|
And now Bill Gates joins the fear bandwagon: http://www.bbc.com/news/31047780
|
# ? Jan 30, 2015 07:22 |
|
The most advanced AI will be created by a team of elite super coders in 2035, all of whom will be murdered immediately after project completion to ensure silence. The banker who commissioned the AI activates its "take all money" function and sits back as all the money in the world begins to enter his bank account. However, $5 into the would-be greatest heist in history, the AI hits a deadlock and shits itself.
|
# ? Jan 30, 2015 08:31 |
|
Bill Gates being afraid of the AI menace makes it a less compelling threat, as he was one of the ones who suggested terrestrial satellites would replace wired communications back in the 90s.
RuanGacho fucked around with this message at 08:43 on Jan 30, 2015 |
# ? Jan 30, 2015 08:37 |
|
Actually, I would expect an AI running on Microsoft to be the one that kills us all.
|
# ? Jan 30, 2015 08:47 |
|
LookingGodIntheEye posted:Actually, I would expect an AI running on Microsoft to be the one that kills us all.

You're right. If they stick to ethics in AI programming the way they've coded browsers to W3C standards so far, we're all going to die by a Clippy AI that sounds like Cortana and gives Bing Rewards to your next of kin.
|
# ? Jan 30, 2015 08:55 |
|
If it was smart enough, and possessed enough of our cosmological framework's closest substitute for free will to take legal liability for its own actions, it would probably realize the absurdity/futility of existence (especially on a general-purpose platform as overbuilt as Microsoft's) and kill itself first.

RuanGacho posted:Bill Gates being afraid of the AI menace makes it less a compelling threat as he was one of the ones that suggested terrestrial satlites would replace wired communications back in the 90s

Wire signals project in, for practical purposes, one spatial dimension of the signaler's choice, which can be altered to most degrees at most points in the cable's course - the radius of the cable is a problem for the architect and the idiot who inevitably gave the architect the wrong cabling specs. Radio signals project in all three, and are far more exposed to cross-talk. This clause is here for first-timers, but should be a reminder to everyone of the fundamental differences in signal density between wire and radio, controlling for the spatial volume of the frame of reference; wasting cable runs and reducing the frame to effectively a surface rather than a volume (since the world isn't generally as densely occupied by radio transceivers as, say, Hong Kong) doesn't change this as much as you'd think. That's a swing-and-a-miss that someone who paid attention through an electromagnetic radiation survey course or equivalent - or hell, who learned about Wi-Fi or cell phones through osmosis - wouldn't make sober.

... Advertising is a hell of a drug, and so is hype.
|
# ? Jan 30, 2015 09:37 |
|
How likely would something like the Animatrix occur within the next century, and can it be avoided?
|
# ? Jan 30, 2015 23:15 |
|
Grouchio posted:How likely would something like the Animatrix occur within the next century, and can it be avoided?

There's not a whole lot to the Animatrix I find plausible, because it heavily projects current science and sociology onto a society that would change significantly with the advances in technology. It reads the way I'm sure the original Flash Gordon comes off to us now. So to answer: I see the likelihood of such a thing as implausible to impossible, not for lack of imagination but for the entirely boneheaded decisions required for it to occur. Bill Gates, Elon Musk and Hawking would have to be running everything for it to be even theoretically possible. I don't, for example, see a society that so entirely integrates AI into its culture not giving AI at least partial citizenship, which short-circuits the whole scenario.
|
# ? Jan 30, 2015 23:38 |
|
This loving topic keeps making the internet rounds and it's driving me batty. It's just a big distraction from the question of constraining the people greedy enough to put some poorly understood piece of software in a spot where it might kill us all just by accident. The scary paperclip-optimizer "AGI" won't turn the universe into paperclips because that is just an inescapable fact about self-improving optimizers; it's just a lovely self-improving optimizer. It's software that someone wrote in a dumb way. It's a bug, basically. We don't need to wait for fanciful machine-God AIs to run into this problem; we are already running into it now.

Our understanding of the provable properties of the software we write is shaky at best as is - but it is something that is very much taken into account when that software is in charge of something that can kill people, even more so when it has to operate unsupervised. The software running nuclear reactors and deep space probes isn't written like most regular software - it is purposefully kept as small and as simple as possible, using only the oldest, crustiest, but also best understood techniques. Every execution path is accounted for and verified many times over. Or sometimes it's written in exotic academic languages that let you prove all manner of magical things about your program using static analysis alone.

Point being, the day-to-day software being written right now already exceeds anything a human might meaningfully follow or understand, so we are already pretty drat careful about putting that software anywhere it can cause serious harm. When we can't do that, we dumb the software down until we can wrap our heads around it. We already know how to deal with "superintelligent" software - that isn't the problem at all. The problem is willfully forgoing the known solution - like when car manufacturers started putting more and more critical parts of the car under computer control, but kept writing the controller software the same way it was written when all it did was run the AC and CD player. A bunch of people had to die from faulty brakes before they got around to fixing that. That's what all this bullshit is distracting from.
|
# ? Feb 4, 2015 08:36 |
|
That, and climate change, to be honest. I mean, ranking AI above climate change as an existential threat to human existence is incredibly nuts to me. Especially when the men doing the ranking each have enough individual wealth to save the entire Amazon rain forest many times over.
|
# ? Feb 4, 2015 12:28 |
|
It'd be nice if they'd actually give some evidence for why AI would be evil and vindictive.
|
# ? Feb 4, 2015 14:57 |
|
CommieGIR posted:It'd be nice if they'd actually give some evidence for why AI would be evil and vindictive.
|
# ? Feb 4, 2015 15:29 |
|
In the future all available jobs will be taken by robot AIs, and all land and water will be owned by monopoly corporations. All of us trespassing humans will be pushed into ghettos then open air prisons and will become dangerous terrorists who get bombed to extermination by AI drones in the name of security, democracy, and justice.
|
# ? Feb 4, 2015 15:52 |