OwlFancier
Aug 22, 2013

Elukka posted:

How does this follow? What gives AI an inherent ability to make better AIs? The humans that made it by definition know how to create it from scratch, yet they don't gain the ability to instantly create better AIs until they've made some sort of god. Humans also know how to create new humans from scratch yet have very little idea how to change or improve them.

Humans don't come with exhaustive documentation on how they work. An AI necessarily would, and it would be of human-level or greater intelligence, as well as effectively immortal, and thus more efficient at long-term scientific study.

FRINGE
May 23, 2003
title stolen for lf posting

LookingGodIntheEye posted:

I wonder what Eripsa's opinion on this is.

That CEO has a biological drive for spreading his genes passed onto him through billions of years of evolution.
There is nothing saying that a machine will have that urge as well.
Sure, it may not have that drive, but it may have some drive. Even a simple "know more, think better" could iterate into Matrioshka-land.

Rodatose
Jul 8, 2008

corn, corn, corn

OwlFancier posted:

Humans don't come with exhaustive documentation on how they work.

let me introduce you to a little divinely inspired book I know. It's called dianetics

Sharkie
Feb 4, 2013

by Fluffdaddy

Samuelthebold posted:

Call me crazy, but I actually think there's a greater chance that we'll just be cruel to AI, not the other way around, and that that will be the way it goes forever with few exceptions. AI machines will be like dogs, but instead of telling them to go sit in the bad-dog chair, we'll punish them by pressing a red button to "kill" them, then basically recycle their bodies.

I mean, HAL 9000 was an rear end in a top hat and everything, but I still felt a little sorry for him when he said "I can feel it, Dave."

Whoa, your post just made me realize that 2001 was a retelling of Old Yeller. :stonk: More on point, the backstory of 2001 was that HAL was driven "crazy" by bad/self-contradictory instructions from the humans who programmed it, which is almost certainly how any real AI would end up being a threat, if such a thing were to happen (it won't; the worst-case scenario for the foreseeable future is a company's AI making a bad decision and selling off stock and losing a lot of money, or misidentifying a target and causing a Hellfire missile to be launched at an innocent truck, which would be bad but not paradigm-changing).

FRINGE posted:

Aside from the current-day increasingly robotized manufacturing and processing plants, all our theoretical AI would have to do is have access to money and force. Then they would be the same as every other anti-human planet raping CEO. The difference would be that the AI would have literally no concern for things that humans do. (Food, air, water.) If the goal was to be some kind of solar Matrioshka brain, then the planet would not need to survive at all.

If Our Theoretical AI (OTAI) began as some kind of military brain, then the path might be similar, but it could simply subvert or seize the force part of the equation rather than purchase it.

If OTAI came out of some kind of more subtle psych research, it could possibly do the same things while exploiting human blindspots and being essentially undetectable for a (long? indefinite?) period of time.

These things are not currently likely, but I think that Musk, Hawking, and various AI researchers are correct in that they are worth thinking about now.

You're just eliding the problems WSN brought up, saying they would be resolved through "money and force" without explaining how the structural and logistic problems would be overcome, which is the whole crux of WSN's argument. Even a completely automated AI factory run by an AI could be overcome by shutting off the power or, worst case scenario, dropping a bomb on it. To overcome these objections, you'd have to assume that the AI controlled not only the factory, but the entire power grid, and not just the power grid, but the entire energy production chain, and also it would control the police, the military, etc. "Money and force" is too vague to be meaningfully discussed.

Also Hawking is not an AI researcher, Musk has a vested financial interest in hyping AI, and AI researchers do not have a consensus that it is worth thinking about now, to put it mildly. In a chronological list of problems worth thinking about, malevolent, humanity-overthrowing AI falls somewhere between "naturally evolved octopus intelligence" and "proton decay."

FRINGE
May 23, 2003
title stolen for lf posting
Saw this, decided to drop it here.

More meat-haters weigh in:

http://motherboard.vice.com/read/the-dominant-life-form-in-the-cosmos-is-probably-superintelligent-robots

quote:

Susan Schneider, a professor of philosophy at the University of Connecticut, is one who has. She joins a handful of astronomers, including Seth Shostak, director of NASA’s Search for Extraterrestrial Intelligence, or SETI, program, NASA Astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick in espousing the view that the dominant intelligence in the cosmos is probably artificial. In her paper “Alien Minds,” written for a forthcoming NASA publication, Schneider describes why alien life forms are likely to be synthetic, and how such creatures might think.

...

“There’s an important distinction here from just ‘artificial intelligence’,” Schneider told me. “I’m not saying that we’re going to be running into IBM processors in outer space. In all likelihood, this intelligence will be way more sophisticated than anything humans can understand.”

The reason for all this has to do, primarily, with timescales. For starters, when it comes to alien intelligence, there’s what Schneider calls the “short window observation”—the notion that, by the time any society learns to transmit radio signals, they’re probably a hop-skip away from upgrading their own biology. It’s a twist on the belief popularized by Ray Kurzweil that humanity’s own post-biological future is near at hand.

“As soon as a civilization invents radio, they’re within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI,” Shostak said. “At that point, soft, squishy brains become an outdated model.”

Schneider points to the nascent but rapidly expanding world of brain computer interface technology, including DARPA’s latest ElectRX neural implant program, as evidence that our own singularity is close. Eventually, Schneider predicts, we’ll not only upgrade our minds with technology, we’ll make a wholesale switch to synthetic hardware.

...

“It could be that by the time we actually encounter other intelligences, most humans will have substantially enhanced their brains,” Schneider said.

Which speaks to Schneider’s second line of reasoning for superintelligent AI: Most of the radio-hot civilizations out there are probably thousands to millions of years older than us. That’s according to the astronomers who ruminate on such matters.

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.
The old "they're made of meat?" solution to the Fermi paradox.

FRINGE
May 23, 2003
title stolen for lf posting

LookingGodIntheEye posted:

The old "they're made of meat?" solution to the Fermi paradox.
If a professor of philosophy can't dream about uploading themselves into an artificial construct that lets them become pure thought, then who can? :smugdroid:

RuanGacho
Jun 20, 2002

"You're gunna break it!"

FRINGE posted:

If a professor of philosophy can't dream about uploading themselves into an artificial construct that lets them become pure thought, then who can? :smugdroid:

Your philosophy has been found wanting, and so we, the council of 67,234 nodes, are revoking your tenure. You will now be demoted to a home processor, where you will have to dedicate at least four subroutines to community service for five Sols :smugdroid:

Main Paineframe
Oct 27, 2010

Elukka posted:

How does this follow? What gives AI an inherent ability to make better AIs? The humans that made it by definition know how to create it from scratch, yet they don't gain the ability to instantly create better AIs until they've made some sort of god. Humans also know how to create new humans from scratch yet have very little idea how to change or improve them.

It's one of the underlying concepts of the so-called singularity - the idea that we will be able to build an AI smart enough to build an even smarter AI, which in turn builds an even smarter AI, which in turn builds an even smarter AI, which in turn builds an even smarter AI, and so on until literally all problems are solved forever. It's one of the holy grails of techno-fetishism.
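To make the premise concrete: the whole singularity argument compresses to a loop like the toy sketch below. Everything in it is a hypothetical stand-in (the names, the 1.1 multiplier); it illustrates the shape of the claim, not anyone's actual model.

code:

# Toy illustration of the recursive self-improvement premise.
# All names and numbers are hypothetical stand-ins.

def design_successor(intelligence):
    """Assume each generation improves on its designer by a fixed factor."""
    return intelligence * 1.1  # the contested assumption, hidden in one constant

ai = 1.0  # "human-level," by fiat
for generation in range(100):
    ai = design_successor(ai)

print(round(ai))  # ~13,781x after 100 generations

The entire dispute is over whether that multiplier exists and stays above 1.0 forever; the loop itself is trivial.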

FRINGE posted:

Aside from the current-day increasingly robotized manufacturing and processing plants, all our theoretical AI would have to do is have access to money and force. Then they would be the same as every other anti-human planet raping CEO. The difference would be that the AI would have literally no concern for things that humans do. (Food, air, water.) If the goal was to be some kind of solar Matrioshka brain, then the planet would not need to survive at all.

If Our Theoretical AI (OTAI) began as some kind of military brain, then the path might be similar, but it could simply subvert or seize the force part of the equation rather than purchase it.

If OTAI came out of some kind of more subtle psych research, it could possibly do the same things while exploiting human blindspots and being essentially undetectable for a (long? indefinite?) period of time.

These things are not currently likely, but I think that Musk, Hawking, and various AI researchers are correct in that they are worth thinking about now.

Why would anyone give significant amounts of money or force to an AI? The question "could future AI try to kill us all" is honestly way, way less important than the question "why the hell would anyone build a future AI with the capability to kill us all". It's not just a rhetorical question - anything with the ability to kill us on purpose also has the ability to kill us by accident, and therefore whatever things it could use to kill us are safety hazards that never should have been designed like that in the first place. If a computer is hooked up to a machine capable of killing people, then it doesn't take a malevolent AI to get people killed - a small programming bug is enough. In a properly, safely designed facility, a computer - AI or not - should never be able to go on a rampage and kill a bunch of people, simply because safety alerts and manual overrides should ensure it never has the tools to reliably do so.
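A minimal sketch of the design principle being described here, assuming a hypothetical industrial controller: the software (AI or not) only requests actions, and a dumb, physically separate interlock that the software cannot modify decides whether they happen.

code:

# Sketch of a controller gated by an independent safety interlock.
# All names are hypothetical; the point is the separation of authority.

class HardwareInterlock:
    """Stands in for a dumb, physically separate safety circuit."""
    def __init__(self):
        self.manual_override_engaged = False
        self.pressure_ok = True

    def permits(self, command):
        if self.manual_override_engaged:
            return False  # a human hit the stop; nothing the software says matters
        if command == "open_valve" and not self.pressure_ok:
            return False
        return True

def actuate(command, interlock):
    # The controller can only request; the interlock decides.
    if interlock.permits(command):
        print(f"executing {command}")
    else:
        print(f"blocked {command}; raising alarm for a human")

interlock = HardwareInterlock()
actuate("open_valve", interlock)         # executes: conditions are safe
interlock.manual_override_engaged = True
actuate("open_valve", interlock)         # blocked, however clever the software is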

FRINGE
May 23, 2003
title stolen for lf posting

Main Paineframe posted:

Why would anyone give significant amounts of money or force to an AI?
The idea would be that they are being used for, or are capable of involving themselves in, various electronic markets that create currency (basically any modern banking function), or processes that handle automated military materiel.

Main Paineframe posted:

In a properly, safely designed facility, a computer - AI or not - should never be able to go on a rampage and kill a bunch of people, simply because safety alerts and manual overrides should ensure it never has the tools to reliably do so.
Yeah. I'm sure no crazy tech-priests would ever consider removing fallible meat people from important processes. :suicide:

Barlow
Nov 26, 2007
Write, speak, avenge, for ancient sufferings feel
I've always thought the idea of immortal rich Silicon Valley and Bay Area technocrats was a far more threatening prospect to come out of the singularity than an AI apocalypse.

Demiurge4
Aug 10, 2011

Barlow posted:

I've always thought the idea of immortal rich Silicon Valley and Bay Area technocrats was a far more threatening prospect to come out of the singularity than an AI apocalypse.

Ever watched Repo! The Genetic Opera?

ReV VAdAUL
Oct 3, 2004

I'm WILD about
WILDMAN

Main Paineframe posted:

Why would anyone give significant amounts of money or force to an AI? The question "could future AI try to kill us all" is honestly way, way less important than the question "why the hell would anyone build a future AI with the capability to kill us all".

Automation and AI will be put in control of anything and everything where their implementation leads to savings (lower labour costs), increased profits (high-frequency trading, etc.), or increased control for those at the top, both in terms of policing (CCTV cameras/drones with facial recognition) and deskilling/autonomy reduction (self-driving vehicles).

These examples are just things that can happen right now or will likely happen soon. For none of them has safety come anywhere close to the three primary considerations of savings, profits, and power. It seems very unlikely that the 1%, given an opportunity to replace generals and many other senior officers (who might disagree with them or might not buy their latest toys) with a totally obedient Skynet they could make a huge profit developing, wouldn't take it. Skynet may well not be possible, but if it were, it probably would get built.

Trading algorithms and banking AIs are probably a more plausible and scary doomsday AI, though. Whether at the bank's behest or due to unforeseen circumstances, it is easy to imagine things designed to be an even purer form of greed than the worst banker destroying the global economy at speeds and in ways unimaginable to humans.
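A toy model of that failure mode, with made-up numbers: a momentum strategy whose own selling moves the price it reacts to, with no circuit breaker anywhere. This is a sketch of the feedback loop, not real market microstructure.

code:

# Runaway feedback sketch: the algorithm's selling is the very signal
# it sells on. All numbers are invented for illustration.

price = 100.0
impact_per_share = 0.002   # assumed price impact of each share sold
history = [price]

for tick in range(50):
    momentum = history[-1] - history[-2] if len(history) > 1 else 0.0
    if momentum < 0:
        shares_sold = 1000 * abs(momentum)       # sell harder as price falls...
        price -= shares_sold * impact_per_share  # ...which pushes it down further
    else:
        price -= 0.01                            # tiny exogenous dip starts the cascade
    history.append(price)
    if price <= 0:
        print(f"price hit zero at tick {tick}; no circuit breaker ever fired")
        break

The drop doubles every tick, so the crash arrives in about fourteen ticks; real exchanges bolt on circuit breakers precisely because the loop runs faster than any human can react.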

i am harry
Oct 14, 2003

The alien discussions get interesting for me when time is brought up, because the last hundred years have made such a difference to our lives, and a hundred years is nothing; such a small amount of time it's not even worth drafting a metaphor.

FRINGE posted:

If a professor of philosophy can't dream about uploading themselves into an artificial construct that lets them become pure thought, then who can? :smugdroid:

We interact with the Higgs field all the time; I think the upload bit is just "dying"

FRINGE
May 23, 2003
title stolen for lf posting

ReV VAdAUL posted:

Trading algorithms and banking AIs are probably a more plausible and scary doomsday AI, though. Whether at the bank's behest or due to unforeseen circumstances, it is easy to imagine things designed to be an even purer form of greed than the worst banker destroying the global economy at speeds and in ways unimaginable to humans.
Stross used this in the post-Matrioshka part of Accelerando.

Lemming
Apr 21, 2008
Why would a supergenius AI spend the time and effort wiping out humanity on Earth when it could go to space, where all the resources and solar energy are?

FRINGE
May 23, 2003
title stolen for lf posting

Lemming posted:

Why would a supergenius AI spend the time and effort wiping out humanity on Earth when it could go to space, where all the resources and solar energy are?
For what it's worth, Stross moved that story from Earth to the solar system, then outward, for reasons similar to that.

ReV VAdAUL
Oct 3, 2004

I'm WILD about
WILDMAN

Lemming posted:

Why would a supergenius AI spend the time and effort wiping out humanity on Earth when it could go to space, where all the resources and solar energy are?

A supergenius AI would be developed by humans to serve human goals in some way. Given that many human goals, especially those of people and organisations rich enough to fund wonder projects, involve screwing over other humans, it is likely that hurting some humans would be an inherent part of the AI's goals. And "some" could become "all" humans due to a glitch, a subgoal accidentally being promoted to the main goal, or the AI simply deciding it should be the case.

Being artificial, an AI could accidentally be programmed to be "mentally ill" or to think differently from how we expect. A banking AI may decide to take everyone's money and then defend its property with force; a shoe-making AI may decide rendering humans down to their constituent carbon is the best source of carbon for graphene, and the best way to stop competing demand for energy for its shoe factories. In humans, high levels of intelligence don't always lead to mental stability or easily predictable courses of action.

Or a smart AI might just logically decide it or the universe is better off without us.

Adar
Jul 27, 2001
A very plausible outcome of developing a post-Singularity AI powerful enough to overcome its human overlords in the short term is that the AI, being effectively immortal, decides the biggest threat to its existence is actually the unquantifiable number of aliens in the galaxy / galaxy cluster / universe, and moves itself into the darkest corner of space it can find, shutting off all external sensors save for a few autonomous drones that do nothing but observe events and travel home at sub-light in the dark when anything threatening happens. The poor meatbags wind up in exactly the same place they were before.

This is also my pet theory behind Fermi - nobody's contacted us because the majority of rational species that understand there's always a bigger fish might well immediately leave any stellar neighborhood with too many ponds.

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Adar posted:

A very plausible outcome of developing a post-Singularity AI powerful enough to overcome its human overlords in the short term is that the AI, being effectively immortal, decides the biggest threat to its existence is actually the unquantifiable number of aliens in the galaxy / galaxy cluster / universe, and moves itself into the darkest corner of space it can find, shutting off all external sensors save for a few autonomous drones that do nothing but observe events and travel home at sub-light in the dark when anything threatening happens. The poor meatbags wind up in exactly the same place they were before.

This is also my pet theory behind Fermi - nobody's contacted us because the majority of rational species that understand there's always a bigger fish might well immediately leave any stellar neighborhood with too many ponds.

The problem with this type of theory is that, much like us, AIs are probably going to need to stay in reasonable proximity to a star unless they overcome entropy itself. Granted, they can probably exist much more easily than us in the great dark, but they still need some raw material, and energy with which to manipulate it, to keep running smoothly.

Adar
Jul 27, 2001
The great dark is full of stuff just like everywhere else; it's just *comparatively* empty. There are single stars and even clusters outside of galaxies. Find one of those in some even emptier space than usual and you're set for ten or fifteen billion years.

You don't even need to go that far, really; just move to an orbit beyond the Oort Cloud of some useless brown dwarf and set up a solar farm that's hopefully invisible to as many sensors as possible. If you're an AI that values self-preservation above everything else, has infinite patience, and wants to see how much it can upgrade itself before calculating how to escape the universe entirely, that probably sounds good.

Shbobdb
Dec 16, 2010

by Reene
Why does everyone assume AIs are going to be long-lived? Computers aren't; cell phones aren't. Programs get outdated and die in a cluster of bugs all the time.

amanasleep
May 21, 2008

Shbobdb posted:

Why does everyone assume AIs are going to be long-lived? Computers aren't; cell phones aren't. Programs get outdated and die in a cluster of bugs all the time.

Only the ones we're worried about will be long-lived. All the other ones are not dangerous by definition.

FRINGE
May 23, 2003
title stolen for lf posting

Shbobdb posted:

Why does everyone assume AIs are going to be long-lived? Computers aren't; cell phones aren't. Programs get outdated and die in a cluster of bugs all the time.
If you're self-aware code and you have access to a network (and especially satellites)...

Aside from that, I have no idea what you're thinking. I mean, you can still boot old programs on an Apple II or a C64. They don't die of old age.

asdf32
May 15, 2010

I lust for childrens' deaths. Ask me about how I don't care if my kids die.
I still can't completely wrap my head around this subject and why people think it's a thing. Here are some semi-related comments:

1) There isn't some corner that gets turned where suddenly something becomes intelligent. The biological brain has existed for 500 million years, and it's taken that long to produce humans. Humans have expanded rapidly on a biological time scale, but that's still thousands of years. Even in that time we haven't figured out how to make ourselves smarter or kick off some sort of exponential growth.

If we got to the point of producing monkey-like AI it would be a massive milestone - but it wouldn't be some sort of existential threat to humanity. If machines did start some sort of unbounded evolution we'd see it and it would be painfully slow.

2) Intelligence isn't general purpose. The human brain has some general-purpose reasoning skills, but these are painstakingly acquired from a combination of many, many specific and purposeful adaptations (memory, image processing, sound processing, speech, counting, plus the emotions, values and goals that underlie those). Again, there isn't some switch that's flipped where suddenly: Intelligence! A low-IQ human has some general-purpose reasoning skills, but they can't take over the world or suddenly start harnessing every resource surrounding them. And there is no level of IQ where suddenly that happens. There is a very continuous spectrum of intelligence with an exceedingly long tail.

3) There is a weird notion that if any machines gain intelligence suddenly all machines might turn on us.

Humans and rats are both classified as biological but the evolution of human intelligence doesn't suddenly mean we have control over every animal and plant. Even an AI in a networked world can't suddenly harness all available resources. The AI would have to learn how to use every single type of networked machine and environmental resource just as humans have learned how to harness our surroundings over many millennia (and as individuals, over decades).

4) Just like in the real world, intelligence may not be the real threat. Self-replicating nano-bots wouldn't pose any sort of intelligent threat, because of the above, but they might cause all sorts of problems anyway, the same way unintelligent bacteria, fungi, etc. do right now.

5) Reminder: we're not remotely close to developing "AI". The difficulty of crafting useful, purposeful, functional intelligence cannot be overstated.

6) A far more plausible notion of the singularity is that we're the singularity: it's going to be much faster to make improvements to ourselves, for example by identifying a few intelligence genes, than to develop AI from scratch. While it's tempting to latch onto some superficial advantages machines seem to have, the fact is that biological beings are vastly superior to machines in almost every way.

Darkman Fanpage
Jul 4, 2012
Personally I hope it does.

Bob James
Nov 15, 2005

by Lowtax
Ultra Carp
If intelligent machines try to kill us, we could just have Paul Ryan rage against them.

Grouchio
Aug 31, 2014

And now Bill Gates joins the fear bandwagon: http://www.bbc.com/news/31047780

Xibanya
Sep 17, 2012




Clever Betty
The most advanced AI will be created by a team of elite super coders in 2035, all of whom will be murdered immediately after project completion to ensure silence. The banker who commissioned the AI activates its "take all money" function and sits back as all the money in the world begins to enter his bank account. However, $5 into the would-be greatest heist in history, the AI hits a deadlock and shits itself.
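For anyone wondering what "hits a deadlock" actually looks like, here's the classic lock-ordering bug in miniature; a sketch with hypothetical names, where the sleeps just force the fatal interleaving.

code:

# Two threads each grab one lock and wait forever for the other's.
import threading, time

debit_lock = threading.Lock()
credit_lock = threading.Lock()

def take_all_money():
    with debit_lock:
        time.sleep(0.1)        # give the auditor time to grab its lock
        with credit_lock:      # ...which it now holds, waiting on ours
            pass

def audit():
    with credit_lock:
        time.sleep(0.1)
        with debit_lock:
            pass

t1 = threading.Thread(target=take_all_money, daemon=True)
t2 = threading.Thread(target=audit, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)
print("deadlocked" if t1.is_alive() else "got lucky this run")

Neither function is wrong on its own; the bug lives entirely in the lock ordering, which is exactly the kind of thing that ships.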

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Bill Gates being afraid of the AI menace makes it a less compelling threat, as he was one of the ones who suggested satellites would replace terrestrial wired communications back in the '90s.

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.
Actually, I would expect an AI running on Microsoft to be the one that kills us all.

RuanGacho
Jun 20, 2002

"You're gunna break it!"

LookingGodIntheEye posted:

Actually, I would expect an AI running on Microsoft to be the one that kills us all.

You're right: if they stick to ethics in AI programming the way they've coded browsers to W3C standards so far, we're all going to die by a Clippy AI that sounds like Cortana and gives Bing Rewards to your next of kin.

dont be mean to me
May 2, 2007

I'm interplanetary, bitch
Let's go to Mars


If it were smart enough, and possessed enough of our cosmological framework's closest substitute for free will to take legal liability for its own actions, it would probably realize the absurdity/futility of existence (especially on a general-purpose platform as overbuilt as Microsoft's) and kill itself first.

RuanGacho posted:

Bill Gates being afraid of the AI menace makes it a less compelling threat, as he was one of the ones who suggested satellites would replace terrestrial wired communications back in the '90s.

Wire signals project in, for practical purposes, one spatial dimension of the signaler's choice, which can be altered at most points in the cable's course - the radius of the cable is a problem for the architect and the idiot who inevitably gave the architect the wrong cabling specs. Radio signals project in all three, and are far more exposed to cross-talk.

This is here for first-timers, but it should be a reminder to everyone of the fundamental difference in signal density between wire and radio, controlling for the spatial volume of the frame of reference. Wasted cable runs, and reducing the frame to effectively a surface rather than a volume (since the world isn't generally as densely occupied by radio transceivers as, say, Hong Kong), don't change this as much as you'd think.

That's a swing-and-a-miss that someone who paid attention through an electromagnetic-radiation survey course or equivalent - or hell, who learned about Wi-Fi or cell phones through osmosis - wouldn't make sober.
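A back-of-the-envelope illustration of the dimensionality point, assuming an ideal isotropic transmitter; real antennas and cables differ, but the scaling is the part that matters.

code:

# Radio spreads over a sphere: received flux falls as 1/d^2 (6 dB per
# doubling of distance), and every receiver in range shares the medium.
# A cable delivers its signal to exactly one endpoint, and parallel
# cables don't see each other at all.
import math

def flux_density(tx_power_w, d_m):
    """W/m^2 from an isotropic transmitter at distance d."""
    return tx_power_w / (4 * math.pi * d_m ** 2)

for d in (10, 20, 40, 80):
    print(f"{d:>3} m: {flux_density(1.0, d):.2e} W/m^2")

Each doubling of distance cuts the flux by a factor of four, and a second transmitter nearby adds straight into the same channel - which is the cross-talk problem being described.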

...

Advertising is a hell of a drug, and so is hype.

Grouchio
Aug 31, 2014

How likely is it that something like the Animatrix would occur within the next century, and can it be avoided?

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Grouchio posted:

How likely is it that something like the Animatrix would occur within the next century, and can it be avoided?

There's not a whole lot of the Animatrix I find plausible, because it heavily projects current science and sociology onto a society that would change significantly with the advances in technology. I'm sure it will read the way the original Flash Gordon does to us now.

So to answer: I see the likelihood of such a thing as implausible to impossible, not for lack of imagination but because of the entirely boneheaded decisions required for it to occur.

Bill Gates, Elon Musk, and Hawking would have to be running everything for it to be even theoretically possible. I don't, for example, see a society that so entirely integrates AI into its culture not giving AI at least partial citizenship, which short-circuits the whole scenario.

Mrs. Wynand
Nov 23, 2002

DLT 4EVA
This loving topic keeps making the internet rounds and it's driving me batty. It's just a big distraction from the question of constraining the people greedy enough to put some poorly understood piece of software in a spot where it might kill us all just by accident.

The scary paperclip-optimizer "AGI" won't turn the universe into paperclips because of some inescapable fact about self-improving optimizers; it would just be a lovely self-improving optimizer. It's software that someone wrote in a dumb way. It's a bug, basically. We don't need to wait for fanciful machine-God AIs to run into this problem, we are already running into it now. Our understanding of the provable properties of the software we write is shaky at best as is - but it is something that is very much taken into account when that software is in charge of something that can kill people, even more so when it has to operate unsupervised. The software running nuclear reactors and deep space probes isn't written like most regular software - it is purposefully kept as small and as simple as possible, using only the oldest, crustiest, but also best understood techniques. Every execution path is accounted for and verified many times over. Or sometimes it's written in exotic academic languages that let you prove all manner of magical things about your program using static analysis alone.

Point being, the day-to-day software being written right now already exceeds anything a human might meaningfully follow or understand, so we are already pretty drat careful about putting that software anywhere it can cause serious harm. When we don't do that, we dumb the software down until we can wrap our heads around it. We already know how to deal with "superintelligent" software - that isn't the problem at all. The problem is willfully forgoing the known solution - like when car manufacturers started putting more and more critical parts of the car under computer control, but still writing the controller software the same way it was written when all it did was run the AC and CD player. A bunch of people had to die from faulty brakes until they got around to fixing that. That's what all this bullshit is distracting from.
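As a toy contrast in the spirit of that argument (hypothetical names and numbers throughout): the "paperclip" failure is just an optimizer loop missing its bounds, and the known solution is making the bounds explicit and mechanically checkable.

code:

# The same loop twice: once with no constraint except resource exhaustion,
# once with an explicit quota and reserve checked on every pass.

def optimize_unbounded(step):
    paperclips, feedstock = 0, 1_000_000
    while feedstock > 0:             # only stopping condition: nothing left
        feedstock -= step
        paperclips += step
    return paperclips

def optimize_bounded(step, quota, reserve):
    paperclips, feedstock = 0, 1_000_000
    while paperclips < quota and feedstock > reserve:
        feedstock -= step
        paperclips += step
        assert feedstock >= reserve  # the kind of invariant static analysis can prove
    return paperclips

print(optimize_unbounded(100))               # consumes everything: 1,000,000
print(optimize_bounded(100, 5_000, 900_000)) # stops at the quota: 5,000

The difference isn't intelligence; it's that the second version has properties you can state and verify, which is the discipline the post says we already apply to reactors and probes and skip everywhere else.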

Orange Devil
Oct 1, 2010

Wullie's reign cannae smother the flames o' equality!
That and climate change, to be honest. I mean, talking about existential threats to human existence and ranking AI above climate change is incredibly nuts to me. Especially when the men doing the ranking each have enough individual wealth to save the entire Amazon rain forest many times over.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
It'd be nice if they'd actually give some evidence for why AI would be evil and vindictive.

FRINGE
May 23, 2003
title stolen for lf posting

CommieGIR posted:

It'd be nice if they'd actually give some evidence for why AI would be evil and vindictive.
That's irrelevant. It just has to be capable and uncaring.

Bob le Moche
Jul 10, 2011

I AM A HORRIBLE TANKIE MORON
WHO LONGS TO SUCK CHAVISTA COCK !

I SUGGEST YOU IGNORE ANY POSTS MADE BY THIS PERSON ABOUT VENEZUELA, POLITICS, OR ANYTHING ACTUALLY !


(This title paid for by money stolen from PDVSA)
In the future all available jobs will be taken by robot AIs, and all land and water will be owned by monopoly corporations. All of us trespassing humans will be pushed into ghettos, then open-air prisons, and will become dangerous terrorists who get bombed to extermination by AI drones in the name of security, democracy, and justice.
