Grouchio
Aug 31, 2014

Today I was surfing the web when a frightening existential threat was brought up by Stephen Hawking:

quote:

http://www.bbc.com/news/technology-30290540

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

He told the BBC:"The development of full artificial intelligence could spell the end of the human race."
His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI.

But others are less gloomy about AI's prospects.
The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.
Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.
Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
But others are less pessimistic.
"I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world problems will be realised," said Rollo Carpenter, creator of Cleverbot.
Cleverbot's software learns from its past conversations, and has gained high scores in the Turing test, fooling a high proportion of people into believing they are talking to a human.

What are the chances that AI, once having reached technological singularity (i.e. consciousness), will attempt to exterminate the human race or dominate it into slavery? Can we prevent this? Will we all be dead in 60 years?

Yawgmoft
Nov 15, 2004
If a machine race ever went to the level where it could enslave humanity, it would no longer need to. Why have human slaves when robot slaves don't have annoying issues like sleep time?

Fried Chicken
Jan 9, 2011

Don't fry me, I'm no chicken!
God I hope so

Goatse James Bond
Mar 28, 2010

If you see me posting please remind me that I have Charlie Work in the reports forum to do instead
I dunno, ask Eliezer Yudkowsky.

(also, that's not what technological singularity means)

icantfindaname
Jul 1, 2008


Fried Chicken posted:

God I hope so

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Grouchio posted:

What are the chances that AI, once having reached technological singularity (i.e. consciousness), will attempt to exterminate the human race or dominate it into slavery? Can we prevent this? Will we all be dead in 60 years?
Slavery: 0%, extinction: non-zero but ridiculously, stupidly tiny

Can we prevent this: Can you prevent the statistically highly improbable if not completely impossible?

Will we all be dead in sixty years: Well, with the progress of modern medicine, many of us will still be alive barring a general social collapse, but based on current projections I'd say by far the greater part of us would be dead by then, yes.

Goatse James Bond
Mar 28, 2010

If you see me posting please remind me that I have Charlie Work in the reports forum to do instead

Nessus posted:


Will we all be dead in sixty years: Well, with the progress of modern medicine, many of us will still be alive barring a general social collapse, but based on current projections I'd say by far the greater part of us would be dead by then, yes.

Speak for yourself, I'm four years old.

Blue Star
Feb 18, 2013

by FactsAreUseless
Quick disclaimer: I'm just a dumb layperson so my opinion means nothing.

Having said that, from what I understand, we're still far away from creating artificial consciousness. Neuroscience has come far but there's still so much we don't know, and we're only scratching the surface. We don't know how to create an artificial intelligence that has emotions, volition, and all that sort of stuff.

On the other hand, there are some pretty revolutionary advances happening in "weak" and "narrow" AI, like self-driving cars and other things. I've seen a lot of people talk about deep learning and stuff like that, and automation pushing people out of jobs. Maybe a lot of it is just hype, but I don't think it's all vaporware either. Who knows what the next few decades will bring? I don't buy into Ray Kurzweil's Singularity idea (a little too out there for my taste) but I think we're heading toward a pretty interesting world. I don't think AI is going to kill us, but unemployment and automation may be a thing that happens. But like I said, I'm just a layperson and I don't really know what I'm talking about.

cp91886
Oct 26, 2005
I don't think AI would kill us or enslave us, I think it would just be better at everything than us and taunt us about it relentlessly. Much sweeter justice

Kaal
May 22, 2002

through thousands of posts in D&D over a decade, I now believe I know what I'm talking about. if I post forcefully and confidently, I can convince others that is true. no one sees through my facade.
Hawking has been going through a moody spell as he's aged. Realistically this sort of system failure gets thought about and designed around at the most fundamental level, though generally the concern is a hardware fault rather than "suddenly turns homicidal". In fiction this kind of trope is fairly common thanks to 30-40 years of cyberpunk, and those stories generally rely pretty heavily on a brand-new AI being given massive responsibilities without any fail-safe commands, and being locked inside a bomb-proof bunker. In real life, HAL 9000 would just be reset to factory settings - you can do the same thing with your web router in about two minutes from anywhere on Earth.

achillesforever6
Apr 23, 2012

psst you wanna do a communism?
My friend at the museum is more terrified about nanomachines that will one day eat all the carbon in the world. I don't really believe that either.

Schizotek
Nov 8, 2011

I say, hey, listen to me!
Stay sane inside insanity!!!
https://www.youtube.com/watch?v=T8y5EXFMD4s&t=193s

I like John Oliver too Groucho.

OwlFancier
Aug 22, 2013

Given that an artificial intelligence would be, by nature of its construction, completely alien to a human, it's difficult to say how it would react to the world, or to humans.

It's more likely that we would never develop it, or possibly that even if we did, we wouldn't understand that it was acting intelligently because its actions and reasoning wouldn't make sense.

Synthbuttrange
May 6, 2007

They'd better.

NoNotTheMindProbe
Aug 9, 2010
pony porn was here
It's fairly unlikely that an AI could kill a human as machines are far more vulnerable than humans. All you need to do to stop them is cut their power or put a set of stairs between you and the machine. Alternatively, all the dirt poor slave miners that produce the minerals needed for computers could just shut down. Humans can just eat goats and nuts if they need to; computers need all sorts of crap to keep running.

Nameless_Steve
Oct 18, 2010

"There are fair questions about shooting non-lethally at retreating civilian combatants."
Could Future Al Pacino try to kill us all?

Clyde Radcliffe
Oct 19, 2014

The latest episode of Person of Interest (a TV show about AI disguised as a procedural cop show) raised this question. In the show, the evil AI doesn't want to destroy humanity. It's wired in and connected to everything, and it thrives on the information it receives about every human being. A human-free future would be death to that AI. As a machine it needs a constant feed of new information, behavioural patterns, and all the other unpredictable stuff we humans provide. A world populated by predictable robots and machines doesn't need an overseer.

Given that any AI is going to be ultimately created by humans, the complete destruction of the human race is unlikely. There will be some basic programming that prevents it from wiping us out. But how such a machine might interpret "protecting humanity" is a big unknown. A benign AI might decide that wiping out 90% of the world's population to solve food/fuel crises would be in the best interest of humanity.

If/when we do create such an AI, it's more likely that it will be a force that serves to ensure the continuation of the human race, albeit in ways that might seem harsh or unjustifiable to our sensibilities.

What fascinates me is how we might react to such an AI. Through all of recorded human history we've been bowing down to gods, attributing events and happenings to mystical superbeings. But now, the creation of an all-seeing, all-controlling AI isn't outside the realm of possibility, and in the future we may well create a higher intelligence.

If such a future AI comes to be, and it relies on a constant stream of data from us mortals to 'feed' it, is that a million miles away from the notion of a god that feeds on human worship?

Toplowtech
Aug 31, 2004

Yawgmoft posted:

If a machine race ever went to the level where it could enslave humanity, it would no longer need to. Why have human slaves when robot slaves don't have annoying issues like sleep time?
The answer to that question is "HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE."

fade5
May 31, 2012

by exmarx

NoNotTheMindProbe posted:

It's fairly unlikely that an AI could kill a human as machines are far more vulnerable than humans. All you need to do to stop them is cut their power or put a set of stairs between you and the machine. Alternatively, all the dirt poor slave miners that produce the minerals needed for computers could just shut down. Humans can just eat goats and nuts if they need to; computers need all sorts of crap to keep running.
This is really what it boils down to. To borrow my post from the Yudkowsky thread, an AI is always tied to some sort of technology, and even assuming the AI never bluescreens (this is a big loving assumption), there are a million things that can easily gently caress up technology even without human intervention.

Humans can easily just sit back and let the weather take care of the AI: you've got humidity, condensation, a thunderstorm (both lightning and water), a flood, a hurricane/tsunami, a tornado, a snowstorm/ice-storm (below -40, most technology stops functioning), hail, a dust storm, or any other weather condition that makes technology poo poo itself (you know, most weather conditions).

Now if you add human intervention into the mix, there's the classic Electromagnetic Pulse (EMP), flamethrowers, liquid nitrogen, a fire hose attached to a water main, C4 and other explosives, a huge range of guns and ammunition, among thousands of other creative and destructive methods. If you want to go even more simplistic, just find a bunch of dudes with sledgehammers, or really any sort of blunt, heavy objects.

If an AI tries to take over, humans will do what they always do: attack it. And we will win, because humans are really, really good at breaking poo poo.:v:

Rush Limbo
Sep 5, 2005

its with a full house
If they do we should probably inter Hawking and those like him in camps immediately. He's going to be a collaborator, for sure.

paranoid randroid
Mar 4, 2007
Only if we arrest the AI in the Anger stage of rampancy, OP. Otherwise it should eventually reach meta-stability, probably before achieving total genocide.

Bates
Jun 15, 2006
nah one day we'll discover a really, really intelligent species of manatee or something. They will naturally supersede us because they are more intelligent and if they want to destroy us they will do so because they are more intelligent so they can do that.

VerdantSquire
Jul 1, 2014

I think this question is pretty easy; it's pretty much next to impossible. We may someday make a machine that is capable of looking through information and deducing conclusions from it faster than any human possibly could, but unless its creators suffer from severe brain damage they'll just design it so that either a single AI doesn't have enough control over the system to potentially cause an apocalypse, or it has broad, overriding rules that prevent it from actually hurting anyone in a style similar to Asimov's laws (minus the whole "more (loop)holes than Swiss cheese" part). The only thing that could possibly get around this is a sort of "true" artificial intelligence that actually has a consciousness and a drive to learn about how the world works, but I don't think that kind of stuff could really even exist in the first place. I mean, we barely understand how consciousness works, let alone whether it is possible to replicate it in any kind of purely artificial manner.

If the end of the world is brought about by machines, the cause will probably be that someone makes a self-replicating robot and, because of some sort of bureaucratic or personal incompetence, never adds a proper "End" condition. All it'd take is one machine getting activated and subsequently lost, and then the next thing you know the entire world is getting eaten up Grey Goo style.

Stairmaster
Jun 8, 2012

fade5 posted:

Humans can easily just sit back and let the weather take care of the AI: you've got humidity, condensation, a thunderstorm (both lightning and water), a flood, a hurricane/tsunami, a tornado, a snowstorm/ice-storm (below -40, most technology stops functioning), hail, a dust storm, or any other weather condition that makes technology poo poo itself (you know, most weather conditions).

Those aren't really ideal conditions for humans either.

Salt Fish
Sep 11, 2003

Cybernetic Crumb
I don't see why a computer would have any motivation to take over anything. Growing, reproducing and conquering are human traits that might not translate to another intelligence.

OwlFancier
Aug 22, 2013

VerdantSquire posted:

The only thing that could possibly get around this is a sort of "true" artificial intelligence that actually has a consciousness and a drive to learn about how the world works, but I don't think that kind of stuff could really even exist in the first place. I mean, we barely understand how consciousness works, let alone whether it is possible to replicate it in any kind of purely artificial manner.

Well, the existence of humans proves that you don't need to understand something in order to create it; we do it nowadays with genetic algorithms. Turns out you can make stuff that really works by just trying poo poo at random until something works, and you can do that unfathomably fast with a big enough computer.

VerdantSquire
Jul 1, 2014

OwlFancier posted:

Well, the existence of humans proves that you don't need to understand something in order to create it; we do it nowadays with genetic algorithms. Turns out you can make stuff that really works by just trying poo poo at random until something works, and you can do that unfathomably fast with a big enough computer.

By the time we reach a point where we could build a computer powerful enough to test every single potential combination and conclude whether it can achieve consciousness or not in a reasonable amount of time, I have a feeling that we'd probably be well past the point where any kind of cataclysmic event could even come close to wiping us all out. We'll all have genetically modified and augmented our own bodies so much that we'd be effectively immortal.

angerbot
Mar 23, 2004

plob
I don't see why we should waste time inventing an AI to kill ourselves when we're doing a pretty good job all by our lonesome. Talk about lazy.

OwlFancier
Aug 22, 2013

VerdantSquire posted:

By the time we reach a point where we could build a computer powerful enough to test every single potential combination and conclude whether it can achieve consciousness or not in a reasonable amount of time, I have a feeling that we'd probably be well past the point where any kind of cataclysmic event could even come close to wiping us all out. We'll all have genetically modified and augmented our own bodies so much that we'd be effectively immortal.

Well, when I say randomly it's not really random: it selects for things which fulfil criteria in the design spec, and then uses those as the basis for future permutations. It models evolution in that respect (there's a rough code sketch of that loop after the link below).

There's an old but quite good example of it:

https://www.youtube.com/watch?v=mcAq9bmCeR0
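
A minimal sketch of that select-mutate-repeat loop, for the curious. This is illustrative only, not what the video uses: the "design spec" here is a toy fitness function (count the 1s in a bit string), and the population size, mutation rate, and generation count are invented for the example.

code:

import random

GENOME_LENGTH = 32
POPULATION_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 100

def random_genome():
    # A candidate solution is just a list of bits.
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Stand-in "design spec": more 1s is better.
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    # Splice two parents together at a random point.
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    # Selection: keep the better-scoring half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POPULATION_SIZE // 2]
    # Reproduction: children are mutated crossovers of random parents.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))

Run it a few times and the population converges on all 1s within a few dozen generations; swap the toy fitness function for a real design spec and the same loop applies.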

Bates
Jun 15, 2006

VerdantSquire posted:

By the time we reach a point where we could build a computer powerful enough to test every single potential combination and conclude whether it can achieve consciousness or not in a reasonable amount of time, I have a feeling that we'd probably be well past the point where any kind of cataclysmic event could even come close to wiping us all out. We'll all have genetically modified and augmented our own bodies so much that we'd be effectively immortal.

But then... we would be artificial :sax:

fade5
May 31, 2012

by exmarx

gently caress trophy 2k14 posted:

Those aren't really ideal conditions for humans either.
We don't need to endure those conditions forever, just until we kill the AI first!:black101:

The point is, all the fear about an AI taking over assumes humans are going to be a bunch of helpless little babies who just passively let the AI take over. That ain't gonna loving happen.:colbert:

Pope Fabulous XXIV
Aug 15, 2012
AI will be frustratingly friendly and cooperative with the immortal aristocracy of our fascist future nightmare world.

Anosmoman posted:

nah one day we'll discover a really, really intelligent species of manatee or something. They will naturally supersede us because they are more intelligent and if they want to destroy us they will do so because they are more intelligent so they can do that.

This. Every time the Robot Holocaust comes up I'm always like, "How? By what mechanism(s)?"

The answer is l33t sk1llz, apparently.

DOCTOR ZIMBARDO
May 8, 2006

Pope Fabulous XXIV posted:

AI will be frustratingly friendly and cooperative with the immortal aristocracy of our fascist future nightmare world.

Basically this. Certainly more people will be killed by global warming and the attendant fuel/water crises in the next century than will be killed by AI. Though if I were a homicidal AI looking to kill all the humans, instigating a clathrate gun or whatever is probably the most effective way to do it while leaving as much industry intact as possible.

Juffo-Wup
Jan 13, 2005

Pillbug
You know, when Hawking talks about black holes, I take his word for it because he's an expert. I don't know why, but he seems to think that makes him an expert on all manner of other things too. It's kinda frustrating. Oh well.

When the end comes, chatbots will not be to blame. Probably.

paranoid randroid
Mar 4, 2007

Juffo-Wup posted:

When the end comes, chatbots will not be to blame. Probably.

I see, and how do you feel about OH GOD THE AIR IS BURNING WHY IS THIS HAPPENING

Bates
Jun 15, 2006

Juffo-Wup posted:

You know, when Hawking talks about black holes, I take his word for it because he's an expert. I don't know why, but he seems to think that makes him an expert on all manner of other things too. It's kinda frustrating. Oh well.

Elon Musk is in on it too. Check and mate! :smugbert:

FRINGE
May 23, 2003
title stolen for lf posting
In case anyone missed it, Elon Musk is also on record saying this.

http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat
http://money.cnn.com/2014/10/26/technology/elon-musk-artificial-intelligence-demon/

NoNotTheMindProbe
Aug 9, 2010
pony porn was here
What the gently caress does Elon Musk even do?

Kaal
May 22, 2002

through thousands of posts in D&D over a decade, I now believe I know what I'm talking about. if I post forcefully and confidently, I can convince others that is true. no one sees through my facade.

Pope Fabulous XXIV posted:

This. Every time the Robot Holocaust comes up I'm always like, "How? By what mechanism(s)?"

"In the year 2199, after the ruling council of the United Corporation of America sent techno-engineers around the country connecting every single door and lock to the F.R.E.E.D.O.M. mainframe ..."

Bates
Jun 15, 2006

NoNotTheMindProbe posted:

What the gently caress does Elon Musk even do?

He pays people to build cars and rockets.

  • Locked thread