GWBBQ
Jan 2, 2005


Rape is pervasive in modern entertainment as a drop-in plot element that boils down to "something everyone recognizes as a terrible thing." Need a reprehensible bad guy? Have him rape someone; nobody sympathizes with a rapist. It's almost always written horribly, because it treats victims as quick plot devices and trivializes the aftermath, if the aftermath is addressed at all. Ironically, Yudkowsky avoids the usual poor and distasteful handling of it by going straight to "this unquestionably bad thing is OK now" and handwaving it away with magic future technology.

In other news, Elon Musk warns that developing AI is "summoning the demon," but manages to say we should be cautious without going into crazy torture-machine-god stories.

quote:

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.
http://www.washingtonpost.com/blogs/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/

GWBBQ fucked around with this message at 17:20 on Oct 26, 2014

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
I'll never understand why people are so terribly paranoid about AI in particular. Of course creating a new mind is dangerous, but in the same way that having a child is. It might grow up to be Hannibal Lecter and eat your skin, but that doesn't mean it will even want to.

GWBBQ
Jan 2, 2005


Cardiovorax posted:

I'll never understand why people are so terribly paranoid about AI in particular. Of course creating a new mind is dangerous, but in the same way that having a child is. It might grow up to be Hannibal Lecter and eat your skin, but that doesn't mean it will even want to.
Several times during the Cold War, on both sides, early warning systems malfunctioned and brought us to the point that people had to make the decision whether to launch a nuclear attack or not. Had those computers been able to retaliate automatically, they might have.

While I don't think it's rational to think that an AGI would go crazy and kill/enslave us all or that we're even capable of creating an AGI, I think it is rational to worry that if we do, we're more than capable of screwing up and ending up with unpredictable or disastrous results.
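
To make the distinction concrete, here's a toy sketch in Python (every name and number in it is invented, it just illustrates the shape of the problem):

code:

# Toy illustration (all names and numbers hypothetical): the same warning
# signal, with and without a human in the loop. The danger isn't
# intelligence; it's wiring the sensor straight to the trigger.

def sensor_reading():
    # Early-warning radar: sometimes wrong (sunlight on clouds, flocks of geese).
    return {"incoming_missiles": 5, "confidence": 0.7}

def automatic_policy(reading):
    # Retaliates on any positive reading: a malfunction becomes a launch.
    return reading["incoming_missiles"] > 0

def human_in_the_loop_policy(reading, operator_confirms):
    # A person can weigh context the sensor can't ("five missiles makes
    # no sense as a first strike") and refuse to pass the alarm along.
    return reading["incoming_missiles"] > 0 and operator_confirms

reading = sensor_reading()
print(automatic_policy(reading))                 # True: launch
print(human_in_the_loop_policy(reading, False))  # False: the operator says no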

Telarra
Oct 9, 2012

Ah, but don't you see? The only way to create an AI is one that can self-improve rapidly and indefinitely, so it'll quickly turn into a god and take over the world through sheer smartness, and there will be nothing we can do about it!

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

GWBBQ posted:

Several times during the Cold War, on both sides, early warning systems malfunctioned and brought us to the point that people had to make the decision whether to launch a nuclear attack or not. Had those computers been able to retaliate automatically, they might have.

While I don't think it's rational to think that an AGI would go crazy and kill/enslave us all or that we're even capable of creating an AGI, I think it is rational to worry that if we do, we're more than capable of screwing up and ending up with unpredictable or disastrous results.

That danger has nothing to do with the AI-ness of the computer and everything to do with the has-a-nuke-ness of the computer, though.

Cardiovorax
Jun 5, 2011


GWBBQ posted:

Several times during the Cold War, on both sides, early warning systems malfunctioned and brought us to the point that people had to make the decision whether to launch a nuclear attack or not. Had those computers been able to retaliate automatically, they might have.

While I don't think it's rational to think that an AGI would go crazy and kill/enslave us all or that we're even capable of creating an AGI, I think it is rational to worry that if we do, we're more than capable of screwing up and ending up with unpredictable or disastrous results.
That applies to everyone who is put in charge of dangerous or important objects, though. An AI would in any case be able to make the same informed decisions that people do, which is the whole point of trying to make them, after all.

Lottery of Babylon posted:

That danger has nothing to do with the AI-ness of the computer and everything to do with the has-a-nuke-ness of the computer, though.
More succinct than I could have put it.

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆
All of the "danger" from creating an advanced AI is based on the stupid Yudkowsky idea of AI, where you use a General AI for every task and give it complete control with no human oversight.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that
The funny thing is that AI could be an existential threat, it's just not nearly an imminent one. "Summoning a demon" again implies that this poo poo will happen all at once. Nobody is going to summon AI; it's going to get built up slowly. There's no point in regulating it until we actually have it and get to see what form it takes.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



What seems far more likely is that limited AIs get put in charge of decision making for large corporations and ruthlessly cut jobs and productive activities in favor of the stock valuations and other such intangibles. They are never stopped from this 'paperclip maximizing' because they're getting great returns for their owners. Mass impoverishment results.
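
To make the failure mode concrete, here's a toy sketch (every number and function name here is made up) of an optimizer that is only ever scored on a proxy metric:

code:

# A toy proxy-objective optimizer. It is only scored on one number,
# so it cheerfully sacrifices everything that number leaves out.

def stock_valuation(workforce, rnd_budget):
    # Hypothetical model: cutting costs boosts the metric the optimizer sees...
    return 1000 - 2 * workforce - rnd_budget

def what_the_metric_ignores(workforce, rnd_budget):
    # ...while the things it can't see (jobs, future products) collapse.
    return 10 * workforce + 5 * rnd_budget

best = max(
    ((w, r) for w in range(0, 101, 10) for r in range(0, 101, 10)),
    key=lambda wr: stock_valuation(*wr),
)
print(best)                          # (0, 0): fire everyone, cancel everything
print(what_the_metric_ignores(*best))  # 0: "great returns" for the owners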

Cardiovorax
Jun 5, 2011

Sounds like any other management to me, honestly.

Epitope
Nov 27, 2006

Grimey Drawer

Cardiovorax posted:

Sounds like any other management to me, honestly.

:thejoke:

Also guys, we can admit there are dangers inherent in AI development without saying that MIRI is right or good.

atelier morgan
Mar 11, 2003

super-scientific, ultra-gay

Lipstick Apathy

Nessus posted:

What seems far more likely is that limited AIs get put in charge of decision making for large corporations and ruthlessly cut jobs and productive activities in favor of the stock valuations and other such intangibles. They are never stopped from this 'paperclip maximizing' because they're getting great returns for their owners. Mass impoverishment results.

Already started happening years ago.

Not My Leg
Nov 6, 2002

AYN RAND AKBAR!

Hobo By Design posted:

Years ago I listened to a presentation delivered at a skeptic convention on Bayes' theorem, trying to disprove the Abrahamic god specifically with it. "A god with a bunch of specific associated claims is inherently less likely than one with no claims! Better stop believing things about god, Christians!" It also shoehorned in Buffy the Vampire Slayer. Looking it up again, I'm a bit disappointed it wasn't Yudkowsky himself.



Utilitarianism absolutely does define utility! Pleasure, as an end, justifies itself. "Why do you want to enjoy yourself" is a nonsense question.

Mill is not the end point for what counts as utilitarianism. Preference-maximizing utilitarianism is probably more influential today than hedonistic utilitarianism or (to the extent it's different) eudaimonia-based utilitarianism.

Also, "why do you want to enjoy yourself" may be a nonsense question, but "why does it matter, ethically, whether you enjoy yourself" is not a nonsense question.

Cardiovorax
Jun 5, 2011


Epitope posted:

:thejoke:

Also guys, we can admit there are dangers inherent in AI development without saying that MIRI is right or good.
That doesn't really have anything to do with it. I just don't really see any possible harm an artificial intelligence could cause that couldn't also come from the biological kind. Most of it is predicated on people being spectacularly stupid, like putting a pyromaniac in sole control of your nuclear stockpile.

LaughMyselfTo
Nov 15, 2012

by XyloJW

Not My Leg posted:

Mill is not the end point for what counts as utilitarianism. Preference-maximizing utilitarianism is probably more influential today than hedonistic utilitarianism or (to the extent it's different) eudaimonia-based utilitarianism.

Also, "why do you want to enjoy yourself" may be a nonsense question, but "why does it matter, ethically, whether you enjoy yourself" is not a nonsense question.

Because I want to. :colbert:

Cardiovorax posted:

That doesn't really have anything to do with it. I just don't really see any possible harm an artificial intelligence could cause that couldn't also come from the biological kind. Most of it is predicated on people being spectacularly stupid, like putting a pyromaniac in sole control of your nuclear stockpile.

The trouble is if the artificial intelligence is radically more intelligent than the biological kind, and is able to, say, trick us into putting it in control of our nuclear stockpile.

Telarra
Oct 9, 2012

So "the trouble" is vague scifi bullshit based on nothing but wild conjecture and appeals to ignorance. And on top of all that it's misplaced and fruitless to try to guard against it because we have absolutely no clue how to actually make a superhuman AGI in the first place.

Cardiovorax
Jun 5, 2011


LaughMyselfTo posted:

The trouble is if the artificial intelligence is radically more intelligent than the biological kind, and is able to, say, trick us into putting it in control of our nuclear stockpile.
Somehow I can't find it in myself to be too concerned about that. It's already a pretty big assumption that we'll be able to make an AI at all, at least any time soon. It's an even bigger one to think that anything we produce will be smarter than a retarded baby for at least the first few dozen generations, never even mind superhuman. By the time we manage something of adult human intellect, we'll probably have enough experience to instill it with a basic sense of empathy and not-being-a-psychopath-ness.

Epitope
Nov 27, 2006


LaughMyselfTo posted:

The trouble is if the artificial intelligence is radically more intelligent than the biological kind, and is able to, say, trick us into putting it in control of our nuclear stockpile.

You don't even need to come up with sci-fi scenarios to justify calling AI dangerous. Making business practices even more inhumane is a danger of AI, even if it's a danger that also exists without AI.

Musk calling it the biggest existential threat may have something to do with his profession. I know people in virology who think avian influenza is maybe the biggest threat.

Cardiovorax
Jun 5, 2011

Epidemics have come close to wiping out humanity on more than one occasion, so I'm thinking the virologists have the historical high ground there.

Nessus
Dec 22, 2003




My concern with the flash crash thing is a rhetorical loop of "well, that's what the computer said, and the computer is faster and smarter than us, and technology is inevitable and if you disagree you're a luddite; once my AI stock traders and automatic CFOs give me all the money, think of all the jobs I can create." Basically the current system but with fewer points of humanity, and possibly other stupid poo poo -- isn't there already some article saying we don't need to do science any more, just farm big data for trends and go "Well, drowning deaths spike alongside ice cream consumption, so obviously we need to stop selling ice cream at the beach!" I think that was earlier in this thread.
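
(The ice cream thing is easy to reproduce, by the way. A quick sketch with made-up numbers, where hot weather is the hidden common cause of both sales and swimming:)

code:

import random

random.seed(0)
# Hypothetical data: temperature drives both variables.
temps = [random.uniform(10, 35) for _ in range(365)]
ice_cream = [3 * t + random.gauss(0, 5) for t in temps]    # sales
drownings = [0.2 * t + random.gauss(0, 1) for t in temps]  # deaths

def corr(xs, ys):
    # Pearson correlation, written out longhand.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly positive and causally meaningless: trend-farming finds the
# correlation, only a causal model finds the heat.
print(corr(ice_cream, drownings))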

Strom Cuzewon
Jul 1, 2010

Nessus posted:

My concern with the flash crash thing is a rhetorical loop of "well, that's what the computer said, and the computer is faster and smarter than us, and technology is inevitable and if you disagree you're a luddite; once my AI stock traders and automatic CFOs give me all the money, think of all the jobs I can create." Basically the current system but with fewer points of humanity, and possibly other stupid poo poo -- isn't there already some article saying we don't need to do science any more, just farm big data for trends and go "Well, drowning deaths spike alongside ice cream consumption, so obviously we need to stop selling ice cream at the beach!" I think that was earlier in this thread.

That slight humming sound you hear? Francis Bacon spinning in his grave.

Epitope
Nov 27, 2006


Cardiovorax posted:

Epidemics have come close to wiping out humanity on more than one occasion, so I'm thinking the virologists have the historical high ground there.

Right, agreed. Is it a bigger threat than meteors or nuclear war, though? My point is that saying AI is the biggest threat is mockable, but falling over ourselves to say AI is no threat at all is really no better. Walking to the fridge has risks.

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

Nessus posted:

What seems far more likely is that limited AIs get put in charge of decision making for large corporations and ruthlessly cut jobs and productive activities in favor of the stock valuations and other such intangibles. They are never stopped from this 'paperclip maximizing' because they're getting great returns for their owners. Mass impoverishment results.

There's a sci-fi story about exactly that: Manna, by Marshall Brain.* The whole thing's free to read on his website, if you want it.

Manna takes a way more optimistic view though, in the sense that everything after chapter four is just pages and pages of "Let me describe my utopian post-scarcity society!" (But no :tvtropes: garbage, so it's still less lovely than Big Yud's fiction.)

*Goddamn, if that isn't the perfect sci-fi author's name.

Cardiovorax
Jun 5, 2011


Epitope posted:

Right, agreed. Is it a bigger threat than meteors or nuclear war, though? My point is that saying AI is the biggest threat is mockable, but falling over ourselves to say AI is no threat at all is really no better. Walking to the fridge has risks.
I'd say it's easier to accidentally wipe out humanity with a biological weapon than a nuclear one, but fair point. I never said it wasn't any kind of threat, though, just not necessarily much more of one than any clever and malicious human would be. I fully expect humans to start augmenting our brains with artificial parts eventually, which makes the distinction between artificial and natural intelligence rather blurry, so I don't think there's really all that much of a qualitative difference between the two.

Telarra
Oct 9, 2012

Epitope posted:

Right, agreed. Is it a bigger threat than meteors or nuclear war, though? My point is that saying AI is the biggest threat is mockable, but falling over ourselves to say AI is no threat at all is really no better. Walking to the fridge has risks.

Where has anyone been saying that? Everyone in here has been arguing against the "AI-God of the Singularity/Apocalypse". It's obvious that change of any form comes with risks attached.

Epitope
Nov 27, 2006


Moddington posted:

Where has anyone been saying that? Everyone in here has been arguing against the "AI-God of the Singularity/Apocalypse". It's obvious that change of any form comes with risks attached.

Uh, it was you that said the most blatant "there's nothing to worry about" stuff. I guess I misinterpreted it? Sorry then.

wait no, the other guy with that avatar, RPATDO_LAMD

Not My Leg
Nov 6, 2002


Epitope posted:

Right, agreed. Is it a bigger threat than meteors or nuclear war, though? My point is that saying AI is the biggest threat is mockable, but falling over ourselves to say AI is no threat at all is really no better. Walking to the fridge has risks.

To be fair, fridges are really heavy and don't walk on their own, so walking a fridge may not be a good idea.

Lottery of Babylon
Apr 25, 2012


In this case, the fridge is floating in orbit around Altair, so there's no chance of anyone walking into it anytime soon. And even if someone were to walk into it, it would still be no more dangerous than walking into any of the countless fridges we already have here on earth. Which sort of makes it stupid to worry about the Altairian Fridge or act like it's a special threat.

Curvature of Earth
Sep 9, 2011


Not My Leg posted:

To be fair, fridges are really heavy and don't walk on their own, so walking a fridge may not be a good idea.

Hell, even a stationary fridge is too dangerous for some people. (Well, stationary heavy box that holds food. You get the point.)

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Curvature of Earth posted:

(Well, stationary heavy box that holds food. You get the point.)
Most of them are refrigerators too, in any case.

SubG
Aug 19, 2004

It's a hard world for little things.

LaughMyselfTo posted:

The trouble is if the artificial intelligence is radically more intelligent than the biological kind, and is able to, say, trick us into putting it in control of our nuclear stockpile.
This all seems more plausible and frightening if you know essentially nothing about either humans or technology. The superhuman AI will be fantastically terrifying until it spontaneously bluescreens. Or its plan for world domination is interrupted by a couple Comcast routers making GBS threads the bed. Or it decides to dedicate all of its human-like intelligence to surfing for porn and playing DOTA all day. Or whatever the gently caress.

Accidentally ending up with a world-destroying or world-dominating superhuman AI seems about as likely as accidentally ending up with a self-sustaining colony on Mars.

SolTerrasa
Sep 2, 2011


SubG posted:

This all seems more plausible and frightening if you know essentially nothing about either humans or technology. The superhuman AI will be fantastically terrifying until it spontaneously bluescreens. Or its plan for world domination is interrupted by a couple Comcast routers making GBS threads the bed. Or it decides to dedicate all of its human-like intelligence to surfing for porn and playing DOTA all day. Or whatever the gently caress.

Accidentally ending up with a world-destroying or world-dominating superhuman AI seems about as likely as accidentally ending up with a self-sustaining colony on Mars.

That's a totally reasonable thing to believe, but the thing is, all this terrible poo poo we've wired together does keep mostly working most of the time. The whole internet is cobbled out of bullshit, but we're still here. I've worked on a lot of things people think of as pretty solid: Amazon Web Services, Yelp, Google Maps... It's all bullshit piled on bullshit the second you look under the hood, and it's piled on top of the bullshit which the internet is built out of. But it mostly works, right? So I don't buy that we can't make anything big work out, because as techies we are amazing at making bullshit keep working.
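
The duct tape in question is usually nothing smarter than this (a sketch; flaky_call is a stand-in for whatever unreliable dependency you like):

code:

import random, time

random.seed(1)

def flaky_call():
    # Stand-in for any dependency that fails most of the time.
    if random.random() < 0.7:
        raise ConnectionError("a Comcast router made a bad decision")
    return "ok"

def keep_it_working(fn, attempts=10):
    # Retry with exponential backoff: the humble mechanism that keeps
    # bullshit piled on bullshit mostly working.
    for n in range(attempts):
        try:
            return fn()
        except ConnectionError:
            time.sleep(min(0.01 * 2 ** n, 1.0))
    raise RuntimeError("even the duct tape failed")

print(keep_it_working(flaky_call))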

Peel
Dec 3, 2007

As automation improves we'll put automated systems in charge of more and more and they'll mess up, or act as intended, in various entertaining and horrible ways. Watching how this happens will teach us how to manage them and design automated systems and their failsafes.

If you want to minimise the chance of AIs killing us all the best approach right now is to work on creating AIs, so that we have empirical evidence on what they do and how they go wrong.
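
In code terms, the empirical approach looks something like this sketch (the system being wrapped, and the sanity bound, are hypothetical):

code:

class Failsafe:
    # Wraps any callable decision-maker: log everything it does, and halt
    # it the moment it acts outside the envelope we've seen it behave
    # sanely in. The envelope itself comes from watching real failures.

    def __init__(self, system, limit):
        self.system = system
        self.limit = limit
        self.log = []

    def act(self, observation):
        action = self.system(observation)
        self.log.append((observation, action))  # evidence for the post-mortem
        if abs(action) > self.limit:
            raise RuntimeError(f"halted: action {action!r} outside envelope")
        return action

# Usage with a made-up system: trader = Failsafe(lambda obs: obs * 2, limit=100)
# trader.act(10) returns 20; trader.act(99) raises and stops the machine.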

SolTerrasa
Sep 2, 2011


Peel posted:

As automation improves we'll put automated systems in charge of more and more and they'll mess up, or act as intended, in various entertaining and horrible ways. Watching how this happens will teach us how to manage them and design automated systems and their failsafes.

If you want to minimise the chance of AIs killing us all the best approach right now is to work on creating AIs, so that we have empirical evidence on what they do and how they go wrong.

Unless you believe that the AI will go FOOM, which brings me to the third part of the Hanson-Yudkowsky debate, coming as soon as I get over this goddamn pneumonia, to a thread near you.

SubG
Aug 19, 2004


SolTerrasa posted:

That's a totally reasonable thing to believe, but the thing is, all this terrible poo poo we've wired together does keep mostly working most of the time. The whole internet is cobbled out of bullshit, but we're still here. I've worked on a lot of things people think of as pretty solid: Amazon Web Services, Yelp, Google Maps... It's all bullshit piled on bullshit the second you look under the hood, and it's piled on top of the bullshit which the internet is built out of. But it mostly works, right? So I don't buy that we can't make anything big work out, because as techies we are amazing at making bullshit keep working.
Yeah, but a bunch of random technology tinkertoy'd together isn't a plausible mechanism for producing a globe-dominating AI. I mean as far as we know nothing is. But this is even less plausible-sounding than most of them. The general argument that we keep building new technologies and that sometimes there are unintended consequences only implies all-consuming AIs via one of the largest spit syllogisms ever imagined.

Political Whores
Feb 13, 2012

Yeah. I can see thinking that AI could be a major threat in some ways, but a fear that some CS professor is gonna accidentally manufacture the demiurge in his university comp lab is a very specific and very implausible scenario. And MIRI does seem to be focusing on that scenario alone. They don't talk about doing research into fault tolerance to prevent catastrophic failure in an AI system. They talk specifically about being the group to create a "friendly" AI, so that they get to be the first group to reify God.

E: I mean, this thing would have to be some sort of code entity that functioned on machine language, I guess? Because it's assumed that it could take control of every other device. Just that seems ridiculous to me.

Political Whores fucked around with this message at 04:45 on Oct 28, 2014

Telarra
Oct 9, 2012

I think it says a lot about how little we understand about intelligence that "what if a superhuman AI springs fully-formed from the primordial goo" is given credence without a single fact to support it.

I mean yes, it could be a legitimate threat, but we need to discover a lot more about AI before it stops being superstition. And MIRI is doing jack poo poo to get us there.

Cardiovorax
Jun 5, 2011


Political Whores posted:

E: I mean, this thing would have to be some sort of code entity that functioned on machine language, I guess? Because it's assumed that it could take control of every other device. Just that seems ridiculous to me.
It kind of is, especially in a world that's mostly held together by duct tape and prayer. There is not a way in hell that an incorporeal intelligence with no way to influence the physical world meaningfully could take over a world full of humans that really don't want it around. There are only so many ways you can cleverly manipulate the traffic lights to stop people from cutting your power lines.

SubG
Aug 19, 2004


Moddington posted:

I think it says a lot about how little we understand about intelligence that "what if a superhuman AI springs fully-formed from the primordial goo" is given credence without a single fact to support it.
I think this is true. But I think it even goes beyond that. The proposition is what if a superhuman AI, whatever that means, springs fully-formed from nothing and happens to be connected to more or less literally everything.

Like I guess in the near-Singularity future we just all join hands and tear down all boundaries or something? And that explains why a suddenly sentient computer that decides it's the new god-king of the world isn't just Emperor Norton 2.0.

Toph Bei Fong
Feb 29, 2008



A lot of this sounds like the same style of fear that people have when it comes to biological fuckery.

The odds that a scientist, while trying to make sure a particular strain of corn doesn't wither and die when sprayed with Roundup, will accidentally mutate a strain that turns into airborne Ebola-Pneumonia-AIDS are basically 0%. It simply isn't going to happen. Such a disease is going to occur in the wild before it happens in a lab, and the CDC monitors the latter poo poo really closely. Instead, on the commercial science end, what we have are crops that get shittier and shittier each year thanks to selective breeding and massive lawsuits over who can buy corn seeds from whom, and which farms are violating trademark by growing which seeds, etc. etc. Optimization and refinement are great. It's greed that fucks everyone.

Similarly, we had a big AI-related gently caress-up a couple years back, the aforementioned Flash Crash of 2010. The computers fell into a stupid pattern, the humans noticed, they shut everything down, cancelled all the trades, and everything went back to normal. Because we aren't stupid enough to hand over the keys of the kingdom to a bunch of sub-:spergin:-level machines with no concept of priority or consequence, or a holistic world view. Hence why the "paperclip optimizer" is so stupid. What would happen is that Joe the factory owner would hit the big red "Cancel" button, and what, the computer would have magically managed to override its "stop making paperclips" function, learned to rewrite all industrial supply lines, diverted electrical grids, and armed itself with military-grade technology, all without somehow questioning its core purpose of making paperclips? Bullshit. It would run out of metal, then start whining about not having any metal. Or it would have made itself so efficient that Joe's boss, David, would have fired him and just looked at his tablet every so often to make sure the factory was working properly. If it wasn't, they'd send in the technicians. The problem is that David has fired everyone at the paperclip factory and no one has jobs, not that the factory AI is going to magically arise and turn the world into an anime.
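
(For the record, the fix the exchanges actually rolled out after 2010 was a circuit breaker, and it's about this complicated. A sketch; the thresholds and prices here are invented:)

code:

def circuit_breaker(prices, window=5, max_drop=0.07):
    # Halt trading if the price falls more than max_drop within the window,
    # and hand the mess back to the humans.
    for i in range(window, len(prices)):
        drop = 1 - prices[i] / prices[i - window]
        if drop > max_drop:
            return i  # tick at which trading halts
    return None

ticks = [100, 100.1, 99.9, 100.0, 99.0, 92.0, 80.0, 60.0]  # a flash crash in miniature
print(circuit_breaker(ticks))  # 5: halted well before the bottom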
