|
Rape is pervasive in modern entertainment as a drop-in plot element that boils down to "something that everyone recognizes as a terrible thing." Need a reprehensible bad guy? Have him rape someone; nobody sympathizes with a rapist. It's almost always written horribly because it treats victims as quick plot devices and trivializes the aftermath, if that's even addressed. Ironically, Yudkowsky avoids the usual poor and distasteful handling of it by declaring "this unquestionably bad thing is OK now" and handwaving it away with magic future technology. In other news, Elon Musk warns that developing AI is "summoning the demon" but manages to say we should be cautious without going into crazy torture machine god stories. quote:I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out. GWBBQ fucked around with this message at 17:20 on Oct 26, 2014
# ? Oct 26, 2014 17:13
|
|
|
I'll never understand why people are so terribly paranoid about AI in particular. Of course creating a new mind is dangerous, but in the same way that having a child is. It might grow up to be Hannibal Lecter and eat your skin, but that doesn't mean it will even want to.
|
# ? Oct 26, 2014 17:19 |
|
Cardiovorax posted:I'll never understand why people are so terribly paranoid about AI in particular. Of course creating a new mind is dangerous, but in the same way that having a child is. It might grow up to be Hannibal Lecter and eat your skin, but that doesn't mean it will even want to. While I don't think it's rational to think that an AGI would go crazy and kill/enslave us all or that we're even capable of creating an AGI, I think it is rational to worry that if we do, we're more than capable of screwing up and ending up with unpredictable or disastrous results.
|
# ? Oct 26, 2014 17:27 |
|
Ah but don't you see? The only way to create an AI is one that can self-improve rapidly and indefinitely so it'll quickly turn into a god and take over the world from sheer smartness and there will be nothing we can do about it!
|
# ? Oct 26, 2014 17:29 |
|
GWBBQ posted:Several times during the Cold War, on both sides, early warning systems malfunctioned and brought us to the point that people had to make the decision whether to launch a nuclear attack or not. Had those computers been able to retaliate automatically, they might have. That danger has nothing to do with the AI-ness of the computer and everything to do with the has-a-nuke-ness of the computer, though.
|
# ? Oct 26, 2014 17:32 |
|
GWBBQ posted:Several times during the Cold War, on both sides, early warning systems malfunctioned and brought us to the point that people had to make the decision whether to launch a nuclear attack or not. Had those computers been able to retaliate automatically, they might have. Lottery of Babylon posted:That danger has nothing to do with the AI-ness of the computer and everything to do with the has-a-nuke-ness of the computer, though.
|
# ? Oct 26, 2014 17:34 |
|
All of the "danger" from creating an advanced AI is based on the stupid Yudkowsky idea of AI, where you use a General AI for every task and give it complete control with no human oversight.
|
# ? Oct 26, 2014 17:36 |
|
The funny thing is that AI could be an existential threat, it's just not nearly an imminent one. "Summoning a demon" again implies that this poo poo will happen all at once. Nobody is going to summon AI; it's going to get built up slowly. There's no point in regulating it until we actually have it and get to see what form it takes.
|
# ? Oct 26, 2014 18:53 |
What seems far more likely is that limited AIs get put in charge of decision making for large corporations and ruthlessly cut jobs and productive activities in favor of the stock valuations and other such intangibles. They are never stopped from this 'paperclip maximizing' because they're getting great returns for their owners. Mass impoverishment results.
|
|
# ? Oct 26, 2014 19:15 |
|
Sounds like any other management to me, honestly.
|
# ? Oct 26, 2014 19:16 |
|
Cardiovorax posted:Sounds like any other management to me, honestly. Also guys, we can admit there are dangers inherent in AI development without saying that MIRI is right or good.
|
# ? Oct 26, 2014 19:22 |
|
Nessus posted:What seems far more likely is that limited AIs get put in charge of decision making for large corporations and ruthlessly cut jobs and productive activities in favor of the stock valuations and other such intangibles. They are never stopped from this 'paperclip maximizing' because they're getting great returns for their owners. Mass impoverishment results. Already started happening years ago.
|
# ? Oct 26, 2014 19:23 |
|
Hobo By Design posted:Years ago I listened to a presentation that was delivered at a skeptic convention on Bayes theorem, trying to disprove the Abrahamic god specifically with it. "A god with a bunch of specific associated claims is inherently less likely than one with no claims! Better stop believing things about god, Christians!" It also shoehorned in Buffy the Vampire Slayer. Looking it up again I'm a bit disappointed it wasn't Yudkowsky himself. Mill is not the end point for what counts as utilitarianism. Preference-maximizing utilitarianism is probably more influential today than hedonistic utilitarianism or (to the extent it's different) eudaimonia-based utilitarianism. Also, "why do you want to enjoy yourself" may be a nonsense question, but "why does it matter, ethically, whether you enjoy yourself" is not a nonsense question.
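For what it's worth, the probability move being parodied there is just the conjunction rule: piling extra specific claims onto a hypothesis can only shrink its joint probability. A toy Python sketch (all the numbers are invented):

```python
# Conjunction rule: each additional independent claim multiplies the
# joint probability by a factor <= 1, so the total can only go down.
p_claims = [0.5, 0.5, 0.5]  # three hypothetical 50/50 claims

joint = 1.0
for p in p_claims:
    joint *= p

print(joint)  # 0.125: the more specific the story, the less probable it is
```

Which is true as far as it goes, but it cuts against any sufficiently detailed story, robot apocalypses included.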
|
# ? Oct 26, 2014 19:28 |
|
Epitope posted:
|
# ? Oct 26, 2014 19:32 |
|
Not My Leg posted:Mill is not the end point for what counts as utilitarianism. Preference maximizing utilitarianism is probably more influential today than hedonistic utilitarianism or (to the extent it's different) eudaimonia based utilitarianism. Because I want to. Cardiovorax posted:That doesn't really have anything to do with it. I just don't really see any possible harm an artificial intelligence could cause that couldn't also come from the biological kind. Most of it is predicated on people being spectacularly stupid, like putting a pyromaniac in sole control of your nuclear stockpile. The trouble is if the artificial intelligence is radically more intelligent than the biological kind, and is able to, say, trick us into putting it in control of our nuclear stockpile.
|
# ? Oct 26, 2014 20:43 |
|
So "the trouble" is vague scifi bullshit based on nothing but wild conjecture and appeals to ignorance. And on top of all that it's misplaced and fruitless to try to guard against it because we have absolutely no clue how to actually make a superhuman AGI in the first place.
|
# ? Oct 26, 2014 20:47 |
|
LaughMyselfTo posted:The trouble is if the artificial intelligence is radically more intelligent than the biological kind, and is able to, say, trick us into putting it in control of our nuclear stockpile.
|
# ? Oct 26, 2014 20:50 |
|
LaughMyselfTo posted:The trouble is if the artificial intelligence is radically more intelligent than the biological kind, and is able to, say, trick us into putting it in control of our nuclear stockpile. You don't even need to come up with sci-fi scenarios to justify calling AI dangerous. Making business practices even more inhumane is a danger of AI, even if it is a danger without AI also. Musk calling it the biggest existential threat may have something to do with his profession. I know people in virology who think avian influenza is maybe the biggest threat.
|
# ? Oct 26, 2014 20:52 |
|
Epidemics have come close to wiping out humanity on more than one occasion, so I'm thinking the virologists have the historical high ground there.
|
# ? Oct 26, 2014 20:54 |
My concern with the flash crash thing is a rhetorical loop of "well, that's what the computer said, and the computer is faster and smarter than us, and technology is inevitable and if you disagree you're a luddite; once my AI stock traders and automatic CFOs give me all the money, think of all the jobs I can create." Basically the current system but with fewer points of humanity, and possibly other stupid poo poo -- isn't there already some article saying we don't need to do science any more, just farm big data for trends and go "Well, drowning deaths spike alongside ice cream consumption, so obviously we need to stop selling ice cream at the beach!" I think that was earlier in this thread.
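The ice cream/drowning thing is the classic confounder problem: two series that share a common cause correlate strongly even though neither causes the other. A toy Python sketch (all data invented) of what a naive trend-miner would see:

```python
# Hypothetical data: ice cream sales and drownings both track temperature,
# so they correlate with each other despite having no causal link.
import random

random.seed(0)
temps = [random.uniform(10, 35) for _ in range(365)]       # daily temperature
ice_cream = [3 * t + random.gauss(0, 5) for t in temps]    # sales rise with heat
drownings = [0.5 * t + random.gauss(0, 2) for t in temps]  # swimming rises with heat

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A trend-miner that stops here "discovers" that ice cream causes drowning.
print(pearson(ice_cream, drownings))
```

The correlation comes out strongly positive, and nothing in the numbers alone tells you which variable, if any, is doing the causing.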
|
|
# ? Oct 26, 2014 20:56 |
|
Nessus posted:My concern with the flash crash thing is a rhetorical loop of "well, that's what the computer said, and the computer is faster and smarter than us, and technology is inevitable and if you disagree you're a luddite; once my AI stock traders and automatic CFOs give me all the money, think of all the jobs I can create." Basically the current system but with fewer points of humanity, and possibly other stupid poo poo -- isn't there already some article saying we don't need to do science any more, just farm big data for trends and go "Well, drowning deaths spike alongside ice cream consumption, so obviously we need to stop selling ice cream at the beach!" I think that was earlier in this thread. That slight humming sound you hear? Francis Bacon spinning in his grave.
|
# ? Oct 26, 2014 21:04 |
|
Cardiovorax posted:Epidemics have come close to wiping out humanity on more than one occasion, so I'm thinking the virologists have the historical high ground there. Right, agreed. Is it a bigger threat than meteors or nuclear war though? My point is saying AI is the biggest threat is mockable, but falling over ourselves to say AI is no threat at all is really no better. Walking to the fridge has risks.
|
# ? Oct 26, 2014 21:04 |
|
Nessus posted:What seems far more likely is that limited AIs get put in charge of decision making for large corporations and ruthlessly cut jobs and productive activities in favor of the stock valuations and other such intangibles. They are never stopped from this 'paperclip maximizing' because they're getting great returns for their owners. Mass impoverishment results. There's a sci-fi story about exactly that: Manna, by Marshall Brain.* The whole thing's free to read on his website, if you want it. Manna takes a way more optimistic view though, in the sense that everything after chapter four is just pages and pages of "Let me describe my utopian post-scarcity society!" (But no garbage, so it's still less lovely than Big Yud's fiction.) *Goddamn, if that isn't the perfect sci-fi author's name.
|
# ? Oct 26, 2014 21:05 |
|
Epitope posted:Right, agreed. Is it a bigger threat than meteors or nuclear war though? My point is saying AI is the biggest threat is mockable, but falling over ourselves to say AI is no threat at all is really no better. Walking to the fridge has risks.
|
# ? Oct 26, 2014 21:07 |
|
Epitope posted:Right, agreed. Is it a bigger threat than meteors or nuclear war though? My point is saying AI is the biggest threat is mockable, but falling over ourselves to say AI is no threat at all is really no better. Walking to the fridge has risks. Where has anyone been saying that? Everyone in here has been arguing against the "AI-God of the Singularity/Apocalypse". It's obvious that change of any form comes with risks attached.
|
# ? Oct 26, 2014 21:10 |
|
Moddington posted:Where has anyone been saying that? Everyone in here has been arguing against the "AI-God of the Singularity/Apocalypse". It's obvious that change of any form comes with risks attached. Uh, it was you that said the most blatant "there's nothing to worry about" stuff. I guess I misinterpreted it? Sorry then. wait no, the other guy with that avatar, RPATDO_LAMD
|
# ? Oct 26, 2014 21:14 |
|
Epitope posted:Right, agreed. Is it a bigger threat than meteors or nuclear war though? My point is saying AI is the biggest threat is mockable, but falling over ourselves to say AI is no threat at all is really no better. Walking to the fridge has risks. To be fair, fridges are really heavy and don't walk on their own, so walking a fridge may not be a good idea.
|
# ? Oct 26, 2014 21:26 |
|
In this case, the fridge is floating in orbit around Altair, so there's no chance of anyone walking into it anytime soon. And even if someone were to walk into it, it would still be no more dangerous than walking into any of the countless fridges we already have here on earth. Which sort of makes it stupid to worry about the Altairian Fridge or act like it's a special threat.
|
# ? Oct 26, 2014 21:33 |
|
Not My Leg posted:To be fair, fridges are really heavy and don't walk on their own, so walking a fridge may not be a good idea. Hell, even a stationary fridge is too dangerous for some people. (Well, stationary heavy box that holds food. You get the point.)
|
# ? Oct 26, 2014 21:37 |
|
Curvature of Earth posted:(Well, stationary heavy box that holds food. You get the point.)
|
# ? Oct 27, 2014 16:23 |
|
LaughMyselfTo posted:The trouble is if the artificial intelligence is radically more intelligent than the biological kind, and is able to, say, trick us into putting it in control of our nuclear stockpile. Accidentally ending up with a world-destroying or world-dominating superhuman AI seems about as likely as accidentally ending up with a self-sustaining colony on Mars.
|
# ? Oct 28, 2014 01:47 |
|
SubG posted:This all seems more plausible and frightening if you know essentially nothing about either humans or technology. The superhuman AI will be fantastically terrifying until it spontaneously bluescreens. Or its plan for world domination is interrupted by a couple Comcast routers making GBS threads the bed. Or it decides to dedicate all of its human-like intelligence to surfing for porn and playing DOTA all day. Or whatever the gently caress. That's a totally reasonable thing to believe, but the thing is, all this terrible poo poo we've wired together does keep mostly working most of the time. The whole internet is cobbled out of bullshit, but we're still here. I've worked on a lot of things people think of as pretty solid: Amazon Web Services, Yelp, Google Maps... It's all bullshit piled on bullshit the second you look under the hood, and it's piled on top of the bullshit which the internet is built out of. But it mostly works, right? So I don't buy that we can't make anything big work out, because as techies we are amazing at making bullshit keep working.
|
# ? Oct 28, 2014 02:18 |
|
As automation improves we'll put automated systems in charge of more and more and they'll mess up, or act as intended, in various entertaining and horrible ways. Watching how this happens will teach us how to manage them and design automated systems and their failsafes. If you want to minimise the chance of AIs killing us all the best approach right now is to work on creating AIs, so that we have empirical evidence on what they do and how they go wrong.
|
# ? Oct 28, 2014 02:39 |
|
Peel posted:As automation improves we'll put automated systems in charge of more and more and they'll mess up, or act as intended, in various entertaining and horrible ways. Watching how this happens will teach us how to manage them and design automated systems and their failsafes. Unless you believe that the AI will go FOOM, which brings me to the third part of the Hanson-Yudkowsky debate, coming as soon as I get over this goddamn pneumonia, to a thread near you.
|
# ? Oct 28, 2014 02:45 |
|
SolTerrasa posted:That's a totally reasonable thing to believe, but the thing is, all this terrible poo poo we've wired together does keep mostly working most of the time. The whole internet is cobbled out of bullshit, but we're still here. I've worked on a lot of things people think of as pretty solid: Amazon Web Services, Yelp, Google Maps... It's all bullshit piled on bullshit the second you look under the hood, and it's piled on top of the bullshit which the internet is built out of. But it mostly works, right? So I don't buy that we can't make anything big work out, because as techies we are amazing at making bullshit keep working.
|
# ? Oct 28, 2014 04:37 |
|
Yeah. I can see thinking that AI could be a major threat in some ways, but a fear that some CS professor is gonna accidentally manufacture the demiurge in his university comp lab is a very specific and very implausible scenario. And MIRI does seem to be focusing on that scenario alone. They don't talk about doing research into fault tolerance to prevent catastrophic failure in an AI system. They talk specifically about being the group to create a "friendly" AI, so that they get to be the first group to reify God. E: I mean this thing would have to be some sort of code entity that functioned on machine language, I guess? Because it's assumed that it could take control of every other device. Just that seems ridiculous to me. Political Whores fucked around with this message at 04:45 on Oct 28, 2014
# ? Oct 28, 2014 04:43
|
I think it says a lot about how little we understand about intelligence that "what if a superhuman AI springs fully-formed from the primordial goo" is given credence without a single fact to support it. I mean yes, it could be a legitimate threat, but we need to discover a lot more about AI before it stops being superstition. And MIRI is doing jack poo poo to get us there.
|
# ? Oct 28, 2014 04:45 |
|
Political Whores posted:E: I mean this thing would have to be some sort of code entity that functioned on machine language I guess? because it's assumed that out could take control of every other device. Just that seems ridiculous to me.
|
# ? Oct 28, 2014 04:50 |
|
Moddington posted:I think it says a lot about how little we understand about intelligence that "what if a superhuman AI springs fully-formed from the primordial goo" is given credence without a single fact to support it. Like I guess in the near-Singularity future we just all join hands and tear down all boundaries or something? And that explains why a suddenly sentient computer that decides it's the new god-king of the world isn't just Emperor Norton 2.0.
|
# ? Oct 28, 2014 04:52 |
|
|
|
A lot of this sounds like the same style of fear that people have when it comes to biological fuckery. The odds that a scientist, while trying to make sure that a particular strain of corn doesn't wither and die when sprayed with "Round Up," will accidentally mutate a strain that turns into airborne Ebola-Pneumonia-AIDS are basically 0%. It simply isn't going to happen. Such a disease is going to occur in the wild before it happens in a lab. And the CDC monitors the latter poo poo really closely. Instead, on the commercial science end, what we have are crops that get shittier and shittier each year thanks to selective breeding and massive lawsuits over who can buy corn seeds from whom, which farms are violating trademark by growing which seeds, etc etc. Optimization and refinement are great. It's greed that fucks everyone. Similarly, we had a big AI-related gently caress-up a couple years back, the aforementioned Flash Crash of 2010. The computers fell into a stupid pattern, the humans noticed, they shut everything down, cancelled all the trades, and everything went back to normal. Because we aren't stupid enough to hand over the keys of the kingdom to a bunch of sub- level machines with no concept of priority or consequence, or a holistic world view. Hence why the "paperclip optimizer" is so stupid. What would happen is that Joe the factory owner would hit the big red "Cancel" button, and what, the computer would have magically overridden its "stop making paperclips" function, learned to rewrite all industrial supply lines, diverted electrical grids, and armed itself with military-grade technology, all without somehow questioning its core purpose of making paperclips? Bullshit. It would run out of metal, then start whining about not having any metal. Or it would have made itself so efficient that Joe's boss, David, would have fired him and just looked at his tablet every so often to make sure the factory was working properly.
If it wasn't, they'd send in the technicians. The problem is that Dave has fired everyone at the paperclip factory and no one has jobs, not that the factory AI is going to magically arise and turn the world into an anime.
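The "it runs out of metal and whines" version of the paperclip maximizer is easy to sketch; here's a toy Python loop (everything in it is invented) showing an optimizer with no magic powers just exhausting its input and halting:

```python
# Toy sketch: a "paperclip maximizer" that can't rewrite its environment
# simply consumes its input, then stops and complains to a human.
def run_paperclip_factory(metal_g, grams_per_clip=1):
    clips = 0
    while metal_g >= grams_per_clip:
        metal_g -= grams_per_clip
        clips += 1
    # Out of metal: nothing left to optimize, so raise an alarm and wait.
    return clips, "out of metal: halting, paging a human"

clips, status = run_paperclip_factory(metal_g=1000)
print(clips, status)  # 1000 clips from a kilo of metal, then it just stops
```

The doomsday version requires the loop to somehow grow the ability to acquire more metal on its own, which is exactly the step nobody has a mechanism for.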
|
# ? Oct 28, 2014 05:06