|
Spoilers Below posted:Similarly, we had a big AI-related gently caress-up a couple years back, the aforementioned Flash Crash of 2010. The computers fell into a stupid pattern, the humans noticed, they shut everything down, cancelled all the trades, and everything went back to normal.
|
# ? Oct 28, 2014 05:12 |
|
|
|
SubG posted:This all seems more plausible and frightening if you know essentially nothing about either humans or technology. The superhuman AI will be fantastically terrifying until it spontaneously bluescreens. Or its plan for world domination is interrupted by a couple Comcast routers making GBS threads the bed. Or it decides to dedicate all of its human-like intelligence to surfing for porn and playing DOTA all day. Or whatever the gently caress. If you like your destruction natural, just let the weather take care of it: humidity, condensation, a thunderstorm, a flood, a hurricane/tsunami, a tornado, a snowstorm/ice-storm (below -40, most technology stops functioning), hail, a dust storm, or any other weather condition that makes technology poo poo itself (you know, most weather conditions). If you like your destruction more active, you can use a flamethrower, a giant loving industrial magnet (think the things they pick up cars with), a fire hose attached to a water main, C4, a machine gun, liquid nitrogen, or even just a bunch of dudes with sledge-hammers, among many, many other creative methods of destruction. AI fears notwithstanding, technology is still extremely fragile, and humans are really, really good at breaking poo poo. fade5 fucked around with this message at 05:23 on Oct 28, 2014 |
# ? Oct 28, 2014 05:15 |
|
Edit:^^^^You'd think something like the Fukushima disaster would be proof enough that the weather will gently caress up our poo poo good, no matter what, eh?^^^^^Basil Hayden posted:It's been a while but I seem to recall that the computers themselves realized something was wrong and ended up acting to stop it before humans even got involved (something along the lines of an automatic market stabilizer shutting everything down for a few seconds because something was amiss). That's even better. My point was more that we wouldn't see a ton of teamsters standing around saying "Sorry, foreman. This big computer handout said to deliver this big shipment of metal to the factory, and I'm going to do it, regardless of what you real humans say, whether or not I'm getting paid."
|
# ? Oct 28, 2014 05:17 |
|
SubG posted:And that explains why a suddenly sentient computer that decides it's the new god-king of the world isn't just Emperor Norton 2.0.
|
# ? Oct 28, 2014 05:29 |
|
If our future machine gods are anything near as entertaining and principled as Emperor Norton, we don't have a whole lot to worry about.
|
# ? Oct 28, 2014 05:32 |
|
SodomyGoat101 posted:Emperor Norton was actually really loving cool, though. Yeah, but this one would be Emperor Norton Antivirus.
|
# ? Oct 28, 2014 06:16 |
|
Prism posted:Yeah, but this one would be Emperor Norton Antivirus. Slow and barely functional?
|
# ? Oct 28, 2014 08:59 |
|
Coincidentally someone linked me both these articles yesterday... This is an interview with someone who is very skeptical both about being able to automatically trawl big data for meaningful correlations and about all the recent talk of more brain-like computers: http://spectrum.ieee.org/robotics/a...neering-efforts The interview also touches on the singularity and most other Big Yud talking points. The interviewee has a rather different outlook from Yudkowsky. The other one is this really nice piece covering the life of Emperor Norton v1.0: http://narrative.ly/lost-legends/the-original-san-francisco-eccentric/ From his birth in the UK and his family's emigration to South Africa to his successful start in California as a businessman and freemason followed by the collapse of his businesses and his eventual transformation into Emperor Norton.
|
# ? Oct 28, 2014 14:27 |
|
fade5 posted:
Heck, if global warming keeps up, the computers running the Super AI might overheat. Especially if they're located in Southern California. How are these AIs supposed to acquire enough power to keep running, anyway, let alone take over the earth/universe/whatever? This seems to be another thing that Singularity people handwave.
|
# ? Oct 28, 2014 17:02 |
|
Probably from the same place that lets it spontaneously materialize exponentially more powerful hardware to "go foom" on.
|
# ? Oct 28, 2014 17:24 |
|
Cardiovorax posted:Probably from the same place that lets it spontaneously materialize exponentially more powerful hardware to "go foom" on. Presumably the same morons who created the AI also gave it access to nanomachines, Alien Arcana fucked around with this message at 18:12 on Oct 28, 2014 |
# ? Oct 28, 2014 18:07 |
|
Alien Arcana posted:Presumably the same morons who created the AI also gave it access to nanomachines, One gets the impression that everything Yudkowsky has published or even thought about how to deal with AI boils down to a giant poster in the datacenter that says "If the AI asks for anything you wouldn't want a four year old child to have, JUST SAY NO".
|
# ? Oct 28, 2014 18:16 |
|
Alien Arcana posted:Presumably the same morons who created the AI also gave it access to nanomachines, And much like the AI, the nanomachines obviously behave exactly how they do in science fiction.
|
# ? Oct 28, 2014 19:26 |
|
ArchangeI posted:One gets the impression that everything Yudkowsky has published or even thought about how to deal with AI is a giant poster in the datacenter that says "If the AI asks for anything you wouldn't want a four year old child to have, JUST SAY NO". From what I understand of their AI LARPing, an AI can convince anyone to give it anything it wants. The way it does this is super classified or something, but I think at least one route is convincing the person that it'll simulate a very large number of them being asked to give it what it wants and every simulation that doesn't will be tortured or whatever and blah blah blah any logical person will give the AI nanomachine nukes.
|
# ? Oct 28, 2014 21:33 |
|
Of course, to do that it would have to already be the AI god it needs to talk people into giving it a chance to become.
|
# ? Oct 28, 2014 21:37 |
Crust First posted:From what I understand of their AI LARPing, an AI can convince anyone to give it anything it wants. The way it does this is super classified or something, but I think at least one route is convincing the person that you'll simulate a very large number of them being asked to give them what they want and every simulation that doesn't will be tortured or whatever and blah blah blah any logical person will give the AI nanomachine nukes.
|
|
# ? Oct 28, 2014 22:42 |
|
I still dunno why they think the answer to Roko's Basilisk isn't saying gently caress you to the computer and pulling the plug out of the wall.
|
# ? Oct 28, 2014 22:43 |
|
Anticheese posted:I still dunno why they think the answer to Roko's Basilisk isn't saying gently caress you to the computer and pulling the plug out of the wall. But that would mean destroying the AI! And as Aaron Diaz taught me, AIs are A Good Thing and must be defended at all costs, even if they might destroy the world.
|
# ? Oct 28, 2014 22:48 |
|
Because the real answer is to say "what the gently caress do I care about what happens to a billion digital copies of me after I am millions of years dead, you blithering morons?" and disregard it as irrelevant even if it were true. But that would reduce their whole rationalist navelgazing ad absurdum, so they can't ever possibly admit that.
|
# ? Oct 28, 2014 22:51 |
|
Control Volume posted:http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/
|
# ? Oct 28, 2014 23:00 |
|
Anticheese posted:I still dunno why they think the answer to Roko's Basilisk isn't saying gently caress you to the computer and pulling the plug out of the wall. No offense, but I'm not sure you followed the argument, or maybe you confused it with the AI in a Box thing. The right answer to the AI in a Box is what you said. But for the basilisk, during the time when you have the chance to act (now), there's no AI to pull the plug on. Later, when the AI does exist, it will become too powerful to pull the plug on first, and then it will torture (copies of) you forever for not having sped up its creation / power gathering.
|
# ? Oct 28, 2014 23:09 |
|
But if the you in the simulation, who is identical to the real you, would pull the plug, then the real you would have pulled the plug in the past, so the simulation can't exist so you know you're the real one so pull that goddamn plug.
|
# ? Oct 28, 2014 23:22 |
|
SolTerrasa posted:No offense, but I'm not sure you followed the argument, or maybe you confused it with the AI in a Box thing. The right answer to the AI in a Box is what you said. But for the basilisk, during the time when you have the chance to act (now), there's no AI to pull the plug on. Later, when the AI does exist, it will become too powerful to pull the plug on first, and then it will torture (copies of) you forever for not having sped up its creation / power gathering.
|
# ? Oct 28, 2014 23:29 |
|
SolTerrasa posted:No offense, but I'm not sure you followed the argument, or maybe you confused it with the AI in a Box thing. The right answer to the AI in a Box is what you said. But for the basilisk, during the time when you have the chance to act (now), there's no AI to pull the plug on. Later, when the AI does exist, it will become too powerful to pull the plug on first, and then it will torture (copies of) you forever for not having sped up its creation / power gathering. I think I got it confused with AI in a box. Either way, it's all crazy and dumb.
|
# ? Oct 29, 2014 01:28 |
|
There's also no reason for the AI to torture anyone because, by the time it exists and is able to torture people, the time at which it was created has already been decided. Whether or not it actually tortures anyone won't change a thing. Lesswrong thinks it does because they're used to Yudkowsky's patented ~Timeless Decision Theory~, which is basically "regular decision theory, but we'll only pose problems in which a magic genie in the past can magically see the future with perfect accuracy". The idea is that since the genie in the past can see the future perfectly, your actions in the future will directly affect the genie's actions in the past, which can affect you in the present. But that hinges on the existence, right now, of an entity that can perfectly predict the future. In real life, no such being exists. The future doesn't directly causally affect the past, so the AI can't bring itself into existence sooner, so there's no point in torturing anyone. Even if you are wholly susceptible to both Yudkowsky's premises and his brand of rationalism, Roko's Basilisk is not a credible threat.
|
# ? Oct 29, 2014 01:38 |
I thought the root of that TDT thing was the slightly clever insight that if you can pretty accurately predict future behavior, and adapt your behavior accordingly, in a sense that future behavior has affected the past. However, Yudkowsky lacks the talent of Vizzini.
|
|
# ? Oct 29, 2014 01:48 |
|
Nessus posted:I thought the root of that TDT thing was the slightly clever insight that if you can pretty accurately predict future behavior, and adapt your behavior accordingly, in a sense that future behavior has affected the past. However, Yudkowsky lacks the talent of Vizzini. That is indeed the root of TDT. It's not wrong, per se, just useless. In the Roko's Basilisk case, we can't perfectly predict future behavior so the future can't directly affect the past, so the AI can't accelerate its own creation through torture, so there's no reason for it to torture anyone. Yudkowsky thinks there is because he can't keep track of his own premises.
|
# ? Oct 29, 2014 01:56 |
|
Lottery of Babylon posted:There's also no reason for the AI to torture anyone because, by the time it exists and is able to torture people, the time at which it was created has already been decided. Whether or not it actually tortures anyone won't change a thing. But really you're just pointing out that this is all barking mad and that Yud's reasoning is flawed because it's completely disconnected from reality. But that's boring. Any old crank has theories which are wrong because they're completely disconnected from reality. I think it's far more interesting that despite getting discernibly turgid at the thought of rationalism, they've produced a line of argument that isn't even internally self-consistent.
|
# ? Oct 29, 2014 01:59 |
|
Nah, the idea behind the basilisk is that you - as in, the individual who is reading this post - can't tell if they are a simulated copy or the real deal. So if the AI says "do X or I will torture you", even retroactively, then you don't know if the AI is capable of carrying out this threat. Now, if you are a reasonable person and go "well the odds of this happening are so low that I have a very good chance that I, the reader of this post, am the real physical deal" well the AI will simulate 3^^^3 copies to torture so that the odds are in favour of you being a simulation. Thus, the real deal will carry out the request in fear of actually being a simulation. Of course, this fails if a) the real deal would think this is dumb and thus so would a simulation, so the AI would simulate one, find out the basilisk tactic won't work, and give up because simulating any number of yous won't convince the real deal, or b) you are willing to accept torture to protect real world humanity (because that honestly is the only one you have control over and thus the only one that matters) and thus the real you wouldn't bend over for the AI, so there is no reason to torture the simulations. Basically you are inoculated against the Basilisk by not being an idiot.
|
# ? Oct 29, 2014 02:20 |
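The "odds are in favour of you being a simulation" step in that argument is just self-locating probability over indistinguishable copies. A toy sketch (assuming, as the argument does, that you're equally likely to be any one of the copies or the original):

```python
from fractions import Fraction

def p_real(n_copies):
    """If an AI runs n_copies indistinguishable simulations of you,
    and you have no way to tell which observer you are, the chance
    you're the original is 1 in (n_copies + 1)."""
    return Fraction(1, n_copies + 1)

print(p_real(0))        # no simulations: you're certainly the original
print(p_real(10**6))    # a million copies: vanishingly unlikely
```

The whole threat rides on that "equally likely" assumption, which is exactly what objections (a) and (b) above refuse to grant.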
|
MinistryofLard posted:Nah, the idea behind the basilisk is that you - as in, the individual who is reading this post - can't tell if they are a simulated copy or the real deal. The `what if you are in fact a simulation' variant isn't a `basilisk' in this sense at all.
|
# ? Oct 29, 2014 02:32 |
|
SubG posted:The `what if you are in fact a simulation' variant isn't a `basilisk' in this sense at all. No, "what if you are a simulation" is the only reason there's any threat at all; you wouldn't care about future simulations of yourself if you knew you weren't one. It's a basilisk because the AI won't bother to torture-simulate you unless the real you has "seen" the basilisk, and will only bother to torturesim versions of you that have "seen" the basilisk like the real you did.
|
# ? Oct 29, 2014 02:52 |
|
Lottery of Babylon posted:No, "what if you are a simulation" is the only reason there's any threat at all; you wouldn't care about future simulations of yourself if you knew you weren't one. It's a basilisk because the AI won't bother to torture-simulate you unless the real you has "seen" the basilisk, and will only bother to torturesim versions of you that have "seen" the basilisk like the real you did.
|
# ? Oct 29, 2014 03:15 |
|
Lottery of Babylon posted:There's also no reason for the AI to torture anyone because, by the time it exists and is able to torture people, the time at which it was created has already been decided. Whether or not it actually tortures anyone won't change a thing. Doesn't TDT also work the other way around - an omniscient genie in the future predicting the past (your present) lets you affect the future - hence the 'timeless' part? Which is how Roko's Basilisk then follows: if JehovAI/ShAItan is simulating you, and you are simulating it (merely by rationally reasoning its actions), you can both affect each other.
|
# ? Oct 29, 2014 03:17 |
|
Moddington posted:Doesn't TDT also work the other way around - an omniscient genie in the future predicting the past (your present) lets you affect the future - hence the 'timeless' part? Which is how Roko's Basilisk then follows: if JehovAI/ShAItan is simulating you, and you are simulating it (merely by rationally reasoning its actions), you can both affect each other. No, the central TDT problem is of this form: "A god-AI gives you a choice of taking either just Box A or both Boxes A and B. Box B always contains $10. Box A contains $100 if the god-AI predicted you would take only box A, and $0 otherwise. Should you take both boxes or just one?" The idea is that your future action (choice of boxes) affected the computer's past action (how much money to put in the boxes). Once the boxes are prepared, A+B obviously contains more money than A alone, but you should still choose A alone because doing so will cause the computer in the past to have already put more money in A. The ability of the past to affect the future isn't at all unusual or remarkable. The past always affects the future, that's how time works. It's the part where the future can causally affect the past that is unusual. And it only occurs if the past contains a super-genius who can perfectly predict the future. Lottery of Babylon fucked around with this message at 03:28 on Oct 29, 2014 |
# ? Oct 29, 2014 03:25 |
|
Doesn't it also cause problems if the predictor is just pretty good at guessing which box you'll pick? Or if we model the scenario as a military one, if they have spies in your camp and can intercept your orders, or whatever. It honestly doesn't seem that exotic to me.
|
# ? Oct 29, 2014 03:58 |
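The "just pretty good at guessing" case can be checked with a quick expected-value sketch, using the $100/$10 payoffs from the box example above (assuming the predictor guesses your choice correctly with some probability p):

```python
def expected_value(p):
    """Expected payoffs in the $100/$10 box problem when the
    predictor guesses your choice correctly with probability p."""
    one_box = p * 100                  # box A is filled iff one-boxing was predicted
    two_box = p * 10 + (1 - p) * 110   # box A is empty iff two-boxing was predicted
    return one_box, two_box

# One-boxing pulls ahead once the predictor beats 55% accuracy:
# 100p > 110 - 100p  =>  p > 0.55
for p in (0.5, 0.55, 0.9, 1.0):
    one, two = expected_value(p)
    print(f"p={p}: one-box EV={one:.1f}, two-box EV={two:.1f}")
```

So the predictor doesn't need to be an omniscient genie at all; anything better than a 55% hit rate already favours one-boxing, which is why the scenario isn't as exotic as it's made out to be.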
|
Predicting future events that involve the predicting entity always makes me think that there should be some kind of infinite feedback loop there. Predicting yourself accurately would require you to have a perfect mental model of your own mind, which kind of by definition requires you to be larger than yourself. Doing less would mean that you can't adequately judge your own future behaviour or anything that happens because of it. It just seems like a paradoxical proposition. Either way, the more I learn about the Yud's "timeless" decision theory, the more stupid it sounds.
|
# ? Oct 29, 2014 04:06 |
Cardiovorax posted:Predicting future events that involve the predicting entity always make me think that there should be some kind of infinite feedback loop there. Predicting yourself accurately would require you to have a perfect mental model of your own mind, which kind of by definition requires you to be larger than yourself. Doing less would mean that you can't adequately judge your own future behaviour or anything that happens because of it. It just seems like a paradoxical proposition.
|
|
# ? Oct 29, 2014 04:08 |
|
Nobody wins where ball-punching is involved.
|
# ? Oct 29, 2014 04:20 |
|
Cardiovorax posted:Predicting future events that involve the predicting entity always make me think that there should be some kind of infinite feedback loop there. Predicting yourself accurately would require you to have a perfect mental model of your own mind, which kind of by definition requires you to be larger than yourself.
|
# ? Oct 29, 2014 04:31 |
|
|
|
SubG posted:Nah. That's only true if your goal is to perfectly simulate your mind to arbitrary fidelity, but that is not necessarily (and almost certainly isn't) required if you're just trying to predict the outcome of a single decision. It is if you're going to predict how you'd predict the prediction of a prediction that's predicted by you predicting the prediction of a prediction of a... ad infinitum. If I know I'd take the money, then I wouldn't take the 2nd box, which means there'd be money in both, which means I would take both, which means there isn't, which means I'd be good and not take both, which means there is, etc. You'd be better off simulating a really goony AI, who'd tell you there's money in both boxes if they're a good person, but there's actually just a picture of goatse.
|
# ? Oct 29, 2014 05:34 |