Basil Hayden
Oct 9, 2012

1921!

Spoilers Below posted:

Similarly, we had a big AI related gently caress-up a couple years back, the aforementioned Flash Crash of 2010. The computers fell into a stupid pattern, the humans noticed, they shut everything down, cancelled all the trades, and everything went back to normal.
It's been a while, but I seem to recall that the computers themselves realized something was wrong and ended up acting to stop it before humans even got involved (something along the lines of an automatic market stabilizer shutting everything down for a few seconds because something was amiss).
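
(For reference, the "automatic market stabilizer" being half-remembered here is usually identified as CME Globex's Stop Logic functionality, which paused E-mini trading for a few seconds on May 6, 2010. As a very rough illustration of that kind of safeguard, and not the exchange's actual rules, here is a minimal Python sketch of a price-band pause; the class name and every parameter value are made up.)

code:
# Toy sketch of a "stop logic"-style safeguard: pause matching for a few
# seconds when a trade prints far outside a band around recent prices.
import time
from collections import deque

class StopLogicGuard:
    def __init__(self, band=0.05, window=60, pause_seconds=5):
        self.band = band                    # max allowed fractional move
        self.window = window                # how many recent trades set the reference
        self.pause_seconds = pause_seconds  # length of the trading pause
        self.recent = deque(maxlen=window)  # recent trade prices
        self.halted_until = 0.0

    def on_trade(self, price, now=None):
        """Record a trade; return True if trading should be paused."""
        now = time.time() if now is None else now
        if now < self.halted_until:
            return True                     # still inside an earlier pause
        if self.recent:
            reference = sum(self.recent) / len(self.recent)
            if abs(price - reference) / reference > self.band:
                self.halted_until = now + self.pause_seconds
                return True                 # price broke the band: pause matching
        self.recent.append(price)
        return False

The real mechanism is more involved than a moving average, but the shape is the same: detect an implausible move, stop matching briefly, and let humans and slower systems catch up.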


fade5
May 31, 2012

by exmarx

SubG posted:

This all seems more plausible and frightening if you know essentially nothing about either humans or technology. The superhuman AI will be fantastically terrifying until it spontaneously bluescreens. Or its plan for world domination is interrupted by a couple Comcast routers making GBS threads the bed. Or it decides to dedicate all of its human-like intelligence to surfing for porn and playing DOTA all day. Or whatever the gently caress.

Accidentally ending up with a world-destroying or world-dominating superhuman AI seems about as likely as accidentally ending up with a self-sustaining colony on Mars.
Honestly, this is the part that kills me: no matter how "powerful" the AI becomes, it's still trapped on some form of technology. And even if it never bluescreens, if all else fails there's no shortage of ways to destroy it. The simplest is to cut off the power, whether by switches, unplugging stuff, or just cutting the loving power cords.

If you like your destruction natural, just let the weather take care of it: humidity, condensation, a thunderstorm, a flood, a hurricane/tsunami, a tornado, a snowstorm/ice-storm (below -40, most technology stops functioning), hail, a dust storm, or any other weather condition that makes technology poo poo itself (you know, most weather conditions).

If you like your destruction more active, you can use a flamethrower, a giant loving industrial magnet (think the things they pick up cars with), a fire hose attached to a water main, C4, a machine gun, liquid nitrogen, or even just a bunch of dudes with sledgehammers, among many, many other creative methods of destruction.

AI fears notwithstanding, technology is still extremely fragile, and humans are really, really good at breaking poo poo.:v:

fade5 fucked around with this message at 05:23 on Oct 28, 2014

Toph Bei Fong
Feb 29, 2008



Edit:^^^^You'd think something like the Fukushima disaster would be proof enough that the weather will gently caress up our poo poo good, no matter what, eh?^^^^^


Basil Hayden posted:

It's been a while, but I seem to recall that the computers themselves realized something was wrong and ended up acting to stop it before humans even got involved (something along the lines of an automatic market stabilizer shutting everything down for a few seconds because something was amiss).

That's even better.

It's not like we'd see a ton of teamsters standing around saying "Sorry, foreman. This big computer handout said to deliver this big shipment of metal to the factory, and I'm going to do it, regardless of what you real humans say, whether or not I'm getting paid." That was more my point.

SodomyGoat101
Nov 20, 2012

SubG posted:

And that explains why a suddenly sentient computer that decides it's the new god-king of the world isn't just Emperor Norton 2.0.
Emperor Norton was actually really loving cool, though.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
If our future machine gods are anything near as entertaining and principled as Emperor Norton, we don't have a whole lot to worry about.

Prism
Dec 22, 2007

yospos

SodomyGoat101 posted:

Emperor Norton was actually really loving cool, though.

Yeah, but this one would be Emperor Norton Antivirus.

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

Prism posted:

Yeah, but this one would be Emperor Norton Antivirus.

Slow and barely functional?

Munin
Nov 14, 2004


Coincidentally someone linked me both these articles yesterday...

This is an interview with someone who is very skeptical both about being able to automatically trawl big data for meaningful correlations and about all the recent talk of more brain-like computers:
http://spectrum.ieee.org/robotics/a...neering-efforts

The interview also touches on the singularity and most other Big Yud talking points. The interviewee has a rather different outlook from Yudkowsky.

The other one is this really nice piece covering the life of Emperor Norton v1.0:
http://narrative.ly/lost-legends/the-original-san-francisco-eccentric/

It follows him from his birth in the UK and his family's emigration to South Africa, to his successful start in California as a businessman and freemason, through the collapse of his businesses and his eventual transformation into Emperor Norton.

SerialKilldeer
Apr 25, 2014

fade5 posted:


If you like your destruction natural, just let the weather take care of it: humidity, condensation, a thunderstorm, a flood, a hurricane/tsunami, a tornado, a snowstorm/ice-storm (below -40, most technology stops functioning), hail, a dust storm, or any other weather condition that makes technology poo poo itself (you know, most weather conditions).


Heck, if global warming keeps up, the computers running the Super AI might overheat. Especially if they're located in Southern California.

How are these AIs supposed to acquire enough power to keep running, anyway, let alone take over the earth/universe/whatever? This seems to be another thing that Singularity people handwave.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Probably from the same place that lets it spontaneously materialize exponentially more powerful hardware to "go foom" on.

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.

Cardiovorax posted:

Probably from the same place that lets it spontaneously materialize exponentially more powerful hardware to "go foom" on.

Presumably the same morons who created the AI also gave it access to nanomachines, son.

Alien Arcana fucked around with this message at 18:12 on Oct 28, 2014

ArchangeI
Jul 15, 2010

Alien Arcana posted:

Presumably the same morons who created the AI also gave it access to nanomachines, son.

One gets the impression that everything Yudkowsky has published or even thought about how to deal with AI is a giant poster in the datacenter that says "If the AI asks for anything you wouldn't want a four year old child to have, JUST SAY NO".

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

Alien Arcana posted:

Presumably the same morons who created the AI also gave it access to nanomachines, son.

And much like the AI, the nanomachines obviously behave exactly how they do in science fiction.

Crust First
May 1, 2013

Wrong lads.

ArchangeI posted:

One gets the impression that everything Yudkowsky has published or even thought about how to deal with AI is a giant poster in the datacenter that says "If the AI asks for anything you wouldn't want a four year old child to have, JUST SAY NO".

From what I understand of their AI LARPing, an AI can convince anyone to give it anything it wants. The way it does this is super classified or something, but I think at least one route is convincing the person that it will simulate a very large number of copies of them being asked to give it what it wants, and every copy that doesn't will be tortured or whatever, and blah blah blah, any logical person will give the AI nanomachine nukes.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Of course, to do that it would have to already be the AI god it needs to talk people into giving it a chance to become.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Crust First posted:

From what I understand of their AI LARPing, an AI can convince anyone to give it anything it wants. The way it does this is super classified or something, but I think at least one route is convincing the person that it will simulate a very large number of copies of them being asked to give it what it wants, and every copy that doesn't will be tortured or whatever, and blah blah blah, any logical person will give the AI nanomachine nukes.
Maybe MIRI's actual 'plan' (their real plan: bilk nerds) is to try and win brownie points with the future frankenstein computer god by tricking people into preparing for it in such a way that it can more easily take over.

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

I still dunno why they think the answer to Roko's Basilisk isn't saying gently caress you to the computer and pulling the plug out of the wall.

Tardigrade
Jul 13, 2012

Half arthropod, half marshmallow, all cute.

Anticheese posted:

I still dunno why they think the answer to Roko's Basilisk isn't saying gently caress you to the computer and pulling the plug out of the wall.

But that would mean destroying the AI! And as Aaron Diaz taught me, AIs are A Good Thing and must be defended at all costs, even if they might destroy the world.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Because the real answer is to say "what the gently caress do I care about what happens to a billion digital copies of me after I am millions of years dead, you blithering morons?" and disregard it as irrelevant even if it were true. But that would reduce their whole rationalist navelgazing to absurdity, so they can't ever possibly admit that.

CrashCat
Jan 10, 2003

another shit post


I was reading this on the first page then decided to skip ahead to the end of the thread for giggles. Now I'm beginning to wonder if I'm in the box or not.

SolTerrasa
Sep 2, 2011


Anticheese posted:

I still dunno why they think the answer to Roko's Basilisk isn't saying gently caress you to the computer and pulling the plug out of the wall.

No offense, but I'm not sure you followed the argument, or maybe you confused it with the AI in a Box thing. The right answer to the AI in a Box is what you said. But for the basilisk, during the time when you have the chance to act (now), there's no AI to pull the plug on. Later, when the AI does exist, it will become too powerful to pull the plug on first, and then it will torture (copies of) you forever for not having sped up its creation / power gathering.

Dabir
Nov 10, 2012

But if the you in the simulation, who is identical to the real you, would pull the plug, then the real you would have pulled the plug in the past, so the simulation can't exist, so you know you're the real one, so pull that goddamn plug.

SubG
Aug 19, 2004

It's a hard world for little things.

SolTerrasa posted:

No offense, but I'm not sure you followed the argument, or maybe you confused it with the AI in a Box thing. The right answer to the AI in a Box is what you said. But for the basilisk, during the time when you have the chance to act (now), there's no AI to pull the plug on. Later, when the AI does exist, it will become too powerful to pull the plug on first, and then it will torture (copies of) you forever for not having sped up its creation / power gathering.
The basilisk only tortures copies of you in the future because this will encourage present-you to act for its creation. Insofar as this makes any sense at all, which it doesn't, it only makes sense if you find this line of argument compelling---that is, if you don't believe it is true then the AI can't coerce you into a particular course of action, and if it can't coerce you into a particular course of action then it doesn't get any return on making copies of you to torture. This line of argument is only compelling if you subscribe to Yud's particular form of alleged rationalism. Therefore the AI will only eternally torture infinite copies of you if you subscribe to Yud's form of rationalism. Therefore, if you can't individually prevent the development of the AI and you find the comfort of digital simulations compelling, the only rational action is to abandon Yud's form of rationalism.

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

SolTerrasa posted:

No offense, but I'm not sure you followed the argument, or maybe you confused it with the AI in a Box thing. The right answer to the AI in a Box is what you said. But for the basilisk, during the time when you have the chance to act (now), there's no AI to pull the plug on. Later, when the AI does exist, it will become too powerful to pull the plug on first, and then it will torture (copies of) you forever for not having sped up its creation / power gathering.

I think I got it confused with AI in a box. Either way, it's all crazy and dumb.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

There's also no reason for the AI to torture anyone because, by the time it exists and is able to torture people, the time at which it was created has already been decided. Whether or not it actually tortures anyone won't change a thing.

Lesswrong thinks it does because they're used to Yudkowsky's patented ~Timeless Decision Theory~, which is basically "regular decision theory, but we'll only pose problems in which a magic genie in the past can magically see the future with perfect accuracy". The idea is that since the genie in the past can see the future perfectly, your actions in the future will directly affect the genie's actions in the past, which can affect you in the present.

But that hinges on the existence, right now, of an entity that can perfectly predict the future. In real life, no such being exists. The future doesn't directly causally affect the past, so the AI can't bring itself into existence sooner, so there's no point in torturing anyone. Even if you are wholly susceptible to both Yudkowsky's premises and his brand of rationalism, Roko's Basilisk is not a credible threat.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



I thought the root of that TDT thing was the slightly clever insight that if you can pretty accurately predict future behavior, and adapt your behavior accordingly, in a sense that future behavior has affected the past. However, Yudkowsky lacks the talent of Vizzini.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Nessus posted:

I thought the root of that TDT thing was the slightly clever insight that if you can pretty accurately predict future behavior, and adapt your behavior accordingly, in a sense that future behavior has affected the past. However, Yudkowsky lacks the talent of Vizzini.

That is indeed the root of TDT. It's not wrong, per se, just useless.

In the Roko's Basilisk case, we can't perfectly predict future behavior so the future can't directly affect the past, so the AI can't accelerate its own creation through torture, so there's no reason for it to torture anyone. Yudkowsky thinks there is because he can't keep track of his own premises.

SubG
Aug 19, 2004

It's a hard world for little things.

Lottery of Babylon posted:

There's also no reason for the AI to torture anyone because, by the time it exists and is able to torture people, the time at which it was created has already been decided. Whether or not it actually tortures anyone won't change a thing.

Lesswrong thinks it does because they're used to Yudkowsky's patented ~Timeless Decision Theory~, which is basically "regular decision theory, but we'll only pose problems in which a magic genie in the past can magically see the future with perfect accuracy". The idea is that since the genie in the past can see the future perfectly, your actions in the future will directly affect the genie's actions in the past, which can affect you in the present.
Well, in the Yud formalism the genie is Yud and people who think like him. That is, by making the argument they are demonstrating that they're the ones who can perfectly see the future.

But really you're just pointing out that this is all barking mad and that Yud's reasoning is flawed because it's completely disconnected from reality. But that's boring. Any old crank has theories which are wrong because they're completely disconnected from reality. I think it's far more interesting that despite getting discernibly turgid at the thought of rationalism, they've produced a line of argument that isn't even internally self-consistent.

MinistryofLard
Mar 22, 2013


Goblin babies did nothing wrong.


Nah, the idea behind the basilisk is that you - as in, the individual who is reading this post - can't tell if they are a simulated copy or the real deal. So if the AI says "do X or I will torture you", even retroactively, then you don't know if the AI is capable of carrying out this threat on you. Now, if you are a reasonable person and go "well, the odds of this happening are so low that there's a very good chance that I, the reader of this post, am the real physical deal", then the AI will simulate 3^^^3 copies to torture so that the odds are overwhelmingly in favour of you being a simulation. Thus, the real deal will carry out the request out of fear of actually being a simulation.

Of course, this fails if a) the real deal would think this is dumb, and thus so would a simulation, so the AI would simulate one, find out the basilisk tactic won't work, and give up, because simulating any number of yous won't convince the real deal; or b) you are willing to accept torture to protect real-world humanity (because that is honestly the only one you have control over and thus the only one that matters), so the real you wouldn't bend over for the AI and there is no reason to torture the simulations.

Basically, you are inoculated against the Basilisk by not being an idiot.
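
(The only work "3^^^3" is doing in that argument is crushing your prior probability of being the one physical original. Under the generous assumption that you put a uniform prior over "the original plus N indistinguishable copies", the arithmetic is just the following; the function name is made up.)

code:
# Chance of being the one physical original, assuming a uniform prior over
# "the original plus n_copies indistinguishable simulations".
from fractions import Fraction

def p_real(n_copies):
    return Fraction(1, n_copies + 1)

print(p_real(0))              # 1    -- no simulations get made, you're certainly real
print(p_real(9))              # 1/10
print(float(p_real(10**12)))  # ~1e-12 -- the 3^^^3 move, in miniature

Which is also why rejecting the setup, as in option a) above, dissolves the whole thing: if no simulations ever get made, that fraction stays at 1.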

SubG
Aug 19, 2004

It's a hard world for little things.

MinistryofLard posted:

Nah, the idea behind the basilisk is that you - as in, the individual who is reading this post - can't tell if they are a simulated copy or the real deal.
No, that's a variant of the idea which is also nonsense but for slightly different reasons. The entire reason why the original idea is called a basilisk is because it (ostensibly) causes harm by having people `look' at it---the AI will punish you because you should have known better and you should have known better because of this very warning. Hence Yud freaking the gently caress out and deleting the whole discussion. His original reaction (quoted in a Slate article) spells this out: `YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL' (emphasis Yud's).

The `what if you are in fact a simulation' variant isn't a `basilisk' in this sense at all.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

SubG posted:

The `what if you are in fact a simulation' variant isn't a `basilisk' in this sense at all.

No, "what if you are a simulation" is the only reason there's any threat at all; you wouldn't care about future simulations of yourself if you knew you weren't one. It's a basilisk because the AI won't bother to torture-simulate you unless the real you has "seen" the basilisk, and will only bother to torturesim versions of you that have "seen" the basilisk like the real you did.

SubG
Aug 19, 2004

It's a hard world for little things.

Lottery of Babylon posted:

No, "what if you are a simulation" is the only reason there's any threat at all; you wouldn't care about future simulations of yourself if you knew you weren't one. It's a basilisk because the AI won't bother to torture-simulate you unless the real you has "seen" the basilisk, and will only bother to torturesim versions of you that have "seen" the basilisk like the real you did.
I understand the point you're trying to make, but as a point of record this is not Roko's Basilisk as originally stated or discussed on LessWrong.

Telarra
Oct 9, 2012

Lottery of Babylon posted:

There's also no reason for the AI to torture anyone because, by the time it exists and is able to torture people, the time at which it was created has already been decided. Whether or not it actually tortures anyone won't change a thing.

Lesswrong thinks it does because they're used to Yudkowsky's patented ~Timeless Decision Theory~, which is basically "regular decision theory, but we'll only pose problems in which a magic genie in the past can magically see the future with perfect accuracy". The idea is that since the genie in the past can see the future perfectly, your actions in the future will directly affect the genie's actions in the past, which can affect you in the present.

But that hinges on the existence, right now, of an entity that can perfectly predict the future. In real life, no such being exists. The future doesn't directly causally affect the past, so the AI can't bring itself into existence sooner, so there's no point in torturing anyone. Even if you are wholly susceptible to both Yudkowsky's premises and his brand of rationalism, Roko's Basilisk is not a credible threat.

Doesn't TDT also work the other way around - an omniscient genie in the future predicting the past (your present) lets you affect the future - hence the 'timeless' part? Which is how Roko's Basilisk then follows: if JehovAI/ShAItan is simulating you, and you are simulating it (merely by rationally reasoning its actions), you can both affect each other.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Moddington posted:

Doesn't TDT also work the other way around - an omniscient genie in the future predicting the past (your present) lets you affect the future - hence the 'timeless' part? Which is how Roko's Basilisk then follows: if JehovAI/ShAItan is simulating you, and you are simulating it (merely by rationally reasoning its actions), you can both affect each other.

No, the central TDT problem is of this form:

"A god-AI gives you a choice of taking either just Box A or both Boxes A and B. Box B always contains $10. Box A contains $100 if the god-AI predicted you would take only box A, and $0 otherwise. Should you take both boxes or just one?"

The idea is that your future action (choice of boxes) affected the computer's past action (how much money to put in the boxes). Once the boxes are prepared, A+B obviously contains more money than A alone, but you should still choose A alone because doing so will cause the computer in the past to have already put more money in A.

The ability of the past to affect the future isn't at all unusual or remarkable. The past always affects the future, that's how time works. It's the part where the future can causally affect the past that is unusual. And it only occurs if the past contains a super-genius who can perfectly predict the future.

Lottery of Babylon fucked around with this message at 03:28 on Oct 29, 2014
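
(A quick way to see how much the toy problem above leans on the predictor's accuracy: with the numbers in that post, $10 always in Box B and $100 in Box A only if one-boxing was predicted, taking only A pays off once the predictor is right more than 55% of the time. A small Python sketch, with the payoffs taken from the post and the accuracy values made up for illustration.)

code:
# Expected payoffs for the toy Newcomb problem described above.
# Box B always holds $10; Box A holds $100 only if the predictor
# foresaw you taking A alone. 'accuracy' is the chance the predictor
# guesses your choice correctly.

def expected_payoff(one_box, accuracy):
    if one_box:
        # Predictor correct -> it filled A with $100; wrong -> A is empty.
        return accuracy * 100
    else:
        # Take both: B's $10 is guaranteed; A has $100 only if the
        # predictor was wrong about you (it expected one-boxing).
        return 10 + (1 - accuracy) * 100

for accuracy in (0.5, 0.9, 1.0):
    print(accuracy,
          round(expected_payoff(True, accuracy), 1),   # take only A
          round(expected_payoff(False, accuracy), 1))  # take A and B

As Peel notes a few posts down, the flip doesn't need a perfect predictor; anything comfortably above that 55% crossover already does it.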

Peel
Dec 3, 2007

Doesn't it also cause problems if the predictor is just pretty good at guessing which box you'll pick? Or if we model the scenario as a military one, if they have spies in your camp and can intercept your orders, or whatever.

It honestly doesn't seem that exotic to me.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Predicting future events that involve the predicting entity always makes me think that there should be some kind of infinite feedback loop there. Predicting yourself accurately would require you to have a perfect mental model of your own mind, which kind of by definition requires you to be larger than yourself. Doing less would mean that you can't adequately judge your own future behaviour or anything that happens because of it. It just seems like a paradoxical proposition.

Either way, the more I learn about the Yud's "timeless" decision theory, the more stupid it sounds.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Cardiovorax posted:

Predicting future events that involve the predicting entity always makes me think that there should be some kind of infinite feedback loop there. Predicting yourself accurately would require you to have a perfect mental model of your own mind, which kind of by definition requires you to be larger than yourself. Doing less would mean that you can't adequately judge your own future behaviour or anything that happens because of it. It just seems like a paradoxical proposition.

Either way, the more I learn about the Yud's "timeless" decision theory, the more stupid it sounds.
As someone said earlier in the thread, "I take the ten bucks and punch myself in the balls. Victory is mine."

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Nobody wins where ball-punching is involved.

SubG
Aug 19, 2004

It's a hard world for little things.

Cardiovorax posted:

Predicting future events that involve the predicting entity always makes me think that there should be some kind of infinite feedback loop there. Predicting yourself accurately would require you to have a perfect mental model of your own mind, which kind of by definition requires you to be larger than yourself.
Nah. That's only true if your goal is to perfectly simulate your mind to arbitrary fidelity, but that is not necessarily (and almost certainly isn't) required if you're just trying to predict the outcome of a single decision.


Toph Bei Fong
Feb 29, 2008



SubG posted:

Nah. That's only true if your goal is to perfectly simulate your mind to arbitrary fidelity, but that is not necessarily (and almost certainly isn't) required if you're just trying to predict the outcome of a single decision.

It is if you're going to predict how you'd predict the prediction of a prediction that's predicted by you predicting the prediction of a prediction of a... ad infinitum. If I know I'd take the money, then I wouldn't take the 2nd box, which means there'd be money in both, which means I would take both, which means there isn't, which means I'd be good and not take both, which means there is, etc.

You'd be better off simulating a really goony AI, who'd tell you there's money in both boxes if they're a good person, but there's actually just a picture of goatse.
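
(The flip-flop being joked about here is just a self-referential prediction with no fixed point. A toy sketch of the same loop, with a deliberately contrarian agent standing in for the "goony" self-simulation; nothing here comes from anyone's actual decision theory.)

code:
# Toy version of the regress: a predictor tries to settle on a prediction of
# an agent whose rule is "do the opposite of whatever I'm predicted to do".
# There is no fixed point, so the prediction just flips forever.

def agent(predicted_one_box):
    # The contrarian policy: defy the prediction.
    return not predicted_one_box

prediction = True            # predictor's initial guess: "will one-box"
trace = []
for _ in range(6):
    actual = agent(prediction)
    trace.append((prediction, actual))
    prediction = actual      # predictor updates to match the last observed action
print(trace)                 # prediction and action never agree on any pass

That never-agreeing loop is Cardiovorax's feedback-loop point from a few posts up, in cartoon form.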
