Vorpal Cat
Mar 19, 2009

Oh god what did I just post?

Djeser posted:


The LessWrong wiki presents it like this:
A super-intelligent AI shows you two boxes. One is filled with :10bux:. One is filled with either nothing or :20bux:. You can either take the second box, and whatever's in it, or you can take both boxes. The AI has predicted what you will choose, and if it predicted "both" then the second box has nothing, and if it predicted "the second box" the second box has :20bux:.

"Well, the box is already full, or not, so it doesn't matter what I do now. I'll take both." <--This is wrong and makes you a sheeple and a dumb, because the AI would have predicted this. The AI is perfect. Therefore, it perfectly predicted what you'd pick. Therefore, the decision that you make, after it's filled the box, affects the AI in the past, before it filled the box, because that's what your simulation picked. Therefore, you pick the second box. This isn't like picking the second box because you hope the AI predicted that you'd pick the second box. It's literally the decision you make in the present affecting the state of the AI in the past, because of its ability to perfectly predict you.

It's metagame thinking plus a layer of time travel and bullshit technology, which leads to results like becoming absolutely terrified of the super-intelligent benevolent AI that will one day arise (and there is no question whether it will because you're a tech-singularity-nerd in this scenario) and punish everyone who didn't optimally contribute to AI research.

Even if you somehow accept his stupid argument of perfect information, this is still just a variant of the freaking grandfather paradox/self-negating prophecy. I'm sure to his mind it sounds all deep and revolutionary, but it's basically just a load of deliberately contrived bollocks.
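
For anyone who wants to see why the one-boxing arithmetic comes out the way Djeser describes, here's a minimal expected-payoff sketch in Python (the 10 and 20 stand in for the :10bux:/:20bux: smilies, and the predictor-accuracy knob is just an illustrative parameter, not part of the original setup):

    # Newcomb payoffs: box A always holds 10; box B holds 20 only if the
    # predictor guessed you would take box B alone ("one-box").
    def payoff(strategy, predictor_accuracy=1.0):
        box_a = 10
        box_b_if_one_box_predicted = 20
        if strategy == "one-box":
            # Box B is full only when the predictor correctly guessed one-boxing.
            return predictor_accuracy * box_b_if_one_box_predicted
        if strategy == "two-box":
            # You always get box A; box B is full only if the predictor was wrong.
            return box_a + (1 - predictor_accuracy) * box_b_if_one_box_predicted
        raise ValueError(strategy)

    for s in ("one-box", "two-box"):
        print(s, payoff(s))       # perfect predictor: one-box 20.0, two-box 10.0
        print(s, payoff(s, 0.5))  # coin-flip predictor: one-box 10.0, two-box 20.0

Which strategy "wins" flips as soon as the predictor stops being perfect, which is most of why the thought experiment generates so many arguments.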

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?
There's a delicious irony in someone with such a smug sense of superiority about their rationality engaging in such obvious magical thinking. His obsessive focus on rational decision-making blinds him to the fact that, at the end of the day, human nature means 90% of his decisions are going to be based on emotion and unconscious processes. In fact, his inability to reflect critically on his own theories and his dogmatic belief in them makes him more vulnerable to magical thinking than the average person.

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?
There's also the problem that if people hear about the future AI torturing people and don't buy into that decision-making process, they may decide to deliberately not donate to AI research out of spite. If enough people do this, then even by stupid moon logic it would be in the AI's best interest not to torture people, so they won't delay its creation.
Therefore, your best bet to avoid being tortured is to not donate.

That's the fun with this sort of stupid causality-defying logic: it works both ways.

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?
The funny thing about this whole timeless-decision-making AI is that it's literally just Pascal's wager. "You may not think you're in an AI simulation that will torture you for all eternity if you displease it, but you might be. So give me all your money, just in case."

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?

Krotera posted:

Yes, with the exception that Yudkowsky's come up with an absurd probabilistic justification for why it's extremely likely (instead of 'faith' or the other things normal people religions come up with). You think the chance that the AI will come into being is 1/1,000? Fine, he'll simulate 1,000,000 of you -- now the odds are even! You think the chance is 1/1,000,000? Fine, it's 1,000,000,000 now! Besides, we've got infinite time so logically an artificial superintelligence that could have been brought into being sooner by your donations is bound to come into being.

(If you think this isn't an accurate description of how probability works, then read LessWrong until your MUs are fixed.)

He's going to summon a goddamn infinity of you and make them all bleed unless you put the money in his bank account right now.

But what if evil space Buddha reincarnated me into a hellish nightmare because, by giving to the AI, I was caring too much about worldly things? Also, space Buddha will torture twice as many copies of me as the AI will. What's that you say? You think evil space Buddha is less likely than an AI? He'll just torture more of you to make up the difference. He has an infinite amount of time in which to keep reincarnating you, after all.
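
The arithmetic both posts are riffing on is just expected-value bookkeeping: the threatened disutility is probability times the number of simulated copies, so any drop in your credence can be cancelled by promising more copies, and any rival bogeyman can match the basilisk the same way. A throwaway sketch using Krotera's illustrative numbers (the space-Buddha figures are made up here to show the "twice as many copies" move; none of this is anything Yudkowsky actually specified):

    from fractions import Fraction

    def expected_tortured_copies(probability, copies):
        # "Threat" in expected-copies terms: probability of the torturer
        # existing times how many copies of you it promises to simulate.
        return probability * copies

    print(expected_tortured_copies(Fraction(1, 1_000), 1_000_000))          # 1000
    print(expected_tortured_copies(Fraction(1, 1_000_000), 1_000_000_000))  # 1000
    # Evil space Buddha: half the probability, twice the copies -- same threat.
    print(expected_tortured_copies(Fraction(1, 2_000_000), 2_000_000_000))  # 1000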

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?

Djeser posted:

Actually, allowing for non-positive probabilities also shows that you know more about math than Yudkowsky, because he claims there's no such thing as a certain impossibility (0 probability) because like, anything could happen dude, because of like, quantum stuff.

And then he goes and conveniently forgets that this also means making a perfect simulation of future events is impossible, so his entire timeless decision-making theory is bollocks.

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?
Actually, why would a captive AI even be allowed the kind of computing power needed for even one "perfect" simulation of a human being, and where would it get the necessary data about someone to make said simulation? I feel like I'm missing some of the dozens of stupid assumptions being made in this stupid hypothetical.

Vorpal Cat fucked around with this message at 04:34 on Apr 23, 2014

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?

Phobophilia posted:

Roko's Basilisk is a reference to the Langford Basilisk. It's less effective because it only crashes the brains of autists.

Didn't they try to do that to the Borg in a Star Trek: The Next Generation episode? Exploiting a flaw in how the Borg process images to make an image that would take a near-infinite amount of computing power to process, thus shutting down the hive mind as it wasted all its computing power on an unsolvable problem.

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?

Tracula posted:

Harry Kim getting promoted to anything above Ensign :v: Honestly though, I seem to recall it was some impossible and paradoxical shape or somesuch.

It was either a 2D shape that was impossible to render in 3D or the script of Threshold; it's been a while, so I can't be sure which.

Vorpal Cat fucked around with this message at 07:17 on Apr 23, 2014
