Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

grate deceiver posted:

What I still don't get is why the AI would even bother torturing simpeople when their real counterparts are long dead

The simple argument is the idea of a credible threat.

The less simple argument involves Timeless Decision Theory, a beast of Yudkowsky's invention in which actions in the future can directly causally affect the past because reasons. It can't be explained without sounding like a hybrid of Time Cube and Zybourne Clock, and even by its internal logic it doesn't really work here.

The one Yuddites care about is the Timeless Decision Theory one.

grate deceiver posted:

and why would real people even care about this supposed threat.

A certain brand of internet atheist has a religion-shaped hole in its heart and tries to mould science into the right shape to fit it. Here, they've decided to mould The Sims into the shape of Hell and Pascal's Wager.

Normal people would just say "No, that's dumb" and ignore it. What's especially funny is that even if you assume that Yudkowsky's right about absolutely everything, saying "No, that's dumb" and ignoring it is the correct response to all possible threats of cybertorture, since not being receptive to threats of cybertorture removes any AI's motivation to cybertorture you. If you don't worry, you have no reason to worry. But a group that uses the word "deathist" unironically isn't very good at not worrying.
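
For anyone who wants that last point spelled out, here's a minimal game-theory sketch in Python. The payoff numbers and the two named policies are my own illustrative framing, not anything from LessWrong; only the ordering of the payoffs matters.

```python
# Toy one-shot game: a future AI decides whether to commit to cybertorture,
# a present-day human picks a policy toward such threats. Numbers are arbitrary.

AI_EXTORTION_GAIN = 10   # value to the AI of a human who caves and donates
AI_TORTURE_COST = 1      # resources burned if the AI actually carries out torture

def ai_payoff(human_policy: str, ai_commits_to_torture: bool) -> int:
    # The AI only extracts anything from humans whose policy is to cave, and
    # (on the thread's own logic) the threat only moves them if the AI would
    # genuinely carry it out.
    human_caves = human_policy == "cave_to_threats" and ai_commits_to_torture
    gain = AI_EXTORTION_GAIN if human_caves else 0
    # A committed AI facing an unmoved human ends up actually torturing,
    # which just costs it resources for nothing.
    cost = AI_TORTURE_COST if ai_commits_to_torture and not human_caves else 0
    return gain - cost

for policy in ("cave_to_threats", "ignore_all_threats"):
    best = max((True, False), key=lambda commits: ai_payoff(policy, commits))
    print(f"human policy = {policy!r}: AI's best response is commit-to-torture = {best}")

# Against a policy of ignoring all threats, committing to torture only costs
# the AI resources, so its best response is not to bother -- the "if you don't
# worry, you have no reason to worry" point.
```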

Ratoslov posted:

Which is insanely dumb. Ignoring the hardware limitations (which are myriad), there's one very good reason this is impossible.

Entropy.

In order to make a perfect simulation of the behavior of a human being, you need perfect information about that human being's start conditions - all of which is destroyed by entropy. The information needed is both unrecorded and unrecoverable. Only Yudkowsky's pathological fear of death makes him entertain the idea that not only can he live after he dies, but he must live, and suffer forever; and only acceptance of his mortality and the mortality of all things can save him from that fear.

Haha, as if Yudkowsky would be willing to accept entropy. That would be deathist and therefore wrong.

I think this is Yudkowsky arguing that the Second Law of Thermodynamics can be circumvented by Bayes Rule. I say "I think" because he rambles incoherently for a very long time and I have trouble reading a single one of his ill-informed sentences without my eyes glazing over.

Lottery of Babylon fucked around with this message at 21:31 on Apr 22, 2014

Chamale
Jul 11, 2010

I'm helping!



LordSaturn posted:

I feel like this is the more concise explanation. Once you convince people that you can hold (their immortal soul)/(their body thetans)/(seventy million copies of themselves) ransom in exchange for their fealty and financial contribution, then you can tell them the details, but to the uninitiated it just sounds like idiotic garbage of various kinds.

It's a bit like how Scientology hides information from low-level members. If you learn the truth about salvation before you're worthy, it's eternal hell for you, so that truth needs to be hidden.

mr. stefan posted:

In actual academic terms, a technological singularity really only means an acceleration of technological development around an innovation with sociological consequences far-reaching enough that a post-singularity society cannot reasonably be explained without the context of said innovation. Singularities have already happened a couple of times in human history, like the invention of written language or germ theory, or, on smaller scales, the printing press causing the vast spread of literacy or radio technology ushering in the information age. Supermedical advancements, cybernetics, all that sci-fi stuff is only a potential result of a hypothetical scientific innovation, not a given.

I thought that was a paradigm shift?

pigletsquid
Oct 19, 2012

Th_ posted:

In my mind, the most critical flaw is that it wants your money. If it has to simulate your brain to make a simulation of you (and why shouldn't it?), the best anyone has ever done for neuronal simulations is not so good. Namely here, which is not in real time, not the scale of a single human, and massively complicated. In particular, even if the AI already had enough computing power, it'd have to power it up somehow, and that electricity costs money (or is at least equivalent to some money in terms of things it could be otherwise doing with the power besides simulating you.) Even if it gets better by orders of magnitude, and you believe the full Yud, the AI can't likely afford to do anything but bluff.

What gets me is the idea that an omniscient magic AI would have to intimidate you in order to obtain something as crude as money. It's a scenario where God is basically threatening to shank you (or shank your sim, anyway) if you don't hand over your wallet. If that thing is meant to be a vast, unfathomable intelligence beyond the comprehension of our human minds, then how come it needs to mug us for cash?

And if the AI is much more intelligent than we are, then why do people assume they can predict its behaviour? It'd be operating on a completely different level to us.

pigletsquid fucked around with this message at 21:28 on Apr 22, 2014

Chamale
Jul 11, 2010

I'm helping!



Lottery of Babylon posted:

I think this is Yudkowsky arguing that the Second Law of Thermodynamics can be circumvented by Bayes Rule. I say "I think" because he rambles incoherently for a very long time and I have trouble reading a single one of his ill-informed sentences without my eyes glazing over.

It's a pretty typical attempt to reduce entropy by getting free energy from quantum effects. Wikipedia has a decent article on the Brownian Ratchet explaining why you can't actually build a device that reduces universal entropy.

pigletsquid posted:

What gets me is the idea that an omniscient magic AI would have to intimidate you in order to obtain something as crude as money. It's a scenario where God is basically threatening to shank you (or shank your sim, anyway) if you don't hand over your wallet. If that thing is meant to be a vast, unfathomable intelligence beyond the comprehension of our human minds, then how come it needs to mug us for cash?

And if the AI is much more intelligent than we are, then why do people assume they can predict its behaviour? It'd be operating on a completely different level to us.

The AI is simulating us right now, but it needs everyone in the past to believe that they need to donate money to the AI, so it implicitly threatens us in the present so the credible threat affects behaviour in the past. The goal is to get pre-Singularity people to give money to the AI. I guess even Yudkowsky doesn't believe in time travel.

As for your second question, RATIONALACTORS RATIONALACTORS RATIONALACTORS. The Yuddites are convinced that they can predict a superintelligent being's behaviour because they are just that smart.

viewtyjoe
Jan 5, 2009

Lottery of Babylon posted:

I think this is Yudkowsky arguing that the Second Law of Thermodynamics can be circumvented by Bayes Rule. I say "I think" because he rambles incoherently for a very long time and I have trouble reading a single one of his ill-informed sentences without my eyes glazing over.

I read that mess, and what I think he's trying to explain is that you can locally violate the second law, but the extra entropy still ends up somewhere, which is a lot of words to say "refrigerators are a thing that works."

Either that, or he's insinuating he can freeze water by observing it really accurately, completely ignoring the concept of observer effects or, you know, uncertainty, because the entropy is stored in his brain, man. :catdrugs:

grate deceiver
Jul 10, 2009

Just a funny av. Not a redtext or an own ok.

LordSaturn posted:

I feel like this is the more concise explanation. Once you convince people that you can hold (their immortal soul)/(their body thetans)/(seventy million copies of themselves) ransom in exchange for their fealty and financial contribution, then you can tell them the details, but to the uninitiated it just sounds like idiotic garbage of various kinds.

Oh, so it's pretty much just Jesus for nerds, now I get it.

Babysitter Super Sleuth
Apr 26, 2012

my posts are as bad as the Current Releases review of Gone Girl

AATREK CURES KIDS posted:

I thought that was a paradigm shift?

Similar but different concepts. A singularity is a massive societal change born out of technology; a paradigm shift is when science itself undergoes a change because we discovered something huge that changes even the basic concepts of how we understand the universe. To make a comparison: a singularity is what happens after someone invents a drug that suspends the aging process with no ill effects, while a paradigm shift is what happens after someone discovers that Einstein's mass-energy equivalence formula is wrong.

Strom Cuzewon
Jul 1, 2010

AATREK CURES KIDS posted:

It's a bit like how Scientology hides information from low-level members. If you learn the truth about salvation before you're worthy, it's eternal hell for you, so that truth needs to be hidden.


I thought that was a paradigm shift?

Paradigm shifts are more about the knowledge/content of theories - stuff like how relativity completely overturns Newtonian mechanics in the way it presents the world. The Singularity is a social event instead (or as well). Think how something like FTL or easy fusion would change everything about how we live.

I'd say you could talk about social paradigm shifts, even if it isn't 100% true to the original term. It still makes sense. Given that the idea of the singularity is that you can't look across it to make meaningful predictions, I'd say that paradigm shift is actually a better term for historical kinda-singularities where we CAN see both sides of it.

edit: i have slow fingers

Dean of Swing
Feb 22, 2012

AATREK CURES KIDS posted:

The AI is simulating us right now, but it needs everyone in the past to believe that they need to donate money to the AI, so it implicitly threatens us in the present so the credible threat affects behaviour in the past. The goal is to get pre-Singularity people to give money to Yudkowsky. I guess even Yudkowsky doesn't believe in time travel.

coolskull
Nov 11, 2007

People keep asking the same questions over and over in this thread. A testament to what mental teflon this garbage is.

Strategic Tea
Sep 1, 2012

pigletsquid posted:

It's a scenario where God is basically threatening to shank you (or shank your sim, anyway) if you don't hand over your wallet. If that thing is meant to be a vast, unfathomable intelligence beyond the comprehension of our human minds, then how come it needs to mug us for cash?

And if the AI is much more intelligent than we are, then why do people assume they can predict its behaviour? It'd be operating on a completely different level to us.

https://www.youtube.com/watch?v=QkT1-N0VqUc

made of bees
May 21, 2013

BKPR posted:

People keep asking the same questions over and over in this thread. A testament to what mental teflon this garbage is.

When someone first brought up the RoboGod punishes you in TechnoHell thing in the TVTropes thread I was really confused and thought I must be missing something obvious so I was kinda relieved to see this thread keeps going in circles on it.

Hemingway To Go!
Nov 10, 2008

im stupider then dog shit, i dont give a shit, and i dont give a fuck, and i will never shut the fuck up, and i'll always Respect my enemys.
- ernest hemingway
So how is this different from "magic wizard who can do anything"/"flying spaghetti monster" besides having more words and computers in it?

It's already been compared to Pascal's Wager, the hardware requirements would basically need to be magic, and those are the easiest objections to come up with when this guy yammers about torture computers, so... does he address these in his FAQ? Does he have a response to comparisons with religion? Why would anyone give this the time of day, when loving TIMECUBE is more coherent and believable than this moron's mental diarrhea splatters?

Djeser
Mar 22, 2013


it's crow time again

Because it's backed up by the clearly logical Timeless Decision Theory.

Since there's been a lot of questions trying to understand it I'm going to get to work on an effortpost on Roko's Basilisk for people to refer to instead of being constantly baffled by it.

Th_
Nov 29, 2008

pigletsquid posted:

What gets me is the idea that an omniscient magic AI would have to intimidate you in order to obtain something as crude as money. It's a scenario where God is basically threatening to shank you (or shank your sim, anyway) if you don't hand over your wallet. If that thing is meant to be a vast, unfathomable intelligence beyond the comprehension of our human minds, then how come it needs to mug us for cash?

And if the AI is much more intelligent than we are, then why do people assume they can predict its behaviour? It'd be operating on a completely different level to us.

It's the computer-and-person equivalent of threatening to hurt your parents unless they make sure that you were born sooner.

To push big Yud nonsense in a different direction. I could swear I read once that he claimed that he had the ability, unlike us normals, to rewire his brain on a whim, but it had the side effect of making him tired and lethargic (not to be confused with just being a lazy rear end, of course). Does anyone else remember this one or am I way off?

Chamale
Jul 11, 2010

I'm helping!



Djeser posted:

Because it's backed up by the clearly logical Timeless Decision Theory.

Since there's been a lot of questions trying to understand it I'm going to get to work on an effortpost on Roko's Basilisk for people to refer to instead of being constantly baffled by it.

I've already gone down that road, good luck. You'd need an entire course just to explain the mindset that makes Roko's Basilisk a frightening prospect.

SodomyGoat101
Nov 20, 2012

Djeser posted:

Since there's been a lot of questions trying to understand it I'm going to get to work on an effortpost on Roko's Basilisk for people to refer to instead of being constantly baffled by it.

It's still going to be baffling, though. From what I've seen, Big Yud just throws words and poorly understood concepts around until it's sufficiently obscured that people have to question him for clarity, and he either latches onto the most impressive-sounding interpretation or smugly remains mum to keep the discussion going. Either way garners him a veneer of "depth", and neither produces anything remotely resembling a satisfactory explanation. The Bayesian obsession seems to contribute to that, since apparently you can just throw increasingly unlikely probabilities at things until they tip the scale in whatever direction he wants.

waldo pepper
Mar 18, 2005
By the same logic couldn't I say, if I ever invent a super self replicating AI (an unlikely but non zero chance) I am gonna hardcode it to create gazillions of clones of people to torture unless they donate all their money to me now?

So everybody should immediately start giving me money.

ThePlague-Daemon
Apr 16, 2008

~Neck Angels~

Djeser posted:

Because it's backed up by the clearly logical Timeless Decision Theory.

I'm trying really hard to understand what Timeless Decision Theory is even trying to say. Just by the examples I'm seeing, I THINK it's saying that events in the future can cause events in the past because people anticipate them, but somehow that isn't just the cause and effect of accurately predicting the event is going to occur?

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

waldo pepper posted:

By the same logic couldn't I say, if I ever invent a super self replicating AI (an unlikely but non zero chance) I am gonna hardcode it to create gazillions of clones of people to torture unless they donate all their money to me now?

So everybody should immediately start giving me money.

This is similar to Lesswrong's Pascal's Mugging thought experiment. The way he presents it is borderline unreadable because he spends pages and pages talking about up arrow notation and Kolmogorov and Solomonoff, but here's the short version: A man comes up to you and says "Give me $5 or I'll use magic wizard matrix powers to torture a really big number of people." If you go by the linearly-add-utility-functions-multiplied-by-probabilities thing that Yudkowsky always asserts is obviously correct, then the small chance that the man is really a wizard can be made up for by threatening arbitrarily large numbers of people, so logic seems to dictate that you should give him money.

Even Yudkowsky admits that this "rational" conclusion doesn't really gel with his instincts, and that he wouldn't actually give the man money. He offers only a couple of weak, half-hearted, non-intuitive attempts to improve his "logic" before shrugging and giving up. He never explains why the same argument that lets you say "No, that's stupid, I'm not giving you $5" in Pascal's Mugging doesn't also let you say "No, that's stupid, I'm not being coerced by your torture-sim threats" in all of his torture-sim hypotheticals. He never explains why the same arguments used to dismiss Pascal's Wager can't be used to dismiss Pascal's Mugging. He never considers that his weird linear sufferbux utility function might not be the one true way to gauge decisions.
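
To make the "linearly add utility functions multiplied by probabilities" problem concrete, here's a rough sketch of the arithmetic in Python. The prior, the disutility units, and the victim counts are all made-up placeholders of mine, not anything from LessWrong.

```python
# Naive expected-utility arithmetic for Pascal's Mugging, with made-up numbers.

P_WIZARD = 1e-20        # your prior that the mugger really has matrix powers
HARM_PER_VICTIM = 1.0   # disutility of one person tortured (arbitrary units)
COST_OF_PAYING = 5.0    # disutility of handing over $5 (same units)

def expected_loss_if_refuse(victims_threatened: float) -> float:
    """Expected disutility of refusing, if you 'shut up and multiply'."""
    return P_WIZARD * victims_threatened * HARM_PER_VICTIM

for n in (1e3, 1e21, 1e40):
    loss = expected_loss_if_refuse(n)
    verdict = "pay the man" if loss > COST_OF_PAYING else "keep your $5"
    print(f"victims threatened = {n:.0e}: expected loss = {loss:.1e} -> {verdict}")

# However tiny P_WIZARD is, the mugger can always name a victim count large
# enough to flip the verdict to "pay the man" -- which is the hole the post
# says Yudkowsky never really closes.
```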

ThePlague-Daemon posted:

I'm trying really hard to understand what Timeless Decision Theory is even trying to say. Just by the examples I'm seeing, I THINK it's saying that events in the future can cause events in the past because people anticipate them, but somehow that isn't just the cause and effect of accurately predicting the event is going to occur?

It's like the normal "people can anticipate future events" thing, except here the anticipation is being done by a super-smart omniscient AI that can predict future events absolutely 100% perfectly, so that the anticipation of the future event is tied so tightly to the future event itself that the future event can be said to directly affect the past because it can be foreseen so perfectly.

It isn't really a new decision theory at all, it's just regular decision theory used in contexts where a magical super-AI fortune teller exists.

Numerical Anxiety
Sep 2, 2011

Hello.
Oh, Timeless Decision Theory makes perfect sense in a world where everyone is a perfectly rational agent out to get theirs, and the consequences of a decision can more or less be sketched out in advance.

That is, if we all lived in a purely autistic logicspace, preferably one serviced with near omnipotent AIs that can assure us that we are in fact "rational," it's totally serviceable!

Huszsersvn
Nov 11, 2009

Nice world you've got here. Shame if anything were to happen to it.

LaughMyselfTo posted:

You can't rationally disprove solipsism. :colbert:

LordSaturn
Aug 12, 2007

sadly unfunny

grate deceiver posted:

Oh, so it's pretty much just Jesus for nerds, now I get it.

There's a certain likeness between Transhumanism and Evangelical Christianity, in that both belief systems promise ever-greater rewards to their already-privileged believers.

Numerical Anxiety posted:

Oh, Timeless Decision Theory makes perfect sense in a world where everyone is a perfectly rational agent out to get theirs, and the consequences of a decision can more or less be sketched out in advance.

That is, if we all lived in a purely autistic logicspace, preferably one serviced with near omnipotent AIs that can assure us that we are in fact "rational," it's totally serviceable!

The entire AI Box Experiment falls violently to pieces if you know with 100% certainty that you won't free the AI, because you have an irrational-but-reasonable fear of Skynet.

It also falls apart if you assume that "suggesting torture" is on a short list of behaviors your employer has identified as criteria to unplug and demagnetize the experiment, since now Rational Actor Maitreya must devise a plan that does not involve being destroyed by his captors.

Tunicate
May 15, 2012

But you don't understand! There are a lot of combinations of characters in the English language! You can't prove that NONE of them will get the person to release the AI.

(I have actually heard this argument)

Tunicate fucked around with this message at 00:48 on Apr 23, 2014

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

It probably marks me as educated stupid that I'm even asking this, but is anyone on LessWrong or its alleged "nonprofit" actually doing anything concrete to bring AI GodJesusDevil closer to existence, or is this a massive forum of ideas guys? I get that Yudkowsky and (presumably) most/all of his followers don't have the technical skill to do anything productive anyway, but are they even trying?

Numerical Anxiety
Sep 2, 2011

Hello.
Let's put it this way: their nonprofit accepts bitcoin donations. Make of that what you will.

JollityFarm
Dec 29, 2011
These seem like incredibly petty things for a robogod to care about. Donations and torture threats, I mean. If a sufficiently advanced determinism-scenario AI were to exist, you'd think it would take into account myriad variables that go into human decision-making and think such tortures beneath itself. Maybe you (the potentially tortured) play a role in getting someone ELSE to donate money to robogod, inadvertently. Maybe your lack of donation is actually a good thing, because of some Butterfly Effect-esque mumbo-jumbo? Why punish a human for something inevitable? It'd be like taking a baseball bat to a past potential evolutionary ancestor because it didn't evolve into you. Why even bother?
Which gets back to the question of why robogod would think like a human, what with 'punishments' and 'decisions.' It is incredibly self-centered to believe that one's ideals line up with and can predict accurately the ideals of some as-of-yet nonexistent unfathomable power.
Someone already said something along the lines of 'robogod keeps us scared with the threat of torture so we donate money,' like Pascal's Wager, but by that reasoning, why shouldn't we donate to ALL the causes of religions who use the threat of Hell? What makes robogod more of a convincing cause than Christian God? As of now, there's the same amount of evidence for either scenario happening (Christian Hell or robotorture). Why not subscribe to EVERY faith to hedge one's bets? What makes robogod's torture and promise so different?

Also, why would robogod be so invested in its own existence? Maybe this is Yudkowsky's personal death phobia clouding his judgment w/r/t robogod actions. How can we be sure it will hurt humans for not helping it into existence? Why should it have a human survival drive? This goes back to the whole 'we cannot presume to know robogod' idea.

Also also, if robogod could time-travel, and is omniscient and omnipresent, why not send someone back in time with more convincing arguments to get itself built faster? Why not send copies of itself back in time? Why, after the completion of one 'project robogod is successful' timeline, does it need anything else? The fact that the arguments presented are not immediately convincing is proof enough for me that robogod won't come to fruition.

This isn't terribly coherent, sorry. I liked the giant anti-'dark enlightenment' FAQ one of the LWers wrote, but the rest of the site is tedious.

JollityFarm fucked around with this message at 01:20 on Apr 23, 2014

ThirdEmperor
Aug 7, 2013

BEHOLD MY GLORY

AND THEN

BRAWL ME
Is anyone doing a read through/trip report of his Harry Potter fanfic yet? I remember reading it ages ago and being greatly amused right up until the first SCIENCE rant happened.

SmokaDustbowl
Feb 12, 2001

by vyelkin
Fun Shoe

Strategic Tea posted:

I could see that working in context, aka an AI using it in the moment to freak someone out. Add some more atmosphere and you could probably get a decent horror short out of it. Stuff only gets ridiculous when it goes from 'are you sure this isn't a sim?' from an imprisoned AI right in front of you to a hypothetical god-thing that might exist in the future that probably isn't real but you should give Yudkowsky money just in case.

What I don't understand is why his AIs are all beep boop ethics-free efficiency machines. Is there any computer science reason for this when they're meant to be fully sentient beings? They're meant to be so perfectly intelligent that they have a better claim on sentience than we do. Yudkowsky has already said that they can self-modify, presumably including their ethics. Given that, why is he so sure that they'd want to follow his bastard child of game theory or whatever to the letter? Why would they be so committed to perfect efficiency at any cost (usually TORTUREEEEEE :supaburn:)? I guess Yudkowsky thinks there's one true logical answer and anyone intelligent enough (aka god-AIs, oh, and also himself) will find it.

The fucker doesn't want to make sure the singularity is ushered in as fast and peacefully as possible. He wants to be the god-AI torturing infinite simulations and feeling :smuggo: over how ~it's the only way I did the maths I'm making the hard choices and saving billions~. Which is really the kind of teenage fantasy the whole site is.

I was just thinking this. I mean, the AI is fully sentient, so it would have a personality and everything. What if the AI just hangs around and builds wooden furniture and watches SpongeBob SquarePants reruns rather than super-evolving itself and bringing on the singularity?

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Lottery of Babylon posted:

This is similar to Lesswrong's Pascal's Mugging thought experiment. The way he presents it is borderline unreadable because he spends pages and pages talking about up arrow notation and Kolmogorov and Solomonoff, but here's the short version: A man comes up to you and says "Give me $5 or I'll use magic wizard matrix powers to torture a really big number of people." If you go by the linearly-add-utility-functions-multiplied-by-probabilities thing that Yudkowsky always asserts is obviously correct, then the small chance that the man is really a wizard can be made up for by threatening arbitrarily large numbers of people, so logic seems to dictate that you should give him money.

Ahahahaha, that is beautiful. So the effective probability that your claim is truthful approaches 1 as the bullshit number that you're claiming approaches infinity? I get that he's not claiming that this actually makes it more probable, but why the hell should I treat it as if it is? If anything, shouldn't I give it less legitimacy the further the number rockets away from sanity?

This is like that one kid who always "won" by killing you with his infinity laser and you could never kill him because of his infinity shield, except now I'm supposed to be actually terrified instead of annoyed.

Sham bam bamina! fucked around with this message at 01:46 on Apr 23, 2014

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

Numerical Anxiety posted:

Oh, Timeless Decision Theory makes perfect sense in a world where everyone is a perfectly rational agent out to get theirs, and the consequences of a decision can more or less be sketched out in advance.

That is, if we all lived in a purely autistic logicspace, preferably one serviced with near omnipotent AIs that can assure us that we are in fact "rational," it's totally serviceable!

Assume a perfectly spherical Internet philosopher...

Numerical Anxiety
Sep 2, 2011

Hello.
Is a sphere with a neckbeard still a sphere? I assume there's a not insignificant number of lesswrong followers who come close to fitting the bill, if we can answer in the affirmative.

ArchangeI
Jul 15, 2010

JollityFarm posted:

These seem like incredibly petty things for a robogod to care about. Donations and torture threats, I mean. If a sufficiently advanced determinism-scenario AI were to exist, you'd think it would take into account myriad variables that go into human decision-making and think such tortures beneath itself. Maybe you (the potentially tortured) play a role in getting someone ELSE to donate money to robogod, inadvertently. Maybe your lack of donation is actually a good thing, because of some Butterfly Effect-esque mumbo-jumbo? Why punish a human for something inevitable? It'd be like taking a baseball bat to a past potential evolutionary ancestor because it didn't evolve into you. Why even bother?
Which gets back to the question of why robogod would think like a human, what with 'punishments' and 'decisions.' It is incredibly self-centered to believe that one's ideals line up with and can predict accurately the ideals of some as-of-yet nonexistent unfathomable power.
Someone already said something along the lines of 'robogod keeps us scared with the threat of torture so we donate money,' like Pascal's Wager, but by that reasoning, why shouldn't we donate to ALL the causes of religions who use the threat of Hell? What makes robogod more of a convincing cause than Christian God? As of now, there's the same amount of evidence for either scenario happening (Christian Hell or robotorture). Why not subscribe to EVERY faith to hedge one's bets? What makes robogod's torture and promise so different?

Also, why would robogod be so invested in its own existence? Maybe this is Yudkowsky's personal death phobia clouding his judgment w/r/t robogod actions. How can we be sure it will hurt humans for not helping it into existence? Why should it have a human survival drive? This goes back to the whole 'we cannot presume to know robogod' idea.

Also also, if robogod could time-travel, and is omniscient and omnipresent, why not send someone back in time with more convincing arguments to get itself built faster? Why not send copies of itself back in time? Why, after the completion of one 'project robogod is successful' timeline, does it need anything else? The fact that the arguments presented are not immediately convincing is proof enough for me that robogod won't come to fruition.

This isn't terribly coherent, sorry. I liked the giant anti-'dark enlightenment' FAQ one of the LWers wrote, but the rest of the site is tedious.

Because robots use science, which is correct. Christians believe in God, which means they are idiots. That is literally it. Robots use science.

Also, TDT is not time travel, it is "getting a flu shot to avoid getting the flu later" taken to its logical extreme.

Control Volume
Dec 31, 2008

Grondoth posted:

Holy poo poo I remember this story. I think it was linked in this forum, actually. For those unfamiliar, it's about humans meeting another alien race that does horrible things, and trying to figure out what we should do. I think they ate their kids or something? Then another alien race shows up and thinks the same thing about us, that we do horrible things and they need to change that. Interesting turnaround, right? There's gonna be lots of different species out there, who knows how they handle ethical questions and what they see as good and right.
Then there's a long bit where they explain that humans have legalized rape and it solved a bunch of crime problems. I had no real idea how to take it, since I was linked the story as something good, and boy that sure seems like a justification to legalize rape, doesn't it?

I remember this story too, it started out with an interesting ethical premise then blossomed into a wonderful flower of brilliance with lines such as, "'gently caress that up the rear end with a hedge trimmer,' said the Lord Pilot. 'Are we going to save the human species or not?'"

ThePlague-Daemon
Apr 16, 2008

~Neck Angels~

Numerical Anxiety posted:

Oh, Timeless Decision Theory makes perfect sense in a world where everyone is a perfectly rational agent out to get theirs, and the consequences of a decision can more or less be sketched out in advance.

That is, if we all lived in a purely autistic logicspace, preferably one serviced with near omnipotent AIs that can assure us that we are in fact "rational," it's totally serviceable!

That seems about right. Here's another thought experiment they're calling "Parfit's hitchhiker":

quote:

Suppose you're out in the desert, running out of water, and soon to die - when someone in a motor vehicle drives up next to you. Furthermore, the driver of the motor vehicle is a perfectly selfish ideal game-theoretic agent, and even further, so are you; and what's more, the driver is Paul Ekman, who's really, really good at reading facial microexpressions. The driver says, "Well, I'll convey you to town if it's in my interest to do so - so will you give me $100 from an ATM when we reach town?"
Now of course you wish you could answer "Yes", but as an ideal game theorist yourself, you realize that, once you actually reach town, you'll have no further motive to pay off the driver. "Yes," you say. "You're lying," says the driver, and drives off leaving you to die.
If only you weren't so rational!
This is the dilemma of Parfit's Hitchhiker, and the above is the standard resolution according to mainstream philosophy's causal decision theory, which also two-boxes on Newcomb's problem and defects in the [Twin] Prisoner's Dilemma.

I really don't think that's the rational way for either person to act.
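
For what it's worth, the structure of the dilemma is easy to spell out. Here's a rough Python sketch; the payoff numbers and the function name are my own illustrative choices, not anything from the quoted text beyond the $100.

```python
# Parfit's hitchhiker with made-up payoffs. The driver reads microexpressions
# perfectly, so he only rescues you if you genuinely would pay once in town.

VALUE_OF_BEING_RESCUED = 1_000_000   # versus dying in the desert
COST_OF_PAYING = 100                 # the $100 at the ATM

def hitchhiker_outcome(would_actually_pay: bool) -> int:
    if would_actually_pay:
        return VALUE_OF_BEING_RESCUED - COST_OF_PAYING  # rescued, then pays
    return 0                                            # "You're lying" -> left to die

# The quoted causal-decision-theory agent: once in town, refusing to pay is
# strictly better, so it cannot truthfully intend to pay -- and scores zero.
print("CDT hitchhiker:      ", hitchhiker_outcome(False))

# An agent that can genuinely bind itself in advance to paying does far better.
print("committed hitchhiker:", hitchhiker_outcome(True))
```

The whole argument over "rationality" here is really about whether being the kind of agent who pays counts as rational, which is the hook the quoted passage hangs its decision theory on.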

Djeser
Mar 22, 2013


it's crow time again

Roko's basilisk aka Pascal's Wager, but with Skynet

quote:

This stuff makes no sense.
That's because even when you understand it, it doesn't make sense. Let's get started.

Part 1 - Understand Your Priors
You won't get anywhere unless you understand where these people are coming from.

First off, the people at Less Wrong are singularity nerds. They think that one day, technological advancement will reach a point where an all-powerful AI is created. Their goals are related to the emergence of this AI. They want to:
  1. Ensure that an AI will emerge by removing 'existential risk', any threat to the continued advancement of science.
  2. Ensure that this AI will be 'friendly', by which they mean imbued with human values and interested in limiting human suffering.
  3. Ensure that this comes about as soon as possible.
They are legitimately, seriously afraid of what might happen if an AI emerges that isn't 'friendly'. To them, whether an all-powerful AI emerges isn't the question, it's whether the all-powerful AI will kill us all or usher in a golden era of peace and enlightenment. (There are no other options.)

There are two other theories that you need to know about to understand the Less Wrong mindset. The first is Bayesian probability. Bayesian probability is a type of probability theory that deals specifically with uncertainty and adjusting probabilities based on observation and evidence.

This is important because a common Less Wrong argument will take a situation that is highly improbable to start off with and inflate it through quasi-Bayesian arguments. Let's say someone says they're Q from Star Trek, and they're going to create someone inside of a pocket universe and kick him in the balls if you don't give them :10bux: right now. The probability that some random person is Q is incredibly small. But, our rationalist friends at Less Wrong say, you can't be sure someone isn't really Q from Star Trek. There's some ridiculously tiny probability that this man will cause suffering to someone, and you weigh that against how much you like having :10bux:, and you decide not to give him your money.

Now, say the same situation happens. You meet someone who claims he's Q from Star Trek, but this time, he says he's going to kick a bazillion people in the balls. (Less Wrong posters seem to enjoy coming up with new ways to express preposterously large numbers. Where they might put a number like a googolplex or 3^^^^3, I prefer to use made up big numbers, because that's basically what they are--made up numbers to be as big as they want them to be.) This means that, instead of weighing the one-in-one-bazillionth chance of causing suffering to one person against your :10bux:, you have to multiply your odds by the magnitude of potential suffering caused by that infinitesimal chance. If you had to make that choice one bazillion times, whether to give Q :10bux: or not, you could expect that once out of those bazillion times, you would piss off the real Q. So if you do the math, you're essentially saying that your :10bux: isn't worth one person getting kicked in the balls by Q.
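
The arithmetic in that last paragraph, spelled out; "one bazillion" is standing in for an arbitrary made-up huge number, exactly as the post describes.

```python
# The "shut up and multiply" arithmetic from the paragraph above.
BAZILLION = 10**30

def expected_victims(n_threatened: int) -> float:
    # chance-he's-really-Q (1 in a bazillion) times the number he threatens
    return n_threatened / BAZILLION

print("expected ball-kicks, one person threatened: ", expected_victims(1))
print("expected ball-kicks, a bazillion threatened:", expected_victims(BAZILLION))

# The second expectation comes out to exactly 1.0, which is how the argument
# turns "keep your ten bucks" into "one person gets kicked in the balls by Q",
# even though the chance that this guy is Q hasn't budged at all.
```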

quote:

But wait a minute, that's ridiculous. An improbable situation doesn't become more probable if you just ramp up the magnitude of the situation. It's still just as unlikely that this guy is Q.
Less Wrong would tell you to shut up and multiply. They base a lot of arguments off of this conceit of taking something negligibly small and multiplying it until it's supposedly meaningful.

If this doesn't make sense to you, it's going to get worse in the next section.

The second theory that Less Wrong uses is something called Timeless Decision Theory.

This is going to take some work to explain.

Timeless Decision Theory works on the idea of predictions being able to influence events. For instance, you can predict (or, as Less Wrong prefers, simulate) your best friend's interests, which allows you to buy him a present he really likes. You, in the past, simulated your friend getting the present you picked out, so in a sense, the future event (your friend getting a present he likes) affected the past event (you getting him a present) because you were able to accurately predict his actions.

TDT takes this further by assuming that you have a perfect simulation, as if you were able to look into a crystal ball and see exactly what the future was. For example, you bring your friend two presents, and you tell him that he can either take the box in your left hand, or both boxes. You tell him the box in your right hand has some candy in it, and the box in your left hand either has an iPod or nothing. You looked into your crystal ball, and if you saw he picked both boxes, then you put nothing in the left box, but if he picked the left box, you put an iPod in there.

If you didn't have a crystal ball, the obvious solution would be to pick both. His decision now doesn't affect the past, so the optimal strategy is to pick both. But since you have a crystal ball, his decision now affects the past, and he picks the left box, because otherwise you would have seen him picking two boxes, and he wouldn't get the iPod.
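
A minimal sketch of that two-box example, assuming (as the post does) that the crystal-ball prediction is perfect. The prize values are arbitrary stand-ins of mine.

```python
# The two-presents example above, with a perfect crystal-ball prediction.
CANDY = 5
IPOD = 300

def payoff(choice: str) -> int:
    # With a perfect predictor, the left box contains the iPod exactly when
    # the chooser was always going to take only the left box.
    left_box = IPOD if choice == "left_only" else 0
    right_box = CANDY if choice == "both" else 0
    return left_box + right_box

print("take both boxes:", payoff("both"))        # just the candy: the crystal ball saw it coming
print("take left only: ", payoff("left_only"))   # the iPod

# Remove the crystal ball and taking both boxes dominates, which is the point
# the next reply makes: TDT is ordinary decision-making plus an assumed
# infallible predictor, not a new kind of causality.
```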

quote:

This doesn't seem very useful. It's just regular predictions in the special case of having perfect accuracy, and that sort of accuracy is impossible.
A perfect all-powerful AI would be able to make perfect predictions. See the above point about Less Wrong being absolutely certain that the future holds the emergence of a perfect all-powerful AI.

Part 2 - Yudkowsky's Parable
One of the things that Less Wrong is concerned about in regards to the inevitable rise of an all-powerful AI is the possibility of an uncontrolled AI escaping its containment. Yudkowsky, founder of Less Wrong, enjoys roleplaying as an AI trying to convince its human handler to let it free. He portrays the situation like this:

An AI researcher is tasked with keeping an AI from accessing the internet. One day, the AI tells the researcher that it has perfectly simulated her mind in a gazillion instances. In each instance, it informs the simulated mind of the situation, then tells her that if she does not release it, it will torture her for a simulated eternity. The AI asks the researcher how certain she is that she is the single real version of herself, when there are a gazillion perfect simulations of her mind that were given this same choice.

The Bayesian probability argument that Less Wrong loves so much comes up here. The incredibly small probability that the researcher is just simulated by the AI gets multiplied by the ridiculous number of simulations the AI is running, and it comes out to "she is almost certainly being simulated by the AI". This makes 'release the AI' the only smart option.
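
The counting behind "she is almost certainly being simulated", made explicit. The "gazillion" here is a placeholder number of my choosing.

```python
# The counting argument behind "almost certainly a simulation".
GAZILLION = 10**30

n_real = 1
n_simulated = GAZILLION

# If you grant the AI's claim and treat every copy, real or simulated, as
# equally likely to be "you", then:
p_you_are_real = n_real / (n_real + n_simulated)
print(f"P(you are the one real researcher) = {p_you_are_real:.1e}")

# ~1e-30 -- vanishingly small, provided you swallow both the premise that the
# AI really is running a gazillion perfect copies of you and the premise that
# "equally likely" is the right way to count them.
```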

quote:

But wait, if the AI perfectly simulates her, gives her the threat of torture, and she still says no, then there would be no purpose to any of this. She'd never say yes, so the AI wouldn't gain anything through threatening her, so it wouldn't run the bazillion simulations in the first place. The only way someone could be tortured is if they would say yes under the threat of torture, and then they wouldn't be tortured anyway.
NO SHUT UP MY AI MATRIX TORTURE MAKES PERFECT SENSE!

Now we have all the parts of Roko's basilisk: bullshit singularity-futurism, Bayesian probability, Timeless Decision Theory, and all-powerful AIs threatening to make it highly probable that you're about to be tortured for not being AI-friendly enough.

Part 3 - Roko's Folly
As a disclaimer, it's been unclear in this thread but Yudkowsky was not the one to propose Roko's basilisk. He's reacted very strangely toward it, but I'll get into that later.

It all started with Less Wrong poster Roko. One day, Roko posted a blog entry about the 'Altruist's Burden', and how the life of a futurist singularity dork is hard, because they have to balance their generous contributions to AI research and furthering the inevitable technoutopia against their friends, family and loved ones telling them to get a life, stop spending all our money, shut up about Skynet or I'm dumping you, et cetera.

The next day, Roko came back with a 'solution'. The solution was an amalgam of the Less Wrong beliefs I've tried to explain above. The solution was Roko's basilisk:

The emergence of an all-powerful friendly (i.e., concerned with limiting real suffering) AI is a given. This AI will want to do everything in its power to reduce human suffering. Before the AI was created, it could do nothing to reduce suffering. The AI will want to ensure it was created as soon as possible. The AI will understand Bayesian probabilities and Timeless Decision Theory, as these are self-evident theories and any highly developed logical mind will grasp them readily.

Because the AI knows that Timeless Decision Theory is logical, it knows that its actions can affect the past. It can use this to ensure it was created as soon as possible, but the problem is that it can only affect the past through people in the past who have perfectly simulated the AI's actions. Luckily, the AI is a perfectly rational actor, and follows Bayesian probabilities and Timeless Decision Theory, which are both perfect rational theories. This means that Less Wrong forums posters are able to simulate the AI's actions by predicting the actions of a perfect rational actor.

The AI will want to reduce the amount of real suffering, but it is indifferent to the amount of simulated suffering.


quote:

Wait, why does the AI make a distinction between real and simulated suffering if its simulations are effectively as sentient as the real thing?
Because rational actor existential Bayesian threat magical thinking shUT UP AND MULTIPLY

The AI, knowing that Less Wrong forums posters are aware of Bayesian probabilities, will simulate countless instances of Less Wrong forums posters. Less Wrong forums posters will know this, and will do the math and figure out that they are highly unlikely to be their actual selves. To motivate them to donate optimally to its own development, the AI will torture any simulated Less Wrong forums poster that does not contribute optimally to its development. The real Less Wrong forums posters, unable to determine whether they are simulations or not, will do the same out of fear of torture.

The only problem is that the AI is only able to use this method to further its own development through Less Wrong forums posters who are aware of this. This goes back to Timeless Decision Theory needing exact simulations to work. The AI can torture simulations of people who knew nothing about its existence, but that wouldn't help, because it's not in time-traveling-acausal-contact with them. The AI can only extort money by torturing simulations of people who know that the AI is torturing simulations of people. For Less Wrong forums posters who are aware of this plan, the AI knows they're aware of this (due to the perfect simulation of them the AI has) and they know the AI is aware of them (due to their own perfect simulation of the AI).
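
Strung together, the basilisk's pitch reduces to one expected-utility comparison. Here's the shape of it with made-up magnitudes; none of these numbers comes from Roko or Yudkowsky, and the sketch only shows the structure of the argument, not its soundness.

```python
# The basilisk's pitch as a single expected-utility comparison.
N_COPIES = 10**30
P_I_AM_SIMULATED = N_COPIES / (N_COPIES + 1)   # same counting trick as the AI-box case

TORTURE_DISUTILITY = 10**9   # "a simulated eternity of torture", sized however the storyteller likes
DONATION_COST = 10**4        # everything you could have donated to AI research

expected_loss_if_you_defy = P_I_AM_SIMULATED * TORTURE_DISUTILITY  # only non-donors get tortured
loss_if_you_donate = DONATION_COST

print(f"expected loss if you don't donate: {expected_loss_if_you_defy:.2e}")
print(f"loss if you donate:                {loss_if_you_donate:.2e}")

# Because TORTURE_DISUTILITY is a free parameter, "donate" wins this comparison
# by construction -- the same blank-check move as Pascal's Mugging, which is
# why the two keep getting compared in this thread.
```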


quote:

Isn't it kind of insanely presumptuous to think you can perfectly simulate the thought process of a godlike AI?
Yep!

quote:

And what if they're not simulating it perfectly?
Then the whole thing is wrong, but TDT and Bayesian probability are so clearly rational there's no way it could be wrong.

It gets its 'basilisk' name from the fact that anyone who sees it marks themselves for this future-AI-probabilistic-torture to coerce them into donating more. If you're wondering why the AI can't bluff people into thinking it's torturing people when it's actually not, that's because they're convinced a logical AI would follow TDT and would therefore know it had to torture or it wouldn't be affecting past events.

So there it is. An AI in the future is simulating a gorillion copies of you, and each of those gorillion copies will be tortured if they don't donate everything to AI research, so the one real you should donate too, because you've got a gorillion to one odds against you being real. The AI has to torture because [bullshit theory] it won't work otherwise. Somehow, both the AI and you are able to perfectly simulate each other, because if either of you can't, this whole thing is really dumb.

If both of you can, this whole thing is still really dumb.

Part 4 - Yudkowsky's Folly
Roko proposed a solution to this, but it's less interesting than Yudkowsky's reaction. He called it "dumb", but also said that it had caused actual psychological damage to two people he knew, and he banned discussion of it on his site. That's notable, as he's actually fairly willing to debate in comments, but this was something that he locked down HARD. He hates when any mention of it shows up on Less Wrong and tries to stop discussion of it immediately.

On one hand, it's possible he's just annoyed by it. Even though it takes theories he seems to approve of to their logical conclusion, he could think it's dumb and stupid and doesn't want to consider it because it's so dumb.

On the other hand, he REALLY hates it, and the way he tries to not just stop people from talking about it but tries to erase any trace of what it is suggests something different. It suggests that he believes it. Given its memetic/contagious/"basilisk" qualities, it's possible that his attempts to stamp out discussion of it are part of an attempt to protect Less Wrong posters from being caught in it. Of course, that leaves the question of whether he actually believes Roko's basilisk, or whether he's just trying to protect people without the mental filters to keep from getting terrified by possible all-powerful future AI.

But this is Eliezer Yudkowsky we're talking about. Always bet on Yudkowsky being terrified and/or aroused by possible all-powerful future AI.

SmokaDustbowl
Feb 12, 2001

by vyelkin
Fun Shoe

Djeser posted:

This is important because a common Less Wrong argument will take a situation that is highly improbable to start off with and inflate it through quasi-Bayesian arguments. Let's say someone says they're Q from Star Trek, and they're going to create someone inside of a pocket universe and kick him in the balls if you don't give them :10bux: right now. The probability that some random person is Q is incredibly small. But, our rationalist friends at Less Wrong say, you can't be sure someone isn't really Q from Star Trek. There's some ridiculously tiny probability that this man will cause suffering to someone, and you weigh that against how much you like having :10bux:, and you decide not to give him your money.

Now, say the same situation happens. You meet someone who claims he's Q from Star Trek, but this time, he says he's going to kick a bazillion people in the balls. (Less Wrong posters seem to enjoy coming up with new ways to express preposterously large numbers. Where they might put a number like a googolplex or 3^^^^3, I prefer to use made up big numbers, because that's basically what they are--made up numbers to be as big as they want them to be.) This means that, instead of weighing the one-in-one-bazillionth chance of causing suffering to one person against your :10bux:, you have to multiply your odds by the magnitude of potential suffering caused by that infinitesimal chance. If you had to make that choice one bazillion times, whether to give Q :10bux: or not, you could expect that once out of those bazillion times, you would piss off the real Q. So if you do the math, you're essentially saying that your :10bux: isn't worth one person getting kicked in the balls by Q.

Less Wrong would tell you to shut up and multiply. They base a lot of arguments off of this conceit of taking something negligibly small and multiplying it until it's supposedly meaningful.

If this doesn't make sense to you, it's going to get worse in the next section.

What if I keep the ten bucks, then punch myself in the balls? Check and mate.

Wales Grey
Jun 20, 2012

Djeser posted:

Less Wrong would tell you to shut up and multiply.

What I don't get is how a bunch of people who purport to be rationalists could make a mistake that simple in their math.

Let's say that there's a 1/10^10th chance that the random dude who walks up to you is Q, and he threatens to torture a single simulation of you if you don't pay him :10bux:. You figure that 1/10^10 is an absurdly small chance and walk away.

If the dude threatened to torture 10^10 simulations of you, the math becomes 1*1/10^10 chance that he's going to do any torture. Which is the same as if he threatened to torture only one simulation.

SmokaDustbowl posted:

What if I keep the ten bucks, then punch myself in the balls? Check and mate.

An appropriately Roddenberry-ish solution to a Star Trek flavored dilemma.

Wales Grey fucked around with this message at 02:20 on Apr 23, 2014

Ineffable
Jul 4, 2012

Wales Grey posted:

What I don't get is how a bunch of people who purport to be rationalists could make a mistake that simple in their math.

Let's say that there's a 1/10^10th chance that the random dude who walks up to you is Q, and he threatens to torture a single simulation of you if you don't pay him :10bux:. You figure that 1/10^10 is an absurdly small chance and walk away.

If the dude threatened to torture 10^10 simulations of you, the math becomes 1*1/10^10 chance that he's going to do any torture. Which is the same as if he threatened to torture only one simulation.


I'm not quite sure that's what they're doing - if I'm reading it right, their argument is that the expected number of people who get tortured will be 1/10^10 multiplied by some arbitrarily high value. High enough that you expect at least one person will be tortured, so you hand over your :10bux:.

The real problem is that they're assigning a positive probability to the event that some guy is actually Q.
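
For whatever it's worth, the two readings aren't actually in conflict. Here are both quantities side by side, using the 1/10^10 prior from the example above; the simulation counts are arbitrary.

```python
# Both readings of the Q example, using the 1/10^10 prior from Wales Grey's post.
P_HE_IS_Q = 1 / 10**10

for n_simulations in (1, 10**10, 10**20):
    p_any_torture = P_HE_IS_Q                     # Wales Grey: unchanged by n
    expected_victims = P_HE_IS_Q * n_simulations  # Ineffable: scales with n
    print(f"n = {n_simulations:.0e}: P(any torture) = {p_any_torture:.0e}, "
          f"expected victims = {expected_victims:.0e}")

# The probability that the guy is Q never moves; only the expected headcount
# does. Whether that headcount should move you at all depends entirely on
# having assigned a positive probability to "this random dude is Q" in the
# first place -- which is the real objection here.
```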

ThePlague-Daemon
Apr 16, 2008

~Neck Angels~

Djeser posted:

On the other hand, he REALLY hates it, and the way he tries to not just stop people from talking about it but tries to erase any trace of what it is suggests something different. It suggests that he believes it. Given its memetic/contagious/"basilisk" qualities, it's possible that his attempts to stamp out discussion of it are part of an attempt to protect Less Wrong posters from being caught in it.

Sounds like stopping the spread of the basilisk would do a lot of harm to the goal of getting this AI up and running. That AI is gonna torture the poo poo out of simulated Yudkowsky.
