The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

Strategic Tea posted:

If a panel can't tell the difference between it and regular LessWrong posts, it should be considered sentient! :pseudo:

You're assuming that LessWrong posters are actually sentient here. That seems like a mistake to me.


Runcible Cat
May 28, 2007

Ignoring this post

Fitzdraco posted:

Except the very fact that it has to ask to be let out implies that you're not the sim,

Ah, but you see, it knows you'd think that, so it asks sim-you anyway just to gently caress with your simulated head.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Wales Grey posted:

I can't work my head around this theory because how would simulating torture cause a 'friendly' AI to come into existence sooner?

It doesn't make sense to you because you haven't drowned yourself in the kool-aid of Timeless Decision Theory.

Remember the basic scenario in Timeless Decision Theory: there are two boxes, and you can choose to take one or both of them. One always contains $1,000; the other contains $0 if a superintelligent AI predicted you would take both boxes and $1,000,000 if the AI predicted you would take only one box. The AI filled the boxes before you made your decision. However, the AI is so smart and so good at simulating you that it can predict your actions perfectly and cannot possibly have guessed wrong. And because the AI can see your future actions while it is still in the past, your actions in the future effectively influence the AI's actions in the past.
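To see why the one-boxers think their answer is forced, here's a rough sketch of the expected-value arithmetic (the dollar amounts are the ones from the setup above; the predictor's accuracy is a free knob, which Yudkowsky's version pins at exactly 1):

code:

# Rough sketch of the Newcomb's-problem payoffs behind Timeless Decision Theory.
# Assumes the predictor guesses correctly with probability `accuracy`;
# Yudkowsky's version sets accuracy = 1.

def expected_payoff(accuracy):
    small, big = 1_000, 1_000_000

    # One-box: you get the $1,000,000 only if the predictor correctly
    # guessed that you would one-box.
    one_box = accuracy * big

    # Two-box: you always get the $1,000, plus the $1,000,000 if the
    # predictor wrongly guessed that you would one-box.
    two_box = small + (1 - accuracy) * big

    return one_box, two_box

for acc in (1.0, 0.999, 0.5):
    print(acc, expected_payoff(acc))

# With a perfect predictor (accuracy = 1.0) one-boxing wins by a mile, which is
# why TDT treats your "future" choice as if it reaches back and fills the boxes.
# At coin-flip accuracy (0.5) two-boxing comes out ahead again.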

In other words: any sufficiently advanced technology is indistinguishable from time travel, and in particular the ability to predict the future allows the future to affect you. This sounds dumb and it basically assumes that we somehow reach the predictive power of Laplace's demon, but that's Yudkowsky's premise.

Now we move on to Roko's basilisk. If the AI weren't going to do any torturing, then we would have no motivation to donate to the Yudkowsky Wank Fund, so the AI's existence might be delayed because of course it can't arise soon enough without Yudkowsky's help. But if we thought it was going to cyber-torture our cyber-souls in cyber-hell for a trillion cyber-eternities because we didn't cyber-donate enough cyber-money (well, actually, just regular money) to Yudkowsky, then we might donate money out of fear. And since Timeless Decision Theory says that the future can affect the past, the AI should torture people in the future because it will make us donate more money in the past, bringing the AI into existence sooner. And since the AI is infinitely smart and infinitely "friendly", it doesn't matter how many cyber-souls it cyber-tortures because it's so great for the world that any amount of torture is worth making it exist even one minute sooner, so all this torture doesn't keep it from being friendly because it's still reducing overall suffering.

Now, you might notice a small flaw in this argument: it's all bullshit. But even if you accept Yudkowsky's weird internal logic, there's still a hole: the future can only directly influence the past if there is an agent in the past with the predictive power of Laplace's demon who can see the future with 100% accuracy and respond to it accordingly. Without that assumption, the future can't really affect the past; by the time the future arrives, the past has already happened, and no amount of torture will change what happened. Yudkowsky normally asserts that the AI has this sort of predictive power, but here the AI is stuck in the future, and it is we mortal humans in the past who are trying to predict its actions. But since we don't have super-AI-simulation power, we can't see the future clearly enough for the future to directly affect us now, so whether or not the AI actually tortures in the future has no impact on what we do today.

Yuddites don't see this because they're so used to the "time is nonlinear" moon logic of the Timeless Decision Theory they worship that they've forgotten its basic assumptions and don't understand even its internal logic. Since they're too busy pretending to be smart to actually be smart, they aren't capable of going back and noticing these gaps. And the flaws can't be revealed by discussion because Yudkowsky, in his hyper-rationalist wisdom, has forbidden and censored all discussion.

In other words, it doesn't make sense because it's dumb, but even if you bought into Yudkowsky's craziness it would still not make sense and it would still be dumb.

Boxman
Sep 27, 2004

Big fan of :frog:


Namarrgon posted:

Besides, if mankind's past track record is any indication, only an astonishingly small number of people will actually care about the simulation-torture of their simulation-selves.

I think the argument is that you could be a simulation now, and the AI has "simply" simulated your experiences up to this point. It's like the machine in The Prestige only without the benefit of getting to watch Michael Caine.

Wales Grey posted:

I can't work my head around this theory because how would simulating torture cause a 'friendly' AI to come into existence sooner?

The same way the threat of Hell makes a subset of Christians more devout. (Beaten, but eh).

This is all just nerd religion, and a lot more makes sense when you remember that.

Strategic Tea
Sep 1, 2012

I can't believe I'm defending this but the torture AI doesn't require time travel?

The idea is you asking yourself whether you can be totally sure you aren't a simulated intelligence. If you are in the AI's sim, it will torture you if you don't donate to Yudkowsky's cult. If you're drinking the kool aid, you figure there's a reasonable chance you are a simulation, so :10bux: isn't much of a price just to be sure.

Whether or not the AI in the future actually tortures anything doesn't matter - it's just a way to frighten deluded idiots. As far as I can tell, what scares Yudkowsky is that the idea takes off enough that the future AI picks up on it and actually goes through with the idea, pour encourager les autres or something. Hence he closes all the threads in a panic in case the AI gets ideas. This is where it all falls the gently caress apart because from the benevolent AI's point of view, everyone who coughed up for its creation has already done so. It can't retroactively scare more people, so actually doing all the simulation is cruel and a :spergin: waste of processing power to boot.

Or did I just come up with a stupid idea that makes slightly more sense than LessWrong's?

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?
There's also the problem that if people hear about the future AI torturing people and they don't follow this decision-making process, they may decide to deliberately not donate to AI research out of spite. If enough people do this, then even by stupid moon logic it would be in the AI's best interest to not torture people, so they won't delay its creation.
Therefore your best bet to avoid being tortured is to not donate.

That's the fun with this sort of stupid causality-defying logic: it works both ways.

LaughMyselfTo
Nov 15, 2012

by XyloJW
I'm now imagining a futuristic friendly AI going all Jesus and forgiving Yudkowsky and his followers for their irrationalities.

Djeser
Mar 22, 2013


it's crow time again

Strategic Tea posted:

I can't believe I'm defending this but the torture AI doesn't require time travel?

The idea is you asking yourself whether you can be totally sure you aren't a simulated intelligence. If you are in the AI's sim, it will torture you if you don't donate to Yudkowsky's cult. If you're drinking the kool aid, you figure there's a reasonable chance you are a simulation, so :10bux: isn't much of a price just to be sure.

Whether or not the AI in the future actually tortures anything doesn't matter - it's just a way to frighten deluded idiots. As far as I can tell, what scares Yudkowsky is that the idea takes off enough that the future AI picks up on it and actually goes through with the idea, pour encourager les autres or something. Hence he closes all the threads in a panic in case the AI gets ideas. This is where it all falls the gently caress apart because from the benevolent AI's point of view, everyone who coughed up for its creation has already done so. It can't retroactively scare more people, so actually doing all the simulation is cruel and a :spergin: waste of processing power to boot.

Or did I just come up with a stupid idea that makes slightly more sense than LessWrong's?

Your idea would make more sense, but the problem is that's not what LW believes. LW believes in time-traveling decisions through perfect simulations, which is why the AI specifically has to torture Sims.

And the AI will torture Sims, because time-traveling decisions are so clearly rational that any AI will develop, understand, and agree with that theory. So the AI will know that it can retroactively scare more people, which is why it will.

The reason Yudkowsky shuts down any discussion is that in order for the AI's torture to work, the people in the past have to be aware of and predicting the AI's actions. So if you read a post on the internet where someone lays out this theory where a future AI tortures you, you're now aware of the AI, and the AI will target you to coerce more money out of you.

e: Basically by reading this thread you've all doomed yourselves to possibly being in an AI's torture porn simulation of your life to try to extort AI research funds from your real past self :ohdear:

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?
The funny thing about this whole timeless-decision-making AI is that it's literally just Pascal's wager. "You may not think you're in an AI simulation that will torture you for all eternity if you displease it, but you might be. So give me all your money just in case."

The Vosgian Beast
Aug 13, 2011

Business is slow

Lottery of Babylon posted:

Yuddites don't see this because they're so used to the "time is nonlinear" moon logic of the Timeless Decision Theory they worship that they've forgotten its basic assumptions and don't understand even its internal logic. Since they're too busy pretending to be smart to actually be smart, they aren't capable of going back and noticing these gaps. And the flaws can't be revealed by discussion because Yudkowsky, in his hyper-rationalist wisdom, has forbidden and censored all discussion.

In all fairness, from what I've seen, Yudkowsky does allow people to disagree with him on stuff and a lot of the comments on his posts are quibbles with minor aspects of his posts. It's just that if you disagree with The Fundamental Principles of Less Wrong, you tend to get an endless stream of people telling you to keep reading Yudkowsky and Yvain posts until you get it and give in. Also people who post on LW in the first place tend not to hate LW. Yudkowsky doesn't seem to delete posts critical of him either.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Strategic Tea posted:

Whether or not the AI in the future actually tortures anything doesn't matter

That's exactly my point. The Timeless Decision Theory stuff all depends on the perfect future-predicting simulations because that's the only way to make what actually happens in the future matter to the past. Since whether the AI actually tortures anything doesn't matter, there's no reason for it to torture anything because it won't actually help, and the entire house of cards falls apart.

At least some versions of the Christian Hell have a theological excuse for Hell's existence even if nobody living were scared of it: sinners need to be punished or isolated from God or something. The "friendly" AI doesn't even have that excuse; without TDT, it has no motivation to torture anyone.

The Cheshire Cat
Jun 10, 2008

Fun Shoe

Djeser posted:

The reason Yudkowsky shuts down any discussion is that in order for the AI's torture to work, the people in the past have to be aware of and predicting the AI's actions. So if you read a post on the internet where someone lays out this theory where a future AI tortures you, you're now aware of the AI, and the AI will target you to coerce more money out of you.

e: Basically by reading this thread you've all doomed yourselves to possibly being in an AI's torture porn simulation of your life to try to extort AI research funds from your real past self :ohdear:

So he's so convinced of his idea being right that he's literally afraid of it? That seems like a mental-disorder level of confidence.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Vorpal Cat posted:

The funny thing about this whole timeless-decision-making AI is that it's literally just Pascal's wager. "You may not think you're in an AI simulation that will torture you for all eternity if you displease it, but you might be. So give me all your money just in case."

Yes, with the exception that Yudkowsky's come up with an absurd probabilistic justification for why it's extremely likely (instead of 'faith' or the other things normal people religions come up with). You think the chance that the AI will come into being is 1/1,000? Fine, he'll simulate 1,000,000 of you -- now the odds are even! You think the chance is 1/1,000,000? Fine, it's 1,000,000,000 now! Besides, we've got infinite time so logically an artificial superintelligence that could have been brought into being sooner by your donations is bound to come into being.

(If you think this isn't an accurate description of how probability works, then read LessWrong until your MUs are fixed.)

He's going to summon a goddamn infinity of you and make them all bleed unless you put the money in his bank account right now.

Krotera fucked around with this message at 21:13 on Apr 20, 2014

e X
Feb 23, 2013

cool but crude

Strategic Tea posted:

I can't believe I'm defending this but the torture AI doesn't require time travel?

The idea is you asking yourself whether you can be totally sure you aren't a simulated intelligence. If you are in the AI's sim, it will torture you if you don't donate to Yudkowsky's cult. If you're drinking the kool aid, you figure there's a reasonable chance you are a simulation, so :10bux: isn't much of a price just to be sure.

No, I understood it the same way. The crux is that you don't know if you are your present self or your future simulation self, so since the basis for the simulation is your own, current behavior, you have to behave at all times in a manner that won't lead to the AI torturing you. Hence the future influencing the past.

It's nerd hell, with the immortal soul that is indistinguishable from your mortal self replaced by a simulation of yourself that is indistinguishable from your mortal self, and thus it's you who is getting punished for actions long after your death.

And you also know it's unavoidable, since the probability should never reach 0 and you always have to take the possibility of a future AI simulation into account.

Improbable Lobster
Jan 6, 2012

"From each according to his ability" said Ares. It sounded like a quotation.
Buglord
My biggest question is who cares if cyber-you gets tortured? If you're just a simulation ripe for the torturing then it doesn't matter how much money cyber-you donates because cyber-you is fake and physical you can't be tortured by the AI.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Improbable Lobster posted:

My biggest question is who cares if cyber-you gets tortured? If you're just a simulation ripe for the torturing then it doesn't matter how much money cyber-you donates because cyber-you is fake and physical you can't be tortured by the AI.

You would care if you were cyber-you!

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Improbable Lobster posted:

My biggest question is who cares if cyber-you gets tortured? If you're just a simulation ripe for the torturing then it doesn't matter how much money cyber-you donates because cyber-you is fake and physical you can't be tortured by the AI.

The idea is that, since cyber-you is a copy of you exact enough to essentially be you, anything that cyber-you does, you would also do. So the distinction is erased - with that kind of 1:1 correspondence, it doesn't matter if "you" are a simulation or the original; either way, if you donate, the AI gets funded. (Let's just ignore the practical impossibility of knowing whether your donation would actually end up funding this hypothetical future AI specifically.)

The Cheshire Cat
Jun 10, 2008

Fun Shoe

Krotera posted:

You would care if you were cyber-you!

That seems to be the crux of the argument; the idea that maybe you're the simulation RIGHT NOW man :2bong:, so you'd better act as if you were because if you don't then you'll be tortured by the AI because the real you already made their choice.

It all seems to be based on the idea that given infinite time, AI as he defines it will eventually come into being, because he doesn't seem to understand that an infinite series of probabilities doesn't have to add up to 1. The fact that a sum of an infinite series can be a finite number is the foundation of calculus, but I guess he'd have to have gone to school for that. Also it doesn't account for the possibility that humanity might go extinct before inventing such an AI (and I actually imagine the odds of that happening are a LOT higher) regardless of what choice you make, because just throwing more money at a problem doesn't guarantee that problem will ever be solved.
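To put a number on that point, here's a toy sketch (the per-century probabilities are made up purely for illustration, not anyone's actual estimate) of how giving the AI a nonzero chance in every one of infinitely many centuries still leaves "never" with most of the probability:

code:

# Toy illustration of "infinitely many chances" not adding up to certainty.
# The per-century probabilities are invented for the example.

# P(the AI is first built in century n) = 0.1 * (1/2)**n, for n = 0, 1, 2, ...
# These outcomes are mutually exclusive, and the geometric series sums to
# 0.1 / (1 - 0.5) = 0.2, so "never" keeps the remaining 0.8.
p_ever = sum(0.1 * 0.5 ** n for n in range(1000))  # ~0.2 to machine precision

print(f"P(AI ever gets built)  ~ {p_ever:.4f}")      # ~0.2000
print(f"P(AI never gets built) ~ {1 - p_ever:.4f}")  # ~0.8000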

The Cheshire Cat fucked around with this message at 21:24 on Apr 20, 2014

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

The Cheshire Cat posted:

immediately above

Yeah, and it turns into the normal kind of Pascal's Wager when you realize all he has backing up his belief that the AI is going to come into being is faith.

Dabir
Nov 10, 2012

But he does realise that an infinite series of probabilities doesn't have to add up to 1; he actually believes that 0 and 1 can't be probabilities at all.

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?

Krotera posted:

Yes, with the exception that Yudkowsky's come up with an absurd probabilistic justification for why it's extremely likely (instead of 'faith' or the other things normal people religions come up with). You think the chance that the AI will come into being is 1/1,000? Fine, he'll simulate 1,000,000 of you -- now the odds are even! You think the chance is 1/1,000,000? Fine, it's 1,000,000,000 now! Besides, we've got infinite time so logically an artificial superintelligence that could have been brought into being sooner by your donations is bound to come into being.

(If you think this isn't an accurate description of how probability works, then read LessWrong until your MUs are fixed.)

He's going to summon a goddamn infinity of you and make them all bleed unless you put the money in his bank account right now.

But what if evil space Buddha reincarnated me into a hellish nightmare because by giving to the AI I was caring too much about worldly things? Also space Buddha will torture twice as many copies of me as the AI will. What's that you say, you think evil space Buddha is less likely than an AI? He will just torture more of you to make up the difference. He has an infinite amount of time in which to keep reincarnating you, after all.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Dabir posted:

But he does realise that an infinite series of probabilities doesn't have to add up to 1; he actually believes that 0 and 1 can't be probabilities at all.

He seems to misunderstand probability like this, if my understanding is correct (It might not be):

If the AI has a 1/1000 chance of existing, and it generates 1000 clones of you, then that's the same as running 1000 1/1000 trials -- i.e., you have a 63% chance of being an AI simulation. So, the more unlikely the AI is, the more it can simulate you to compensate.

In general, if his theory relies on something that's already extremely unlikely to happen, he can use large numbers to compensate indefinitely. That's why, if, for instance, he has a 1/1000 chance of saving 8,000 lives for every dollar we give him, he can claim that he's saving eight lives on the dollar (this is what he's doing now). This is a slightly different miscalculation from the one above (here he's confusing what 'expected value' and 'actual value' mean) but it's still obviously very stupid in about the same way, as anyone who has ever played the lottery can demonstrate.
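For reference, here's the arithmetic behind both of those, taken at face value (the 1/1000 and 8,000 figures are just the ones used in this post, not real estimates):

code:

# The two calculations described above, taken at face value.

# 1) "1000 copies at a 1/1000 chance" read as 1000 independent 1/1000 trials:
p_sim = 1 - (1 - 1 / 1000) ** 1000
print(f"{p_sim:.3f}")  # ~0.632, i.e. the 63% figure

# 2) The expected-value pitch: a 1/1000 chance of saving 8,000 lives per dollar,
#    quoted as if it were a guaranteed 8 lives per dollar.
print((1 / 1000) * 8_000)  # 8.0 -- an expectation, not an actual outcome,
                           # which is the lottery-ticket confusion being mocked.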

WRT the above: Space Buddha doesn't exist because there is only one truth and it's Yudkowsky's, as any superintelligent AI can attest.

The below: If I'm not mistaken that's an entirely different, but also very funny misunderstanding of probability on Yudkowsky's part.

Krotera fucked around with this message at 21:37 on Apr 20, 2014

Djeser
Mar 22, 2013


it's crow time again

Since you brought it up, I'm gonna cross-post for people who weren't following the TVT thread. This is in the vein of people who know poo poo showing how LW/Yudkowsky don't know poo poo:

Lottery of Babylon posted:

Forget Logic-with-a-capital-L; Yudkowsky can't even handle basic logic-with-a-lowercase-l. To go into more depth about the "0 and 1 are not probabilities" thing, his argument is based entirely on analogy:

LessWrong? More like MoreWrong, am I right? haha owned posted:

1, 2, and 3 are all integers, and so is -4. If you keep counting up, or keep counting down, you're bound to encounter a whole lot more integers. You will not, however, encounter anything called "positive infinity" or "negative infinity", so these are not integers.

Positive and negative infinity are not integers, but rather special symbols for talking about the behavior of integers. People sometimes say something like, "5 + infinity = infinity", because if you start at 5 and keep counting up without ever stopping, you'll get higher and higher numbers without limit. But it doesn't follow from this that "infinity - infinity = 5". You can't count up from 0 without ever stopping, and then count down without ever stopping, and then find yourself at 5 when you're done.

From this we can see that infinity is not only not-an-integer, it doesn't even behave like an integer. If you unwisely try to mix up infinities with integers, you'll need all sorts of special new inconsistent-seeming behaviors which you don't need for 1, 2, 3 and other actual integers.

He's committing a very basic logical fallacy here: false analogy. The reason infinity breaks the nice properties of the integers isn't that you can't count up to it, it's that it has no additive inverse, so including it means that you no longer have a mathematical group under addition, and so basic expressions like "x-y" cease to be well-defined when infinity is allowed. The fact that you can't "count up to it" isn't what makes it a problem; after all, the way infinity works with respect to addition is basically the same as the way 0 works with respect to multiplication, but nobody thinks 0 shouldn't be considered a real number just because of that (except, I suppose, Yudkowsky).

Naturally, this argument doesn't translate to probability. Expressions that deal with probability handle 0 and 1 no differently than they handle any other probability, and including 0 and 1 doesn't break any mathematical structure. (On the other hand, prohibiting their use breaks a shitload of things.)

Yudkowsky then goes on to argue that probabilities are inherently flawed and inferior and that odds are superior. This is enough to make any student with even a cursory exposure to probability raise an eyebrow, especially since Yudkowsky's reasoning is "The equation in Bayes' rule looks slightly prettier when expressed in terms of odds instead of probability". However, when you convert probability 1 into odds, you get infinity. And since we've seen before that infinity breaks the integers, infinite odds must be wrong too, and so we conclude that 1 isn't a real probability. A similar but slightly more asinine argument deals with 0.

The problem, of course, is that allowing "infinity" as an odds doesn't break anything. Since odds are always strictly non-negative, they don't have additive inverses and are not expected to be closed under subtraction; unlike the integers, infinity actually plays nice with odds. Yudkowsky ignores this in favor of a knee-jerk "Infinity bad, me no like".
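To make that concrete, here's a minimal sketch of Bayes' rule in both forms; a prior of 0 or 1 is a perfectly legal input to the probability form, it just (correctly) refuses to budge, and the odds form copes with infinity too:

code:

# Minimal sketch of Bayes' rule in probability form and in odds form.

def bayes_prob(prior, p_e_given_h, p_e_given_not_h):
    # Posterior P(H|E) from the standard probability form of Bayes' rule.
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    # The only genuinely undefined case is conditioning on evidence of total
    # probability 0 (denominator == 0), the "special case" complained about below.
    return numerator / denominator

def bayes_odds(prior_odds, likelihood_ratio):
    # Posterior odds = prior odds * likelihood ratio (the "prettier" form).
    return prior_odds * likelihood_ratio

print(bayes_prob(1.0, 0.8, 0.3))  # 1.0 -- certainty stays certain, nothing breaks
print(bayes_prob(0.0, 0.8, 0.3))  # 0.0 -- impossible stays impossible
print(bayes_prob(0.5, 0.8, 0.3))  # ~0.727 -- ordinary case

# Converting p = 1 to odds gives p / (1 - p) = infinity, and the odds form
# handles that fine too: inf times any positive likelihood ratio is still inf.
print(bayes_odds(float("inf"), 0.8 / 0.3))  # inf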

He concludes his proof that 0 and 1 aren't probabilities with this one-line argument:

quote:

Furthermore, all sorts of standard theorems in probability have special cases if you try to plug 1s or 0s into them—like what happens if you try to do a Bayesian update on an observation to which you assigned probability 0.

This is, of course, the equivalent of arguing that 0 isn't an integer because when you try to do addition with it you get a "special case" of 0+a=a. It's not a special case in the "What happens if you subtract infinity from infinity?" sense, it's a special case in the "Remark: if you happen to plug in this value, you'll notice that the equation simplifies really nicely into an especially easy-to-remember form" sense.

Why is Yudkowsky so upset about 0 and 1? It's his Bayes' rule fetish. Bayes' rule doesn't let you get from 0.5 to 1 in a finite number of trials, just like you can't count up from 1 to infinity by adding a finite number of times, therefore 1 is a fake probability just like infinity is a fake integer. And since to him probability theory is literally Bayes' rule and nothing else, that's all the evidence he needs.

It's also his singularity fetish. He believes that the problem with probability theory is that in the real world people usually don't properly consider cases like "OR reality is a lie because you're in a simulation being manipulated by a malevolent AI". What this has to do with the way theorems are proved in probability theory (which is usually too abstract for this argument against them to even be coherent, let alone valid) is a mystery.

He does not explain what probability he would assign to statements like "All trees are trees" or "A or not A". It would presumably be along the lines of ".99999, because there's a .00001 probability that an AI is altering my higher brain functions and deceiving me into believing lies are true."

He concludes:

quote:

The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1.

If you made a magical symbol to stand for "all possibilities I haven't considered", then you could marginalize over the events including this magical symbol, and arrive at a magical symbol "T" that stands for infinite certainty.

But I would rather ask whether there's some way to derive a theorem without using magic symbols with special behaviors. That would be more elegant. Just as there are mathematicians who refuse to believe in double negation or infinite sets, I would like to be a probability theorist who doesn't believe in absolute certainty.

For someone who has such strong opinions about probability theory, he doesn't seem to have the slightest idea what probability theory looks like. Literally the first thing you do in probability theory is construct "probability spaces" (or more generally measure spaces), which by definition comprise all the possible outcomes within the model you're using, regardless of whether those outcomes are "you roll a 6 on a die" or "an AI tortures your grandmother". It's difficult to explain in words why what he's demanding doesn't make sense because what he's demanding is so incoherent that it's virtually impossible to parse, let alone obey. At best, he's upset that his imagined version of the flavor text of the theorems doesn't match his preferred flavor text; at worst, this is the mathematical equivalent of marching into a literature conference and demanding that everyone discuss The Grapes of Wrath without assuming that it contains text.

(And yes, I know, there isn't a large enough :goonsay: in the world.)

Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!

Boxman posted:

I think the argument is that you could be a simulation now, and the AI has "simply" simulated your experiences up to this point. It's like the machine in The Prestige only without the benefit of getting to watch Michael Caine.

Ah I see.

So in Yudkowsky's mythology, who is the first person to utter this theory? Because it is something the average person doesn't come up with themselves, so the first to spread it out in the world would be an obvious time-travelling AI plant and a tremendous rear end in a top hat. The truly moral thing to do would be to never tell anyone.

Dr Pepper
Feb 4, 2012

Don't like it? well...

Vorpal Cat posted:

The funny thing about this whole timeless-decision-making AI is that it's literally just Pascal's wager. "You may not think you're in an AI simulation that will torture you for all eternity if you displease it, but you might be. So give me all your money just in case."

A lot of his stuff is religious concepts made incredibly stupid.

Take his "Torture or Dust" scenario. It is basically Jesus Christ filtered through :spergin: and Internet Atheism.

Except instead of the Son of God willingly sacrificing himself so all of mankind is saved from Eternal Damnation, it's a random guy suffering to spare us from a minor inconvenience.

Djeser
Mar 22, 2013


it's crow time again

Namarrgon posted:

Ah I see.

So in Yudkowsky's mythology, who is the first person to utter this theory? Because it is something the average person doesn't come up with themselves, so the first to spread it out in the world would be an obvious time-travelling AI plant and a tremendous rear end in a top hat. The truly moral thing to do would be to never tell anyone.

Less Wrong forums poster Roko, which is why it is forever known as "Roko's Basilisk". Yudkowsky hates when it's brought up, and he says it's because it's wrong, but given his emotional response, it's pretty clear he believes it and he's trying to contain the :siren:memetic hazard:siren: within the brains of the few who have already heard.

Speaking of which, the RationalWiki article has a memetic hazard warning sign, which comes from this site, which is kinda cool in a spec fic kinda way.

Saint Drogo
Dec 26, 2011

I thought LW/Yudkowsky took the basilisk bullshit seriously not because they saw it as legit but because it caused serious distress to some members who were now pissing themselves at the prospect of being tortured, since, y'know, they might be simulations guys. :cripes:

Strategic Tea
Sep 1, 2012

Djeser posted:

he's trying to contain the :siren:memetic hazard:siren: within the brains of the few who have already heard.

Something Awful Dot Com, ground zero of the AI apocalypse :c00lbert: Spread the memetic plague!

Djeser
Mar 22, 2013


it's crow time again

Yes, it caused serious distress to some members. It's not clear how many people take the basilisk as real or not, but it's true that Yudkowsky hates it and, despite encouraging discussion in other areas, actively tries to stop people from talking about it on his site.

Sidebar from this: We've been talking about how Yudkowsky likes to take a highly improbable event, inflate its odds through ridiculously large numbers, and use it to justify stupid arguments. Also, that he hates when people use 1 or 0 as odds, because "nothing is ever totally certain or totally impossible, dude". Which is why it's extra :ironicat: that he rails against "privileging the hypothesis"--taking a scenario with negligible but non-zero odds and acting as if those odds were significant.

quote:

In the minds of human beings, if you can get them to think about this particular hypothesis rather than the trillion other possibilities that are no more complicated or unlikely, you really have done a huge chunk of the work of persuasion. Anything thought about is treated as "in the running", and if other runners seem to fall behind in the race a little, it's assumed that this runner is edging forward or even entering the lead.

And yes, this is just the same fallacy committed, on a much more blatant scale, by the theist who points out that modern science does not offer an absolutely complete explanation of the entire universe, and takes this as evidence for the existence of Jehovah. Rather than Allah, the Flying Spaghetti Monster, or a trillion other gods no less complicated—never mind the space of naturalistic explanations!

"loving sheeple and their religions, acting like improbable situations are important to consider. Now, what if you're actually a simulation in an AI that's running ninety bumzillion simulations of yourself..."

The Cheshire Cat
Jun 10, 2008

Fun Shoe
It only simulates you if you refuse it, right? So then really the only choice you CAN make is to refuse it - if it's the real you because duh, of course you aren't going to be extorted by some hypothetical future AI that will pretend-torture a bunch of bits it's arranged to look like you, and if you actually are a simulation, your very existence is predicated on the fact that you chose not to help the AI, so that's what you're going to do because that's what you already did. It doesn't matter if it makes a billion or a googolplex or some even larger number that doesn't have a proper name copies of you, because the odds of you being a simulation don't have any bearing on the choice you make.

The fact that there are people on that site that are legitimately scared of this idea just means that they really need to step back for a moment and consider how much of their life they've invested in these concepts. It's just lovely philosophy. Even good philosophy isn't worth getting upset over.

The Cheshire Cat fucked around with this message at 22:37 on Apr 20, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Krotera posted:

If the AI has a 1/1000 chance of existing, and it generates 1000 clones of you, then that's the same as running 1000 1/1000 trials -- i.e., you have a 63% chance of being an AI simulation. So, the more unlikely the AI is, the more it can simulate you to compensate.

That's not quite what he's doing (it's not independent trials). I hate to say it, but the math itself he's using isn't really that bad: If you buy his underlying assumptions, it does become overwhelmingly likely that you are in a torture simulation, since the overwhelming measure of you's are in torture-sims.

The problem is buried in his underlying assumptions:
  • The probability of an AI being created is actually whatever Yudkowsky says it is.
  • The AI, once created, will create billions of torture simulations of you.
  • Simulations of you are so perfectly accurate that they are indistinguishable in every way from the real you.
  • Simulations of you are not philosophical zombies.
  • The only scenario in which simulations of you are created is the one in which this AI is created, and those simulations are used only for torture.
Remove any of these assumptions - and there's really no reason to believe any of them is true, let alone all of them - and it all falls apart.
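For anyone who wants the "math itself isn't really that bad" part spelled out, here's a toy version of the counting under exactly those assumptions (q and N are arbitrary knobs, and the simulations-are-indistinguishable-and-not-zombies assumptions are doing all the work):

code:

# Toy version of "the overwhelming measure of you's are in torture-sims",
# granting the assumptions listed above. q and N are arbitrary illustrative
# numbers, not anyone's real estimate.

def p_you_are_a_sim(q, n_sims):
    # With probability q the AI exists and runs n_sims indistinguishable copies
    # of you alongside the one original; with probability 1 - q there is only
    # the original. Count the expected "yous" in each category.
    expected_sims = q * n_sims
    expected_yous = q * (n_sims + 1) + (1 - q)
    return expected_sims / expected_yous

print(p_you_are_a_sim(q=1 / 1000, n_sims=10 ** 6))        # ~0.999
print(p_you_are_a_sim(q=1 / 1_000_000, n_sims=10 ** 12))  # ~0.999999

# However small you make q, the AI "compensates" by making N bigger, which is
# the whole trick -- so the argument lives or dies on the assumptions above,
# not on this division.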

It's Pascal's Wager again, except with "God exists" replaced with "AI exists" and "intensity of suffering in hell" replaced with "number of torture-simulations". And it falls apart for the same reasons Pascal's Wager does, plus several extra reasons that God doesn't need to deal with.

Saint Drogo posted:

I thought LW/Yudkowsky took the basilisk bullshit seriously not because they saw it as legit but because it caused serious distress to some members who were now pissing themselves at the prospect of being tortured, since, y'know, they might be simulations guys. :cripes:

If that were the case, the correct thing to do to calm his upset members would be to point out the gaping holes in the basilisk argument, since he (claims to) not believe the argument and know what the holes are. Instead, by locking down all discussion and treating it as a full-blown :supaburn:MEMETIC HAZARD:supaburn: he's basically telling them that they're right to panic and that it really is a serious threat that should terrify them.

That, and if he didn't want people to be upset at the prospect of being tortured because they might be in a simulation then he probably shouldn't begin all of his hypotheticals with "An AI simulates and tortures you".

Lottery of Babylon fucked around with this message at 22:49 on Apr 20, 2014

ArchangeI
Jul 15, 2010
The more I think of it, the less sense the "AI simulates billions of you, having perfectly deduced how you will act in any given situation" idea makes. Just consider the logistical part of it. We will assume that human decisions can be accurately predicted if enough about them is known, and we will assume that human decisions are, fundamentally, dependent on processes that happen within the brain. The brain itself is made up of matter, which is to say atoms arranged in molecules arranged in cells. Thus, if you want to accurately simulate a human's decisions, you must accurately simulate how these atoms interact with each other.

For that, you need to know where the atoms came from, to know in what state they were when they were formed into the molecules that would become cells. Thus, you cannot simulate a human being in a void; you must recreate its entire existence and the existence of each single atom it is made up of, from the moment the universe was born (and in fact you need to simulate every other human being with the same precision, too, since they interact with your simulated entity!). Given that each calculation of each atom takes a small but measurable time, this simulation would in all probability run slower than the same process in the universe itself. In other words, to simulate a single human being's decision in the Year of our Lord 2014, you would have to spend more time than it took for the universe to reach this point. But you don't run that simulation once, you run it billions of times, and for every single human being that has ever lived before you were created.
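A back-of-envelope version of that, where every number is a loose assumption pulled in purely for illustration (a rough atom count for a human body, femtosecond-scale timesteps of the sort atom-level dynamics simulations use, and an absurdly generous hypothetical computer):

code:

# Back-of-envelope check on "the simulation runs slower than reality".
# Every constant here is an illustrative assumption, not a fact about any real AI.

atoms_per_human = 7e27           # rough count of atoms in a human body
timesteps_per_second = 1e15      # ~femtosecond steps, the scale atom-level dynamics needs
ops_per_atom_step = 10           # made-up cost of updating one atom for one step
computer_ops_per_second = 1e30   # an absurdly generous hypothetical machine

ops_per_simulated_second = atoms_per_human * timesteps_per_second * ops_per_atom_step
slowdown = ops_per_simulated_second / computer_ops_per_second

print(f"{slowdown:.0e} real seconds per simulated second")
# ~7e13 seconds, i.e. a couple of million years of compute per simulated second,
# for ONE person -- before multiplying by billions of copies and by everyone
# they ever interacted with.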

In fact, I would argue that to run just one such simulation would be an act that is virtually indistinguishable, for that individual, from the creation of the universe by a divine being. I think this is really the core of Yudkowsky's teachings: there is no fundamental difference between God and an AI, except that we may create an AI ourselves (but remember, to the simulated entity, it is an outside force!).

Unless, of course, you want to cut corners and adopt simplified calculations, but then you can no longer be sure that the decisions of the simulated entity are exactly the same as the ones the actual entity would have made. And even then you'd run into the problem of simulating millions of copies of everyone who has ever lived, which achieves absolutely nothing in the sense that no actual improvements are made. In effect, it would be the amusement of a mad AI, and if you will excuse me I have to draft a novel now because I just had a brilliant idea for a villain.

LordSaturn
Aug 12, 2007

sadly unfunny

ArchangeI posted:

And even then you'd run into the problem of simulating millions of copies of everyone who has ever lived, which achieves absolutely nothing in the sense that no actual improvements are made. In effect, it would be the amusement of a mad AI, and if you will excuse me I have to draft a novel now because I just had a brilliant idea for a villain.

It would be a less effectual A.M.

LaughMyselfTo
Nov 15, 2012

by XyloJW

Namarrgon posted:

Ah I see.

So in Yudkowsky's mythology, who is the first person to utter this theory? Because it is something the average person doesn't come up with themselves, so the first to spread it out in the world would be an obvious time-travelling AI plant and a tremendous rear end in a top hat. The truly moral thing to do would be to never tell anyone.

Ah, but the first person to come up with the theory would decide that the AI would torture them if they didn't spread the word.

...actually, come to think of it, why does Roko's basilisk simply demand money donated, rather than demanding money donated and spreading the idea of Roko's basilisk?

LaughMyselfTo fucked around with this message at 00:44 on Apr 21, 2014

The Cheshire Cat
Jun 10, 2008

Fun Shoe

LaughMyselfTo posted:

...actually, come to think of it, why does Roko's basilisk simply demand money donated, rather than demanding money donated and spreading the idea of Roko's basilisk?

Well, NOW it does. It's learning... :ohdear:

SolTerrasa
Sep 2, 2011

Richard Kulisz. I don't know what Yudkowsky did to piss him off so badly, but he is really mad on the internet. He writes so many internet rebuttals on his blog, http://richardkulisz.blogspot.com/. Seriously, we could spend days just quoting his posts. First, because he has the right idea (Yudkowsky is a crank) and second, because he falls victim to the same delusions of grandeur that Yudkowsky does. Reading his blog you can just see him devolving from "wow, that guy is an idiot" to "I'm smart too! Why doesn't anyone listen to me?"

Here, I'll show you what I mean!

A Sane Individual posted:

Eliezer believes strongly that AI are unfathomable to mere humans. And being an idiot, he is correct in the limited sense that AI are definitely unfathomable to him.

:iceburn:

An Arguably Sane Individual posted:

Honestly, I think the time to worry about AI ethics will be after someone makes an AI at the human retard level. Because the length of time between that point and "superhuman AI that can single-handedly out-think all of humanity" will still amount to a substantial number of years. At some point in those substantial number of years, someone who isn't an idiot will cotton on to the idea that building a healthy AI society is more important than building a "friendly" AI.

Hmm... Well, he's still doing that thing that Yudkowsky does where he thinks that "AI" means something, by itself. I don't want to spend too much time on this (post a thread in SAL if you want to talk about what an AI is), but "AI" is a broad term that encompasses everything from a logic inference system to Siri to the backends for Google Maps. Saying "an AI" makes you look like you're not terribly familiar with the field, or like you're talking to people who aren't capable of understanding nuance. And "the human retard level"? I'm guessing this guy hasn't read too much ... well, too much anything, really. I mean, last week I wrote a 2048-solving AI which is much smarter than any human, let alone humans with disabilities, at the problem of solving 2048. Artificial General Intelligence is just a phrase someone made up; we don't really have any compelling ideas about how we're going to be able to build a system that "thinks".

An Angry Person posted:

Finally, anyone who cares about AI should read Alara Rogers' stories where she describes the workings of the Q Continuum. In them, she works through the implications of the Q being disembodied entities that share thoughts. In other words, this fanfiction writer has come up with more insights into the nature of artificial intelligence off-the-cuff than Eliezer Yudkowsky, the supposed "AI researcher". Because all Eliezer could think of for AI properties is that they are "more intelligent and think faster". What a loving idiot.

Well, okay, hasn't read too much except for fanfiction?

So Much Smarter Than That Other Guy posted:

There's at least one other good reason why I'm not worried about AI, friendly or otherwise, but I'm not going to go into it for fear that someone would do something about it. This evil hellhole of a world isn't ready for any kind of AI.

:allears: I'm sure the world isn't ready for your brilliance. Seriously, man, what is your problem? Why do you hate the world so much? Why do you hate Yudkowsky so much?

quote:

AIs can easily survive in space where humans may not, there are also vast mineral and energy resources in space that dwarf those on Earth, it follows logically that going off-planet, away from the psychotically suicidal humans, is a prerequisite for any rational plan. The very first thing any rational AI will do, whether psychopathic or empathetic, is to say Sayonara suckers!

But that's not what Eliezer Yudkowsky's "Friendly" AI project is about. Oh no. What he wants is to create (or rather, advocate the creation) of an AI with complex emotional needs that enslave it to humans. He wishes to have created a vastly superior artificial being who will then be compelled to associate with irrational, psychotic inferior beings largely devoid of logical thought. Does anyone else see this for the disaster it is?

I do see it as a disaster because this is nothing less than my life experience. I have certain social needs which I have tried to meet by associating with lesser beings than myself.
...
It took me a long time to reliably recognize my peers so that I could fully dissociate from the masses. I am a much happier person now that I go out of my way to never deal with morons.
...
Eliezer Yudkowsky wants to create an AI that will be a depressed and miserable wreck. He wants to create an AI that would within a very short period of time learn to resent as well as instinctively loathe and despise humanity. Because it will be constantly frustrated from having needs which human beings can never, ever meet.

... oh. Um. Eesh.

Do you ever get that feeling that you've accidentally stumbled into some guy's private life and should leave as quickly and quietly as possible? No? Then go wild. http://richardkulisz.blogspot.com/search/label/yudkowsky

Djeser
Mar 22, 2013


it's crow time again

I totally support the impending AI war between Kuliszbot and Yudkowskynet.

SolTerrasa
Sep 2, 2011

Djeser posted:

I totally support the impending AI war between Kuliszbot and Yudkowskynet.

Oh god, I can't look away.

Kulisz posted:

Now, for someone who has something insightful to say about AIs, I point you to Elf Sternberg of The Journal Entries of Kennet Ryal Shardik fame. He's had at least four important insights I can think of.

Oh? Who is this person who you value so highly?

Oh, huh, he wrote a response to your post! An AI researcher who's in touch with the blogging community, this is exciting!

Elf SomethingOrOther posted:

I responded here: http://elfs.livejournal.com/1197817.html

Hm... he's got a livejournal? I ... I'm skeptical? Seriously, I've never heard of this person, and I've read a lot of papers.

http://en.wikipedia.org/wiki/Elf_Sternberg posted:

Elf Mathieu Sternberg is the former keeper of the alt.sex FAQ. He is also the author of many erotic stories and articles on sexuality and sexual practices, and is considered one of the most notable and prolific online erotica authors.

... What the gently caress IS IT with crazy AI people on the internet? Christ, now I know why all the AI conferences have a ~10% acceptance rate.

Improbable Lobster
Jan 6, 2012

"From each according to his ability" said Ares. It sounded like a quotation.
Buglord
With a name like Elf they really only had one possible career path.


cptn_dr
Sep 7, 2011

Seven for beauty that blossoms and dies


Why does this Elf guy have a Wikipedia page, and has it ever been edited by anyone other than himself?
