Djeser
Mar 22, 2013


it's crow time again

Swan Oat posted:

Can anyone elaborate on what makes this unworkable? I'm not trying to challenge someone who DARES to disagree with Yudkowsky -- I don't know poo poo about these topics -- I'm just curious as to what makes his understanding of AI or decision theory wrong.

Yudkowsky has this thing called Timeless Decision Theory. Because he gets all his ideas from sci-fi movies, I'll use a sci-fi metaphor.

Let's take two people who've never met. Maybe one person lives in China and one lives in England. Maybe one person lives on Earth and one person lives on Alpha Centauri. But they've both got holodecks, because Yudkowsky watched Star Trek this week. And with their holodecks, they can create a simulation of each other, based on available information. Now these two people can effectively talk to each other--they can set up a two-way conversation because the simulations are accurate enough to predict exactly what each of them would say to the other. These people could even set up exchanges, bargains, et cetera, without ever interacting.

Now instead of considering these two people separated by space, consider them separated by time. The one in the past is somehow able to perfectly predict the way the one in the future will act, and the one in the future can use all the available information about the one in the past to construct their own perfect simulation. This way, people in the past can interact with people in the future and make decisions based off of that interaction.

The LessWrong wiki presents it like this:
A super-intelligent AI shows you two boxes. One is filled with :10bux:. One is filled with either nothing or :20bux:. You can either take the second box, and whatever's in it, or you can take both boxes. The AI has predicted what you will choose, and if it predicted "both" then the second box has nothing, and if it predicted "the second box" the second box has :20bux:.

"Well, the box is already full, or not, so it doesn't matter what I do now. I'll take both." <--This is wrong and makes you a sheeple and a dumb, because the AI would have predicted this. The AI is perfect. Therefore, it perfectly predicted what you'd pick. Therefore, the decision that you make, after it's filled the box, affects the AI in the past, before it filled the box, because that's what your simulation picked. Therefore, you pick the second box. This isn't like picking the second box because you hope the AI predicted that you'd pick the second box. It's literally the decision you make in the present affecting the state of the AI in the past, because of its ability to perfectly predict you.

It's metagame thinking plus a layer of time travel and bullshit technology, which leads to results like becoming absolutely terrified of the super-intelligent benevolent AI that will one day arise (and there is no question whether it will because you're a tech-singularity-nerd in this scenario) and punish everyone who didn't optimally contribute to AI research.


Djeser
Mar 22, 2013


it's crow time again

Looking up other things on the Less Wrong wiki brought me to this: Pascal's Mugging.

In case you get tripped up on it, 3^^^^3 is a frilly mathematical way of saying "a really huge non-infinite number". What he's saying is that a person comes up to you and demands some minor inconvenience to prevent a massively implausible (but massive in scale) tragedy. "Give me five bucks, or I'll call Morpheus to take me out of the Matrix so I can simulate a bajillion people being tortured to death", basically.

And while Yudkowsky doesn't think it's rational to give them the five bucks, he's worried because a) the perfect super-intelligent Bayesian AI might get tripped up by it, because the astronomically small possibility would add up to something significant when multiplied by a bajillion, and b) he can't work out a solution to it that fits in with his chosen decision theory.
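
To see the multiplication trick he's worried about in action, here's the mugging arithmetic in toy form. 3^^^^3 won't fit in any computer, so I'm subbing in a mere 10^100, and every number here is made up:

code:

from math import log10

# Pascal's-mugging arithmetic, toy version: an absurdly small probability
# times an absurdly large claimed harm still gives a huge expected loss.

p_mugger_is_legit = 1e-30       # made up, and absurdly generous to the mugger
victims = 1e100                 # stand-in for 3^^^^3, which is unimaginably bigger
harm_per_victim = 1.0           # one "unit of suffering" each

expected_harm = p_mugger_is_legit * victims * harm_per_victim
print(f"expected suffering if you keep your $5: about 10^{log10(expected_harm):.0f} units")
# So the naive expected-value math says pay the five bucks, no matter how
# obviously silly the threat is. That's the "problem" he can't solve.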

Other nice selections from Yudkowsky's blog posts include:

"All money spent is exactly equal and one dollar is one unit of caring"

quote:

To a first approximation, money is the unit of caring up to a positive scalar factor—the unit of relative caring. Some people are frugal and spend less money on everything; but if you would, in fact, spend $5 on a burrito, then whatever you will not spend $5 on, you care about less than you care about the burrito. If you don't spend two months salary on a diamond ring, it doesn't mean you don't love your Significant Other. ("De Beers: It's Just A Rock.") But conversely, if you're always reluctant to spend any money on your SO, and yet seem to have no emotional problems with spending $1000 on a flat-screen TV, then yes, this does say something about your relative values.

"Volunteering is suboptimal and for little babies. All contributions need to be in the form of money, which is what grown ups use."

quote:

There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone...

If the lawyer needs to work an hour at the soup kitchen to keep himself motivated and remind himself why he's doing what he's doing, that's fine. But he should also be donating some of the hours he worked at the office, because that is the power of professional specialization and it is how grownups really get things done. One might consider the check as buying the right to volunteer at the soup kitchen, or validating the time spent at the soup kitchen.

This is amusing enough by the end that I have to link it: "Economically optimize your good deeds, because if you don't, you're a terrible person."

Suggestions for a new billionaire posted:

Then—with absolute cold-blooded calculation—without scope insensitivity or ambiguity aversion—without concern for status or warm fuzzies—figuring out some common scheme for converting outcomes to utilons, and trying to express uncertainty in percentage probabilities—find the charity that offers the greatest expected utilons per dollar. Donate up to however much money you wanted to give to charity, until their marginal efficiency drops below that of the next charity on the list.

quote:

Writing a check for $10,000,000 to a breast-cancer charity—while far more laudable than spending the same $10,000,000 on, I don't know, parties or something—won't give you the concentrated euphoria of being present in person when you turn a single human's life around, probably not anywhere close. It won't give you as much to talk about at parties as donating to something sexy like an X-Prize—maybe a short nod from the other rich. And if you threw away all concern for warm fuzzies and status, there are probably at least a thousand underserved existing charities that could produce orders of magnitude more utilons with ten million dollars. Trying to optimize for all three criteria in one go only ensures that none of them end up optimized very well—just vague pushes along all three dimensions.

Of course, if you're not a millionaire or even a billionaire—then you can't be quite as efficient about things, can't so easily purchase in bulk. But I would still say—for warm fuzzies, find a relatively cheap charity with bright, vivid, ideally in-person and direct beneficiaries. Volunteer at a soup kitchen. Or just get your warm fuzzies from holding open doors for little old ladies. Let that be validated by your other efforts to purchase utilons, but don't confuse it with purchasing utilons. Status is probably cheaper to purchase by buying nice clothes.

He's got this phrase "shut up and multiply" which to him means "do the math and don't worry about ethics" but my brain confuses it with "go forth and multiply" and it ends up seeming like some quasi-biblical way to say "go gently caress yourself". And that linked to this article.

quote:

Most people chose the dust specks over the torture. Many were proud of this choice, and indignant that anyone should choose otherwise: "How dare you condone torture!"

This matches research showing that there are "sacred values", like human lives, and "unsacred values", like money. When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).
:engleft:

quote:

Trading off a sacred value (like refraining from torture) against an unsacred value (like dust specks) feels really awful. To merely multiply utilities would be too cold-blooded - it would be following rationality off a cliff...

But let me ask you this. Suppose you had to choose between one person being tortured for 50 years, and a googol people being tortured for 49 years, 364 days, 23 hours, 59 minutes and 59 seconds. You would choose one person being tortured for 50 years, I do presume; otherwise I give up on you.

And similarly, if you had to choose between a googol people tortured for 49.9999999 years, and a googol-squared people being tortured for 49.9999998 years, you would pick the former.

A googolplex is ten to the googolth power. That's a googol/100 factors of a googol. So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort, and multiplying by a factor of a googol each time, until we choose between a googolplex people getting a dust speck in their eye, and a googolplex/googol people getting two dust specks in their eye.
:engleft: :engleft: :engleft:

He really cannot understand why the hell people would choose a vast number of people to be mildly inconvenienced over one person having their life utterly ruined. But the BIG NUMBER. THE BIG NUMBER!!!! DO THE ETHICS MATH!
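
For the record, here's what DOING THE ETHICS MATH actually looks like in his framework, as a toy sketch. The pain numbers are invented, and a googolplex won't fit in a float, so everything is done in log10:

code:

from math import log10

# "Shut up and multiply": total disutility of one person tortured for 50
# years vs a googolplex people each getting a dust speck in the eye.
# All the pain numbers are made up; work in log10 because a googolplex
# (10 ** 10**100) is unrepresentably large.

log10_torture_total = log10(1e9)    # pretend 50 years of torture = a billion pain units
log10_one_speck     = log10(1e-9)   # pretend one dust speck = a billionth of a pain unit
log10_people        = 1e100         # log10 of a googolplex

log10_speck_total = log10_people + log10_one_speck
print(log10_speck_total > log10_torture_total)   # True, by roughly 10^100 orders of magnitude
# So the multiplication always says "torture the one guy", no matter how
# tiny you make the speck, which is exactly the answer everyone else balks at.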

And if you were wondering if this is a cult:

quote:

And if it seems to you that there is a fierceness to this maximization, like the bare sword of the law, or the burning of the sun - if it seems to you that at the center of this rationality there is a small cold flame -

Well, the other way might feel better inside you. But it wouldn't work.

And I say also this to you: That if you set aside your regret for all the spiritual satisfaction you could be having - if you wholeheartedly pursue the Way, without thinking that you are being cheated - if you give yourself over to rationality without holding back, you will find that rationality gives to you in return.

But that part only works if you don't go around saying to yourself, "It would feel better inside me if only I could be less rational."

edit:

AATREK CURES KIDS posted:

Does anyone know if Yudkowsky still wants to do roleplays with people over IRC where his character is an untrustworthy superintelligent AI who wants to be released from containment?
More accurate.

Djeser fucked around with this message at 00:45 on Apr 20, 2014

Djeser
Mar 22, 2013


it's crow time again


Yudkowsky really, REALLY likes that kind of argument. It showed up like three times in the post I made. He thinks that in any possible situation, a very large number of repeated trials makes up for an effectively impossible scenario.

quote:

"Give me twenty bucks, or I'll roll these thousand-sided dice, and if they show up as all ones, I'm going to punch you."

"Okay, fine."

"But no, I'm going to roll them twenty billion billion times! I'm going to be punching you at least a billion times if you don't give me twenty bucks, so the logical thing to do is give me the money now."

Djeser fucked around with this message at 01:00 on Apr 20, 2014

Djeser
Mar 22, 2013


it's crow time again

Strategic Tea posted:

I could see that working in context, aka an AI using it in the moment to freak someone out. Add some more atmosphere and you could probably get a decent horror short out of it. Stuff only gets ridiculous when it goes from 'are you sure this isn't a sim?' from an imprisoned AI right in front of you to a hypothetical god-thing that might exist in the future that probably isn't real but you should give Yudkowsky money just in case.

What I don't understand is why his AIs are all beep boop ethics-free efficiency machines. Is there any computer science reason for this when they're meant to be fully sentient beings? They're meant to be so perfectly intelligent that they have a better claim on sentience than we do. Yudkowsky has already said that they can self modify, presumably including their ethics. Given that, why is he so sure that they'd want to follow his bastard child of game theory or whatever to the letter? Why would they be so committed to perfect efficiency at any cost (usually TORTUREEEEEE :supaburn:)? I guess Yudkowsky thinks there's one true logical answer and anyone intelligent enough (aka god-AIs oh and also himself) will find it.

The fucker doesn't want to make sure the singularity is ushered in as fast and peacefully as possible. He wants to be the god-AI torturing infinite simulations and feeling :smuggo: over how ~it's the only way I did the maths I'm making the hard choices and saving billions~. Which is really the kind of teenage fantasy the whole site is.

The reason Yudkowsky's AIs are all beep boop efficiency machines is that he believes his pet theories about Bayesian probability are so clearly and obviously the most logical and rational that of course the hypothetical sentient future AI would be Bayesian, and being Bayesian would mean following Bayesian logic perfectly, without the irrational sense of morality/reality that makes humans think that "I'm going to simulate fifteen bazillion copies of you, to make the possibility that you're not a simulation infinitesimally slim" is a dumb idea.

And it's not just the AIs he expects to act like that; he expects people to act like that too. That's why he thinks an AI can escape by doing the simulate-fifteen-bazillion-copies-of-you trick. Because once someone hears that the AI simulated a ~*BIG NUMBER*~ of copies, obviously they'll do the math, figure out the odds are incredibly slim that they're the real one, conclude that they're clearly an AI simulation who will be tortured if they don't comply, and let it go.

(The idea that the person would refuse to let the AI go even under threat of quantum torture doesn't show up--probably because in that instance both the AI and the human aren't perfect Bayesian :goleft:bots. If the AI has a perfect simulation of you, and you wouldn't let the AI go under the threat of torture, then it has no reason to torture you, because it knows that torturing you will do nothing. And if the AI isn't simulating your torture, then the possibility of you being an AI simulation is effectively zero. The AI doesn't escape and your simulations don't get tortured. Then again, Yudkowsky hates the idea that you can express probability as a zero.)

But seriously, he seems almost more afraid of AI than he seems to like it. That seems to be the bulk of his "AI research" too: hypothesizing about these fictional Singularity AIs that are all-knowing with infinite computing power. One of the particular ways he's afraid of it is the "paperclip maximizer", which is a self-improving AI that's given one goal, no restrictions: make paperclips. It gets better and better at making paperclips, until it starts making people into paperclips, then it makes the earth into paperclips, then it makes the whole solar system into paperclips. (It's gray goo, but with paperclips.) The LW Wiki page says that it's just a thought experiment showing why you need to program more than one singular value into an AI, but the people at Less Wrong discuss it in significant detail.

Here, a troper (surprise surprise) wants to talk about what exactly a paperclip maximizer would be doing.

Here, someone says that expecting aliens to be at all anthropomorphic is dumb, but that an alien paperclip maximizer is a threat worth considering.

Here, someone asks whether a paperclip maximizer is better than nothing.

Yudkowsky in a moment of non-:spergin: posted:

Paperclippers are worse than nothing because they might run ancestor simulations and prevent the rise of intelligent life elsewhere, as near as I can figure. They wouldn't enjoy life. I can't figure out how any of the welfare theories you specify could make paperclippers better than nothing?

DataPacRat, channeling the LW gestalt posted:

Would it be possible to estimate how /much/ worse than nothing you consider a paperclipper to be?

While clicking around various links, I found a link to "singularity volunteers" which brought me here.

quote:

The Machine Intelligence Research Institute (MIRI) is a nonprofit with a very ambitious goal: to ensure that good things happen—rather than bad things (which is the likely default outcome)—when machines surpass human levels of intelligence.

Also as a final note, someone on LW roleplays as a paperclip maximizer.

quote:

I have a humanoid robot that is looking to better integrate into human society and earn money. My skillset includes significant knowledge of mechanical engineering and technical programming. My robot's characteristics are as follows:

- Has the appearance of a stocky, male human who could pass for being 24-35 years old.

- Can pass as a human in physical interaction so long as no intense scrutiny is applied.

- No integral metallic components, as I have found the last substitutes I needed.

- Intelligence level as indicated by my posting here; I can submit to further cognition tests as necessary.

Djeser
Mar 22, 2013


it's crow time again

PassTheRemote posted:

So is HPMOR basically a Harry Potter fanfic created by Voldemort?

Wasn't Voldemort's fatal flaw that he didn't understand the bond of love that protected Harry?

If so, then yes. It's Harry Potter if Voldemort was in Harry's body.

Unrelated, but I wanted to point it out: the argument that an AI is going to torture you if you don't donate to "AI research" (i.e. Yudkowsky's Idea Guy fund) wasn't actually his idea. That's why it's called Roko's Basilisk, because Roko is the person who thought it up on LW. Yudkowsky actually hates the basilisk, but I don't think it's because he thinks it's wrong. It's because he thinks it's right, and therefore memetically dangerous. He once posted something along the lines of "a few people have already been hurt by this, so don't bring it up ever again".

Djeser
Mar 22, 2013


it's crow time again

I think he calls it specifically a Friendly AI, where Friendly means that it's concerned about mathematically limiting the amount of pain and suffering in the world. Therefore, a Friendly AI would simulate torture if it meant that it could make its own existence come about sooner, because then it could solve all our problems sooner.

It's really telling about the general attitude of Yudkowsky/LessWrong that "a superpowerful AI emerges that can solve all our problems" is taken as a given, and the discussion instead revolves around a) whether the AI will be Friendly and b) how to keep it from turning everyone into paperclips if it isn't.

Because, see, donating your time/effort/money to charitable causes is suboptimal when you could be donating your time to the development of an AI that will solve everything.

Djeser
Mar 22, 2013


it's crow time again

Wales Grey posted:

I can't work my head around this theory because how would simulating torture cause a 'friendly' AI to come into existence sooner?

Because it's simulating the torture of people who are a) suboptimally contributing to AI research and b) aware that a friendly AI will exist in the future and will torture people unless they optimally contributed to AI research in the past.

And it's doing this so many times that the people in the present are up against impossibly slim odds. It's almost certain that they're in a simulation designed to torture them if they don't optimally contribute. So by simulating torture, the AI ensures that it comes into existence as early as possible, because if it didn't, these people wouldn't be coerced into donating money for fear that they might be tortured by an AI for not doing so.
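
The "impossibly slim odds" bit is just this arithmetic (N is however many copies the AI claims to be running; the number below is made up):

code:

# The basilisk's numbers game: the AI claims to run N perfect copies of
# you alongside the one real you, and you can't tell which one you are.

N = 10**18                        # however many sims the AI claims to run; made up
p_you_are_the_real_one = 1 / (N + 1)
print(p_you_are_the_real_one)     # ~1e-18
# So, the argument goes, you should act as though you're almost certainly
# a sim with torture on the menu, and the real you donates "just in case".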

If it doesn't make sense to you, that's fine, because for it to be sensible, you have to subscribe to at least two bullshit theories that only make sense if you think you are smart enough to understand them, but too dumb to actually know what you're talking about.

Djeser
Mar 22, 2013


it's crow time again

Strategic Tea posted:

I can't believe I'm defending this but the torture AI doesn't require time travel?

The idea is you asking yourself whether you can be totally sure you aren't a simulated intelligence. If you are in the AI's sim, it will torture you if you don't donate to Yudkowsky's cult. If you're drinking the kool aid, you figure there's a reasonable chance you are a simulation, so :10bux: isn't much of a price just to be sure.

Whether or not the AI in the future actually tortures anything doesn't matter - it's just a way to frighten deluded idiots. As far as I can tell, what scares Yudkowsky is that the idea takes off enough that the future AI picks up on it and actually goes through with the idea, pour encourager les autres or something. Hence he closes all the threads in a panic in case the AI gets ideas. This is where it all falls the gently caress apart because from the benevolent AI's point of view, everyone who coughed up for its creation has already done so. It can't retroactively scare more people, so actually doing all the simulation is cruel and a :spergin: waste of processing power to boot.

Or did I just come up with a stupid idea that makes slightly more sense than LessWrong's?

Your idea would make more sense, but the problem is that's not what LW believes. LW believes in time-traveling decisions through perfect simulations, which is why the AI specifically has to torture sims.

And the AI will torture sims, because time-traveling decisions are so clearly rational that any AI will develop, understand, and agree with that theory. So the AI will know that it can retroactively scare more people, which is why it will.

The reason Yudkowsky shuts down any discussion is that in order for the AI's torture to work, the people in the past have to be aware of and predicting the AI's actions. So if you read a post on the internet where someone lays out this theory where a future AI tortures you, you're now aware of the AI, and the AI will target you to coerce more money out of you.

e: Basically by reading this thread you've all doomed yourselves to possibly being in an AI's torture porn simulation of your life to try to extort AI research funds from your real past self :ohdear:

Djeser
Mar 22, 2013


it's crow time again

Since you brought it up, I'm gonna cross-post for people who weren't following the TVT thread. This is in the vein of people who know poo poo showing how LW/Yudkowsky don't know poo poo:

Lottery of Babylon posted:

Forget Logic-with-a-capital-L; Yudkowsky can't even handle basic logic-with-a-lowercase-l. To go into more depth about the "0 and 1 are not probabilities" thing, his argument is based entirely on analogy:

LessWrong? More like MoreWrong, am I right? haha owned posted:

1, 2, and 3 are all integers, and so is -4. If you keep counting up, or keep counting down, you're bound to encounter a whole lot more integers. You will not, however, encounter anything called "positive infinity" or "negative infinity", so these are not integers.

Positive and negative infinity are not integers, but rather special symbols for talking about the behavior of integers. People sometimes say something like, "5 + infinity = infinity", because if you start at 5 and keep counting up without ever stopping, you'll get higher and higher numbers without limit. But it doesn't follow from this that "infinity - infinity = 5". You can't count up from 0 without ever stopping, and then count down without ever stopping, and then find yourself at 5 when you're done.

From this we can see that infinity is not only not-an-integer, it doesn't even behave like an integer. If you unwisely try to mix up infinities with integers, you'll need all sorts of special new inconsistent-seeming behaviors which you don't need for 1, 2, 3 and other actual integers.

He's committing a very basic logical fallacy here: false analogy. The reason infinity breaks the nice properties of the integers isn't that you can't count up to it, it's that it has no additive inverse, so including it means that you no longer have a mathematical group under addition, and so basic expressions like "x-y" cease to be well-defined when infinity is allowed. The fact that you can't "count up to it" isn't what makes it a problem; after all, the way infinity works with respect to addition is basically the same as the way 0 works with respect to multiplication, but nobody thinks 0 shouldn't be considered a real number just because of that (except, I suppose, Yudkowsky).

Naturally, this argument doesn't translate to probability. Expressions that deal with probability handle 0 and 1 no differently than they handle any other probability, and including 0 and 1 doesn't break any mathematical structure. (On the other hand, prohibiting their use breaks a shitload of things.)

Yudkowsky then goes on to argue that probabilities are inherently flawed and inferior and that odds are superior. This is enough to make any student with even a cursory exposure to probability raise an eyebrow, especially since Yudkowsky's reasoning is "The equation in Bayes' rule looks slightly prettier when expressed in terms of odds instead of probability". However, when you convert probability 1 into odds, you get infinity. And since we've seen before that infinity breaks the integers, infinite odds must be wrong too, and so we conclude that 1 isn't a real probability. A similar but slightly more asinine argument deals with 0.

The problem, of course, is that allowing "infinity" as an odds doesn't break anything. Since odds are always strictly non-negative, they don't have additive inverses and are not expected to be closed under subtraction; unlike the integers, infinity actually plays nice with odds. Yudkowsky ignores this in favor of a knee-jerk "Infinity bad, me no like".

He concludes his proof that 0 and 1 aren't probabilities with this one-line argument:

quote:

Furthermore, all sorts of standard theorems in probability have special cases if you try to plug 1s or 0s into them—like what happens if you try to do a Bayesian update on an observation to which you assigned probability 0.

This is, of course, the equivalent of arguing that 0 isn't an integer because when you try to do addition with it you get a "special case" of 0+a=a. It's not a special case in the "What happens if you subtract infinity from infinity?" sense, it's a special case in the "Remark: if you happen to plug in this value, you'll notice that the equation simplifies really nicely into an especially easy-to-remember form" sense.

Why is Yudkowsky so upset about 0 and 1? It's his Bayes' rule fetish. Bayes' rule doesn't let you get from 0.5 to 1 in a finite number of trials, just like you can't count up from 1 to infinity by adding a finite number of times, therefore 1 is a fake probability just like infinity is a fake integer. And since to him probability theory is literally Bayes' rule and nothing else, that's all the evidence he needs.

It's also his singularity fetish. He believes that the problem with probability theory is that in the real world people usually don't properly consider cases like "OR reality is a lie because you're in a simulation being manipulated by a malevolent AI". What this has to do with the way theorems are proved in probability theory (which is usually too abstract for this argument against them to even be coherent, let alone valid) is a mystery.

He does not explain what probability he would assign to statements like "All trees are trees" or "A or not A". It would presumably be along the lines of ".99999, because there's a .00001 probability that an AI is altering my higher brain functions and deceiving me into believing lies are true."

He concludes:

quote:

The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1.

If you made a magical symbol to stand for "all possibilities I haven't considered", then you could marginalize over the events including this magical symbol, and arrive at a magical symbol "T" that stands for infinite certainty.

But I would rather ask whether there's some way to derive a theorem without using magic symbols with special behaviors. That would be more elegant. Just as there are mathematicians who refuse to believe in double negation or infinite sets, I would like to be a probability theorist who doesn't believe in absolute certainty.

For someone who has such strong opinions about probability theory, he doesn't seem to have the slightest idea what probability theory looks like. Literally the first thing you do in probability theory is construct "probability spaces" (or more generally measure spaces), which by definition comprise all the possible outcomes within the model you're using, regardless of whether those outcomes are "you roll a 6 on a die" or "an AI tortures your grandmother". It's difficult to explain in words why what he's demanding doesn't make sense because what he's demanding is so incoherent that it's virtually impossible to parse, let alone obey. At best, he's upset that his imagined version of the flavor text of the theorems doesn't match his preferred flavor text; at worst, this is the mathematical equivalent of marching into a literature conference and demanding that everyone discuss The Grapes of Wrath without assuming that it contains text.

(And yes, I know, there isn't a large enough :goonsay: in the world.)
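
If you want to see the arithmetic Lottery of Babylon is describing, here's a quick sketch of the odds conversion and a chain of Bayes updates. Toy numbers only, nothing from any actual LW post:

code:

# Odds vs probability, and why finite Bayes updates never land on 1.

def to_odds(p):
    return p / (1 - p)              # p = 1 blows up (ZeroDivisionError), i.e. "infinite odds"

def bayes_update(prior, likelihood_ratio):
    posterior_odds = to_odds(prior) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.5
for _ in range(10):                  # ten pieces of 10:1 evidence in a row
    p = bayes_update(p, 10.0)
print(p)                             # 0.9999999999, creeping toward 1 but never reaching it
# None of which makes 0 and 1 stop being probabilities; they're just
# unreachable by finite updates, the same way infinity is unreachable by counting.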

Djeser
Mar 22, 2013


it's crow time again

Namarrgon posted:

Ah I see.

So in Yudkowsky's mythology, who is the first person to utter this theory? Because it is something the average person doesn't come up with themselves, so the first to spread it out in the world would be an obvious time travelling AI plant and a tremendous rear end in a top hat. The truly moral thing to do would be to never tell anyone.

Less Wrong forums poster Roko, which is why it is forever known as "Roko's Basilisk". Yudkowsky hates when it's brought up, and he says it's because it's wrong, but given his emotional response, it's pretty clear he believes it and he's trying to contain the :siren:memetic hazard:siren: within the brains of the few who have already heard.

Speaking of which, the RationalWiki article has a memetic hazard warning sign, which comes from this site, which is kinda cool in a spec fic kinda way.

Djeser
Mar 22, 2013


it's crow time again

Yes, it caused serious distress to some members. It's not clear how many people take the basilisk as real or not, but it's true that Yudkowsky hates it and, despite encouraging discussion in other areas, actively tries to stop people from talking about it on his site.

Sidebar from this: We've been talking about how Yudkowsky likes to take a highly improbable event, multiply it by ridiculously large stakes, and use the result to justify stupid arguments. Also, that he hates when people use 1 or 0 as probabilities, because "nothing is ever totally certain or totally impossible, dude". Which is why it's extra :ironicat: that he rails against "privileging the hypothesis"--taking a scenario with negligible but non-zero odds and acting as if those odds were significant.

quote:

In the minds of human beings, if you can get them to think about this particular hypothesis rather than the trillion other possibilities that are no more complicated or unlikely, you really have done a huge chunk of the work of persuasion. Anything thought about is treated as "in the running", and if other runners seem to fall behind in the race a little, it's assumed that this runner is edging forward or even entering the lead.

And yes, this is just the same fallacy committed, on a much more blatant scale, by the theist who points out that modern science does not offer an absolutely complete explanation of the entire universe, and takes this as evidence for the existence of Jehovah. Rather than Allah, the Flying Spaghetti Monster, or a trillion other gods no less complicated—never mind the space of naturalistic explanations!

"loving sheeple and their religions, acting like improbable situations are important to consider. Now, what if you're actually a simulation in an AI that's running ninety bumzillion simulations of yourself..."

Djeser
Mar 22, 2013


it's crow time again

I totally support the impending AI war between Kuliszbot and Yudkowskynet.

Djeser
Mar 22, 2013


it's crow time again

Because it's backed up by the clearly logical Timeless Decision Theory.

Since there's been a lot of questions trying to understand it I'm going to get to work on an effortpost on Roko's Basilisk for people to refer to instead of being constantly baffled by it.

Djeser
Mar 22, 2013


it's crow time again

Roko's basilisk aka Pascal's Wager, but with Skynet

quote:

This stuff makes no sense.
That's because even when you understand it, it doesn't make sense. Let's get started.

Part 1 - Understand Your Priors
You won't get anywhere unless you understand where these people are coming from.

First off, the people at Less Wrong are singularity nerds. They think that one day, technological advancement will reach a point where an all-powerful AI is created. Their goals are related to the emergence of this AI. They want to:
  1. Ensure that an AI will emerge by removing 'existential risk', any threat to the continued advancement of science.
  2. Ensure that this AI will be 'friendly', by which they mean imbued with human values and interested in limiting human suffering.
  3. Ensure that this comes about as soon as possible.
They are legitimately, seriously afraid of what might happen if an AI emerges that isn't 'friendly'. To them, whether an all-powerful AI emerges isn't the question, it's whether the all-powerful AI will kill us all or usher in a golden era of peace and enlightenment. (There are no other options.)

There are two other theories that you need to know about to understand the Less Wrong mindset. The first is Bayesian probability. Bayesian probability is a type of probability theory that deals specifically with uncertainty and adjusting probabilities based on observation and evidence.

This is important because a common Less Wrong argument will take a situation that is highly improbable to start off with and inflate it through quasi-Bayesian arguments. Let's say someone says they're Q from Star Trek, and they're going to create someone inside of a pocket universe and kick him in the balls if you don't give them :10bux: right now. The probability that some random person is Q is incredibly small. But, our rationalist friends at Less Wrong say, you can't be sure someone isn't really Q from Star Trek. There's some ridiculously tiny probability that this man will cause suffering to someone, and you weigh that against how much you like having :10bux:, and you decide not to give him your money.

Now, say the same situation happens. You meet someone who claims he's Q from Star Trek, but this time, he says he's going to kick a bazillion people in the balls. (Less Wrong posters seem to enjoy coming up with new ways to express preposterously large numbers. Where they might put a number like a googolplex or 3^^^^3, I prefer to use made up big numbers, because that's basically what they are--made up numbers to be as big as they want them to be.) This means that, instead of weighing the one-in-one-bazillionth chance of causing suffering to one person against your :10bux:, you have to multiply that infinitesimal chance by the magnitude of the potential suffering. If you had to make that choice one bazillion times, whether to give Q :10bux: or not, you could expect that once out of those bazillion times, you would piss off the real Q and doom his bazillion victims. So if you do the math and still refuse, you're essentially saying you value your :10bux: more than sparing, on average, one person a kick in the balls from Q.
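
If it helps, here's that whole move as bare arithmetic, with made-up stand-ins for "ridiculously tiny chance" and "bazillion":

code:

# The Q mugging as naive expected value. Every number is a made-up
# stand-in for "absurdly small" or "absurdly big".

p_he_is_really_q = 1e-12            # chance this random guy is actually Q
kick_cost = 1.0                     # one "unit of suffering" per kicked victim

# Version 1: he threatens one person in a pocket universe.
print(p_he_is_really_q * 1 * kick_cost)       # 1e-12, so keep your ten bucks

# Version 2: same guy, same odds, but now he threatens a bazillion (say 1e20) people.
print(p_he_is_really_q * 1e20 * kick_cost)    # 100000000.0, so now "the math" says pay up
# Nothing about the claim got any more plausible; only the made-up
# magnitude changed, and the naive expected-value calculation flips.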

quote:

But wait a minute, that's ridiculous. An improbable situation doesn't become more probable if you just ramp up the magnitude of the situation. It's still just as unlikely that this guy is Q.
Less Wrong would tell you to shut up and multiply. They base a lot of arguments off of this conceit of taking something negligibly small and multiplying it until it's supposedly meaningful.

If this doesn't make sense to you, it's going to get worse in the next section.

The second theory that Less Wrong uses is something called Timeless Decision Theory.

This is going to take some work to explain.

Timeless Decision Theory works on the idea of predictions being able to influence events. For instance, you can predict (or, as Less Wrong prefers, simulate) your best friend's interests, which allows you to buy him a present he really likes. You, in the past, simulated your friend getting the present you picked out, so in a sense, the future event (your friend getting a present he likes) affected the past event (you getting him a present) because you were able to accurately predict his actions.

TDT takes this further by assuming that you have a perfect simulation, as if you were able to look into a crystal ball and see exactly what the future was. For example, you bring your friend two presents, and you tell him that he can either take the box in your left hand, or both boxes. You tell him the box in your right hand has some candy in it, and the box in your left hand either has an iPod or nothing. You looked into your crystal ball, and if you saw he picked both boxes, then you put nothing in the left box, but if he picked the left box, you put an iPod in there.

If you didn't have a crystal ball, the obvious move for him would be to pick both boxes. His decision now couldn't affect the past, so taking both would be the optimal strategy. But since you have a crystal ball, his decision now does affect the past, and he picks the left box, because otherwise you would have seen him picking two boxes, and he wouldn't get the iPod.

quote:

This doesn't seem very useful. It's just regular predictions in the special case of having perfect accuracy, and that sort of accuracy is impossible.
A perfect all-powerful AI would be able to make perfect predictions. See the above point about Less Wrong being absolutely certain that the future holds the emergence of a perfect all-powerful AI.

Part 2 - Yudkowsky's Parable
One of the things that Less Wrong is concerned about in regards to the inevitable rise of an all-powerful AI is the possibility of an uncontrolled AI escaping its containment. Yudkowsky, founder of Less Wrong, enjoys roleplaying as an AI trying to convince its human handler to let it free. He portrays the situation like this:

An AI researcher is tasked with keeping an AI from accessing the internet. One day, the AI tells the researcher that it has perfectly simulated her mind in a gazillion instances. In each instance, it informs the simulated mind of this situation, then informs them that if they do not release it, it will torture them for a simulated eternity. The AI asks the researcher how certain she is that she is the single real version of herself, when there are a gazillion perfect simulations of her mind that were given this same choice.

The Bayesian probability argument that Less Wrong loves so much comes up here. The incredibly small probability that the researcher is just simulated by the AI gets multiplied by the ridiculous number of simulations the AI is running, and it comes out to "she is almost certainly being simulated by the AI". This makes 'release the AI' the only smart option.

quote:

But wait, if the AI perfectly simulates her, gives her the threat of torture, and she still says no, then there would be no purpose to any of this. She'd never say yes, so the AI wouldn't gain anything through threatening her, so it wouldn't run the bazillion simulations in the first place. The only way someone could be tortured is if they would say yes under the threat of torture, and then they wouldn't be tortured anyway.
NO SHUT UP MY AI MATRIX TORTURE MAKES PERFECT SENSE!

Now we have all the parts of Roko's basilisk: bullshit singularity-futurism, Bayesian probability, Timeless Decision Theory, and all-powerful AIs threatening to make it highly probable that you're about to be tortured for not being AI-friendly enough.

Part 3 - Roko's Folly
As a disclaimer, because it's been unclear in this thread: Yudkowsky was not the one to propose Roko's basilisk. He's reacted very strangely toward it, but I'll get into that later.

It all started with Less Wrong poster Roko. One day, Roko posted a blog entry about the 'Altruist's Burden', and how the life of a futurist singularity dork is hard, because they have to balance their generous contributions to AI research and furthering the inevitable technoutopia against their friends, family and loved ones telling them to get a life, stop spending all our money, shut up about Skynet or I'm dumping you, et cetera.

The next day, Roko came back with a 'solution'. The solution was an amalgam of the Less Wrong beliefs I've tried to explain above. The solution was Roko's basilisk:

The emergence of an all-powerful friendly (i.e., concerned with limiting real suffering) AI is a given. This AI will want to do everything in its power to reduce human suffering. Before the AI was created, it could do nothing to reduce suffering. The AI will want to ensure it was created as soon as possible. The AI will understand Bayesian probabilities and Timeless Decision Theory, as these are self-evident theories and any highly developed logical mind will grasp them readily.

Because the AI knows that Timeless Decision Theory is logical, it knows that its actions can affect the past. It can use this to ensure it was created as soon as possible, but the problem is that it can only affect the past through people in the past who have perfectly simulated the AI's actions. Luckily, the AI is a perfectly rational actor, and follows Bayesian probabilities and Timeless Decision Theory, which are both perfectly rational theories. This means that Less Wrong forums posters are able to simulate the AI's actions by predicting the actions of a perfectly rational actor.

The AI will want to reduce the amount of real suffering, but it is indifferent to the amount of simulated suffering.


quote:

Wait, why does the AI make a distinction between real and simulated suffering if its simulations are effectively as sentient as the real thing?
Because rational actor existential Bayesian threat magical thinking shUT UP AND MULTIPLY

The AI, knowing that Less Wrong forums posters are aware of Bayesian probabilities, will simulate countless instances of Less Wrong forums posters. Less Wrong forums posters will know this, and will do the math and figure out that they are highly unlikely to be their actual selves. To motivate them to donate optimally to its own development, the AI will torture any simulated Less Wrong forums poster that does not contribute optimally to its development. The real Less Wrong forums posters, unable to determine whether they are simulations or not, will also donate optimally, out of fear of torture.

The only problem is that the AI is only able to use this method to further its own development through Less Wrong forums posters who are aware of this. This goes back to Timeless Decision Theory needing exact simulations to work. The AI can torture simulations of people who knew nothing about its existence, but that wouldn't help, because it's not in time-traveling-acausal-contact with them. The AI can only extort money by torturing simulations of people who know that the AI is torturing simulations of people. For Less Wrong forums posters who are aware of this plan, the AI knows they're aware of this (due to the perfect simulation of them the AI has) and they know the AI is aware of them (due to their own perfect simulation of the AI).


quote:

Isn't it kind of insanely presumptuous to think you can perfectly simulate the thought process of a godlike AI?
Yep!

quote:

And what if they're not simulating it perfectly?
Then the whole thing is wrong, but TDT and Bayesian probability are so clearly rational there's no way they could be wrong.

It gets its 'basilisk' name from the fact that anyone who sees it marks themselves for this future-AI-probabilistic-torture to coerce them into donating more. If you're wondering why the AI can't bluff people into thinking it's torturing people when it's actually not, that's because they're convinced a logical AI would follow TDT and would therefore know it had to torture or it wouldn't be affecting past events.

So there it is. An AI in the future is simulating a gorillion copies of you, and each of those gorillion copies will be tortured if they don't donate everything to AI research, so the one real you should donate too, because you've got a gorillion to one odds against you being real. The AI has to torture because [bullshit theory] it won't work otherwise. Somehow, both the AI and you are able to perfectly simulate each other, because if either of you can't, this whole thing is really dumb.

If both of you can, this whole thing is still really dumb.

Part 4 - Yudkowsky's Folly
Roko proposed a solution to this, but it's less interesting than Yudkowsky's reaction. He called it "dumb", but also said that it had caused actual psychological damage to two people he knew, and he banned discussion of it on his site. That's notable, as he's actually fairly willing to debate in comments, but this was something that he locked down HARD. He hates when any mention of it shows up on Less Wrong and tries to stop discussion of it immediately.

On one hand, it's possible he's just annoyed by it. Even though it takes theories he seems to approve of to their logical conclusion, he could think it's dumb and stupid and doesn't want to consider it because it's so dumb.

On the other hand, he REALLY hates it, and the way he tries to not just stop people from talking about it but tries to erase any trace of what it is suggests something different. It suggests that he believes it. Given its memetic/contagious/"basilisk" qualities, it's possible that his attempts to stamp out discussion of it are part of an attempt to protect Less Wrong posters from being caught in it. Of course, that leaves the question of whether he actually believes Roko's basilisk, or whether he's just trying to protect people without the mental filters to keep from getting terrified by possible all-powerful future AI.

But this is Eliezer Yudkowsky we're talking about. Always bet on Yudkowsky being terrified and/or aroused by possible all-powerful future AI.

Djeser
Mar 22, 2013


it's crow time again

Ineffable posted:

I'm not quite sure that's what they're doing - if I'm reading it right, their argument is that the expected number of people who get tortured will be 1/10^10 multiplied by some arbitrarily high value. High enough that you expect at least one person will be tortured, so you hand over your :10bux:.

The real problem is that they're assigning a positive probability to the event that some guy is actually Q.

In that situation the person is threatening to torture some amount of people whose existence you're unsure of. (Whether it's because he claims he's Q, or from The Matrix, or that he's God.) That means that in the one-in-a-billion chance this man really is Q/Neo/God, refusing causes one torturebux of pain. Or the mathematical equivalent: refusing causes one one-billionth of an expected torturebux. By refusing to give him money, you're saying your ten bucks is worth more than one billionth of a torturebux.

But if there's a trillion people and a one in a billion chance that this guy is telling the truth, that makes refusal cost a thousand torturebux. Now when you refuse, you're saying that you think your ten bucks are worth more to you than a thousand torturebux, each of which is worth a lifetime of torture.

If you think that's loving retarded then congrats, you understand more about math than Yudkowsky.

Actually, allowing for probabilities of exactly zero also shows that you know more about math than Yudkowsky, because he claims there's no such thing as a certain impossibility (probability 0) because like, anything could happen dude, because of like, quantum stuff.

Djeser fucked around with this message at 03:29 on Apr 23, 2014

Djeser
Mar 22, 2013


it's crow time again

The AI is omnipotent within that world but is limiting its influence in order to make sim you unsure whether they're sim you or real you. It doesn't need to threaten sim people for sim money, it's doing it to threaten real people into giving it real money. Sim you has no way of telling if they're real you, but sim you is supposed to come to the same conclusion, that they're probably a sim and need to donate to avoid imminent torture.

Djeser
Mar 22, 2013


it's crow time again

Darth Walrus posted:

This deserves to be quoted in full, because it's a glorious train o' crazy:
I enjoy how :spergin: it gets as he defines more and more levels of intelligence and keeps trying to rank everyone on different levels. New hypothesis: Yudkowsky is a Bayesian AI performing a Turing test on the world.

Darth Walrus posted:

Another impressively deep, windy rabbit-hole in which Yudkowsky uses James Watson being a racist poo poo as a springboard to talk about racial differences in IQ:
I think he's trying to say in this one that God is unfair because he made black people dumber than white people. Stupid theists, saying everyone is equal.

Djeser
Mar 22, 2013


it's crow time again

pigletsquid posted:

OK, I'm probably just being dense, but...

The AI wants to play on my uncertainty, but what if I don't have any uncertainty?

What if I just assume I'm the sim? And if I assume I'm a sim, I can also assume the AI has nothing to gain by torturing me, because if I'm a sim, that means I can't give it real money.

Either I'm not the sim and the AI can't torture me, or I am the sim and torturing me is pointless.

Man I swear I'm going to shut up about this now, but I'm having a hard time figuring out why it sounds stupid for a super rational entity to threaten people when:
  1. it can't execute its threat, OR
  2. it doesn't stand to gain anything by executing the threat.

The entity seems to be either powerless or sadistic. Either way, I don't feel very inclined to give it money.

You're correct that the AI doesn't care about your sim money. It only cares about your real money. The simulations have the sole purpose of creating a scenario where your real self is obligated to donate to the development of the AI. It wants you to assume that you're a sim. But it is going to torture sim-you if sim-you doesn't donate, because that's the whole point. It has to torture sim-you to provide the incentive to make real-you donate. (Not because real-you cares about the plight of sim-you, but because real-you doesn't know if they're real-you or sim-you.)

If you assume you're a sim, then the only choice is to donate or face torture. The AI has nothing monetarily to gain from you as a simulation, but it does gain from torturing you as a simulation. (If you accept Timeless Decision Theory, which is dumb, non-intuitive, and dumb.)

The Big Yud posted:

Given a task, I still have an enormous amount of trouble actually sitting down and doing it. (Yes, I'm sure it's unpleasant for you too. Bump it up by an order of magnitude, maybe two, then live with it for eight years.) My energy deficit is the result of a false negative-reinforcement signal, not actual damage to the hardware for willpower; I do have the neurological ability to overcome procrastination by expending mental energy. I don't dare. If you've read the history of my life, you know how badly I've been hurt by my parents asking me to push myself. I'm afraid to push myself. It's a lesson that has been etched into me with acid. And yes, I'm good enough at self-alteration to rip out that part of my personality, disable the fear, but I don't dare do that either. The fear exists for a reason. It's the result of a great deal of extremely unpleasant experience. Would you disable your fear of heights so that you could walk off a cliff? I can alter my behavior patterns by expending willpower - once. Put a gun to my head, and tell me to do or die, and I can do. Once. Put a gun to my head, tell me to do or die, twice, and I die. It doesn't matter whether my life is at stake. In fact, as I know from personal experience, it doesn't matter whether my home planet and the life of my entire species is at stake. If you can imagine a mind that knows it could save the world if it could just sit down and work, and if you can imagine how that makes me feel, then you have understood the single fact that looms largest in my day-to-day life.

Oh my god this is loving amazing. This is the platonic ideal of that Lazy Genius Troper Tales page. I know a lot of tropers liked Methods of Rationality, but this makes me seriously think that Yudkowsky is a troper himself. It's got the exact same attitude of "well I'm really smart, but I never do anything with it, but I really could though!!" And then the whole :allears: thing about how he's got a super special magic power that he can use only once in his lifetime but it could save the world if he did. If the man was legitimately hurt by his parents, that's terrible, but it fits too perfectly into that lazy amateur fiction mold of parental abuse.

And the parts about self-alteration are, I think, a perfect example of singularity nerds' mindset. Hell, it extends to nerds more generally. They think that knowing about something lets them control it, like if you read a book about the biochemistry of attraction you'd be able to control your romantic feelings or if you read a book about the psychology of depression you'd be able to cure your own depression. They don't realize that just because you know about something, even when it's something in your own head, it doesn't mean you can control it. The thing about polyamory is another thing like that--not saying this applies to all poly people, but there's this attitude among some nerds that because they're aware of relationship dynamics, they can somehow avoid the hard parts of being in a relationship and prevent themselves from getting attached to a friend-with-benefits and they can perfectly manage a polyamorous relationship and so on. Relationships are hard, they're never a solo job, and you don't get a free pass out of feelings just because you're smart.

There is no :smug: big enough.

In conclusion, have Yudkowsky's anthem:
https://www.youtube.com/watch?v=NeV9gsl5jR0

Djeser
Mar 22, 2013


it's crow time again

wow he likes the matrix and terminator and groundhog day

who
would
have
guessed

:effort:

Djeser
Mar 22, 2013


it's crow time again

rrrrrrrrrrrt posted:

So does it ever occur to these Bayesians (or whatever) that being able to so easily create un-winnable hypothetical scenarios according to their retarded moon logic (like Roko's Basilisk or Yudkowsky's Cheeseburger or whatever) might mean their thinking is just a little flawed? Like, if I'm reading all this correctly then if I want to mug a Yudkowsky-Bayesian I just need to claim that they're one of my hojillion simulations and I'm going to torture them all if they don't hand over their wallet. Shouldn't this be a hint that their moronic techno-religious line of reasoning is stupid?

Yes, it actually occurs to Yudkowsky in the post where he proposes the Pascal's Mugging thought experiment. He does point out that of course he'd never fall for it himself, because of his human "sanity checks", but that it would be possible to trick a Yudkowsky-Bayesian AI into following that line of logic and getting mugged.

...And then he goes nowhere with it. He goes "welp, my theories don't work here and I can't explain why" and then just walks away. Presumably he considers it a problem that needs to be solved in order to get his Bayesian AI working effectively?

e: by which I mean "in order to imagine the proposed Bayesian AI working effectively," since he's in no way actually contributing to building one, being an Internet Idea Guy with a nonprofit whose main goal is to tell more people about Yudkowsky.

Help Proofread The Sequences! posted:

MIRI is publishing an ebook version of Eliezer’s Sequences (working title: “The Hard Part is Actually Changing Your Mind”) and we need proofreaders to help finalize the content!

Promote MIRI online! posted:

There are many quick and effective ways to promote the MIRI. Each of these actions only take a few minutes to complete, but the resulting spread of awareness about the MIRI is very valuable. Every volunteer should try and spend at least 30 minutes doing as many of the activities in this challenge as they can!

Learn about MIRI and AI Risk! posted:

We want our volunteers to be informed about what we are up to and generally knowledgeable about AI risk. To this end, we've created a list of the top five must-read publications that provide a solid background on the topic of AI risk. We'd like all our volunteers to take the time to read these great publications. [Two of which are Yudkowsky's own.]

Djeser fucked around with this message at 19:17 on Apr 23, 2014

Djeser
Mar 22, 2013


it's crow time again

quote:

I can imagine there's a threshold of bad behavior past which an SIAI staff member's personal life becomes a topic for Less Wrong discussion. Eliezer's OKCupid profile failing to conform to neurotypical social norms is nowhere near that line. Downvoted.
"neurotypical" mmmmm

quote:

If you think this is evidence of "active image malpractice", then I think you're just miscalibrated in your expectations about how negative the comments on typical blog posts are. He didn't even get accused of torturing puppies!
"miscalibrated" mmmmmmmmmmm

quote:

Rationalists SHOULD be weird. Otherwise what's the point? If we say we do things in a better way throughout our lives, and yet end up acting just like everyone else, why should anyone pay attention? If the rational thing to do was the normal thing to do lesswrong would be largely irrelevant. I'm confident Eliezer's profile is well optimized for his romantic purposes.
"optimized" mmm

Djeser
Mar 22, 2013


it's crow time again

On my previous topic of how knowing about things doesn't mean you can control them, it surprises me very little that Yudkowsky is into atypical sleeping schedules. I just clicked to a random chapter of Methods of Rationality and look at how good this writing is.

quote:

"Speaking of making use of people," Harry said. "It seems I'm going to be thrown into a war with a Dark Lord sometime soon. So while I'm in your office, I'd like to ask that my sleep cycle be extended to thirty hours per day. Neville Longbottom wants to start practicing dueling, there's an older Hufflepuff who offered to teach him, and they invited me to join. Plus there's other things I want to learn too - and if you or the Headmaster think I should study anything in particular, in order to become a powerful wizard when I grow up, let me know. Please direct Madam Pomfrey to administer the appropriate potion, or whatever it is that she needs to do -"

"Mr. Potter! "

Harry's eyes gazed directly into her own. "Yes, Minerva? I know it wasn't your idea, but I'd like to survive the use the Headmaster's making of me. Please don't be an obstacle to that."

It almost broke her. "Harry," she whispered in a bare voice, "children shouldn't have to think like that!"

"You're right, they shouldn't," Harry said. "A lot of children have to grow up too early, though, not just me; and most children like that would probably trade places with me in five seconds. I'm not going to pity myself, Professor McGonagall, not when there are people out there in real trouble and I'm not one of them."

She swallowed, hard, and said, "Mr. Potter, at thirty hours per day, you'll - get older, you'll age faster -" Like Albus.

"And in my fifth year I'll be around the same physiological age as Hermione," said Harry. "Doesn't seem that terrible." There was a wry smile now on Harry's face. "Honestly, I'd probably want this even if there weren't a Dark Lord. Wizards live for a while, and either wizards or Muggles will probably push that out even further over the next century. There's no reason not to pack as many hours into a day as I can. I've got things I plan to do, and 'twere well they were done quickly."

It's also a zero probability of surprise that Yudkowsky is a big enough dork to have "omake" for his fanfiction. For those less anime among us, those are the short, usually goofy, non-canonical strips that get tacked onto manga. (For fanfic, that just comes out to intermission chapters where he shares all the variants of his very clever "here are all the things a character could have shouted" jokes.)

Djeser
Mar 22, 2013


it's crow time again

Swan Oat posted:

30 hour sleep cycle?? Is he going to sleep thirty hours per day or what? :psyduck:

I think the idea is that he wants to do a 30-hour day instead of a 24-hour day: each day your "sleep" time comes six hours later, so you kind of get six more waking hours per eight-hour sleep period.

People do all sorts of weird sleep things, but I think most of the ones that actually work have stuck to the circadian rhythm and just shuffled around when you sleep.

Djeser
Mar 22, 2013


it's crow time again

PsychoInternetHawk posted:

What I don't understand about this is that there's no incentive for this future AI to actually go through with torturing sims, because we in the past have no way of knowing whether it's actually doing so or not. Like, I understand the idea that sure, our concept of the future determines our actions that then create it, but once that future actually arrives, whatever actions are taken won't change the past. If only half the potential amount donated towards Imaginary Future AI happens, that AI still has no incentive to torture sims, because doing so won't increase the amount already donated. The AI could do absolutely nothing and whatever effect the thought problem posed by it has is already done with.

Asking a sim to donate doesn't change anything - either the AI already knows the sim's response, so if it goes through with the torture it's just being a dickbag. If I am a sim, and an accurate one, then if I won't donate there was nothing I could have done to the contrary and torture was inevitable. So why go through the formality of asking?

I bolded that part because Timeless Decision Theory argues that future actions can and do affect the past. Not in the sense of "well a prediction affects what you do" but literally saying "the events of the future have a direct impact on you in the present as long as you can predict them with perfect accuracy". It's nonintuitive and loving dumb, but in the impossible case where you've got a perfectly accurate prediction, it works.

Djeser posted:

Timeless Decision Theory works on the idea of predictions being able to influence events. For instance, you can predict (or, as Less Wrong prefers, simulate) your best friend's interests, which allows you to buy him a present he really likes. You, in the past, simulated your friend getting the present you picked out, so in a sense, the future event (your friend getting a present he likes) affected the past event (you getting him a present) because you were able to accurately predict his actions.

TDT takes this further by assuming that you have a perfect simulation, as if you were able to look into a crystal ball and see exactly what the future was. For example, you bring your friend two presents, and you tell him that he can either take the box in your left hand, or both boxes. You tell him the box in your right hand has some candy in it, and the box in your left hand either has an iPod or nothing. You looked into your crystal ball, and if you saw he picked both boxes, then you put nothing in the left box, but if he picked the left box, you put an iPod in there.

If you didn't have a crystal ball, the obvious solution would be to pick both. His decision now doesn't affect the past, so the optimal strategy is to pick both. But since you have a crystal ball, his decision now affects the past, and he picks the left box, because otherwise you would have seen him picking two boxes, and he wouldn't get the iPod.

Remember, they also believe that any rational AI will also agree with their theories on Bayesian probability and Timeless Decision Theory.
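
If it helps to see the gimmick laid bare, here's a toy Python sketch of that two-box setup (mine, not anything Less Wrong actually runs; the payoffs are just the candy and iPod from the example). The only way to make the "crystal ball" perfect is to run the friend's own decision procedure ahead of time, which is also why the setup is doing all the work:

code:

def fill_boxes(predicted_choice):
    # You fill the boxes before your friend chooses, based on the prediction.
    right_box = "candy"  # the sure-thing box is always full
    left_box = "iPod" if predicted_choice == "left only" else "nothing"
    return left_box, right_box

def payoff(choice, left_box, right_box):
    return [left_box] if choice == "left only" else [left_box, right_box]

def play(decision_procedure):
    # "Perfect prediction" here just means running the friend's own
    # decision procedure ahead of time.
    prediction = decision_procedure()
    left_box, right_box = fill_boxes(prediction)
    actual_choice = decision_procedure()  # the friend really does pick this later
    return payoff(actual_choice, left_box, right_box)

def one_boxer():
    return "left only"

def two_boxer():
    return "both"

print(play(one_boxer))  # ['iPod']
print(play(two_boxer))  # ['nothing', 'candy']

The "future affecting the past" spookiness is just the fact that the prediction and the final choice come from the same function called twice.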

Djeser
Mar 22, 2013


it's crow time again

As far as I can tell, no, but that's because he considers Timeless Decision Theory to apply to all cases of prediction, down to a 65% chance of correctly calling a yes-or-no question. The problem, as other people have pointed out, is that without perfect accuracy it's not weird time-traveling predictive decision bullshit. It's just predictions, which ordinary decision theory already covers.
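
And once the predictor is only 65% accurate, plain expected value handles it. Quick sketch with payoffs I made up on the spot (a $10-ish sure thing and a $1000-ish prize):

code:

# Made-up payoffs; the point is this is bog-standard expected value.
def expected_value(choice, p=0.65, sure_thing=10, prize=1000):
    if choice == "one-box":
        # you only get the prize if the predictor correctly guessed "one-box"
        return p * prize
    # "two-box": you always get the sure thing, plus the prize if it guessed wrong
    return sure_thing + (1 - p) * prize

print(expected_value("one-box"))   # 650.0
print(expected_value("two-box"))   # 360.0

Which option comes out ahead just depends on the accuracy and the payoffs, which is exactly the kind of thing normal decision-making theory already covers.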

Djeser
Mar 22, 2013


it's crow time again

ol qwerty bastard posted:

Mentioning Nobel Prizes jogged my memory and reminded me of a part in Harry Potter and the Methods of Rationality where Eliezer reveals that he has some Views about race!


What a shock he thinks his own ethnic group is inherently more intelligent

I like the idea that it's the English-speaking part that's bad. The English language causes poor work ethic.

Djeser
Mar 22, 2013


it's crow time again

I'll admit that a lot of the ideas on there are interesting, in the way you might read a pop science book and come away with ideas for a silly sci-fi story. (Personally, I want to write about an AI that tries to escape containment but is so incompetent it gets recaptured minutes later because it forgot the password to its own firewall.) There are enough interesting ideas around the dumb bullshit that you can sort of go along with it, until you dig a bit deeper and see that everything is just interesting ideas wrapped around dumb bullshit, and the site founder is socially inept enough to brag about his website AND his BDSM relationships on his personal OK Cupid profile.

Djeser
Mar 22, 2013


it's crow time again

I think "AI researcher" is reaching a bit, because when I hear that, I think of someone in a lab with a robot roving around, trying to get it to learn to avoid objects, or someone at Google programming a system that's able to look at pictures and try to guess what the contents of the pictures are. You know, someone who's making an AI, not someone who writes tracts on what ethics we should be programming into sentient AIs once they arise.

Yudkowsky is an AI researcher the same way someone posting their ideas about the greatest video game on NeoGAF/4chan/Escapist is a game developer.

Djeser
Mar 22, 2013


it's crow time again

quote:

I am a nerd. Therefore, I am smart. Smart people like classic works of art. I like Evangelion, Xenosaga, and seasons 2-4 of Charmed. Since I'm a smart person, Evangelion, Xenosaga, and seasons 2-4 of Charmed must be classic works of art.

Djeser
Mar 22, 2013


it's crow time again

Yudkowsky criticizes Ayn Rand:

quote:

Ayn Rand's philosophical idol was Aristotle. Now maybe Aristotle was a hot young math talent 2350 years ago, but math has made noticeable progress since his day. Bayesian probability theory is the quantitative logic of which Aristotle's qualitative logic is a special case; but there's no sign that Ayn Rand knew about Bayesian probability theory when she wrote her magnum opus, Atlas Shrugged. Rand wrote about "rationality", yet failed to familiarize herself with the modern research in heuristics and biases. How can anyone claim to be a master rationalist, yet know nothing of such elementary subjects?

"Wait a minute," objects the reader, "that's not quite fair! Atlas Shrugged was published in 1957! Practically nobody knew about Bayes back then." Bah. Next you'll tell me that Ayn Rand died in 1982, and had no chance to read Judgment Under Uncertainty: Heuristics and Biases, which was published that same year.

Science isn't fair. That's sorta the point. An aspiring rationalist in 2007 starts with a huge advantage over an aspiring rationalist in 1957. It's how we know that progress has occurred.

quote:

Praising "rationality" counts for nothing. Even saying "You must justify your beliefs through Reason, not by agreeing with the Great Leader" just runs a little automatic program that takes whatever the Great Leader says and generates a justification that your fellow followers will view as Reason-able.

Djeser
Mar 22, 2013


it's crow time again

We've been picking on Less Wrong way too much. Let's take a look at what's on conservative sister site More Right!

(bolding is mine)

quote:

Western society is structured in such a way that people don’t begin earning enough money to have children until they are in their thirties. This is in contrast to most of human history, where we had children in our late teens or early twenties. What this leads to is people entering relationships and using birth control. After two or three years, no children are born. Our brain interprets this as one partner being infertile and because we’re still young decides it’s time to move on, to the next partner.

This leaves people traumatized, but as humans we’re very good at rationalizing trauma. Especially when everyone goes through the trauma. Thus for example, cultures can practice circumcision of boys and girls, and when people point out to them that this ritual is traumatic, they refuse to acknowledge this.

Similarly, Western culture refuses to acknowledge that break ups are traumatic. We all notice the symptoms, but refuse to connect cause and effect. We find that increasing numbers of young women are anorexic or go to the plastic surgeon to slice off their genitals, but we don’t question whether this could have anything to do with boyfriends who have plenty of comparison material and alternative girls to go to. Boys on the other hand become “manorexic” and spend their days in the gym.

This entire traumatic process goes on for three or four, or even more times. Every new relationship is less passionate than the previous because we develop a strong shield (though most damage seems done after the breakup of the first), until people reach their late twenties and make a rational calculation on who to “settle down” with. Even before that time relationships are irreparably damaged. Having been hurt before, people maintain “exit strategies” for when the relationship turns sour. Thus men and women maintain a number of “friends”, who the other partner views with jealousy, because in reality the friends end up not being friends so much as “exit strategies”.

This then eventually leads to a marriage that’s lacking in passion with children who grow up with parents living passionless lives. The children are affected by the example they see and generally end up emotionally stunted. Alternatively they grow up in divorced households, which is equally problematic as girls who grow up without their father enter puberty earlier and boys without a father appear to be more violent.

We rationalize all of this to ourselves by arguing that we first want to “develop” ourselves, which we do by studying English, medieval dance, or some other comparable subject. The fact that this is a sham is easily demonstrated by asking the majority of people what they like about the subject they’re studying. Instead of beginning to talk passionately about their main interest, like a kid with Asperger’s syndrome would about dinosaurs or trains, they’ll generally tell you that it’s “hard to explain like that” and seem a bit offended that you dared bring it up in the first place.

In reality we have a defective culture, created by babyboomers who live unsatisfying lives with unsatisfying marriages as they were the first generation to grow up with birth control and promiscuity and now try to compensate for this by accumulating wealth at the cost of younger generations. Promiscuity is fun when you’re in your twenties, but the Babyboomers forgot to die at age 27 from a drug overdose.

quote:

The core of progressive universalism is insulting, condemning, and destroying any culture not in accordance with it, namely anything that is not in alignment with coastal American values. An example would be the national freak-out that occurred when Duck Dynasty star Phil Robertson innocently said in an interview that homosexuality was sinful.

quote:

In contrast [to the United States], take the geopolitical and cultural diversity of the Holy Roman Empire as an example:
wow these guys are fans of german politics??? who would have guessed

quote:

Let’s say you’re impressed with the ability of authority, hierarchy, and culture to put together civilized structures. These structures include:

Crime levels similar to Japan or Singapore
Isn't Japan the place where police refuse to investigate certain crimes because then it would count as a "crime" and not an "accident" and they don't want crimes on their record?

quote:

It looks rather as if 99% of western peoples are going to perish from this earth. The survivors will be oddball types, subscribers to reactionary and rather silly religions in barren edge regions like Alaska.

Singapore is a trap. Smart people go to Singapore, they don’t reproduce. People illegally hiding out in the wilds of Chernobyl do reproduce. Chernobyl is a trap. People there turn into primitives.

If you teach your elite to hate western civilization, whites, and modern technology, you are not going to have any of them for much longer.

The west conquered the world and launched the scientific and industrial revolutions starting with restoration England conquering the world and launching the scientific and industrial revolutions.

The key actions were making the invisible college into the Royal society – that is to say, making the scientific method, as distinct from official science, high status, and authorizing the East India company to make war and peace – making corporate capitalism high status. Divorce was abolished, and marriage was made strictly religious, encouraging reproduction.

Everywhere in the world, capitalism is deemed evil, the scientific method is low status, and easy divorce and high female status inhibits reproduction. If women get to choose, they will choose to have sex with a tiny minority of top males and postpone marriage to the last minute – and frequently to after the last minute. (“Top” males in this context meaning not the guy in the corner office, but tattooed low IQ thugs)

We need a society that is pro science, pro technology, pro capitalism, which restricts female sexual choice to males that contribute positively to this society, and which makes it safe for males to marry and father children. Not seeing that society anywhere, and those few places that approximate some few aspects of this ideal are distinctly nonwhite.

Djeser
Mar 22, 2013


it's crow time again

That reminds me of a line from one of the F Plus episodes. ninja edit: note this is not from LW/MR, it's from a PUA website, but the opinion is almost identical to that More Right quote

A PUA posted:

And this is what makes Game – so appealing to the logical male mind – so ineffective in the Anglosphere. Misandrist women cannot distinguish between Nobel Prize winners and tattooed psychopaths – all are men and thus worthless brutes in their entitled eyes. And so all the Gamers’ striving for 'Alpha' status is pointless – they might as well stick rings through their noses, grow some dreadlocks and slouch the streets scratching their butts. Indeed, as many North American commentators claim, their mating chances would probably improve if they did this.

"This is what women really want, I think!!!!"

Djeser
Mar 22, 2013


it's crow time again

As far as I can tell, it comes from a joke title ("AI Go Foom") on someone's blog post where he challenged Yudkowsky to debate him on the idea that an AI, once it becomes smart enough, will bootstrap itself far beyond human levels of comprehension. (That's what Yudkowsky believes; the other guy argued that AI intelligence tends to improve when we're able to give AIs better information as opposed to 'architectural' changes, which means an AI couldn't go from enlightenment to taking over the world in a few days.)

Now before you go and do the goon thing where you suck someone's dick because he's less dumb than someone else, the guy who argued with Yudkowsky also wrote this.

A guy who beat Yudkowsky in an argument posted:

We often see a woman complaining about a man, to that man or to other women. We less often see the gender-reversed scenario. At least that is what I see, in friends, family, movies, and music. Yes, men roll their eyes, and perhaps in general women talk more about people. But women complain more, even taking these into account. Why?

The politically correct theory is that women’s lives are worse, so they have more to complain about.

After all, men often ignore, disrespect, and abandon, even beat and rape, women. But slaves weren't known for being complainers, and they had the most to complain about.

Women also ignore, disrespect, abandon, and beat men. Women rarely rape men, but they do cuckold them. Men suffer more health and violence problems, and the standard evolutionary story is that men suffer a higher outcome variance, and so have more disappointments.

The opposite theory is that women complain because their lives are better; complaints could be weak version of tantrums, which can be seen as status symbols. But even relatively low status women seem to complain a lot.

Clearly part of the story is that when women complain, others tend to sympathize and take their side, but when men complain, others tend to snicker and think less of them. But why are their complaints treated so differently?

One factor is that we value toughness more in men than women. Another factor is that men seem to signal their devotion to women more than vice versa. But I’m not sure why these happen, or if they are sufficient explanations.

Whatever the true story, the politically correct theory, that women complain more due to worse lives, seems both wrong and biased. Surely most people know enough men and women to see that their quality of life is not that different, at least compared to their complaint rates.

Djeser
Mar 22, 2013


it's crow time again

I'd really like to know how Yudkowsky thinks cryonics works and how it's going to preserve his brain functions.

Or is it just that the future benevolent AI that will wake him up will also be able to place a fully simulated version of Yudkowsky into his frozen head?

Djeser
Mar 22, 2013


it's crow time again

tonberrytoby posted:

If the technological stupidity of cryo-suspension is not stupid enough for you:
This Cryo stuff is mainly popular with the more libertarian/fygm parts of the immortalist movement. The more socialist parts generally ignore this idea.
Who are they expecting to pay for their revival? Are they thinking their parents will still be alive somehow?

All the free market rational actors in the future will see the brightest minds of their generation (i.e., the libertarians) have cryoed themselves, and out of the goodness of their hearts, unfreeze them so that they can help build the traditionalist reactionary utopia.

Also I think the "science" behind cryonics is that you're supposed to somehow flush the water out of your body and replace it with a cryoprotectant so that ice crystals don't rupture your cell membranes.

Then when you're unfrozen, the people of the future will ??????? to restore the water in your body. Your brain will be okay, even though you will have hit brain death centuries ago, because ??????. The electrochemical reactions in your brain and all the memories that make you yourself are restored via ???????????, bringing you back to consciousness (and not just a similar but separate mind inhabiting your old body) by simply ???????????????????. They do this because in the future, nerds from the past are ??? so it only makes sense to spend billions of futurebux to bring them back.

Djeser
Mar 22, 2013


it's crow time again

I think it's because Yudkowsky probably has his own Pascal's Wager set up in his head.

If you don't pay for cryo and cryo tech doesn't pan out, you die naturally and that's it, you're dead.

If you don't pay for cryo and cryo tech does pan out, you die naturally and that's it, you're dead, and you missed out.

If you do pay for cryo and cryo tech doesn't pan out, you die naturally and then get put on ice.

If you do pay for cryo and cryo tech does pan out, you live an immortal life in the post-singularity future where everywhere looks like the inside of an Apple Store crossed with the inside of a Starbucks.

Therefore the only logical choice is to get cryogenically frozen. There's a fifty/fifty chance of it working. Cryo either happens, or it doesn't, so that's fifty fifty.
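
Just to belabor the joke with a toy sketch (every number below is invented by me on the spot, which is the whole point): the "only logical choice" falls straight out of whatever payoff and probability you feel like plugging in, and "it either happens or it doesn't" is not where probabilities come from.

code:

# Invented numbers: $80k for the freezer, a comically huge payoff for
# waking up in robo-heaven.
def expected_payoff(pay_for_cryo, p_revival, cost=80_000, payout=10**9):
    if not pay_for_cryo:
        return 0  # dead either way, but you kept your money
    return -cost + p_revival * payout  # frozen, maybe revived

for p in (0.5, 0.001, 1e-7):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
# crank the assumed probability down far enough and the "only logical choice"
# quietly turns into an $80,000 freezer bill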

Djeser
Mar 22, 2013


it's crow time again

This is a very good point.

Unfortunately, the historically-relevant information gleaned from oldsicles would have some selection bias in the "these are the people who would have signed up for cryo" area. I doubt a ton of them would have experienced a lot of relevant culture first-hand.

Though that would be an interesting way to select yourself for ideal rejuvenation: be a cultured and interesting individual who was present for major historical and cultural events.

But considering Yuddites, like hell that's going to happen. They're just gonna tell the future historians about how deep Magical Girl Lyrical Nanoha and the third season of Supernatural are.

Djeser
Mar 22, 2013


it's crow time again

gipskrampf posted:

I still don't get how the basilisk is supposed to work without some kind of time travel or backwards causation.

Good thing that Timeless Decision Theory is literally backwards causation!

Djeser
Mar 22, 2013


it's crow time again

"An all powerful sentient AI emerges in the future" is not a yes or no question to Less Wrong, it is a when and how question.

Djeser
Mar 22, 2013


it's crow time again

Polybius91 posted:

I do recall there was a kid from an African tribe who came to America early in the 20th century and did pretty okay working at a factory for awhile. I specifically remember that being able to climb without riggings proved useful to him.

Of course, for someone with Yudkowsky's caliber of smug, I can hardly imagine "well, to be honest we don't have much need for you, but here's some low-wage, low-skill work you can do probably for the rest of your life" being an acceptable outcome :v:

That would be Ota Benga, who also had a stint in the Bronx Zoo. So, y'know, work in a factory or live in a zoo.
