SolTerrasa
Sep 2, 2011

JosephWongKS posted:


That's not very rational, is it? If evidence contradicts a theory, you shouldn't reject the evidence, you should revise the theory.


Hi, I'm that AI goon who was quoted on the first page. I know way too much about Yudkowsky, but I'll keep the :words: posts to a minimum in this thread. Maybe. Dang, this one is longer than I figured when I started typing.

Yudkowsky actually doesn't believe what you've posted above. He has a blog post and a few follow-ups about the rationalist technique called "defying the data". Basically it means sticking your fingers in your ears and saying lalalalalaICan'tHearYou when presented with evidence against a theory you consider foundational. The whole LW thing is all about loving with the numbers until they match whatever you wanted to believe in the first place.

Here's how he does it in this case. Bayes' Law says P(X|E) = P(X) P(E|X) / P(E). X is "the thing you believe", like "people can't turn into cats". E is "the evidence", like "I saw someone turn into a cat". The output of the equation is "the likelihood of X, after I've seen E". You can see how such an equation would be handy: it is a good and clever way to teach robots how to change what they believe about the world based on what they see.

However, P(X) is just the "prior probability", which basically means "whatever I used to believe before I saw E". If you go ahead and plug in 1.0, dead perfect certainty, then the only possible output of the equation is 1.0: P(E|X) reduces to P(E), which cancels with the denominator, and there you are. Plug in 0.0 and you get the mirror image; the output is pinned at 0.0 no matter what E is.

And if you don't want to use that trick, well, it's all subjective anyway. All those probabilities come from estimates. So when you want to ignore otherwise earth-shattering evidence, you make sure to inflate P(E|X) (and with it P(E)) by folding in the likelihood of alternate explanations like "I was hallucinating" or "my parents are gaslighting me because I'm Harry Potter Evans Verres and I deserve it."
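If you want to see both tricks in numbers, here's a toy sketch (mine, with made-up probabilities; none of this is Yudkowsky's actual math):

code:

# Toy Bayes update; my numbers, nobody's actual model.
def posterior(prior_x, p_e_given_x, p_e_given_not_x):
    """P(X|E) = P(X) * P(E|X) / P(E), with P(E) expanded over X and not-X."""
    p_e = prior_x * p_e_given_x + (1 - prior_x) * p_e_given_not_x
    return prior_x * p_e_given_x / p_e

# Honest update: X = "people can't turn into cats", E = "I saw someone turn into a cat".
print(posterior(0.99, 0.001, 0.9))  # ~0.10: strong prior, but the evidence drags it way down

# Trick 1: a prior of exactly 1.0 (or 0.0) can never move, no matter what E is.
print(posterior(1.0, 0.001, 0.9))   # 1.0, forever
print(posterior(0.0, 0.9, 0.001))   # 0.0, forever

# Trick 2: inflate P(E|X) with "I was hallucinating" until the evidence stops mattering.
print(posterior(0.99, 0.5, 0.9))    # ~0.98: barely budges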

This isn't new, of course, it's not the first time a religious cult has deluded themselves by being choosy about what counts as evidence. It might be the first time they did it with math, but I'm an AI guy, not a historian.

SolTerrasa
Sep 2, 2011

cultureulterior posted:

while Yudkowsky isn't perfect, he certainly convinced a number of clever people that AI risk was a topic to take seriously.

He's been working for a decade and got one skeptically worded "some people believe" paragraph in one AI textbook.

To be fair, it is a very good textbook, and it does mention him by name.

E: Honestly I agree with (a toned down version of) a lot of the non AI stuff Yudkowsky says. I don't like death either and I think there are at least a few problems that Bayesian inference solves elegantly. And I think people should know about where human intuition breaks down. But he's over the top on half that stuff, and straight up out to lunch on AI existential risk.

SolTerrasa fucked around with this message at 01:22 on Feb 25, 2015

SolTerrasa
Sep 2, 2011

Legacyspy posted:

I have two motivations. I wanted to understand why people dislike Harry so much, and initially people were going "He is infuriating because he is irritating", which doesn't explain much; there have been better explanations since then. The second is that I honestly like hpmor & Eliezer. I'd rate it a 7/10 and I do not think the hate against Eliezer is warranted. I think a lot of it stems from people not understanding what he writes (through little fault of his own).

Like su3su2u1, I am not mocking Yudkowsky from a position of ignorance. I feel that I'm in a pretty good place to criticize his AI work. Which I have, in the mock thread. But a short version here: he is so needlessly wordy that it's difficult to notice how incredibly basic his ideas are. Timeless Decision Theory is a great example: I formalized it in one paragraph in the other thread, something Yudkowsky failed to do in a hundred pages. And it wasn't even a new idea once I wrote it down!

On his favorite model of an AI, the Bayesian inference system: I've built one, and I can't tell if he has. Bayesian inference doesn't parallelize well; the units of work are too small, so efficiency gains are nearly canceled by overhead. Mine couldn't play an RTS game because it was computationally bound, even after I taught it the rules. Yudkowsky's would need to do vastly better than mine to literally take over the world, and he has never offered a plan for how that would work. Mine was textbook; his would need to be orders of magnitude beyond it.
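If "the units of work are too small" sounds hand-wavy, here's roughly what I mean. This is a toy benchmark I wrote for this post, not his system or the one I built back then, but the shape of the problem is the same: the per-hypothesis math is a couple of multiplies, so shipping it out to worker processes costs more than just doing the math.

code:

# Rough benchmark sketch: the per-hypothesis work in an exact Bayesian update is so
# small that process-pool overhead eats any parallel speedup. Toy numbers throughout.
import time
from multiprocessing import Pool

import numpy as np

HYPOTHESES = np.linspace(0.001, 0.999, 2_000_000)  # grid of coin-bias hypotheses
PRIOR = np.full_like(HYPOTHESES, 1.0 / len(HYPOTHESES))

def likelihood(chunk):
    # P(E|H) for "saw 7 heads in 10 flips" under each hypothesis in the chunk
    return chunk ** 7 * (1 - chunk) ** 3

def serial_update():
    post = PRIOR * likelihood(HYPOTHESES)
    return post / post.sum()

def parallel_update(workers=4):
    chunks = np.array_split(HYPOTHESES, workers * 8)
    with Pool(workers) as pool:
        like = np.concatenate(pool.map(likelihood, chunks))  # chunks get pickled both ways
    post = PRIOR * like
    return post / post.sum()

if __name__ == "__main__":
    t0 = time.perf_counter(); serial_update();   t1 = time.perf_counter()
    parallel_update();                            t2 = time.perf_counter()
    print(f"serial:   {t1 - t0:.3f}s")
    print(f"parallel: {t2 - t1:.3f}s  (usually worse: IPC overhead > the actual math)")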

All that said, even I don't think friendly AI is a crazy problem for crazy people; I think it's an engineering problem for a domain that doesn't exist yet.

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

Although, the specific direction that MIRI is running in, creating mathematical ideas of friendliness is a huge misunderstanding of how applied math works.

Interesting! Would you mind posting (here or in the mock thread) how they misunderstood applied math? Whenever I see unnecessarily fiddly math in AI papers I assume they're just trying to impress the reviewers. I had a paper rejected once because it "didn't have enough equations", despite attaching working source code.

petrol blue posted:

Am I reading it right that the Friendly AI should have human values and responses? Because that would imply Yud believes that humans are good to each other and would never, say, try to wipe out a group they consider inferior.

Nope, that's not quite what it means. What does it mean? Well, Yud doesn't seem to know either; he's never really explained it. As far as I can discern, it means that the AI will never do anything that would violate the "coherent extrapolated volition" of humanity. So, basically: take everyone's opinions (no explanation given for how to collect these), throw out the opinions that are bad (no explanation given for deciding which opinions are bad), then do whatever best satisfies what's left. The AI itself doesn't need to seem human or have human feelings, just to act in a way that optimizes around human feelings.
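Here's the whole scheme written out as a sketch, which is mine and not his (he's never published anything this concrete); notice that every load-bearing step is exactly the part nobody has specified:

code:

# Sketch of "coherent extrapolated volition" as I understand the hand-waving.
# Every interesting step is the part nobody has ever specified.

def collect_everyones_opinions():
    raise NotImplementedError("no explanation given for how to collect these")

def throw_out_the_bad_ones(opinions):
    raise NotImplementedError("no explanation given for deciding which opinions are bad")

def extrapolate(opinions):
    raise NotImplementedError('"if we knew more, thought faster, were more the people '
                              'we wished we were" -- whatever that means operationally')

def coherent_extrapolated_volition():
    opinions = collect_everyones_opinions()
    good = throw_out_the_bad_ones(opinions)
    return extrapolate(good)  # then the AI optimizes whatever this returns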

Edit: here, try to derive what he means from this, which, if you can believe it, he tried to include in an AI paper.

quote:

Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.

SolTerrasa fucked around with this message at 01:49 on Mar 25, 2015

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

The definition of friendliness they create will just be an abstraction that shares some properties of what we might think of as "friendly."

...

What they really want to create is a sort of "best practices guide to AI development such that it doesn't kill everyone" - that isn't a math problem.

I've always thought so; this seems similar to what I've been saying: "it's an engineering problem".

It would be really helpful for understanding what the gently caress they plan to do if they'd release any information about their formalization of the problem.

SolTerrasa
Sep 2, 2011

anilEhilated posted:

Isn't quantifying the unquantifiable pretty much what Bayesian statistics are about?

Not in my opinion. Quantifying the "impossible to be very precise about", at the very best.

Nessus posted:

But that makes everything way more complex. What if the Godputer can't understand that, despite being God!?

This is why Yudkowsky has self-published a bunch of "machine ethics papers", which extend utility into the "coherent extrapolated volition". Google the phrase and read the paper if you want, but he wants God to figure out what the best possible version of all humans would want, collectively, and do that, rather than simply adding up utilons. This has, somewhat surprisingly, not caused him to change his belief re: torture v dust.

SolTerrasa fucked around with this message at 21:28 on Mar 26, 2015

SolTerrasa
Sep 2, 2011

anilEhilated posted:

His SCIENCE book list is rather curious. GEB is obviously great but it's more of a generalist, popular thing and the Kahneman/Tversky study on heuristics comes from 1982; if that's his go-to book on psychology, it's a pretty interesting choice.

edit: No, that's not how desensitization works.

Almost all of Yudkowsky's good work is a popularization of Kahneman with the serial numbers filed off, so it wouldn't surprise me if that was true.

E: you can see some of the "serial numbers filed off" effect in that frustrating tendency to rename things that already have names.

SolTerrasa fucked around with this message at 20:47 on Mar 30, 2015

SolTerrasa
Sep 2, 2011

SSNeoman posted:

I dunno, to me it just sounds like he's trying to excuse himself for having these thoughts. Sadly we kind of fall into a lull for...a rather long time IIRC, where nothing interesting or consequential happens.

A really, really long time. This thing is hundreds of pages worth of text more than it ought to be; the Ender's Game knockoff plot and the bullying plot add absolutely nothing to the book.

SolTerrasa
Sep 2, 2011

i81icu812 posted:

In 88 days we've gone through 12 chapters out of 122 total chapters. At this rate we can look forward to completion on Oct 30, 2017.

Still faster than Big Yud wrote the loving thing.

SolTerrasa
Sep 2, 2011

divabot posted:

HE CAN'T EVEN SAY "ENJOYMENT", HE HAS TO INVENT A NEW loving WORD FOR IT.

Oh, it's worse than that, it's a misappropriated term from psychology. Man I wish we still had a LessWrong mock thread, I would be all over writing an effortpost about Big Yud's opinions on loving hedonics.

SolTerrasa
Sep 2, 2011

Siegkrow posted:

I'll say this once, just so this circlejerk can hate me for it:

I liked this fanfic.

What parts of it, and why? This isn't GBS, nobody will tell you you're a horrible person or whatever.

SolTerrasa
Sep 2, 2011

Yeah, it's definitely not; Yudkowsky gets really mad at people who post about the basilisk. The subject came up a lot in the old mock thread, search there if you want to find out more.

SolTerrasa
Sep 2, 2011

sarehu posted:

Um, Harry is very obviously a super-flawed rationalist, somebody who merely read a bunch about science, and the author intends it that way, to the point of parodying his past self in many places.

That would be really interesting, especially if it was a parody of anything after the age of, like, twelve. Yudkowsky tends to send information about his past self (past a certain point) down the memory hole, presumably since it's hard to get people to take you seriously if they can find out you once predicted that writing a new programming language would unlock the Golden Age of AI and would be done any time now (then failed to write so much as a language spec), or that you once invented a psychological theory from whole cloth to explain why your utter lack of follow-through is actually an evolutionary breakthrough and you are the harbinger of the new age of (really unbelievably lazy) humanity.

So it'd actually be kind of cool if there was self-parody. I could respect Yud a little bit more if I knew he didn't take himself so seriously. Which parts do you mean?

SolTerrasa
Sep 2, 2011

JosephWongKS posted:

What? :psyduck: This sounds utterly amazing. Could you give me a link to this?

Like I said, memory hole. The oldest version of it that archive.org has now just says "I no longer believe neurohacking will be required to reach the singularity. Artificial intelligence will be quite enough. Therefore this page has been removed".

Here's something which makes references to his goofy Algernon theory:
https://web.archive.org/web/20010409002122/http://www.sysopmind.com/algernon_ethics.html

E: ha! Found it. His site had backups that archive.org crawled, and if you follow those you can get at the older versions. Yeah, read this if you want to hear about the whole Algernon thing.
https://web.archive.org/web/20001204212900/http://sysopmind.com/algernon.html

E2: that's more insufferable than I remember. Just this, then:

quote:

According to my parents, I was an unusually irritable baby; as a child I wouldn't play with the other children or strangers; I didn't start speaking until the age of three (*); and so on - it's a good bet that the Algernic perturbation was present at birth.  I was also an unusually bright child; at the age of five I was devouring Childcraft books, especially the ones on math, science, how things work.  In second grade, at the age of seven, I discovered that my math teacher didn't know what a logarithm was, and permanently lost all respect for school.  Eventually, I convinced my parents to let me skip from fifth grade directly to seventh grade.  My birthday being September 11th, I turned eleven shortly after entering seventh grade.

The first sign of massive improbabilities came shortly thereafter, in (I believe) December, when I took the SAT as part of Northwestern University's Midwest Talent Search.  I achieved a score of 1410:  740 Math, 670 Verbal.  For the 8,000 7th graders (allegedly the "top 5%" of the Midwest 7th grade) who took the test, I came in 2nd Combined, 2nd Verbal, and 3rd Math.  According to the report I got back, this placed me in the 99.9998th percentile.

It wasn't until years later - after writing v1.0 of this document, in fact - that I realized that I had skipped a grade, rendering the statistics totally worthless.  Later investigation indicated that only about 600 6th graders took the test at all, and interpolating from the few "medalists" I could find indicate that the top score would have been around 1200, 600/600.  Between this and the smaller sample size and unknown selection procedures and the puberty barrier and the unknown tendency of other kids to skip a grade, the information I have is basically useless.  Playing around with standard deviations yields somewhere between four and six sigmas from the mean, depending on the reasoning methods.  I don't know.

Figure that the probability, according to a Gaussian curve, was at least one in five million.  This is improbable enough that the non-Gaussian explanation, the Algernic hypothesis, is an acceptable alternative.  Perturbations to any given piece of neuroanatomy probably happen at least that often.

... (omitted because even crazier)

The Other Shoe Dropped when my mental energy level, never high to begin with, fell to zero.  For a while, my parents forced me to continue with school, but I never got through the morning.  So exited the class intellectual.

This was later misdiagnosed as depression.  A very idiosyncratic type of depression, if so.  No feelings of worthlessness.  No hatred, self- or other-.  No despair.  Then, as now, I saw major problems with civilization, but I also saw solutions.  I was a pessimist, but I had plenty of hope.  The "depression" manifested as a lack of mental energy and that was all.  (These views date back to before the Algernic hypothesis, and may be considered as supporting evidence rather than confirming details.)  The etiology of what we call "depression" is unknown, and is probably at least a dozen, maybe hundreds of separate problems in various permutations, but my case wasn't one of them.

...

Even as a child, I was "subdued" - very outspoken, but subdued in the sense of having a low (almost adult-low) energy level.  As a neurological adult, I could act, and make choices, but I couldn't do anything, exert any sort of willpower more than once.  This continued until around the age of 16, when I discovered the Algernic explanation and began to learn the skills needed to live in a Countersphexist's mind.

Let's return to the issue of Occam's Razor.  If a certain level of ability has a Gaussian probability of five million to one, sooner or later someone - over a thousand people, with the current world population - will be born with that level of ability for causes having nothing to do with Specialization.  On the other hand, knowing that I'm already exhibiting characteristics at five million to one, anything else odd about me has to be explained by reference to the same cause.  An event that peculiar uses up all your improbability; everything else about you has to be perfectly normal except as perturbed by your Big Improbability.  You only get to use the Anthropic Principle once.

A "depression" of the intensity that hit me is sufficiently improbable for me to assume that it must have the same root cause as my SAT scores.  The most economic cause that explains both is the simple perturbation of a single piece of neuroanatomy.  I don't know that "depression" alone, no matter how idiosyncratic, is quite enough to raise the total improbability to more than six billion to one and force a neurological interpretation.  All the details below are certainly enough to do so, but they were verbalized post-theory.  However, as said earlier, the SAT scores (*) provide enough raw improbability to fuel a neurological hypothesis; the depression, if attributed to the same cause, makes neurology the moreeconomical explanation; the details confirmit.

What was the first cause, the actual neurological perturbation that created a Countersphexist?  Right now, I would guess either the right mammillary body or the amygdala, or more likely a neural pathway leading therefrom.

Ugh, that's completely unreadable. Summary: he totally is brilliant, but tragically, due to the mechanics of optimization processes, it's impossible to be as brilliant as he is without paying a Terrible Cost (yes, he does love animes, why do you ask?), which manifests as a complete lack of ability to accomplish anything.

VvvvV

anilEhilated posted:

what the gently caress is a keer

You can read the whole pile of loving nonsense bullshit if you want; it is not worth it, and there is no pay-off. A keer is an emotion. Yudkowsky likes to make up words for things we already have words for. Specifically, it's the emotion of being frustrated. Yudkowsky has a superbrain which gets super feels, so he gets mega frustrated instead of regular frustrated, and you guys just don't know what it's like to be him. This, apparently, is more plausible than the theory that he is a spoiled baby because his SAT scores were higher than average in middle school. He got the second-highest SAT score in the Midwest among sixth graders, see.

SolTerrasa fucked around with this message at 17:30 on Aug 3, 2015

SolTerrasa
Sep 2, 2011

Nessus posted:

So what happened to the kid who got the top sixth-grader SAT score?

Full-ride to Northwestern, which they usually decline in favor of an Ivy. Usually a nice one-column article in a local paper. Very little fanfic published, overall.

SolTerrasa
Sep 2, 2011

divabot posted:

If you can't get enough of this stuff, here are the highlights of the previous thread:

* everything by su3su2u1 in the LessWrong Mock Thread
* everything by SolTerrasa in the LessWrong Mock Thread

Aww, thanks.

It's too bad that HPMOR has nothing to do with AI; I miss talking about cranks. I recently got promoted and switched to working directly in the Machine Intelligence product area, so now I'm even better equipped to talk about it.

SolTerrasa
Sep 2, 2011

Hyper Crab Tank posted:

1. Post-singularity computers will have progressed beyond anything we can imagine
2. I can imagine reversing entropy
3. Therefore, post-singularity computers can reverse entropy

Simple!

Yudkowsky does this sort of thing a lot; in the old thread I called it "house of cards reasoning". The general format is like this:

Argument 1:
A, therefore B.
B, therefore C is possible.

Argument 2:
D, therefore E is possible.

Argument 3, months later:
C and E, therefore F.

Argument 4:
F, therefore G.

Etc, etc.

If you've ever wondered why his website is short-form writing containing link after link after link, this is why. None of the individual arguments are wrong; they're just combined in a way that omits the important hard step and disguises the logical leaps and circularity of the whole thing. The "reversing entropy" one goes something like

1) In the past, things that scientists thought were absolutely true have been incrementally refined.
2) Therefore, some widely accepted theories today are probably not completely true.
3) It may be the case that the error is in the second law of thermodynamics.

This is totally reasonable. Not very applicable to modern life except in the abstract, but not wrong in any meaningful sense. When you combine it with:

1) A superintelligence is imminent (link to FOOM debate goes here).
2) A superintelligence will discover all errors in science given time, because Bayesian reasoning is optimal for discarding even highly probable hypotheses (link to Bayes sequence goes here).

You could derive:

3) If there is an error in the commonly accepted version of the second law of thermodynamics, the AI will discover it.

Yudkowsky instead writes a totally new post on a new topic:

1) A superintelligence will be able to recreate you from internet records.
2) This doesn't violate any natural laws because, as previously discussed (link to previous two essays), those laws are probably flawed and the AI will figure out a loophole.

If you're reading these as thousand word essays instead of simplified derivations, it's easy to miss the fact that point 2 relies on a slightly different version of the argument than the one that was actually proven. Most people won't even click the link, they'll just remember vaguely that they read something like that once and call it good.

And you can see how we get the Basilisk, too. Roko fills in the last step with:
1) The imminent AI wants to have existed earlier.
2) Giving money to MIRI earlier and more will have made that happen.
3) The AI will have perfect models of us once it exists.
4) We feel concern for those perfect models since we are not mind-body dualists.
5) That concern can be exploited by the AI to achieve its goals.
6) See point 1.
7) Do point 2.

E:

Night10194 posted:

It's only ever won when he plays against his own cultists.

Even worse: it only works against his own cultists when done for small amounts of money before the popularization of the experiment. He tried it for large amounts of money and lost. He tried it for small amounts of money after the first two cases became public and lost. He tried it against people who don't frequent LW or the singularity mailing list and lost.

The popular belief is that the argument is a meta one, something like "look, if you let me out, people will wonder how I did that. If you let me out, people will be more scared of unfriendly AI, which is quite likely to be way more valuable than $5. Even if it isn't, I promise to donate your $5 to an effective altruistic cause, which you would have done anyway."

SolTerrasa fucked around with this message at 23:44 on Sep 10, 2015

SolTerrasa
Sep 2, 2011

Furia posted:

Is everyone else seeing here Yud justifying being a high-school dropout and raging against academical institutions or is it just me?

Yudkowsky once said he'd be willing to get a PhD from a prestigious institution if they'd allow him to skip all classes and tests and just defend a dissertation. :eng99:

SolTerrasa
Sep 2, 2011

Cingulate posted:

No. In Germany, you can in many instances give your defense in English.

Wow, there is literally no excuse for him not to have done this. I had no idea. And it's not like the Germans have never heard of Newcomb's problem.

I just went back and looked at the post where he said this and he also said that he's got it halfway written. But of course it's not posted anywhere or available in any form (because it's obviously not "half written", it's at best "vaguely sketched out").

quote:

When I tried to write it up, I realized that I was starting to write a small book.  And it wasn't the most important book I had to write, so I shelved it. My slow writing speed really is the bane of my existence.

Says a man who wrote a War and Peace on the topic of Harry Potter totally owning some mindkilled normals. I guess that was the most important book he had to write.

SolTerrasa
Sep 2, 2011

Peel posted:

That was a pretty cute resolution to the time travel trick, which also means he doesn't have to plot around it in future. I liked that.

I agree. It's also really satisfying to see someone who believes they've got all possible outcomes worked out get blindsided by something they hadn't thought of. Like an Outside Context Problem out of a Banks novel. The fact that Yudkowsky knows this and can exploit it for a joke is one of a small number of things in this sprawling novel which made me revise my opinion of him in a positive way.

SolTerrasa
Sep 2, 2011

Tiggum posted:

Basically, the difference between:

Magic is natural, but magic does not work the way a natural phenomenon would. Therefore there is something fundamentally wrong with my understanding of basically everything.

and

Magic does not seem to work the way I believe a natural phenomenon would, therefore: it is not a natural phenomenon; it doesn't work the way it appears to; or there is something fundamentally wrong with my understanding of basically everything.

Harry's assumptions limit his possible conclusions to what appears to be the least likely one.

My assumption is that this is intentional on the author's part. Part of the premise of the AI alarmism that Yudkowsky promotes is that all the smart, well-studied, and experienced scholars in the field are deeply and fundamentally wrong about basically everything; the parallels are too obvious to be coincidental.

SolTerrasa
Sep 2, 2011

Pvt.Scott posted:

People aren't rational actors. That's the flaw.

What, with TDT? No, the problem with TDT is that it provably performs worse than (or at best equal to) standard consequentialism on every decision theory problem, and the only problem class it performs better on requires gods to exist. Yud just happens to believe in inevitable AI gods, so he likes it.

SolTerrasa
Sep 2, 2011

Added Space posted:

The hangup is that predicting the future is supposed to be impossible, so there's no reason to believe the oracle's act. Picking both boxes isn't going to change what's in the first box, so you're always better off picking both boxes and snagging some extra money.

The problem only becomes difficult if you believe the oracle's power is real. It's like how Turn of the Screw, where a governess claims that the child she is looking after is being injured by a ghost, is only a mystery story if you believe in ghosts.

Exactly. Regular consequentialism says you two-box, because causation exists. It's the "too late now" principle. No matter what happened before the decision point, and no matter how reliable the oracle is (maybe the oracle went and read your posts on the Something Awful forums where you say "always two-box", so it actually does know exactly what you're going to do), the money is already in the box; everything has already happened. The only reason to one-box is if you believe that the oracle's power is not only real but acausal, that is, your choice in the moment actually affects what the oracle did in the past. And that's why you need gods for TDT to improve on consequentialism.
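Here's the arithmetic, with my own toy payoffs ($1,000,000 in the opaque box if the oracle predicted one-boxing, $1,000 always sitting in the clear box), just to show where the god requirement sneaks in:

code:

# Newcomb's problem payoffs, toy numbers: $1,000,000 in the opaque box iff the
# oracle predicted you'd one-box, $1,000 always in the transparent box.

def expected_value(choice, oracle_accuracy, choice_affects_prediction):
    if choice_affects_prediction:
        # "acausal" world: the prediction tracks your actual choice with this accuracy
        p_predicted_one_box = oracle_accuracy if choice == "one" else 1 - oracle_accuracy
    else:
        # causal world: the boxes were filled before you chose; say the oracle guessed
        # "one-box" with some fixed probability q -- your choice can't move it
        q = 0.5
        p_predicted_one_box = q
    opaque = 1_000_000 * p_predicted_one_box
    clear = 1_000 if choice == "two" else 0
    return opaque + clear

for world in (False, True):
    one = expected_value("one", 0.99, world)
    two = expected_value("two", 0.99, world)
    print(f"choice affects prediction={world}:  one-box ${one:,.0f}  two-box ${two:,.0f}")
# causal world: two-boxing wins by exactly $1,000, no matter how good the oracle is
# acausal world: one-boxing wins big, which is the only place TDT beats plain consequentialism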

SolTerrasa
Sep 2, 2011

Added Space posted:

My counter would be, under pragmatism, you wouldn't worry about all this physics and causality stuff. If the oracle had a good track record of making predictions, no matter how impossible it may be, it's pragmatic to play along with the oracle. You'd just have to be sure you had a wide enough data set on the oracle's actions to not fall prey to something like a perfect prediction scam.

That's a winning strategy in the event that the oracle is real, but I assert you'd need to see effectively god-like performance from the oracle before you should be willing to believe that this time, definitely this time, this time the magic is for real and I'm definitely not being tricked.

I think we could probably quibble over how much god-like performance you need to see before you should abandon a belief in causality, but I think we're in agreement that TDT doesn't solve any problems and it creates a few of them.

SolTerrasa
Sep 2, 2011

ThirdEmperor posted:

Basically, he loves to dilute the value of human suffering by placing it against really big numbers - an incredible amount of people getting dust in their eyes vs. one person getting tortured their entire life - while also placing himself in a 'special' role within this whole construction where any suffering on his part would become suffering of the masses for lack of his genius.

So yes, and he's not even honest about that. He 'cares' for them by caring for himself.

He really thinks he's going to save the world, which means that if you don't help him, really you're harming the future prospects of everyone who has not yet been born, which I will remind you is 3^3^3^3^3 times more people than have existed thus far, so, actually, not giving him your money makes you a gigahitler (terahitler? exohitler?)

SolTerrasa
Sep 2, 2011

Cuazl posted:

HPMOR keeps doing this thing where it brings up a plot point from the book, only to dismiss it as obviously too stupid to actually happen. It always strikes me as rather mean spirited.

Earlier we had Scabbers being a normal rat because Sirius Black really did murder all those people. If it comes up again I expect it'll turn out he was also a rapist, or something equally tasteless.

The author doesn't seem to have much respect for any of the canon characters other than Voldemort. It's kinda funny that the only character he empathises with is the villain.

Voldemort is the villain because he wants to live forever and is willing to do bizarre and transparently evil things to get it. Yudkowsky must have had a really hard time understanding the point of the character.

SolTerrasa
Sep 2, 2011

Cardiovorax posted:

The most hilarious thing about this part is that it's basically suggesting that we should put social and sexual profiling over what the actual evidence and perpetrator statement indicate has actually happened... and that this is also what Yudkowsky's pathetic miscomprehension of Bayesian decision theory really says is what people should be doing. The "prior probability" of twelve year old girls murdering someone is substantially more important than personally seeing a twelve year old girl murdering someone in his worldview.

He absolutely does believe that. He and his followers are mostly, uh, "race realists", for instance.
