Seraphic Neoman
Jul 19, 2011


I really cannot believe you're actually doing this.

And that about-the-author page is a little too generous, so let me crash it back down to reality. Effortpost go.

Eliezer Yudkowsky is not an AI researcher; he's merely a dude who runs a blog where he tries to convince everyone that Bayesian probability is literally the best loving thing ever (and though it is indeed a powerful area of mathematical probability, it is easy to misuse, and that is exactly what Yudkowsky does). Big Yud never finished high school and never finished college. He claims he totally could, you guys, but is "saving up his intellect" for something more dire. Instead he chose to create a site called Less Wrong, which analyzes possible AI behavior (so long as the AI uses logic based on Bayesian probability. Wait, who says people will create a Bayesian-probability-based AI? What are you, a moron? :smug:), though in reality the site is more about its own insular culture, where nerds invent terms to sound smart and argue about mundane poo poo. Since this is the largest pseudo-intellectual circlejerk to have ever graced the internet, they dislike it fiercely when people critique their work, even though it has more holes than Swiss cheese.

Lemme give you an example:

quote:

Now here's the moral dilemma. If neither event is going to happen to you personally, but you still had to choose one or the other:

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

I think the answer is obvious. How about you?

Oh yeah torture and big numbers (3^^^3 is the same thing as 3^3^3^3 which is the same thing as "a meaninglessly big number"). Big Yud and Less Wrong love torture and big numbers :awesome:
So as you would expect from socially-stunted intellectual masturbators the answers were as follows:

quote:

Wow. The obvious answer is TORTURE, all else equal, and I'm pretty sure this is obvious to Eliezer too. But even though there are 26 comments here, and many of them probably know in their hearts torture is the right choice, no one but me has said so yet. What does that say about our abilities in moral reasoning?

quote:

Torture,

Consider three possibilities:

(a) A dust speck hits you with probability one, (b) You face an additional probability 1/(3^^^3) of being tortured for 50 years, (c) You must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.

Most people would pick (c) over (a). Yet, 1/( 3^^^3) is such a small number that by blinking your eyes one more time than you normally would you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/( 3^^^3). Thus, (b) must be better than (c). Consequently, most people should prefer (b) to (a).

and Big Yud chimed in with

Big Yud posted:

I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.

Some comments:

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words it has to be effectively flat. And I doubt they would have said anything different if I'd said 3^^^^3.

If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person. The only person who could be "acclimating" to 3^^^3 is you, a bystander who is insensitive to the inconceivably vast scope.

Scope insensitivity - extremely sublinear aggregation by individuals considering bad events happening to many people - can lead to mass defection in a multiplayer prisoner's dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today but this causes the world to get warmer by 0.000001 degree Celsius. This poses very little annoyance to any individual, and my utility function aggregates sublinearly over individuals, so I conclude that it's best to go skydiving. Then a billion people go skydiving and we all catch on fire. Which exact person in the chain should first refuse?

I may be influenced by having previously dealt with existential risks and people's tendency to ignore them.

It's like amateur hour in high school philosophy. Let's take an individual and torture them for 50 years because that causes 5000 pain points, while a dust speck only causes 0.01 pain points; but when you SHUT UP AND MULTIPLY (an actual Less Wrong term, and the single dumbest loving thing they ever made) that gives us 10000000.....insert long list of zeroes here....0000 pain points! So single-person torture is good! Forget about counting the aftereffects, how much life this person will miss out on due to 50 loving years of torture; if you do the math it all squares up! :v:
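
(For the curious, here's the whole "shut up and multiply" argument as a quick sketch, using the made-up pain-point numbers above. The victim count is a stand-in, since 3^^^3 can't actually be written down, which only makes the comparison more lopsided.)

code:

# A minimal sketch of the naive "shut up and multiply" aggregation mocked
# above. The pain-point figures are this post's own made-up numbers, and
# NUM_SPECK_VICTIMS is a placeholder: the real 3^^^3 is far too large to
# represent in any computer.
TORTURE_PAIN = 5_000            # one person, 50 years of torture
SPECK_PAIN = 0.01               # one person, one dust speck
NUM_SPECK_VICTIMS = 10 ** 100   # stand-in for 3^^^3

total_speck_pain = SPECK_PAIN * NUM_SPECK_VICTIMS

# Naive utilitarian sum: whichever option has less total pain "wins".
if TORTURE_PAIN < total_speck_pain:
    print("SHUT UP AND MULTIPLY says: torture the one guy")
else:
    print("SHUT UP AND MULTIPLY says: dust specks")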

Other bullshit:
His "organization" basically has only has one person (himself) and the money goes to sponsor himself and the site.
Yudkowsky is fiercely afraid of death and these ideas will show up in the book. Anyone who tried to make a statement about accepting death on Less Wrong is labeled a "deathist"

And for those thinking "well, maybe he is right, I mean, are you an AI researcher?" I have this quote from a goon who is an AI researcher:

SolTerrasa posted:

Effortpost incoming, but the short version is that there are so many (so many) unsolved problems before this even makes sense. It's like arguing what color to paint the bikeshed on the orbital habitation units above Jupiter; sure, we'll probably decide on red eventually, but christ, that just doesn't matter right now. What's worse, he's arguing for a color which doesn't exist.

[...]

Okay, so long version here, from the beginning. I'm an AI guy; I have an actual graduate degree in the subject and now I work for Google. I say this because Yudkowsky does not have a degree in the subject, and because he also does not do any productive work. There's a big belief in CS in "rough consensus and running code", and he's got neither. I also used to be a LessWronger, while I was in undergrad. Yudkowsky is terrified (literally terrified) that we'll accidentally succeed. He unironically calls this his "AI-go-FOOM" theory. I guess the term that the AI community actually uses, "Recursive Self-Improvement", was too loaded a term (wiki). He thinks that we're accidentally going to build an AI which can improve itself, which will then be able to improve itself, which will then be able to improve itself. Here's where the current (and extensive!) research on recursive-self-improvement by some of the smartest people in the world has gotten us: some compilers are able to compile their own code in a more efficient way than the bootstrapped compilers. This is very impressive, but it is not terrifying. Here is the paper which started it all!

So, since we're going to have a big fancy AI which can modify itself, what stops that AI from modifying its own goals? After all, if you give rats the ability to induce pleasure in themselves by pushing a button, they'll waste away because they'll never stop pushing the button. Why would this AI be different? This is what Yudkowsky refers to as a "Paperclip Maximizer". In this, he refers to an AI which has bizarre goals that we don't understand (e.g. maximizing the number of paperclips in the universe). His big quote for this one is "The AI does not love you, nor does it hate you, but you are made of atoms which it could use for something else."

His answer is, in summary, "we're going to build a really smart system which makes decisions which it could never regret in any potential past timeline". He wrote a really long paper on this here, and I really need to digress to explain why this is sad. He invented a new form of decision theory with the intent of using it in an artificial general intelligence, but his theory is literally impossible to implement in silicon. He couldn't even get the mainstream academic press to print it, so he self-published it, in a fake academic journal he invented. He does this a lot, actually. He doesn't really understand the academic mainstream AI researchers, because they don't have the same :sperg: brain as he does, so he'll ape their methods without understanding them. Read a real AI paper, then one of Yudkowsky's. His paper comes off as a pale imitation of what a freshman philosophy student thinks a computer scientist ought to write.

So that's the decision theory side. On the Bayesian side, RationalWiki isn't quite right. We actually already have a version of Yudkowsky's "big improvement", AIs which update their beliefs based on what they experience according to Bayes' Law. This is not unusual or odd in any way, and a lot of people agree that an AI designed that way is useful and interesting. The problem is that it takes a lot of computational power to get an AI like that to do anything. We just don't have computers which are fast enough to do it, and the odds are not good that we ever will. Read about Bayes Nets if you want to know why, but the amount of power you need scales exponentially with the number of facts you want the system to have an opinion about. Current processing can barely play an RTS game. Think about how many opinions you have right now, and compare the sum of your consciousness to how hard it is to keep track of where your opponents probably are in an RTS. Remember that we're trying to build a system that is almost infinitely smarter than you.

Yeah. It's probably not gonna happen.
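
(To make SolTerrasa's scaling point concrete, here's a toy sketch of my own, not his: exact Bayesian inference by brute-force enumeration. The loop over all 2**n joint assignments is exactly the exponential blow-up he's describing; the three-variable "net" is made up purely for illustration.)

code:

from itertools import product

def enumerate_inference(n_vars, joint_prob, query_index):
    """P(X_query = True) by brute-force enumeration of the full joint."""
    num = den = 0.0
    for assignment in product([False, True], repeat=n_vars):  # 2**n_vars rows
        p = joint_prob(assignment)
        den += p
        if assignment[query_index]:
            num += p
    return num / den

def toy_joint(a):
    # Unnormalized joint over three made-up binary facts; enumeration
    # normalizes it anyway. Every extra fact doubles the work.
    rain, sprinkler, wet = a
    p = 0.2 if rain else 0.8
    p *= 0.1 if (rain and sprinkler) else 0.5
    p *= 0.9 if ((rain or sprinkler) == wet) else 0.1
    return p

print(enumerate_inference(3, toy_joint, query_index=2))  # P(grass is wet)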

Seraphic Neoman fucked around with this message at 20:45 on Feb 21, 2015


Seraphic Neoman
Jul 19, 2011


Arcsquad12 posted:

Is it normal for me to look at that quote of word vomit and say "the gently caress did any of that even mean?"

Yes. Less Wrong comments are infamously impenetrable to outsiders (to everyone, really). Not only do they use big words and scientific terms, they like to toss in their own made-up concepts to muddle everything further.
And they use terms incorrectly to boot. The scenario with the skydivers is not a Prisoner's Dilemma. Yes, there are some similarities, but that doesn't make them the same thing.

Seraphic Neoman fucked around with this message at 20:57 on Feb 21, 2015

Seraphic Neoman
Jul 19, 2011


Scintilla posted:

In other words it's a bloated mess about one and a half times longer than a Wheel of Time entry, or ten times the size of an average novel. Fun times are ahead! :allears:

This is Highfort in :spergin: novel form.

Seraphic Neoman
Jul 19, 2011


petrol blue posted:

Harry is the author's best guess at what the author to a fellow human is like.

FTFY

Seraphic Neoman
Jul 19, 2011


^^It's a "Rosencrantz and Guildenstern Are Dead" type of critique, particularly since Rowling sort of makes a point of keeping a large roster of characters going (and then killing many of them in the last book). It's still kind of an absurd mystery (disguised as criticism) but at the very least it fits with the...spirit, I guess...of the novels. This dreck certainly does not.
EDIT: On further reading I take it back, the story becomes a mystery in its own right. It even covers a few plot holes Rowling left in.

Yeah, who honestly gave a poo poo about the banking system? It was there to be cool and to say "this kid has money, our plot moves along."

poo poo I just thought of what would happen if Big Yud gave Three Parts Dead this sort of treatment. It'd be totally insufferable. That book also loves its tangents, but those are actually interesting as opposed to this.
He'd probably make me wanna strangle Tara, even if she was originally a good protagonist :(

Seraphic Neoman fucked around with this message at 22:51 on Feb 24, 2015

Seraphic Neoman
Jul 19, 2011


Night10194 posted:

I would no poo poo read a story about a magical banker who helps run a wizard bank and goes on banking adventures, though. Exploring what the 'normal' world is like for wizbiz would be lots of fun, because much like school, I imagine their banking is full of hilarious misadventures.

Terry Pratchett's Making Money is sort of this.

The Craft Trilogy is this but about equity (except instead of money it's your soul)

Seraphic Neoman
Jul 19, 2011


quote:

Again, seems like the author is speaking of himself in this case. Has he claimed to be a child prodigy and/or a genius in his other writings?

Yuuuuuuuuuuuuuup.

quote:

I think that an evil child genius plotting to take over the world may in the right circumstances be entertaining, but this particular smug know-it-all Harry is just so unpleasant to read that it sucks all the potential joy out of the story.

...Wow this sentence just made it click for me.

This fanfiction is basically an insufferable, low-rent version of Artemis Fowl. Holy poo poo how did I never notice this? :aaaaa:
Then again, not even Art was THIS much of a shithead.

Seraphic Neoman
Jul 19, 2011


How do you know what a Death Eater is, Harry?

Seraphic Neoman
Jul 19, 2011


Harry is a spoiled brat no matter what he uses to justify this purchase. Especially since McGonagall said he could get a first-aid kit at Hogwarts. This sob story and the preceding :spergin: aren't enough to justify his stubbornness.

Seraphic Neoman
Jul 19, 2011


Holy poo poo stop it, you little poo poo!
Why does everything have to be a problem for this kid?

Seraphic Neoman
Jul 19, 2011


i81icu812 posted:

LotR: Words: 455,125
Atlas Shrugged: Words: 561,996
War and Peace: Words: 587,287
All seven Harry Potter books: 1,084,170 Words

This is why I told JWKS that this was a bad idea. Even if this poo poo was good, it's long as gently caress.

quote:

Adults don't respect me enough to really talk to me. And frankly, even if they did, they wouldn't sound as smart as Richard Feynman, so I might as well read something Richard Feynman wrote instead. I'm isolated, Professor McGonagall. I've been isolated my whole life. Maybe that has some of the same effects as being locked in a cellar. And I'm too intelligent to look up to my parents the way that children are designed to do.

gently caress you, Yudkowsky. If I ever needed proof that you are a pseudo-intellectual shitlord, this would be one of my exhibits.
Richard Feynman was famous for being extremely inquisitive and conversational. He had been interested in physics since childhood, be it water waves, radio signals or light switches. But he had interests everywhere else too: bongo drums, the Japanese language, travel, safe-cracking. The man had an extremely inquisitive mind and loved learning about everything ever. Yeah, okay, he was somewhat of a rear end in a top hat, but he never considered others to be beneath him. The man was an -excellent- teacher; people had to get tickets to his lectures, and that poo poo sold out fast. He was REALLY GOOD at explaining things, and one of his greater disappointments was his inability to explain the physics of fire to his dad. Hell, one of the reasons he is famous is his Feynman Diagrams, which were used to explain the movement of sub-atomic particles iirc. He made quantum mechanics a lot easier to understand and much more approachable for everyone.

Richard Feynman didn't try to "sound smart." Whenever he presented knowledge, he used conversational speech, only dipping into scientific jargon when necessary. You don't know what the gently caress you're talking about.

And then, in the same loving breath, you say that "I'm isolated...locked in a cellar...too intelligent". I don't have an :ironicat: big enough.

Seraphic Neoman fucked around with this message at 17:44 on Mar 10, 2015

Seraphic Neoman
Jul 19, 2011


Every time people say Quidditch is stupid I'd like to remind them that American Football, Rugby and Cricket exist, the rules of which are all endlessly more convoluted (I will admit that the snitch is pretty egregious though).

Seraphic Neoman
Jul 19, 2011


IronClaymore posted:

Maybe Yudkowsky will actually have time to put his money where his mouth is now and actually build an AI that outdoes the Roomba (a device he disparages, but which does far more than he does to advance the cause of artificial intelligence, thanks partially to cats and Youtube). And maybe he'll finally get therapy? I hope he gets therapy.

You may be sardonic when you say this, but this is probably the route humanity will take when it comes to making AI. Giving a computer human-like intelligence and extreme decision-making skills is not only currently impossible, it's also pointless. People would much rather make a computer that does one job extremely well than one computer that does everything. Why would I want my box-lifting robot to be able to debate me in philosophy? I just want it to lift boxes and to not hurt anyone while doing so.

Unfortunately, this approach means that Yudkowsky's "research" would be rendered moot, so he hates this idea.

Seraphic Neoman
Jul 19, 2011


I just can't get into this story. It's like Harriezer read the entire series, and is now making decisions based on what happens in the novels (and if you say "perfect decision theory" a) gently caress you and b) that would mean that Harry is a robot)
He's 10 years old and was ignorant of magic until just a few weeks ago, but in the span of several days he has picked up enough information to discuss wizard politics.

EDIT: OH. I just remembered what line will come up. Hoooooooooooooooooo boyyyyyyy...

Seraphic Neoman
Jul 19, 2011


That's okay! Eliezer wrote his own MLP fanfiction :unsmigghh: It features a perfect AI seducing the human race into a VR world. The protagonist gets an achievement for taking his partner's virginity.

You're welcome/I'm sorry.

VV Oh was it? Ah well.

Seraphic Neoman fucked around with this message at 06:56 on Mar 18, 2015

Seraphic Neoman
Jul 19, 2011


sarehu posted:

If you think rape's the worst thing people treat casually in the Harry Potter universe then I've got some unicorn blood to sell you.

What are your views on Bayesian AI?

Seraphic Neoman
Jul 19, 2011


Legacyspy posted:

But what? How is that relevant? Nessus seems to be saying that Harry was supposed to be an uber-rationalist, and isn't, and is instead like Artemis Fowl, so this is a failing of the story. I want to understand whether or not that is actually what Nessus is saying, because that seems ridiculous. If Harry fails at being an uber-rationalist, then obviously he wasn't intended to be an uber-rationalist. Your comment has no bearing on that.

Ah, but you see, he was! We're supposed to think the science is real. We're supposed to think he is uber-rationalistic. The author pretty much flat out admitted this point. If the story fails to deliver on that, it's a bad story. And it fails to deliver.

Legacyspy posted:

Also why is his behavior so upsetting that this term keeps getting used? I want to understand why Harry is so upsetting to people. Repeatedly saying he is smug or a douche... just tells me that his behavior is upsetting to you. It doesn't explain why that is the case.

His behavior is upsetting because he is portrayed not as intelligent but as insufferable and borderline psychotic. Here is a child whose first idea on how to get more info from a relatively kind adult is to blackmail her. He arrives at this conclusion through his own internal logic and decides he's totally okay with ruining another person's life to get information that is not immediately necessary. And the book is on his side. I have read ahead; this incident is pretty much forgotten and forgiven. He cares little for anyone other than himself, except for the few moments Yudkowsky remembers to humanize him.
Case in point: here Harry wants to bring justice to this world, but a few chapters ago he was planning to smash its economy. It's madness.

Even if he weren't a sociopath, he would still be insufferable. He name-drops concepts and goes on :spergin: infodumps at the drop of a hat while condescendingly talking down to adults who are remarkably patient with his poo poo. He knows little about this world except when he's suddenly aware of everything and casually using terms (Death Eaters, etc...) as though he had known them for years. And despite priding himself on being a rational person, he throws tantrums at the slightest provocation. Even if we weren't meant to like him, he would still be a loving annoying character.

Seraphic Neoman
Jul 19, 2011


Night10194 posted:

I had a professor once tell me the Enlightenment was the first time in human history anyone had ever had the idea that maybe rulers should be chosen because they're intelligent, virtuous, and skilled. Seriously. I mean, this poo poo is pretty wide-spread. "All past eras were stupid and we are the best forever." is a very comforting delusion.

So basically Philosopher Kings? :v:

Seraphic Neoman
Jul 19, 2011


Yeah, this part is Yud's hypocrisy at full force. He sort of understands that science doesn't have instant answers. He just expects science to have instant access to answers. Just do a few experiments and arrive at your conclusion? Why is it so hard, right? :smug:

And holy hell am I reminded of this rant whenever he opens his mouth: http://www.zippcast.com/video/2c5c07ff61c33b4a05a

Seraphic Neoman
Jul 19, 2011


no guys this is called negging my friend told me all about this no wait where are you going

Seraphic Neoman
Jul 19, 2011


I will defend my position but I need JWKS to get to the later parts of this dreck before I can. That said,

Legacyspy posted:

Naw. I have two motivations. I wanted to understand why people dislike Harry so much, and initially people were going "He is infuriating because he is irritating," which doesn't explain much; there have been better explanations since then. The second is that I honestly like hpmor & Eliezer. I'd rate it a 7/10 and I do not think the hate against Eliezer is warranted. I think a lot of it stems from people not understanding what he writes (due to little fault of his). Like the guy on the first page who, when speaking of the torture vs dust specks, says "3^^^3 is the same thing as 3^3^3^3" despite the fact that in the very post where Eliezer raised the discussion, he explained the notation before even getting to the torture & dust specks.

Right, my mistake. It's actually 3^^7625597484987, which is the godawful number you get when you stack a power tower of 3s 7,625,597,484,987 levels high, which is still the same thing as "a meaninglessly big number," which means my point still stands.
There is no reason to involve numbers in this problem; it's a problem of philosophical morality, unless you're a smartass who wants to flash his cock. Yud could have asked "Should one person get tortured for a decade, or should every other person on Earth get a grain of sand in their eye?" But that wasn't nerdy enough, so we get this poo poo instead.
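
(Since the notation keeps coming up, here's a quick sketch of Knuth's up-arrow definition, just to pin down what 3^^^3 actually means. Don't try to evaluate it for 3^^^3; it would never finish, which is rather the point.)

code:

def up(a, b, arrows):
    """Knuth up-arrow notation: a, then `arrows` up-arrows, then b."""
    if arrows == 1:
        return a ** b          # one arrow is plain exponentiation
    if b == 0:
        return 1
    return up(a, up(a, b - 1, arrows), arrows - 1)

print(up(3, 3, 1))  # 3^3  = 27
print(up(3, 3, 2))  # 3^^3 = 3^(3^3) = 7625597484987
# up(3, 3, 3) would be 3^^^3: a power tower of 3s that is
# 7,625,597,484,987 levels tall. Don't run it.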

But you know what fine, whatever. Lemme explain why Yud's conclusion is nuts.

So there is a branch of philosophy called utilitarianism. The basic premise (in broad strokes) is that it wants to achieve the greatest amount of happiness for the greatest number of people.
In Yud's PHILO101 he blasts the problem with a blunt solution. One dude is tortured for 50 years, or someone is inconvenienced (i.e. gets the dust speck) for like one second? Well, obviously we'd pick option 2. But when you SHUT UP AND MULTIPLY our second by that "meaninglessly big number," suddenly having one dude tortured for 50 years doesn't sound that bad, right? BEEP BOOP PROBLEM SOLVED GET hosed CENTURIES OF PHILOSOPHY

Well, of course not. There are other factors involved when you realize that we are dealing with a being who has a life, willpower, the ability to feel pain and all that other good poo poo. We are taking 50 years out of a person's life and replacing them with pain and misery, instead of inconveniencing a lot of people for an insignificant amount of time. This logic, by the way, still equates torture with the pain caused by dust specks; I am still playing by Yud's rules despite the fact that the two are obviously not equal. You are crushing a person's dreams and ambitions just for the sake of not bothering a whole lot of people with something they won't even remember. What about the life this poor person is missing out on by going through this torture? All those people won't even remember the speck by the end of the day, but that person will carry his PTSD for the rest of his (no doubt shortened) life, if he's not loving catatonic by the end of the first year.
I am explaining this in-depth because it's a problem that requires you to do so. It is something LW themselves avoid doing, and I hope you are not falling into the same trap.

Yud, by the way, totally dismisses people who point this out, because he misapplies his own concept of scope insensitivity to the solution. He uses an absolutely absurd and unrelated example to prove his point, one which is fantastically different from the problem at hand.

Legacyspy posted:

I'm just using this as a really simple example of how the people who mock Eliezer's writing don't even understand basic things he's written and are mocking him from a position of ignorance.

NO.
I understand what Yud and co. write. What I don't understand, I ask others about until I do. Some of it is profound, but most of it is them re-inventing philosophical wheels. And sometimes they decide that these wheels should be squares instead.
Do you remember Roko's Basilisk? That was a user taking Yud's ridiculous loving AI anecdote to its absurd conclusion. That is the reason (I suspect) why Yud hates it so, because it shows what a house of cards his whole philosophy is.
That is why he hates non-Bayesian AI ideas; because they make him irrelevant.
At one point he was even backed into a philosophical corner by that same "we know in our hearts torture is the right answer" guy and he threw a hissy fit instead of saying "I don't know".
I could go on about their myriad faults, but I am very much not arguing from a position of ignorance. Yud's research center has barely published any papers, and none are helpful or relevant to society. He is pondering an irrelevant problem that will never have a solution, nor does it even need to be solved. And if it does come up, Yud's solution will be wrong. I can explain why in detail if you want me to, but this part is already long as gently caress.

I'm really annoyed that you'd use this defense, because this is the sort of bullshit LW loves. Instead of trying to explain things in simple terms, they use complex ones, or ones they made up. Not only does this go against their own mission statement, it's condescending and disingenuous. People have debunked Yud's ideas, he just doesn't want to admit it (like how he doesn't want to admit that he lost his AI box roleplay four times in a row after two wins, which is why he doesn't offer that challenge anymore). So no, you're wrong.

Legacyspy posted:

Similarly I don't think "friendly A.I" is some sort of crazy idea. It seems pretty reasonable. It is basically: "Are there solutions to problems where the solutions are so complex we can't understand everything about the solution? If yes, how do we build something that will give solutions to problems that won't provide solutions that will conflict with other things we care about?"

Except Yud wants a very specific friendly AI, one that uses Bayesian probability to achieve godlike omnipotence, one that will handle every task ever and can easily lord over our entire society.

Seraphic Neoman fucked around with this message at 08:25 on Mar 24, 2015

Seraphic Neoman
Jul 19, 2011


If it was just a dumb prank, whatever. But the way he tried to justify it with this bullshit:

quote:

"- and after we were done giving him all the sweets I'd bought, we were like, 'Let's give him some money! Ha ha ha! Have some Knuts, boy! Have a silver Sickle!' and dancing around him and laughing evilly and so on. I think there were some people in the crowd who wanted to interfere at first, but bystander apathy held them off at least until they saw what we were doing, and then I think they were all too confused to do anything. Finally he said in this tiny little whisper 'go away' so the three of us all screamed and ran off, shrieking something about the light burning us. Hopefully he won't be as scared of being bullied in the future. That's called desensitisation therapy, by the way."

"I think the word you're looking for is enjoyable, and in any case you're asking the wrong question. The question is, did it do more good than harm, or more harm than good? If you have any arguments to contribute to that question I'm glad to hear them, but I won't entertain any other criticisms until that one is settled. I certainly agree that what I did looks all terrible and bullying and mean, since it involves a scared little boy and so on, but that's hardly the key issue now is it? That's called consequentialism, by the way, it means that whether an act is right or wrong isn't determined by whether it looks bad, or mean, or anything like that, the only question is how it will turn out in the end - what are the consequences."

Just get hosed Yudkowsky. Seriously someone please tell me he gets punched in the face.

Seraphic Neoman
Jul 19, 2011


Yudkowsky's OK Cupid posted:

I happened to be in New York City during the annual Union Square pillow fight, so I showed up dual-wielding two pillows, a short maneuverable pillow for blocking incoming blows and a longer pillow in my right hand for striking. These two pillows were respectively inscribed "Probability Theory" and "Decision Theory"; because the list of Eliezer Yudkowsky Facts, which I had no hand in composing, says that all problems can be solved with probability theory and decision theory, and probability theory and decision theory are the names of my fists.

Seraphic Neoman fucked around with this message at 03:28 on Apr 4, 2015

Seraphic Neoman
Jul 19, 2011


This doesn't seem rational or scientific.

Seraphic Neoman
Jul 19, 2011


Harry is about to get sorted into Torture Simulation number 4564735485.

Seraphic Neoman
Jul 19, 2011


Unfortunately the story doesn't do anything obnoxious for a while, so we're left with long stretches of nothing happening.

Though I am glad this smug poo poo is getting a verbal bruising from a magic hat.

Seraphic Neoman
Jul 19, 2011


I never really "got" Omake. They usually have this weird humor that I never understood, so maybe Yud is right on the money here.
Omake is also when artists have an excuse to post characters in cheesecake outfits, so we no doubt have that to look forward to :sigh:

Seraphic Neoman
Jul 19, 2011


The first two were fine. The third was okay.

It went downhill from there.

Seraphic Neoman
Jul 19, 2011


Albus would be so on-board with this though. "Do it man, do it! Do it! Do it!"
Hell he'd probably say something silly anyway.

Seraphic Neoman
Jul 19, 2011


If this were a competent author, I'd be taking bets on how long until his fellow nerds kick the poo poo out of him.
And Eliezer, gifted-kid programs worked out so well for you in the past, huh?

Seraphic Neoman
Jul 19, 2011



This emote is pretty much HPMoR in a nutshell.

Seraphic Neoman
Jul 19, 2011


I dunno, to me it just sounds like he's trying to excuse himself for having these thoughts. Sadly we kind of fall into a lull for...a rather long time IIRC, where nothing interesting or consequential happens.

Seraphic Neoman
Jul 19, 2011


Hey guys! Do you love stories where nothing of consequence occurs and the protagonist learns no lessons from his misadventures!?

Then whoah nelly have we got something for you!

Seraphic Neoman
Jul 19, 2011


Sure is a lotta science going on this chapter.

Seraphic Neoman
Jul 19, 2011


Tunicate posted:

I like how 75-1=7.

The idea is that he found a note well before he was supposed to.

Seraphic Neoman
Jul 19, 2011


blackmongoose posted:

Based on the recognition code, I'm pretty sure this sequence is dealing with one of the parts of the actual books that actually does deserve some mocking, only the story is using it in an even dumber way somehow. I hope I'm wrong, but all the elements fit so far

It's a little hard to do this considering Yud never read the original books.

Seraphic Neoman
Jul 19, 2011


Yeah, I see what Yud did here. Rowling actually mentioned that she chose Avada Kedavra because it means "let the thing be destroyed." It was apparently an incantation ancient healers used when treating disease, and it was supposed to destroy the illness plaguing the patient.
Not a lotta rationality so far.

Seraphic Neoman
Jul 19, 2011


It's the romhack school of game design.

Seraphic Neoman
Jul 19, 2011


Seriously the thing that annoys me the most is this mishmash of elements from across all the books. It breaks my suspension of disbelief completely because it's so drat jarring.


Seraphic Neoman
Jul 19, 2011


anilEhilated posted:

Somehow the whole thing gets stupider as it goes on.

A succinct summary of Yudkowsky and every contribution he ever made.
