  • Locked thread
The Unholy Ghost
Feb 19, 2011
So I don't get it... is this just a jealousy thread or is there someone else in here who has published respected work? :confused:

Remora
Aug 15, 2010

itshappening.gif

Telarra
Oct 9, 2012

Usually it's a good idea to read at least some of a thread before posting.

Like, any of it at all.

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

Yeah, uh, the only thing of note this guy's published is Harry Potter fanfiction.

That's not hyperbole or a joke or anything.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

The Unholy Ghost posted:

So I don't get it... is this just a jealousy thread or is there someone else in here who has published respected work? :confused:

When I was in seventh grade I wrote a legend of zelda fanfic but got bored halfway through and didn't finish it. This makes me Yudkowsky's equal.

Remora
Aug 15, 2010

Can I ask why you think Yudkowsky has "published respected work"? Because there are some pretty comprehensive arguments in this thread to the contrary.

Absurd Alhazred
Mar 27, 2010

by Athanatos

Remora posted:

Can I ask why you think Yudkowsky has "published respected work"? Because there are some pretty comprehensive arguments in this thread to the contrary.

Some people respect Yudkowsky's work. In that sense it is "respected". Considering who those people are, that's a backhanded compliment at best.

SolTerrasa
Sep 2, 2011


The Unholy Ghost posted:

So I don't get it... is this just a jealousy thread or is there someone else in here who has published respected work? :confused:

At least two, iirc; I'm one, for some value of "respected". My subfield is natural language processing, but any knowledge in AI at all is enough. Anything in particular of Big Yud's that you respect / believe is respected? Legitimately he has some good opinions about some stuff, just that basically none of it is AI.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

I don't think the REU paper I co-authored in undergrad counts as particularly noteworthy or respected, but it is peer-reviewed and published. Yudkowsky's work is only peer-reviewed in the sense that people who post on fanfiction.net are his intellectual peers and have left many reviews.

SolTerrasa posted:

Legitimately he has some good opinions about some stuff, just that basically none of it is AI.

His rare good opinions are mostly cribbed from other, actually-smart people. It's like the old joke about the USSR needing to have two separate papers called "truth" and "news" - where Yudkowsky is right he's not original, and when he's original he's not right. (Most of the time he's neither, attempting to copy the arguments of others but misunderstanding them and screwing them up in the process.)

SolTerrasa
Sep 2, 2011


Lottery of Babylon posted:

His rare good opinions are mostly cribbed from other, actually-smart people. It's like the old joke about the USSR needing to have two separate papers called "truth" and "news" - where Yudkowsky is right he's not original, and when he's original he's not right. (Most of the time he's neither, attempting to copy the arguments of others but misunderstanding them and screwing them up in the process.)

I've heard that before (possibly from you, possibly in this thread? Don't recall). It's probably true, it fits with what I know about Big Yud. Nonetheless, if somebody posted the "Mysterious Answers to Mysterious Questions" sequence and asked for criticism, it'd be hard for me since I fundamentally agree with a few of his points. All I'd be able to make fun of is the verbal diarrhea, Irritating Capitalization, and link spam; the only consistent sin of the content is assuming that his (not obviously wrong, but unpopular) reductionist philosophy is the only valid one.

E: although, holy gently caress do his posts look :smug: now that I look again.

quote:

You can have some fun with people whose anticipations get out of sync with what they believe they believe. This post recounts a conversation in which a theist had to backpedal when he realized that, by drawing an empirical inference from his religion, he had opened up his religion to empirical disproof.

--"BAYESIAN JUDO"

http://lesswrong.com/lw/i5/bayesian_judo/

E2: Oh CHRIST that post is even more insufferable than I thought. I was trying to defend some of your writing, Yudkowsky, why do you do this to me

Here it is in its entirety:

quote:

I was once at a dinner party, trying to explain to a man what I did for a living, when he said: "I don't believe Artificial Intelligence is possible because only God can make a soul."

At this point I must have been divinely inspired, because I instantly responded: "You mean if I can make an Artificial Intelligence, it proves your religion is false?"

He said, "What?"

I said, "Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion."

There was a pause, as the one realized he had just made his hypothesis vulnerable to falsification, and then he said, "Well, I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are."

I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."

He said, "Well, um, I guess we may have to agree to disagree on this."

I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong."

We went back and forth on this briefly. Finally, he said, "Well, I guess I was really trying to say that I don't think you can make something eternal."

I said, "Well, I don't think so either! I'm glad we were able to reach agreement on this, as Aumann's Agreement Theorem requires." I stretched out my hand, and he shook it, and then he wandered away.

A woman who had stood nearby, listening to the conversation, said to me gravely, "That was beautiful."

"Thank you very much," I said.

Yudkowsky, how can you manage to be such a dick that you can make me empathize with the guy who believes that loving souls disprove artificial intelligence?

E3: Christ, I just went back and read even more of Mysterious Answers, it's all about as insufferable as that one. Okay, Unholy Ghost, I am pretty sure that whatever he's written that's respected, it probably shouldn't be. I'm bored tonight, let me know if you want a teardown post about something in particular.

SolTerrasa fucked around with this message at 06:19 on Aug 25, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

SolTerrasa posted:

Yudkowsky, how can you manage to be such a dick that you can make me empathize with the guy who believes that souls disprove artificial intelligence?

As usual, Yudkowsky name-drops big-sounding things to make himself look smart, but does the opposite because he doesn't understand them. Aumann's Agreement Theorem applies only to two Bayesian agents with identical priors. Even if we assume that all people are strictly Bayesian agents, a guy who runs an AI cult definitely doesn't have the same priors as a guy who thinks souls disprove AI, so Aumann's Agreement Theorem is silent.

(Not to mention that "let's agree to disagree" in context means "you're a weird creepy autist and I would rather not waste this dinner party talking with you".)
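
If you want to see just how little the theorem has to say about an exchange like that one, here's a toy sketch (my own illustration, nothing from Aumann's paper, and the coin setup is made up): two ideal Bayesian agents see the exact same evidence, but because they start from different priors they end with different posteriors, and no theorem is violated.

code:

# Toy illustration (mine, not from Aumann's paper): two ideal Bayesian
# agents update on the *same* evidence but start from different priors,
# so they can rationally end up with different posteriors. Aumann's
# theorem only bites when the priors are identical (plus common
# knowledge of the posteriors).
from fractions import Fraction

def posterior_biased(prior_biased, flips):
    """P(coin is heads-biased | flips), where the coin is either fair
    (P(H) = 1/2) or heads-biased (P(H) = 3/4), given a prior on 'biased'."""
    p_biased = Fraction(prior_biased)
    p_fair = 1 - p_biased
    like_biased = like_fair = Fraction(1)
    for flip in flips:
        like_biased *= Fraction(3, 4) if flip == "H" else Fraction(1, 4)
        like_fair *= Fraction(1, 2)
    return (p_biased * like_biased) / (p_biased * like_biased + p_fair * like_fair)

evidence = ["H", "H", "T", "H", "H"]  # both agents see exactly this

cultist = posterior_biased(Fraction(9, 10), evidence)       # strong prior for "biased"
dinner_guest = posterior_biased(Fraction(1, 10), evidence)  # strong prior for "fair"

print(f"cultist:      P(biased | data) = {float(cultist):.3f}")       # ~0.958
print(f"dinner guest: P(biased | data) = {float(dinner_guest):.3f}")  # ~0.220
# Same data, perfect Bayesian updating, different priors -> different
# posteriors. Nothing irrational about the disagreement.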

SolTerrasa
Sep 2, 2011


Lottery of Babylon posted:

As usual, Yudkowsky name-drops big-sounding things to make himself look smart, but does the opposite because he doesn't understand them. Aumann's Agreement Theorem applies only to two Bayesian agents with identical priors. Even if we assume that all people are strictly Bayesian agents, a guy who runs an AI cult definitely doesn't have the same priors as a guy who thinks souls disprove AI, so Aumann's Agreement Theorem is silent.

(Not to mention that "let's agree to disagree" in context means "you're a weird creepy autist and I would rather not waste this dinner party talking with you".)

Thank you for linking that paper. I just read it for the first time, and the author says "we publish this paper with some diffidence, since once one has the appropriate framework, it is mathematically trivial". That's definitely true, now that I've read it.

Correct me if I'm wrong, Lottery, but the framework is this: a) both agents begin with exactly the same beliefs about the world, b) both agents are perfect Bayesian agents, and know this about each other, c) neither agent can lie, and d) neither agent has any systematic bias. Once those conditions are true, no two agents can conclude that they could never come to an agreement. Literally everything about Big Yud's use of that theorem is wrong; none of those conditions hold over two humans. And even if they did, "agree to disagree" in human language does not mean "we can never come to agreement", it just means "we can't agree right now".

I guess the souls guy should have said "our priors are sufficiently different that I do not believe this conversation will conclude with a profitable exchange of data beep boop beep". Leave it to Big Yud to require a paragraph when three words would do.
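
For anyone who wants it spelled out, here's the statement as I understand it from the paper, in my own paraphrase (so double-check the paper itself before quoting this at a dinner party):

code:

% Paraphrase of Aumann (1976), "Agreeing to Disagree"; my wording, not a quote.
% Setup: a common prior $P$ on a state space $\Omega$, private information
% partitions $\mathcal{P}_1$ and $\mathcal{P}_2$, and an event $A \subseteq \Omega$.
\textbf{Theorem.} Let $q_i = P(A \mid \mathcal{P}_i)(\omega)$ be agent $i$'s
posterior probability of $A$ at the true state $\omega$. If the values of
$q_1$ and $q_2$ are common knowledge at $\omega$, then $q_1 = q_2$.

Every hypothesis there (the shared prior, the honest conditioning, the common knowledge of the actual numbers) fails for two strangers at a dinner party.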

Lemur Crisis
May 6, 2009

What will you do?
Where can you run?
And the woman's name was Albert Einstein.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

SolTerrasa posted:

Thank you for linking that paper. I just read it for the first time, and the author says "we publish this paper with some diffidence, since once one has the appropriate framework, it is mathematically trivial". That's definitely true, now that I've read it.

Correct me if I'm wrong, Lottery, but the framework is this: a) both agents begin with exactly the same beliefs about the world, b) both agents are perfect Bayesian agents, and know this about each other, c) neither agent can lie, and d) neither agent has any systematic bias. Once those conditions are true, no two agents can conclude that they could never come to an agreement. Literally everything about Big Yud's use of that theorem is wrong; none of those conditions hold over two humans. And even if they did, "agree to disagree" in human language does not mean "we can never come to agreement", it just means "we can't agree right now".

I guess the souls guy should have said "our priors are sufficiently different that I do not believe this conversation will conclude with a profitable exchange of data beep boop beep". Leave it to Big Yud to require a paragraph when three words would do.

Yudkowsky might argue that d is not a valid objection, as his claim was "no two rationalists can agree to disagree", and a true enlightened rationalist (such as himself) is obviously immune to your human systematic biases because Bayes' rule something something shut up and multiply.

But yes, your understanding of the theorem, its limitations, and basic human social interaction is correct.

In high school and early undergrad, they tend to drum into you pretty heavily that a proposition's ifs are just as important as its thens. I wonder if Yudkowsky's habit of invoking theorems whose prerequisites have not been satisfied is a consequence of his being a dropout.

Lottery of Babylon fucked around with this message at 07:02 on Aug 25, 2014

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

SolTerrasa posted:

At least two, iirc; I'm one, for some value of "respected". My subfield is natural language processing, but any knowledge in AI at all is enough. Anything in particular of Big Yud's that you respect / believe is respected? Legitimately he has some good opinions about some stuff, just that basically none of it is AI.

I'm pretty sure that I'm more qualified in AI just by having read and mostly (I think) understood Gödel, Escher, Bach: An Eternal Golden Braid 15ish years ago, without subsequently attempting to do anything in the field.

(For those not familiar: it's a classic late-1970s popularization by Douglas Hofstadter, an AI researcher. It's considerably more in-depth than the average science popularization book but still no substitute for a textbook, real coursework, lectures, and so on. It's very much aimed at blowing the minds of young geeks in many of the same ways the Sequences apparently do, but instead of being super :smug: it's a very humbling journey into what a gigantic problem AI is, plus a lot of really cool and wide-ranging stuff written by someone who is a violently more well-rounded human being than Yudkowsky.)

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Methods of Rationality is my new favourite thing. I hope he finishes it this summer like he claimed he would, just so people can say without exaggeration that the crowning achievement of his life and his only finished work is a self-insert Harry Potter fanfic.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

SolTerrasa posted:

I was once at a dinner party, trying to explain to a man what I did for a living, when he said: "I don't believe Artificial Intelligence is possible because only God can make a soul."

At this point I must have been divinely inspired, because I instantly responded: "You mean if I can make an Artificial Intelligence, it proves your religion is false?"

He said, "What?"

I said, "Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion."

There was a pause, as the one realized he had just made his hypothesis vulnerable to falsification, and then he said, "Well, I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are."

I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."

He said, "Well, um, I guess we may have to agree to disagree on this."
I don't believe that this happened. I think that this conversation would have gone something more like:

:catholic: "I don't believe Artificial Intelligence is possible because only God can make a soul."

:smugbert: "Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion."

:catholic: "But I don't believe that you can build one, so the point is moot. Didn't you hear a word that I said?"

I'm not sure that Yudkowsky even understands his opponent's opening statement here.

Sham bam bamina! fucked around with this message at 16:18 on Aug 25, 2014

Djeser
Mar 22, 2013


it's crow time again

I'm willing to bet that the argument went the way Yudkowsky presents it, except that with the missing parts filled back in it was more like:

"I think you can't make an AI, because only god can make souls."
"So if I build an AI then your religion is wrong?"
"Oh, I mean that if you build an AI, it won't have a soul."
"I could build an AI that seems like it has a soul."
"But you couldn't create a soul for that AI."
"Why not?"
"Because souls are eternal. I don't think you can create anything eternal."
"All right, I agree with that."

Yudkowsky is really big on 'well that proves your religion is wrong', but it's exactly as much as I'd expect him to be big on proving religion wrong.

Dr Pepper
Feb 4, 2012

Don't like it? well...

Cardiovorax posted:

Methods of Rationality is my new favourite thing. I hope he finishes it this summer like he claimed he would, just so people can say without exaggeration that the crowning achievement of his life and his only finished work is a self-insert Harry Potter fanfic.

Calling it a fanfic implies that the author in some way cares about the work in question and at least has some vague understanding of its themes.

Yud doesn't get Harry Potter and allegedly has never even read the series.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Dr Pepper posted:

Calling it a fanfic implies that the author [...] at least has some vague understanding of its themes.
Actually, I'm pretty sure that it doesn't. :v:

SubG
Aug 19, 2004

It's a hard world for little things.

The Unholy Ghost posted:

So I don't get it... is this just a jealousy thread or is there someone else in here who has published respected work? :confused:
Yes, time for me to make my shameful confession: I've never published any work in the field of Harry Potter fan fiction.

Strategic Tea
Sep 1, 2012

Djeser posted:

I'm willing to bet that the argument went the way Yudkowsky presents it, except that with the missing parts filled back in it was more like:

"I think you can't make an AI, because only god can make souls."
"So if I build an AI then your religion is wrong?"
"Oh, I mean that if you build an AI, it won't have a soul."
"I could build an AI that seems like it has a soul."
"But you couldn't create a soul for that AI."
"Why not?"
"Because souls are eternal. I don't think you can create anything eternal."
"All right, I agree with that."

Yudkowsky is really big on 'well that proves your religion is wrong', but it's exactly as much as I'd expect him to be big on proving religion wrong.

Can't create eternity? You... you DEATHIST!

The Unholy Ghost
Feb 19, 2011

SolTerrasa posted:

At least two, iirc; I'm one, for some value of "respected". My subfield is natural language processing, but any knowledge in AI at all is enough. Anything in particular of Big Yud's that you respect / believe is respected? Legitimately he has some good opinions about some stuff, just that basically none of it is AI.

Well, you're all going to mock me, but I've been enjoying HPMOR so far (around Ch.20). I did a quick Wikipedia search on him and it listed some of his publications at the bottom.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



The Unholy Ghost posted:

Well, you're all going to mock me, but I've been enjoying HPMOR so far (around Ch.20). I did a quick Wikipedia search on him and it listed some of his publications at the bottom.
What do you like about it?

Remora
Aug 15, 2010

His Wikipedia publications list, in order, as far as I can tell:
- published by his own institute, therefore reviewed only by his friends if at all
- published in a textbook, which I assume is not peer reviewed
- published by his own institute, therefore reviewed only by his friends if at all
- published by his own institute, therefore reviewed only by his friends if at all
- presented at a conference, not actually published except in the proceedings of same
- hasn't actually been published anywhere
- published by his own institute, therefore reviewed only by his friends if at all

edit: typo

Remora fucked around with this message at 22:45 on Aug 25, 2014

Tunicate
May 15, 2012

Which part did you like? The ten-year-olds discussing rape?

Slime
Jan 3, 2007

The Unholy Ghost posted:

Well, you're all going to mock me, but I've been enjoying HPMOR so far (around Ch.20). I did a quick Wikipedia search on him and it listed some of his publications at the bottom.

If you like that poo poo then your opinion is null and void because it's terrible. I tried to read it when I was an idiot student who was up his own rear end with LOGIC AND REASON and I found it loving insufferable. In fact, I can probably credit that crap with making me realize that I was up my own rear end with all that LOGIC AND REASON bullshit. Also, Wikipedia is not a valid source, and if you actually read those publications I'm fairly certain you'll find them to be unaccredited and added by him or his sycophants.

Iunnrais
Jul 25, 2007

It's gaelic.
Eh, I enjoyed the early HPMOR too, but simply from the perspective of working out the logic of magic. I thought trying to use the time turner to find large prime factors was amusing. I thought that the time turner "game" was funny. I enjoyed his session trying to prove that magic doesn't care how you pronounce the magic words, only to find out that in fact, it does care how you pronounce the magic words.

Buuuuuut.... he starts leaving that stuff behind and getting weird REALLY fast. And to be honest, he mixes in his... oddities... early on as well, such as his explanation as to why his version of Harry Potter needs a time turner. "Oh, I'm a special unique snowflake because I have to get more sleep than other people. I just CAN'T wake up like other normal people in the morning!" Sounds very much like self-insertion excusing his own laziness. I pulled the same kind of crap excuses when I was a teenager, but I like to think I've grown out of at least that particular self-deluding lie.

Anyway, HPMOR starts with some amusing "getting into the nitty gritty" of magic-- the same stuff I like about Brandon Sanderson's work-- but then devolves into the religion of Bayesianism, "Anti-Deathism", justification of rape, apologetics for Voldemort, etc. When I saw what he was doing, I dropped it real fast. His worldview and philosophy are revolting, really. I think I read up to about the point where he conjures Carl Sagan as the ULTIMATE PATRONUS that trumps all other patronuses because everyone else is "deathist", and that's when I said absolutely no more, but I was getting really queasy before that too.
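
For anyone who hasn't read that time-turner bit: as I understand the gag, it's the classic "stable time loop as free computation" trick. The only part a computer can actually do is check a candidate answer, and the fictional loop is supposed to hand you the one candidate that survives the check. Here's a rough sketch of the idea (my own toy code, not anything from the fic, with boring brute force standing in for the time machine):

code:

# Sketch of the time-loop factoring idea (my illustration, not anything
# from HPMOR): verifying a candidate factorization is cheap; the fictional
# time loop is supposed to supply the candidate that verifies. Plain brute
# force stands in for the time machine here.

def verifies(n: int, p: int, q: int) -> bool:
    """The note you'd send back in time: is (p, q) a nontrivial factorization of n?"""
    return 1 < p <= q < n and p * q == n

def time_turner_factor(n: int):
    """Pretend 'time loop': try every note we could have received and
    return the first self-consistent one (i.e. the first that verifies)."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            candidate = (p, n // p)
            if verifies(n, *candidate):
                return candidate
    return None  # no nontrivial factorization exists (n is prime or too small)

print(time_turner_factor(9991))  # (97, 103)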

bewilderment
Nov 22, 2007
man what



A basic introductory AI course, the kind you can probably take for free online, would give someone more knowledge of AI than Yudkowsky seems to have.

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

bewilderment posted:

A basic introductory AI course, the kind you can probably take for free online, would give someone more knowledge of AI than Yudkowsky seems to have.

A moderately bright 12-year-old would know more about AI than Yudkowsky does, because it would take several months of instruction/browbeating to get Yudkowsky to the point of admitting that he doesn't know poo poo about AI, and until he reached that point he wouldn't learn a single goddamn thing, whereas you could start teaching the 12-year-old this afternoon. Yudkowsky is negatively knowledgeable.

SolTerrasa
Sep 2, 2011


The Unholy Ghost posted:

Well, you're all going to mock me, but I've been enjoying HPMOR so far (around Ch.20). I did a quick Wikipedia search on him and it listed some of his publications at the bottom.

I'm not going to mock you; this is a Big Yud mock thread, not an Unholy Ghost mock thread. I'd be happy to explain why he's wrong on anything in particular.

Arsonist Daria
Feb 27, 2011

Requiescat in pace.
I've read most of the thread at this point, and I'm still having trouble reconciling that this guy's most notable achievement is a modestly novel approach to the Harry Potter mythos. What does that say about the author of My Immortal?

LaughMyselfTo
Nov 15, 2012

by XyloJW

Ratoslov posted:

it would take several months of instruction/browbeating to get Yudkowsky to the point of admitting that he doesn't know poo poo about AI

This seems impossibly optimistic to me.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Lumberjack Bonanza posted:

I've read most of the thread at this point, and I'm still having trouble reconciling that this guy's most notable achievement is a modestly novel approach to the Harry Potter mythos. What does that say about the author of My Immortal?

She could finish what she started without having to be given a multi-month holiday to do so?

She produced unintentional comedy rather than unintentional horror?

She was published on the merits of her work rather than being published in a textbook or by her own institute?

She's far more open to criticism?

E: She doesn't claim to be an authority on any subject?

Not My Leg
Nov 6, 2002

AYN RAND AKBAR!

The Big Yud posted:

I was once at a dinner party, trying to explain to a man what I did for a living, when he said: "I don't believe Artificial Intelligence is possible because only God can make a soul."

At this point I must have been divinely inspired, because I instantly responded: "You mean if I can make an Artificial Intelligence, it proves your religion is false?"

He said, "What?"

I said, "Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion."

There was a pause, as the one realized he had just made his hypothesis vulnerable to falsification, and then he said, "Well, I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are."

I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."

He said, "Well, um, I guess we may have to agree to disagree on this."

I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong."

We went back and forth on this briefly. Finally, he said, "Well, I guess I was really trying to say that I don't think you can make something eternal."

I said, "Well, I don't think so either! I'm glad we were able to reach agreement on this, as Aumann's Agreement Theorem requires." I stretched out my hand, and he shook it, and then he wandered away.

A woman who had stood nearby, listening to the conversation, said to me gravely, "That was beautiful."

"Thank you very much," I said.

So, this is obviously stdh.txt, but the bolded part jumped out. When Yudkowsky treats an Artificial Intelligence that talks about an emotional life "that sounds like ours" as actually being emotional, that makes him sound like a behaviorist. Some quick Googling, however, suggests that he thinks behaviorism is stupid (this, for example).

Isn't that kind of an important thing to be consistent about if you're the guy who's going to develop AGI?

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

Ratoslov posted:

A moderately bright 12-year-old would know more about AI than Yudkowsky does, because it would take several months of instruction/browbeating to get Yudkowsky to the point of admitting that he doesn't know poo poo about AI, and until he reached that point he wouldn't learn a single goddamn thing, whereas you could start teaching the 12-year-old this afternoon. Yudkowsky is negatively knowledgeable.
This is literally true. Yudkowsky is the guy in your Whatever101 class who can never shut up and just listen and who argues with the professor about everything. Except even more full of himself and with a cult following.

The Unholy Ghost
Feb 19, 2011

Iunnrais posted:

Eh, I enjoyed the early HPMOR too, but simply from the perspective of working out the logic of magic. I thought trying to use the time turner to find large prime factors was amusing. I thought that the time turner "game" was funny. I enjoyed his session trying to prove that magic doesn't care how you pronounce the magic words, only to find out that in fact, it does care how you pronounce the magic words.

Buuuuuut.... he starts leaving that stuff behind and getting weird REALLY fast. And to be honest, he mixes in his... oddities... early on as well, such as his explanation as to why his version of Harry Potter needs a time turner. "Oh, I'm a special unique snowflake because I have to get more sleep than other people. I just CAN'T wake up like other normal people in the morning!" Sounds very much like self-insertion excusing his own laziness. I pulled the same kind of crap excuses when I was a teenager, but I like to think I've grown out of at least that particular self-deluding lie.

Anyway, HPMOR starts with some amusing "getting into the nitty gritty" of magic-- the same stuff I like about Brandon Sanderson's work-- but then devolves into the religion of Bayesianism, "Anti-Deathism", justification of rape, apologetics for Voldemort, etc. When I saw what he was doing, I dropped it real fast. His worldview and philosophy are revolting, really. I think I read up to about the point where he conjures Carl Sagan as the ULTIMATE PATRONUS that trumps all other patronuses because everyone else is "deathist", and that's when I said absolutely no more, but I was getting really queasy before that too.

Hm, well I guess this is the issue. I was liking it because it seemed to be about hacking the Harry Potter world; I didn't realize the writing goes batshit somewhere along the way. I'll keep reading just to see how bonkers it gets, though.

90s Cringe Rock
Nov 29, 2006
:gay:
I followed a link to a Yudkowsky story the other day. It was short, so I read it. I'm not going to link to it, because it's poo poo. It started poorly:

quote:

Imagine a world much like this one, in which, thanks to gene-selection technologies, the average IQ is 140 (on our scale).
That wouldn't be too :godwin: if I didn't know Yudkowsky, but I do. That's not him at his most Big Yud, though; this is:

quote:

(Oh, and every time someone in this world tries to build a really powerful AI, the computing hardware spontaneously melts. This isn't really important to the story, but I need to postulate this in order to have human people sticking around, in the flesh, for seventy years.)
Yes, the omniscient narrator has to assure us that it's not internally inconsistent or irrational for (ugh) meatbags to still be around in seventy years.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

Iunnrais posted:

everyone else is "deathist"
It really says an amazing amount about Yudkowsky that he can take a concept as simple and agreeable as "nobody should have to die if they don't want to" and turn it into something so insane that the only possible response is to ridicule it.

Chamale
Jul 11, 2010

I'm helping!



Cardiovorax posted:

It really says an amazing amount about Yudkowsky that he can take a concept as simple and agreeable as "nobody should have to die if they don't want to" and turn it into something so insane that the only possible response is to ridicule it.

Yud doesn't understand differences of opinion. So when he combines "In a utopia, no one would die when they weren't ready" with "I never want to die", he ends up concluding that anyone who would accept death must be irrational.
