NihilCredo
Jun 6, 2011

suppress anger in every possible way:
that one fault will defame you more than many virtues will commend you


SSC is a much better blogger than Big Yud, and not surprisingly he also writes much better (and shorter) wacky fiction, so I dearly hope someone tells him "yes dear, we can totally mastermind you a Hugo win" :allears:

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

There are two kinds of "better than most" fanfiction:

Stuff that would be legitimately worthy of publication if it wasn't a copyright violation. These are almost always shorter works; I've never seen one longer than ten to thirty thousand words, and there are almost no author's notes.

Then there's EPIC WORKS that go on for dozens of chapters and are over a hundred thousand words long. Author's notes pad out the word count of each chapter by at least another 500 words. The author also thinks they are HOT poo poo, and their work gets more self-indulgent as time goes on. Eventually they run out of steam, because it turns out cramming every single loving idea you have into one story is a terrible approach to storytelling. ("Murder your darlings" is anathema to these writers. If you think it up, it must be included in the story. Them's the rules!)

Curvature of Earth fucked around with this message at 17:26 on Aug 18, 2015

anilEhilated
Feb 17, 2014

But I say fuck the rain.

Grimey Drawer
Suddenly I'm really glad no one with a working brain is going to take the Hugos seriously after this year.

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

anilEhilated posted:

Suddenly I'm really glad no one with a working brain is going to take the Hugos seriously after this year.

What anarchy is this? Next you'll be saying that the Nobel Peace Prize is dumb and that the Oscars don't represent artistic merit.

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation
Well, I mean, it is better than most fan fiction, but Sturgeon's Law still applies so that's not saying too much. Honestly though if it wasn't for Yud being Yud, it would at least be something you could work with and turn into something not awful.

divabot
Jun 17, 2015

A polite little mouse!

Curvature of Earth posted:

There are two kinds of "better than most" fanfiction:
Stuff that would be legitimately worthy of publication if it wasn't a copyright violation. These are almost always shorter works; I've never seen one longer than ten to thirty thousand words, and there are almost no author's notes.

The generally accepted "best Worm fic" is Cenotaph/Wake by notes, which runs about 100k words per (complete) part. Even if it's not standalone, it's of that quality.

Curvature of Earth posted:

Then there's EPIC WORKS that go on for dozens of chapters and are over a hundred thousand words long. Author's notes pad out the word count of each chapter by at least another 500 words. The author also thinks they are loving HOT poo poo, and their work gets more self-indulgent as time goes on. Eventually they run out of steam, because it turns out cramming every single loving idea you have into one story is a terrible approach to storytelling. ("Murder your darlings" is anathema to these writers. If you think it up, it must be included in the story. Them's the rules!)

... and that's the rest of the epic-length Worm fanfic. But then, the original is 1.7 million words so the fics tend to the extremely long and rambling.

And the original was written as a serial novel with only slight rewriting, so the fics do the same. It is possible I have read rather too many.

And Worm, unlike HPMOR, is written by a writer with a plot and characters. And, ahahaha, the author hangs out at the Dark Lord Potter forums and IRC, which I'm sure delights Mr Yudkowsky.

Hyper Crab Tank posted:

Well, I mean, it is better than most fan fiction, but Sturgeon's Law still applies so that's not saying too much. Honestly though if it wasn't for Yud being Yud, it would at least be something you could work with and turn into something not awful.

It's clear we have to invent a thing called Rational Editing and claim it adds MORE AWESOME.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:

divabot posted:


And Worm, unlike HPMOR, is written by a writer with a plot and characters. And, ahahaha, the author hangs out at the Dark Lord Potter forums and IRC, which I'm sure delights Mr Yudkowsky.


Hah yeah everyone there loathes HPMOR. Less so for the crazy rationalism and more because it's a terrible story. The thread they have on him is hilarious.

divabot
Jun 17, 2015

A polite little mouse!
You can tell that Hardcore Commie really didn't like HPMOR much.

Southpaugh
May 26, 2007

Smokey Bacon


quote:

My birthday being September 11th

Yud is my new favourite American disaster.

JosephWongKS
Apr 4, 2009

by Nyc_Tattoo
Chapter 14: The Unknown and the Unknowable
Part Five


quote:


Professor McGonagall actually laughed. It was a pleasant, glad sound that seemed surprisingly out of place on that stern face. "You're having another 'you turned into a cat' moment, aren't you, Mr. Potter. You probably don't want to hear this, but it's quite endearingly cute."


“Cute” wouldn’t be a word I’d use to describe Eliezarry.


quote:


"Turning into a cat doesn't even BEGIN to compare to this. You know right up until this moment I had this awful suppressed thought somewhere in the back of my mind that the only remaining answer was that my whole universe was a computer simulation like in the book Simulacron 3 but now even that is ruled out because this little toy ISN'T TURING COMPUTABLE! A Turing machine could simulate going back into a defined moment of the past and computing a different future from there, an oracle machine could rely on the halting behavior of lower-order machines, but what you're saying is that reality somehow self-consistently computes in one sweep using information that hasn't... happened... yet..."


I’m pretty sure that “reality somehow self-consistently computes in one sweep using information that hasn't happened yet” is already contemplated under one or more of the many interpretations of quantum mechanics.


quote:


Realisation struck Harry a pile-driver blow.

It all made sense now. It all finally made sense.

"SO THAT'S HOW THE COMED-TEA WORKS! Of course! The spell doesn't force funny events to happen, it just makes you feel an impulse to drink right before funny things are going to happen anyway! I'm such a fool, I should have realised when I felt the impulse to drink the Comed-Tea before Dumbledore's second speech, didn't drink it, and then choked on my own saliva instead - drinking the Comed-Tea doesn't cause the comedy, the comedy causes you to drink the Comed-Tea! I saw the two events were correlated and assumed the Comed-Tea had to be the cause and the comedy had to be the effect because I thought temporal order restrained causation and causal graphs had to be acyclic BUT IT ALL MAKES SENSE ONCE YOU DRAW THE CAUSAL ARROWS GOING BACKWARDS IN TIME! "


Oh stop being so melodramatic, Eliezarry. Surely a prolific reader and science enthusiast like him would have read about Bell's theorem (1964) and the Bell test experiments that followed in the 1970s and 1980s, all before the time period in which HP:MOR takes place.

anilEhilated
Feb 17, 2014

But I say fuck the rain.

Grimey Drawer
Any reason why he's referencing a lovely SF book from the sixties as opposed to anything actually relevant to VR research - or, y'know, The Matrix?
e: Hell, or P.K. Dick, who probably did all of the VR screwery years before Simulacron-3, and better? Like, I realize Yud is probably referencing books he likes, but surely even He isn't above reading more on the subject...?

anilEhilated fucked around with this message at 09:21 on Aug 4, 2015

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation

anilEhilated posted:

Any reason why he's referencing a lovely SF book from the sixties as opposed to anything actually relevant to VR research - or, y'know, The Matrix?

Boring but likely answer: Yudkowsky probably read that book when he was 10. Pretty much any literary reference is probably going to be that.

divabot
Jun 17, 2015

A polite little mouse!

Hyper Crab Tank posted:

Boring but likely answer: Yudkowsky probably read that book when he was 10. Pretty much any literary reference is probably going to be that.

Note also that Yudkowsky was born in 1980, so is about Harry's age in the setting.

NihilCredo posted:

SSC is a much better blogger than Big Yud,

Yeah, not sure on that one. He's just as bad about making up jargon, semi-competently thinking things out from first principles when he could just ask someone who knows what they're talking about, and ignoring people in the comments who do know what they're talking about; and he's actually longer-winded. Also see this summary from Christopher Hallquist (an ex-LW cultist) of Yudkowsky's explicitly anti-scientific stance, and Scott's blustering reply.

divabot fucked around with this message at 11:25 on Aug 4, 2015

sarehu
Apr 20, 2007

(call/cc call/cc)

SolTerrasa posted:

So it'd actually be kind of cool if there was self-parody. I could respect Yud a little bit more if I knew he didn't take himself so seriously. Which parts do you mean?

Harry's thoughts about the kids that beat him in math contests, for example.

Polybius91
Jun 4, 2012

Cobrastan is not a real country.

Curvature of Earth posted:

Then there's EPIC WORKS that go on for dozens of chapters and are over a hundred thousand words long. Author's notes pad out the word count of each chapter by at least another 500 words. The author also thinks they are loving HOT poo poo, and their work gets more self-indulgent as time goes on. Eventually they run out of steam, because it turns out cramming every single loving idea you have into one story is a terrible approach to storytelling. ("Murder your darlings" is anathema to these writers. If you think it up, it must be included in the story. Them's the rules!)

That trend is part of a larger pattern I've seen that always bugged me: the idea that more words = better story. I'm not sure where this idea came from or why it has such a hold on fanfiction communities in general, but it goes against all the writing advice I've ever heard: keep your work as short as possible without losing substance or clarity.

Then again, in Yud's case at least it makes sense, since substance and clarity aren't exactly his strong points to begin with.

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

Polybius91 posted:

That trend is part of a larger pattern I've seen that always bugged me: the idea that more words = better story. I'm not sure where this idea came from or why it has such a hold on fanfiction communities in general, but it goes against all the writing advice I've ever heard: keep your work as short as possible without losing substance or clarity.

Then again, in Yud's case at least it makes sense, since substance and clarity aren't exactly his strong points to begin with.

It's the same reason a lot of fantasy reads like the author was writing with a thesaurus in one hand. It's what a bad author thinks a good author sounds like.

JosephWongKS
Apr 4, 2009

by Nyc_Tattoo
Chapter 14: The Unknown and the Unknowable
Part Six


quote:


Realisation struck Harry the second pile-driver.


Speaking of pile-drivers, I wish this story was written by the guy who wrote Harry Potter and the Most Electrifying Man.


quote:


This one he managed to keep quiet, making only a small strangling sound like a dying kitten as he realised who'd put the note on his bed this morning.

Professor McGonagall's eyes were alight. "After you graduate, or possibly even before, you really must teach some of these Muggle theories at Hogwarts, Mr. Potter. They sound quite fascinating, even if they're all wrong."

"Glehhahhh..."

Professor McGonagall offered him a few more pleasantries, demanded a few more promises to which Harry nodded, said something about not talking to snakes where anyone could hear him, reminded him to read the pamphlet, and then somehow Harry found himself standing outside her office with the door closed firmly behind him.


In dealing with Eliezarry, you can never go wrong by putting him outside your room with the door closed firmly behind him.


quote:


"Gaahhhrrrraa..." Harry said.

Why yes his mind was blown.

Not least by the fact that, if not for the Prank, he might well have never obtained a Time-Turner in the first place.

Or would Professor McGonagall have given it to him anyway, only later in the day, whenever he got around to asking about his sleep disorder or telling her about the Sorting Hat's message? And would he, at that time, have wanted to pull a prank on himself which would have led to him getting the Time-Turner earlier? So that the only self-consistent possibility was the one in which the Prank started before he even woke up in the morning...?

Harry found himself considering, for the first time in his life, that the answer to his question might be literally inconceivable. That since his own brain contained neurons that only ran forwards in time, there was nothing his brain could do, no operation it could perform, which was conjugate to the operation of a Time Turner.

Up until this point Harry had lived by the admonition of E. T. Jaynes that if you were ignorant about a phenomenon, that was a fact about your own state of mind, not a fact about the phenomenon itself; that your uncertainty was a fact about you, not a fact about whatever you were uncertain about; that ignorance existed in the mind, not in reality; that a blank map did not correspond to a blank territory. There were mysterious questions, but a mysterious answer was a contradiction in terms. A phenomenon could be mysterious to some particular person, but there could be no phenomena mysterious of themselves. To worship a sacred mystery was just to worship your own ignorance.

So Harry had looked upon magic and refused to be intimidated. People had no sense of history, they learned about chemistry and biology and astronomy and thought that these matters had always been the proper meat of science, that they had never been mysterious. The stars had once been mysteries. Lord Kelvin had once called the nature of life and biology - the response of muscles to human will and the generation of trees from seeds - a mystery "infinitely beyond" the reach of science. (Not just a little beyond, mind you, but infinitely beyond. Lord Kelvin certainly had felt a huge emotional charge from not knowing something.) Every mystery ever solved had been a puzzle from the dawn of the human species right up until someone solved it.

Now, for the first time, he was up against the prospect of a mystery that was threatening to be permanent. If Time didn't work by acyclic causal networks then Harry didn't understand what was meant by cause and effect; and if Harry didn't understand causes and effects then he didn't understand what sort of stuff reality might be made of instead; and it was entirely possible that his human mind never could understand, because his brain was made of old-fashioned linear-time neurons, and this had turned out to be an impoverished subset of reality.


Then just move on and get on with your life, you big drama queen. Sheesh.

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation

JosephWongKS posted:

Up until this point Harry had lived by the admonition of E. T. Jaynes that if you were ignorant about a phenomenon, that was a fact about your own state of mind [...] So Harry had looked upon magic and refused to be intimidated.

Then why the gently caress have you been freaking out so badly every time you've been faced with a mystery you can't immediately explain? I don't know if Yudkowsky is doing this deliberately, but surely this directly contradicts what Harry has actually been doing.

Red Mike
Jul 11, 2011

quote:

Not just a little beyond, mind you, but infinitely beyond. Lord Kelvin certainly had felt a huge emotional charge from not knowing something.

quote:

the answer to his question might be literally inconceivable

I like the small parallel drawn here, because it mocks someone for doing exactly what they themselves do all the time. But it's very obviously not an intentional thing because otherwise it wouldn't actually be subtle. The author doesn't ever do subtle.

Fajita Queen
Jun 21, 2012


There isn't enough :allears: for this.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



I do not think that word means what he thinks it means.

divabot
Jun 17, 2015

A polite little mouse!
Woolie Wool posted this over in the Dark Enlightenment thread. I think it sums up an essential problem with HPMOR: these people don't actually like, but loathe and fear, art and culture (that doesn't reinforce their self-image).

Woolie Wool posted:

Neoreaction/Dork Enlightenment and their cousins over at LessWrong and satellites hate and fear culture. Certainly, they like their commodified pop culture, but more for its signifiers than for its content--many, perhaps the majority, of these works have themes and messages that run directly against the sort of thought that underpins these movements. Of course, most obviously and inescapably, Eliezer Yudkowsky's nerd fanfic opus pretty much negates every premise, every theme, every moral of the actual Harry Potter book series, which features an antagonist whose name is French for "flight from death", whose all-consuming lust for immortal life has destroyed his ability to understand or experience even the slightest joy or human attachment. His followers believe in the inherent superiority of a group of people with natural, inherited gifts, even though the actual events of the story show that these people are in character no better, and frequently worse, than "Muggles" without these gifts. He is ultimately defeated by someone who has the natural talent but was raised by those who do not, and who successfully integrates his magical and Muggle sides into a mature identity, accepts his own mortality, and is willing to fight and die even for--especially for--the Muggles that, according to Voldemort, he should see himself as innately superior to.

Among these sorts of people (and, I suspect, among other fascist-minded movements of the past), culture and art are reduced to signifiers and bits of "cool" that they attach to themselves. Their own attempts at creating art end up as propagandistic self-promotion and/or fetishistic self-indulgence (see: Harry Potter and the Methods of Rationality). Art that cannot be put through this reductive process and left recognizable becomes a tool of the Enemy, a vehicle through which the Cathedral intends to infect virile Neoreactionary rationality with emotions and postmodernism and liberalism and nasty icky girly ~feelings~ because their cargo-cult rationality is so unimaginative and narrow that it leaves them unable to comprehend what it actually is, what it actually means, or why someone might actually like it.

It's kind of a tacit admission of the limits and weaknesses of their "rationality"--it falls apart if it tries to seriously examine culture, philosophy, or anything else that isn't some combination of data and facts. These all get shoved under the banner of the Cathedral, a massive part of the human experience that is now totally off-limits lest it infect you with its insidious mind viruses. It also means that their society would be, in my estimation, hellishly culturally sterile, endlessly recycling old signifiers without meaning in self-indulgent, flatulent spectacles created for the glorification of Neoreaction (see for comparison what happened to art, culture, and music under 1930s fascism). Artists, as inherently mysterious and threatening forces beyond the ken of the Neoreactionary nerd-kings, would be ruthlessly controlled and eliminated if they do anything the rulers feel threatened, insulted, or confused by. The masters of this realm would dream of their transhuman dreams coming to fruition so they can finally excise this troublesome culture from the human animal and we can exist as perfectly sociopathic beings of pure reason, unaffected by other humans' experiences and perspectives. There will be no homes, only sleeping modules; no food, only Soylent; no activities beyond maximizing production, efficiency, and power. For you, at least. Someone has to watch over the human ant farm of the future, hoarding the products of your labor as an immortal AI upload Smaug.

divabot fucked around with this message at 23:11 on Aug 5, 2015

Added Space
Jul 13, 2012

Free Markets
Free People

Curse you Hayard-Gunnes!

Hyper Crab Tank posted:

Then why the gently caress have you been freaking out so badly every time you've been faced with a mystery you can't immediately explain? I don't know if Yudkowsky is doing this deliberately, but surely this directly contradicts what Harry has actually been doing.

:goonsay: In this context, being intimidated means giving up and not questioning things. Sure, someone can turn into a cat, gently caress it, it's magic. Being freaked out is a completely appropriate response, indicating that you are putting some thought into the subject. It's like if you found out Apollo and his sun-chariot were literally real and had a home on Earth - any decent scientist should be reduced to gibbering madness by this, at least for a little while.

divabot
Jun 17, 2015

A polite little mouse!

divabot posted:

Also see this summary from Christopher Hallquist (an ex-LW cultist) of Yudkowsky's explicitly anti-scientific stance, and Scott's blustering reply.

Here is our esteemed colleague su3su2u1 (Actual loving Scientist, Thank You Very Much) knocking it out of the park in backing up Hallquist's point with cites.

su3su2u1 posted:

But I think that the parts that are anti-science are central to the mission of Less Wrong. I think bits of anti-science show up in the idea that the “world is insane,” and Less Wrong needs to “raise the sanity waterline,” etc. A lot of the anti-science in the sequences comes in as an attempt to show “even experts” behaving irrationally. Rationalist businesses like MetaMed were founded on the idea that rationality-trained people with math and science degrees can out-doctor trained doctors reliably enough to base a business on it. This belief is prevalent enough that Yvain/slatestarscratchpad felt the need to try to push back against it.

This is the stuff HPMOR is explicitly intended to advocate, which is why what Harry does in no way resembles science.

Scott's arguments in favour of LessWrong are always equivocation (or "motte and bailey", to use his preferred neologism), but su3su2u1 doesn't put up with that poo poo: "the best of Yudkowsky’s writing repudiates the median Yudkowsky writing." That is: answering with his "don't do this stupid thing" posts is meaningless in the face of his extensive "why you should do this stupid thing" posts.

divabot fucked around with this message at 15:15 on Aug 7, 2015

Xander77
Apr 6, 2009

Fuck it then. For another pit sandwich and some 'tater salad, I'll post a few more.



divabot posted:

Woolie Wool posted this over in the Dark Enlightenment thread. I think it sums up an essential problem with HPMOR: these people don't actually like, but loathe and fear, art and culture (that doesn't reinforce their self-image).
Huh. What's the "Culture is a Cathedral" business all about? My first association is The Doomed City speech (which I took to heart quite a bit at the age of twelve), but I rather doubt "Rationalist" experience with cultural analysis, even couched in sci-fi signifiers, is quite so extensive.

Doctor Spaceman
Jul 6, 2010

"Everyone's entitled to their point of view, but that's seriously a weird one."
Neoreactionaries view progressive thought/culture as being a restrictive organisation like the medieval Catholic Church.

Don't ask how this meshes with their desire to return to a pre-Enlightenment age.

Loel
Jun 4, 2012

"For the Emperor."

There was a terrible noise.
There was a terrible silence.



Xander77 posted:

Huh. What's the "Culture is a Cathedral" business all about? My first association is The Doomed City speech (which I took to heart quite a bit at the age of twelve), but I rather doubt "Rationalist" experience with cultural analysis, even couched in sci-fi signifiers, is quite so extensive.

Helpful link: http://rationalwiki.org/wiki/Neoreactionary_movement

Fried Chicken
Jan 9, 2011

Don't fry me, I'm no chicken!
Relevant to the thread - Silly Valley "effective altruism" now mirrors Yud's bullshit (they don't mention Yud by name, but it's his sermons being parroted to a T here, and his group was involved in it)

quote:

[...]
In the beginning, EA was mostly about fighting global poverty. Now it's becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse. At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research. Compared to that, multiple attendees said, global poverty is a "rounding error."
[...]
EA Global was dominated by talk of existential risks, or X-risks. The idea is that human extinction is far, far worse than anything that could happen to real, living humans today.

To hear effective altruists explain it, it comes down to simple math. About 108 billion people have lived to date, but if humanity lasts another 50 million years, and current trends hold, the total number of humans who will ever live is more like 3 quadrillion. Humans living during or before 2015 would thus make up only 0.0036 percent of all humans ever.

The numbers get even bigger when you consider — as X-risk advocates are wont to do — the possibility of interstellar travel. Nick Bostrom — the Oxford philosopher who popularized the concept of existential risk — estimates that about 10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers.

Even if we give this 10^54 estimate "a mere 1% chance of being correct," Bostrom writes, "we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives."

Put another way: The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people. That argues, in the judgment of Bostrom and others, for prioritizing efforts to prevent human extinction above other endeavors. This is what X-risk obsessives mean when they claim ending world poverty would be a "rounding error."
[...]
The specific concern — expressed by representatives from groups like the Machine Intelligence Research Institute (MIRI) in Berkeley and Bostrom's Future of Humanity Institute at Oxford — is over the possibility of an "intelligence explosion." If humans are able to create an AI as smart as humans, the theory goes, then it stands to reason that that AI would be smart enough to create itself, and to make itself even smarter. That'd set up a process of exponential growth in intelligence until we get an AI so smart that it would almost certainly be able to control the world if it wanted to. And there's no guarantee that it'd allow humans to keep existing once it got that powerful. "It looks quite difficult to design a seed AI such that its preferences, if fully implemented, would be consistent with the survival of humans and the things we care about," Bostrom told me in an interview last year.

This is not a fringe viewpoint in Silicon Valley. MIRI's top donor is the Thiel Foundation, funded by PayPal and Palantir cofounder and billionaire angel investor Peter Thiel, which has given $1.627 million to date. Jaan Tallinn, the developer of Skype and Kazaa, is both a major MIRI donor and the co-founder of two groups — the Future of Life Institute and the Center for the Study of Existential Risk — working on related issues. And earlier this year, the Future of Life Institute got $10 million from Thiel's PayPal buddy, Tesla Motors/SpaceX CEO Elon Musk, who grew concerned about AI risk after reading Bostrom's book Superintelligence.
[...]
The common response I got to this was, "Yes, sure, but even if there's a very, very, very small likelihood of us decreasing AI risk, that still trumps global poverty, because infinitesimally increasing the odds that 10^52 people in the future exist saves way more lives than poverty reduction ever could."

The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, "Humans are about to go extinct unless you give me $10 to cast a magical spell." Even if you only think there's a, say, 0.00000000000000001 percent chance that he's right, you should still, under this reasoning, give him the $10, because the expected value is that you're saving 10^32 lives.

Bostrom calls this scenario "Pascal's Mugging," and it's a huge problem for anyone trying to defend efforts to reduce human risk of extinction to the exclusion of anything else. These arguments give a false sense of statistical precision by slapping probability values on beliefs. But those probability values are literally just made up. Maybe giving $1,000 to the Machine Intelligence Research Institute will reduce the probability of AI killing us all by 0.00000000000000001. Or maybe it'll make it only cut the odds by 0.00000000000000000000000000000000000000000000000000000000000000001. If the latter's true, it's not a smart donation; if you multiply the odds by 10^52, you've saved an expected 0.0000000000001 lives, which is pretty miserable. But if the former's true, it's a brilliant donation, and you've saved an expected 100,000,000,000,000,000,000,000,000,000,000,000 lives.

I don't have any faith that we understand these risks with enough precision to tell if an AI risk charity can cut our odds of doom by 0.00000000000000001 or by only 0.00000000000000000000000000000000000000000000000000000000000000001. And yet for the argument to work, you need to be able to make those kinds of distinctions.

The other problem is that the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today. That's by no means an obvious position, and tons of philosophers dispute it. Among other things, it implies what's known as the Repugnant Conclusion: the idea that the world should keep increasing its population until the absolutely maximum number of humans are alive, living lives that are just barely worth living. But if you say that people who only might exist count less than people who really do or really will exist, you avoid that conclusion, and the case for caring only about the far future becomes considerably weaker (though still reasonably compelling).
[...]
And you have to do meta-charity well — and the more EA grows obsessed with AI, the harder it is to do that. The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession. And it's hard to imagine that yoking EA to one of the whitest and most male fields (tech) and academic subjects (computer science) will do much to bring more people from diverse backgrounds into the fold.

The self-congratulatory tone of the event didn't help matters either. I physically recoiled during the introductory session when Kerry Vaughan, one of the event's organizers, declared, "I really do believe that effective altruism could be the last social movement we ever need." In the annals of sentences that could only be said with a straight face by white men, that one might take the cake.

What a trainwreck
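
The Pascal's Mugging arithmetic in that passage is easy to check. Here is a minimal sketch in Python (mine, not the article's), using only the article's own figures:

code:

# Checking the quoted expected-value arithmetic; every input below
# is a figure taken from the article itself.

past_humans = 108e9      # "About 108 billion people have lived to date"
future_humans = 3e15     # 3 quadrillion, if humanity lasts 50 million years
print(f"{past_humans / future_humans * 100:.4f}%")  # 0.0036% of all humans ever

future_lives = 10**52    # Bostrom's upper-bound count of future lives

# The donation either cuts extinction odds by 1e-17 or by 1e-65; multiply
# through and the swing in "expected lives saved" is absurdly large.
for label, delta_p in [("optimistic", 1e-17), ("pessimistic", 1e-65)]:
    print(f"{label}: {delta_p * future_lives:.0e} expected lives saved")

# optimistic: 1e+35 expected lives saved   ("a brilliant donation")
# pessimistic: 1e-13 expected lives saved  ("pretty miserable")

As the article says, everything hinges on which made-up probability you plug in; the arithmetic itself offers no way to choose between them.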

divabot
Jun 17, 2015

A polite little mouse!

Fried Chicken posted:

Relevant to the thread - Silly Valley "effective altruism" now mirrors Yud's bullshit (they don't mention Yud by name, but it's his sermons being parroted to a T here, and his group was involved in it)

EA ... is probably a better thing for these people to do with their money than buy yachts with it. But urgh.

Qwertycoatl
Dec 31, 2008

A couple of years ago, GiveWell wasn't impressed by the capability of MIRI (then SI):

quote:

I believe that the approach SI advocates and aims to prepare for is far more dangerous than the standard approach, so if SI's work on Friendliness theory affects the risk of human extinction one way or the other, it will increase the risk of human extinction. Fortunately I believe SI's work is far more likely to have no effect one way or the other

divabot
Jun 17, 2015

A polite little mouse!

Qwertycoatl posted:

A couple of years ago, GiveWell wasn't impressed by the capability of MIRI (then SI):

Yeah. I have considerable experience of involvement in incompetent charities, and that was about the worst thing you could ever have a charity reviewer say about your charity.

Since then they have straightened up and produced, like, a (very) few more actual papers and stuff. Some might even get peer reviewed and published by someone other than themselves some time!

pentyne
Nov 7, 2012
I just learned that Greg Egan, pretty much the perfect example of a MIRI messiah, actually hates everything that LessWrong stands for. It's pretty great when the hardest, most detailed sci-fi writer, one who postulates great questions about the nature of consciousness vs. technology, thinks Yud's entire philosophy is a huge joke and nothing more than a forum for wealthy "smart" people to fellate each other over how great they are.

divabot
Jun 17, 2015

A polite little mouse!

pentyne posted:

I just learned that Greg Egan, pretty much the perfect example of a MIRI messiah, actually hates everything that LessWrong stands for. It's pretty great when the hardest, most detailed sci-fi writer, one who postulates great questions about the nature of consciousness vs. technology, thinks Yud's entire philosophy is a huge joke and nothing more than a forum for wealthy "smart" people to fellate each other over how great they are.

Greg Egan in 'Zendegi' posted:

“I’m Nate Caplan.” He offered her his hand, and she shook it. In response to her sustained look of puzzlement he added, “My IQ is one hundred and sixty. I’m in perfect physical and mental health. And I can pay you half a million dollars right now, any way you want it.

Greg Egan in 'Zendegi' posted:

... when you’ve got the bugs ironed out, I want to be the first. When you start recording full synaptic details and scanning whole brains in high resolution—”

Greg Egan in 'Zendegi' posted:

“You can always reach me through my blog,” he panted. “Overpowering Falsehood dot com, the number one site for rational thinking about the future—”

The novel also features the “Benign Superintelligence Bootstrap Project”:

Greg Egan in 'Zendegi' posted:

“Their aim is to build an artificial intelligence capable of such exquisite powers of self-analysis that it will design and construct its own successor, which will be armed with superior versions of all the skills the original possessed. The successor will produce a still more proficient third version, and so on, leading to a cascade of exponentially increasing abilities. Once this process is set in motion, within weeks—perhaps within hours—a being of truly God-like powers will emerge.”

In the words of one reviewer, "The Project persuades a billionaire to donate his fortune to them in the hope that the 'being of truly God-like powers' will grant him immortality come the Singularity. He dies disappointed and the Project 'turn[s] five billion dollars into nothing but padded salaries and empty verbiage'."

90s Cringe Rock
Nov 29, 2006
:gay:

pentyne posted:

I just learned that Greg Egan, pretty much the perfect example of a MIRI messiah, actually hates everything that LessWrong stands for. It's pretty great when the hardest, most detailed sci-fi writer, one who postulates great questions about the nature of consciousness vs. technology, thinks Yud's entire philosophy is a huge joke and nothing more than a forum for wealthy "smart" people to fellate each other over how great they are.
"You know what they say the modern version of Pascal's Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God."

Egan's cool and probably inspired a lot of transhumanists, but I would never mistake him for a Yudkowskyite. He actually does maths and writes code.

Deptfordx
Dec 23, 2013

Fried Chicken posted:

Relevant to the thread - Silly Valley "effective altruism" now mirrors Yud's bullshit (they don't mention Yud by name, but it's his sermons being parroted to a T here, and his group was involved in it)


What a trainwreck

I am not a violent man. After having read that, I want to punch everyone involved with 'Effective Altruism' in the face, repeatedly.

Jesus wept, 'Welp, humanity is clearly going to be around for 50 million years. Clearly that's guaranteed. So we don't need to worry about nuclear war, global warming, or indeed the billions of people living in poverty today.' :suicide:

'Oh, what's that you say at the back? You know a village that could really use a source of clean water without all that bilharzia in it? Sorry, this money is going to save 10 quadrillion hypothetical future people. Stop being so selfish.'

Deptfordx fucked around with this message at 14:52 on Aug 12, 2015

pentyne
Nov 7, 2012

chrisoya posted:

"You know what they say the modern version of Pascal's Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God."

Egan's cool and probably inspired a lot of transhumanists, but I would never mistake him for a Yudkowskyite. He actually does maths and writes code.

Egan clearly has some strong opinions about how humanity and technology will progress in the coming centuries and millennia, and has crafted tons of fictional stories that are extremely elaborate and tightly constructed, with insane attention to current and hypothetical maths and sciences.

He also seems to despise the majority of his fanbase: he never attends any cons, doesn't sign autographs, lives in Australia, and there are no pictures of him on the internet. He's like the Reverse-Yud, someone with the actual intellect to postulate great questions on transhumanity and technology, but who would rather present them in the context of extremely well-written and interesting stories than engage people in internet discussions or use his "fame" for self-promotion.

Legacyspy
Oct 25, 2008

Deptfordx posted:

I am not a violent man. After having read that, I want to punch everyone involved with 'Effective Altruism' in the face, repeatedly.

Jesus wept, 'Welp, humanity is clearly going to be around for 50 million years. Clearly that's guaranteed. So we don't need to worry about nuclear war, global warming, or indeed the billions of people living in poverty today.' :suicide:

'Oh, what's that you say at the back? You know a village that could really use a source of clean water without all that bilharzia in it? Sorry, this money is going to save 10 quadrillion hypothetical future people. Stop being so selfish.'

The EA community gives far more money to help people living in poverty than to MIRI, mostly through the Against Malaria Foundation, the Schistosomiasis Control Initiative, and GiveDirectly. Myself included: I don't give any money to MIRI. Poverty is literally the #1 issue for EA people. The EA community is doing good work, and it's pretty disingenuous to claim they don't care about people living in poverty when they literally give millions of dollars to help them. Find someone else to punch.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

Legacyspy posted:

The EA community gives far more money to help people living in poverty than to MIRI, mostly through the Against Malaria Foundation, the Schistosomiasis Control Initiative, and GiveDirectly. Myself included: I don't give any money to MIRI. Poverty is literally the #1 issue for EA people. The EA community is doing good work, and it's pretty disingenuous to claim they don't care about people living in poverty when they literally give millions of dollars to help them. Find someone else to punch.

They could probably do with a name change, then. When I hear 'effective altruism', I don't hear "it's altruism, but more effective"; I hear "well, it's effectively altruism". It makes me think of people patting themselves on the back for being theoretically altruistic.

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

Pavlov posted:

They could probably do with a name change, then. When I hear 'effective altruism', I don't hear "it's altruism, but more effective"; I hear "well, it's effectively altruism". It makes me think of people patting themselves on the back for being theoretically altruistic.

Effective altruism refers to, essentially, getting the most bang for your buck in terms of money, time, and effort spent saving and improving lives. Per unit of money/time/labor, what improves the lot of human beings the fastest? (This is why malaria initiatives are so popular with effective altruists; it's startlingly cheap to prevent and treat malaria.)

MIRI claimed it "saves 8 lives per dollar" in a blatant attempt to appeal to this sensibility. It didn't work. This is why GiveWell, which was founded for the explicit purpose of rating charities by effective altruist principles, called MIRI a waste of money right to their faces.
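
To make "bang for your buck" concrete, here is a toy sketch in Python of the ranking effective altruists have in mind: interventions ordered by cost per life saved. The charity names and dollar figures are hypothetical placeholders, not GiveWell's actual estimates.

code:

# Rank hypothetical interventions by cost per life saved.
# All names and figures below are made up for illustration,
# not GiveWell data.

interventions = {
    "malaria_nets": {"spent": 100_000, "lives_saved": 25},
    "deworming":    {"spent": 100_000, "lives_saved": 10},
    "gala_dinner":  {"spent": 100_000, "lives_saved": 1},
}

# Sort ascending by dollars per life: cheapest way to save a life first.
for name, d in sorted(interventions.items(),
                      key=lambda kv: kv[1]["spent"] / kv[1]["lives_saved"]):
    print(f"{name}: ${d['spent'] / d['lives_saved']:,.0f} per life saved")

# malaria_nets: $4,000 per life saved    <- best value per dollar
# deworming: $10,000 per life saved
# gala_dinner: $100,000 per life saved

The whole pitch of effective altruism is that spread: same budget, wildly different outcomes depending on where it goes.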

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that
Yeah, I get that. I'm saying the name 'effective altruism' doesn't necessarily make people think that when they hear it. Looks like 'utilitarian altruism' would be closer to the mark.
