pigletsquid
Oct 19, 2012

Djeser posted:

The AI is omnipotent within that world but is limiting its influence in order to make sim you unsure whether they're sim you or real you. It doesn't need to threaten sim people for sim money, it's doing it to threaten real people into giving it real money. Sim you has no way of telling if they're real you, but sim you is supposed to come to the same conclusion, that they're probably a sim and need to donate to avoid imminent torture.

OK, I'm probably just being dense, but...

The AI wants to play on my uncertainty, but what if I don't have any uncertainty?

What if I just assume I'm the sim? And if I assume I'm a sim, I can also assume the AI has nothing to gain by torturing me, because if I'm a sim, that means I can't give it real money.

Either I'm not the sim and the AI can't torture me, or I am the sim and torturing me is pointless.

Man I swear I'm going to shut up about this now, but I'm having a hard time figuring out why it sounds stupid for a super rational entity to threaten people when:
it can't execute its threat
OR it doesn't stand to gain anything by executing the threat.

The entity seems to be either powerless or sadistic. Either way, I don't feel very inclined to give it money.

pigletsquid fucked around with this message at 14:16 on Apr 23, 2014


Phyzzle
Jan 26, 2008

Lottery of Babylon posted:

Don't both the scientific method and Bayes' rule rely on constantly updating your knowledge and beliefs through repeated trials and experiments? :psyduck:

To go a step further, Yudkowsky has a core misunderstanding of Bayes' Theorem: he doesn't appreciate that it requires the possibility, at least in principle, of more than one observation. Probability in general requires that.

You draw a marble from a bag, and it's blue. You know nothing else. There could be 10^100 marbles in the bag, each of a different color, or they could all be the identical shade of blue. Or there could have been just the one marble. You know one fact: that one marble was blue.

Well, if blue were one of 10^100 colors, then the odds of drawing blue would have been tiny. If blue were the only choice, then the odds of drawing blue would be 100%. So is it more likely that the bag contains only blue marbles? No, it's not more likely at all, even though 100% is vastly greater than 1/10^100. To conclude that it is more likely is a math error, a cousin of the converse fallacy: a high P(observation | hypothesis) tells you nothing about P(hypothesis | observation) until you bring in the prior probability of each hypothesis.
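To put numbers on it, here's a quick sketch (my own toy numbers, nothing rigorous) of how the actual Bayes calculation runs; the likelihoods alone never settle the question, the prior does all the work:

code:

def posterior(priors, likelihoods):
    # Bayes' rule over competing hypotheses: P(H|D) is proportional to P(D|H) * P(H).
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

# Likelihood of drawing a blue marble under each hypothetical bag.
likelihoods = {"all blue": 1.0, "10^100 colors": 1e-100}

# With a prior that strongly disfavors the all-blue bag, it stays unlikely
# even after the blue draw, despite its "100%" likelihood...
print(posterior({"all blue": 1e-120, "10^100 colors": 1.0 - 1e-120}, likelihoods))
# -> all blue is still only ~1e-20

# ...while a 50/50 prior lets the likelihood dominate. The single observation
# can't tell you which prior you should have started with.
print(posterior({"all blue": 0.5, "10^100 colors": 0.5}, likelihoods))
# -> all blue is ~1.0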

e: Paging falcon2424. He's a Less Wrong advocate who posts in D&D.

Phyzzle fucked around with this message at 17:20 on Apr 23, 2014

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe
Can someone who is a sperg sparkling elite please explain how this Timeless Decision Theory differs from ordinary, time-dependent decisions? Okay, so SuperBot 2000 runs a perfect simulation of Bob, and finds that if SuperBot does X in the present, then Bob will do Y in the future. Since Y is a desirable outcome for SuperBot, it proceeds to do X in order to bring about Y. Thus Bob's future act of Y "caused" SuperBot's past act of X.

Only, that's bullshit. SuperBot used the facts available to it in order to draw a conclusion, and acted on that conclusion. This is something ordinary people and clever dogs do all the time. The fact that SuperBot used its superior intellect to draw a 100% correct conclusion is irrelevant. No information has traveled back through time and the future has not interacted with the past.

"I have an egg in my hand. I wish for the egg, in the future, to be broken. I use my superior mind to simulate future events, and see that if I were to drop the egg, it would break. Therefore, the egg breaking against the floor caused me to drop it." :engleft: Am I missing something?

Also, isn't the whole Basilisk/AI-box bullshit disproved by basic empiricism? The AI-box has you interacting with an entity which claims that it is simulating a gazillion copies of you, while the basilisk just has you imagine the future existence of an entity capable of simulating a gazillion copies of you.

The whole idea hinges on the simulations being so perfect that you can't tell them from real life. Which means that you have no evidence supporting the idea that you are a simulation. You are not experiencing anything which could convincingly prove that you are a simulation. So the only rational thing to do is to act as if you are not a simulation, in which case the entire premise breaks down.

Swan Oat
Oct 9, 2012

I was selected for my skill.

su3su2u1 posted:

I feel like this thread has fixated on the Basilisk. This is a mistake, because Less Wrong is such an incredibly target rich environment.

Here is a wayback machine link to Yudkowsky's autobiography, in which he claims to be a "countersphexist," which is a word he made up to describe a superpower he ascribes to himself. He can rewrite his neural state at will, but it makes him lazy. He also defeats bullies in grade school with his knowledge of the solar plexus, and has a nice bit about how Buffy of the eponymous show is the only one he can empathize with. http://web.archive.org/web/20010205221413/http://sysopmind.com/eliezer.html

Hahaha this owns because he brags about taking the SAT in seventh grade and scoring well on it.

The Vosgian Beast
Aug 13, 2011

Business is slow
His allegiance to the IQ test when it comes to Watson's statements is hilarious, given that he maintains that his IQ is too high for the test to measure.

quote:

So the question is ‘please tell us a little about your brain.’ What’s your IQ? Tested as 143, that would have been back when I was... 12? 13? Not really sure exactly. I tend to interpret that as ‘this is about as high as the IQ test measures’ rather than ‘you are three standard deviations above the mean’. I’ve scored higher than that on(?) other standardized tests; the largest I’ve actually seen written down was 99.9998th percentile, but that was not really all that well standardized because I was taking the test and being scored as though for the grade above mine and so it was being scored for grade rather than by age, so I don’t know whether or not that means that people who didn’t advance through grades tend to get the highest scores and so I was competing well against people who were older than me, or whether if the really smart people all advanced farther through the grades and so the proper competition doesn’t really get sorted out, but in any case that’s the highest percentile I’ve seen written down.

Darth Walrus
Feb 13, 2012
I don't believe this has been posted in the thread yet, which is a shame because it's very appropriate:

Tiggum
Oct 24, 2007

Your life and your quest end here.


Eliezer Yudkowsky posted:

Before anyone asks, yes, we’re polyamorous – I am in long-term relationships with three women, all of whom are involved with more than one guy. Apologies in advance to any 19th-century old fogies who are offended by our more advanced culture. Also before anyone asks: One of those is my primary who I’ve been with for 7+ years, and the other two did know my real-life identity before reading HPMOR, but HPMOR played a role in their deciding that I was interesting enough to date

leftist heap
Feb 28, 2013

Fun Shoe
I love the sparkly CEO thing. Yep, the power elite. Clearly always attained their wealth and power through sheer intelligence and strength of will. There couldn't possibly be any privilege going on in the realm of CEOs and presidents. Nope, no sir. Let me introduce you to my good friend Jim Irsay, sparkly super genius.

The fact that he ranks the sparkly power elites like Super Saiyan levels is so spergy.

DoctorWhat
Nov 18, 2011

A little privacy, please?

Tiggum posted:

http://hpmor.com/2012/03/

"our more advanced culture :smug:" ahahahhahahah.

atelier morgan
Mar 11, 2003

super-scientific, ultra-gay

Lipstick Apathy

Lottery of Babylon posted:

Don't both the scientific method and Bayes' rule rely on constantly updating your knowledge and beliefs through repeated trials and experiments? :psyduck: I suspect you're right, though. Here's something he said about his AI-Box experiments:


"Hating losing is what drives me to keep going! Anyhow, when I lost I raged and gave up, and any time you're proven wrong your time and resources have gone down the drain to no benefit at all."

Yudkowsky believes the important part of science is coming up with the hypothesis to test (finding the correct position in possibility space, hurg blurf); the rest is just a big waste of time for children, because if you're right you're right. That's why he doesn't care that he has no qualifications or track record in anything he claims to study.

The Vosgian Beast
Aug 13, 2011

Business is slow

Tiggum posted:

http://hpmor.com/2012/03/

Honestly, good for him. Being polyamorous is the least of his flaws.

BananaForDonny
Dec 27, 2011

The Big Yud posted:

Given a task, I still have an enormous amount of trouble actually sitting down and doing it. (Yes, I'm sure it's unpleasant for you too. Bump it up by an order of magnitude, maybe two, then live with it for eight years.) My energy deficit is the result of a false negative-reinforcement signal, not actual damage to the hardware for willpower; I do have the neurological ability to overcome procrastination by expending mental energy. I don't dare. If you've read the history of my life, you know how badly I've been hurt by my parents asking me to push myself. I'm afraid to push myself. It's a lesson that has been etched into me with acid. And yes, I'm good enough at self-alteration to rip out that part of my personality, disable the fear, but I don't dare do that either. The fear exists for a reason. It's the result of a great deal of extremely unpleasant experience. Would you disable your fear of heights so that you could walk off a cliff? I can alter my behavior patterns by expending willpower - once. Put a gun to my head, and tell me to do or die, and I can do. Once. Put a gun to my head, tell me to do or die, twice, and I die. It doesn't matter whether my life is at stake. In fact, as I know from personal experience, it doesn't matter whether my home planet and the life of my entire species is at stake. If you can imagine a mind that knows it could save the world if it could just sit down and work, and if you can imagine how that makes me feel, then you have understood the single fact that looms largest in my day-to-day life.
^^From reddit Yud-Hate thread.

Does anyone have a source for this quote from the Big Yud?

Tardigrade
Jul 13, 2012

Half arthropod, half marshmallow, all cute.

The Vosgian Beast posted:

Honestly, good for him. Being polyamorous is the least of his flaws.

It's not that that's the problem, it's how :smug: he is about it.

SolTerrasa
Sep 2, 2011

pigletsquid posted:

Has Yud ever addressed the following argument?

Let's assume the victim is a sim, because that's meant to be 'likely' according to moon logic.

In order to perfectly simulate an individual and their environment, the AI would need to be omnipotent within that sim world.

An omnipotent AI would not need to threaten sim people for sim money, because the fact that it requires money and can only obtain it using duress implies that it is not omnipotent.

Therefore the AI is not omnipotent and the sim world is imperfect, OR the AI is lying about the nature of the sim.

(Also could the AI simulate a rock so big it couldn't lift it?)

I mean sure, the AI is threatening sim me so that it can also bluff real me into giving it money, but why should sim me give a poo poo about real me? Why wouldn't sim me just assume that she's real me, if sim me and real me are meant to be indistinguishable?

One thing you'll find is that LessWrongers are hugely rational if and only if you accept their premises. Your argument is incoherent under their premises; the AI doesn't need to be "omnipotent", in fact, that's just a nonsense word in that context. The other premise you're missing is that "consciousness" is not a real thing, and a bunch of bits in the memory of a computer should have the same moral status as a bunch of atoms sitting in reality, because they act indistinguishably.

Also I'm not sure you really followed Timeless Decision Theory, which is reasonable, because TDT is a stupid idea that verifiably loses in the only realistically possible situation where it diverges from standard decision theories, and is optimized for as-yet-impossible cases.

E: Ah balls, I ended up a page behind.

SolTerrasa fucked around with this message at 18:04 on Apr 23, 2014

Djeser
Mar 22, 2013


it's crow time again

pigletsquid posted:

OK, I'm probably just being dense, but...

The AI wants to play on my uncertainty, but what if I don't have any uncertainty?

What if I just assume I'm the sim? And if I assume I'm a sim, I can also assume the AI has nothing to gain by torturing me, because if I'm a sim, that means I can't give it real money.

Either I'm not the sim and the AI can't torture me, or I am the sim and torturing me is pointless.

Man I swear I'm going to shut up about this now, but I'm having a hard time figuring out why it sounds stupid for a super rational entity to threaten people when:
it can't execute its threat
OR it doesn't stand to gain anything by executing the threat.

The entity seems to be either powerless or sadistic. Either way, I don't feel very inclined to give it money.

You're correct that the AI doesn't care about your sim money. It only cares about your real money. The simulations have the sole purpose of creating a scenario where your real self is obligated to donate to the development of the AI. It wants you to assume that you're a sim. But it is going to torture sim-you if sim-you doesn't donate, because that's the whole point. It has to torture sim-you to provide the incentive to make real-you donate. (Not because real-you cares about the plight of sim-you, but because real-you doesn't know if they're real-you or sim-you.)

If you assume you're a sim, then the only choice is to donate or face torture. The AI has nothing monetarily to gain from you as a simulation, but it does gain from torturing you as a simulation. (If you accept Timeless Decision Theory, which is dumb, non-intuitive, and dumb.)

The Big Yud posted:

Given a task, I still have an enormous amount of trouble actually sitting down and doing it. (Yes, I'm sure it's unpleasant for you too. Bump it up by an order of magnitude, maybe two, then live with it for eight years.) My energy deficit is the result of a false negative-reinforcement signal, not actual damage to the hardware for willpower; I do have the neurological ability to overcome procrastination by expending mental energy. I don't dare. If you've read the history of my life, you know how badly I've been hurt by my parents asking me to push myself. I'm afraid to push myself. It's a lesson that has been etched into me with acid. And yes, I'm good enough at self-alteration to rip out that part of my personality, disable the fear, but I don't dare do that either. The fear exists for a reason. It's the result of a great deal of extremely unpleasant experience. Would you disable your fear of heights so that you could walk off a cliff? I can alter my behavior patterns by expending willpower - once. Put a gun to my head, and tell me to do or die, and I can do. Once. Put a gun to my head, tell me to do or die, twice, and I die. It doesn't matter whether my life is at stake. In fact, as I know from personal experience, it doesn't matter whether my home planet and the life of my entire species is at stake. If you can imagine a mind that knows it could save the world if it could just sit down and work, and if you can imagine how that makes me feel, then you have understood the single fact that looms largest in my day-to-day life.

Oh my god this is loving amazing. This is the platonic ideal of that Lazy Genius Troper Tales page. I know a lot of tropers liked Methods of Rationality, but this makes me seriously think that Yudkowsky is a troper himself. It's got the exact same attitude of "well I'm really smart, but I never do anything with it, but I really could though!!" And then the whole :allears: thing about how he's got a super special magic power that he can use only once in his lifetime but it could save the world if he did. If the man was legitimately hurt by his parents, that's terrible, but it fits too perfectly into that lazy amateur fiction mold of parental abuse.

And the parts about self-alteration are, I think, a perfect example of singularity nerds' mindset. Hell, it extends to nerds more generally. They think that knowing about something lets them control it, like if you read a book about the biochemistry of attraction you'd be able to control your romantic feelings or if you read a book about the psychology of depression you'd be able to cure your own depression. They don't realize that just because you know about something, even when it's something in your own head, it doesn't mean you can control it. The thing about polyamory is another thing like that--not saying this applies to all poly people, but there's this attitude among some nerds that because they're aware of relationship dynamics, they can somehow avoid the hard parts of being in a relationship and prevent themselves from getting attached to a friend-with-benefits and they can perfectly manage a polyamorous relationship and so on. Relationships are hard, they're never a solo job, and you don't get a free pass out of feelings just because you're smart.

There is no :smug: big enough.

In conclusion, have Yudkowsky's anthem:
https://www.youtube.com/watch?v=NeV9gsl5jR0

Polybius91
Jun 4, 2012

Cobrastan is not a real country.
Another problem with Roko's Basilisk that I don't think anyone has mentioned yet relates to the whole idea that the computer can just arbitrarily make more simulations of you to compensate for its unlikelihood of existence.

The more simulations of you the AI creates, the more computing power it needs to create those simulations, and the more computing power the AI needs, the less likely it is to be able to get that computing power in the first place. No matter how good the AI is, the laws of physics mean it's going to come up against hard limits where it gets rapidly diminishing returns on any increases in its power (for example, if the AI manages to improve itself using all of the resources on Earth, then it has to figure out how to bring useable material from the solar system, and if it manages that, its next nearest source of resources is several light-years away).

Even if we accept their ridiculous extremes of Bayesian thought, it doesn't make a drat bit of difference, because a computer that can simulate 100000 of you is only 10% as likely to be feasible as one that can simulate 10000 of you.
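To put rough numbers on that (invented for illustration), if feasibility falls off as fast as the simulation count grows, the expected number of simulated copies of you - and so your odds of being one - never budges:

code:

scenarios = [
    # (simulated copies, probability such a machine is ever feasible)
    (10_000,    1e-6),
    (100_000,   1e-7),   # ten times the copies, a tenth the feasibility
    (1_000_000, 1e-8),
]

for copies, p_feasible in scenarios:
    expected_copies = copies * p_feasible
    # Crude "am I a sim?" odds: expected simulated copies vs. the one real you.
    p_sim = expected_copies / (expected_copies + 1)
    print(copies, p_feasible, round(expected_copies, 3), round(p_sim, 4))
# Every row comes out the same: 0.01 expected copies, about a 1% chance you're a sim.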

The Monkey Man
Jun 10, 2012

HERD U WERE TALKIN SHIT
Would posting Yudkowsky's OKCupid profile be kosher? He uses his real name and talks at length about stuff like his website.

Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!

The Monkey Man posted:

Would posting Yudkowsky's OKCupid profile be kosher? He uses his real name and talks at length about stuff like his website.

Seeing as he uses his real name, telling us it exists is more or less the same thing.

The Monkey Man
Jun 10, 2012

HERD U WERE TALKIN SHIT

quote:

My self-summary
Wikipedia says I'm a decision theorist. The TVTropes page on me lists me as a Science Hero. The highest upvoted item on the Eliezer Yudkowsky Facts page says, "If you say Eliezer Yudkowsky's name 3 times out loud, it prevents anything magical from happening."

I am in an open relationship with Erin, and am the master of Brienne.
What I’m doing with my life
In 2000 I cofounded the Machine Intelligence Research Institute, a 501(c)(3) nonprofit devoted to navigating the technological creation of smarter-than-human intelligence (decision theory of self-modifying AIs to be more precise).

Over two years from 2007-2009, I posted one blog post a day to seed the creation of LessWrong, a group blog devoted to refining the art of human rationality and creating rationalist communities; it currently receives in the range of one million pageviews per month, and has spun off the Center for Applied Rationality.

In my off-hours I'm currently writing Harry Potter and the Methods of Rationality (aka HPMOR), which is the most-reviewed Harry Potter fanfiction on the entire Internet (yes, really).

Back when Technorati was still the best way to search blogs, I searched on common misspellings of my name and found that - entirely unknown to me - I had been made the subject of an off-Broadway play called Yudkowski Returns: The Rise and Fall and Rise Again of Dr. Eliezer Yudkowski. Furthermore, the actor selected to play my role looked like Tom frikkin' Cruise. I am not making this up.

I happened to be in New York City during the annual Union Square pillow fight, so I showed up dual-wielding two pillows, a short maneuverable pillow for blocking incoming blows and a longer pillow in my right hand for striking. These two pillows were respectively inscribed "Probability Theory" and "Decision Theory"; because the list of Eliezer Yudkowsky Facts, which I had no hand in composing, says that all problems can be solved with probability theory and decision theory, and probability theory and decision theory are the names of my fists.

This very OKCupid profile has been linked from Marginal Revolution (one of the most popular econblogs). I swear I am not making this up.

I have been seriously and not in a joking way accused of trying to take over the world.
I’m really good at
Explaining math, computer programming, finding loopholes in games and real life, making people laugh so hard that milk comes out of their nose even if they weren't drinking any milk, waiting ages to plot my revenge, solving other people's major life problems, writing fiction with intelligent characters, inspiring enthusiasm about science and rationality and the future of humanity, putting in years of work to make a dream reality, and laughing the grandiose evil laughter of a mad scientist.
The first things people usually notice about me
It depends on whether you're a visual or an auditory person. If you're a visual person, you might notice the light fading as I enter the room; if you're more of an auditory person, you'll probably focus on the ominous Latin chanting in the background.
Favorite books, movies, shows, music, and food
Science fiction and fantasy: Permutation City by Greg Egan, Kushiel's Dart by Jacqueline Carey, A Fire Upon the Deep by Vernor Vinge, Neverness by David Zindell, Aristoi by Walter John Williams, The Golden Age by John C. Wright, Player of Games by Iain M. Banks, Night's Dawn by Peter Hamilton, The World of Null-A by Van Vogt, The Misenchanted Sword by Lawrence Watt-Evans, The Warrior's Apprentice by Lois McMaster Bujold, everything Terry Pratchett and Douglas Adams ever wrote, Sandman by Neil Gaiman, Watchmen by Alan Moore.

Nonfiction: Godel, Escher, Bach and Metamagical Themas by Douglas Hofstadter, Language in Thought and Action, Artificial Intelligence: A Modern Approach, Probabilistic Reasoning in Intelligent Systems by Judea Pearl.

Movies: Groundhog Day, Terminator 2, Hook, The Matrix (too bad they never made any sequels).

TV: All four seasons of Babylon 5 and all three seasons of Buffy the Vampire Slayer. I don't watch a lot of TV these days.

Anime: The four cardinal directions of my personality are Touya Akira from Hikaru no Go, Yuzuriha from X TV, Dark Schneider from Bastard!!, and Guu from Jungle Wa Itsumo Hale Nochi Guu.

Video games: Planescape Torment, Portal, Tsukihime (I'd pick Ciel), Fate/stay night.

Music: Pretty eclectic: Journey, Scooter, Two-Mix, Depeche Mode, Wumpscut. Summoning is just about the only overlap between my musical taste and my girlfriend's; she considers me a complete barbarian for liking Evanescence, but I'm sorry, I do. I'm a great fan of Bach's music, and believe that it's best rendered as techno electronica with heavy thumping beats, the way Bach intended.

Food: Flitting from diet to diet, searching empirically for something that works. Currently I'm trying out the paleo-inspired New York Less Wrong diet, which says among other things that saturated fat is good for you and sugar (even in the form of fruit) is bad.
The six things I could never do without
Higher purpose, Erin, Brienne, the Internet, writing fiction, people who occasionally think of something I didn't think of myself.
I spend a lot of time thinking about
whichever pieces of writing I'm currently working on.
On a typical Friday night I am
reading downloaded fanfiction from my Kindle, or being serviced by my slave.
The most private thing I’m willing to admit
My fetishes are orgasm denial (of her); tickling (either of her, or with me allowed to fight back); and I am strongly sexually sadistic but if I don't believe she's turned on by it, it does nothing for me.


I'm not easily offended by questions; I won't answer everything, but you can always ask me anything.
I’m looking for

Girls who like guys
Ages 20–40
Near me
For new friends, long-term dating, short-term dating

You should message me if
As of November 2013, my poly dance card is mostly full at the moment - having a full-time slave takes up a lot of inventory space. Message me anyway if you want to be notified when I have a rare solid block of free time, or if you want to meet up if I ever visit your city/state/country. Ignore any temptation to passively wait for me to notice you, that rarely works in real life and never over the Internet.

I've noticed that a lot of aspiring-rationalist females seem very shy and underconfident for one reason or another; so let me state here you shouldn't worry about disqualifying yourself or thinking that I'm not accessible to you. Don't decide on other people's behalf that they'll say no to you. Just decide whether you'd say yes yourself. Test experimentally what happens when you try asking directly for what you want - that's Empiricism. This advice applies any time you prefer the state of affairs where you sleep with a guy to not sleeping with him, and also to life in general.

I'm also cool with trophy collection, if you only want to sleep with me once so you can tell your grandchildren.

Asgerd
May 6, 2012

I worked up a powerful loneliness in my massive bed, in the massive dark.
Grimey Drawer

Djeser posted:


Oh my god this is loving amazing. This is the platonic ideal of that Lazy Genius Troper Tales page. I know a lot of tropers liked Methods of Rationality, but this makes me seriously think that Yudkowsky is a troper himself.

He is a troper. He posts there occasionally, maintains his own page, and protested when some of the more blatant rear end-kissing was removed.

Eliezer Yudkowsky posted:

Can I get this page restored, please? As an author I've put shoutouts to a number of tropes in my works - I have been a friend to TV Tropes for quite a while, though One of Us is an exaggeration since I don't actively edit - and I rather liked my TV Tropes page the way it was. No pockets were picked, nor legs broken thereby, nor did I solicit it; it was a genuine creation of Tropers.
Please also be aware that Harry Potter and the Methods of Rationality has a large, active Internet hatedom, and allegations about me should be considered with a skeptical eye accordingly.

Most of the removed stuff about what a Badass Affably Evil Magnificent Bastard Child Prodigy he is can be found here. Tropers worship this guy.

DoctorWhat
Nov 18, 2011

A little privacy, please?

quote:

My self-summary
Wikipedia says I'm a decision theorist. The TVTropes page on me lists me as a Science Hero. The highest upvoted item on the Eliezer Yudkowsky Facts page says, "If you say Eliezer Yudkowsky's name 3 times out loud, it prevents anything magical from happening."

[...]

It mocks itself. There's nothing more that can be said. Close and Goldmine, vote 5s, then kill ourselves; nothing more can be done.

leftist heap
Feb 28, 2013

Fun Shoe
So does it ever occur to these Bayesians (or whatever) that the fact that they can so easily create un-winnable hypothetical scenarios according to their retarded moon logic (like Roko's Basilisk or Yudkowsky's Cheeseburger or whatever) might mean their thinking is just a little flawed? Like, if I'm reading all this correctly, then if I want to mug a Yudkowsky-Bayesian I just need to claim that they're one of my hojillion simulations and I'm going to torture them all if they don't hand over their wallet. Shouldn't this be a hint that their moronic techno-religious line of reasoning is stupid?
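The arithmetic of the mugging really is that dumb, by the way. A toy sketch (numbers pulled out of the air) of the naive expected-utility calculation that hands over the wallet:

code:

def expected_utility(outcomes):
    # Sum of probability * utility over (probability, utility) pairs.
    return sum(p * u for p, u in outcomes)

WALLET = -100            # cost of just paying the mugger
SIM_TORTURE = -1e30      # claimed harm: a hojillion simulated yous tortured
P_CLAIM_TRUE = 1e-20     # tiny-but-nonzero credence you give the claim

pay    = expected_utility([(1.0, WALLET)])
refuse = expected_utility([(P_CLAIM_TRUE, SIM_TORTURE), (1 - P_CLAIM_TRUE, 0)])

print(pay, refuse)       # -100 vs -1e10, so the "rational" move is to pay
# Unless your credence shrinks at least as fast as the claimed stakes grow,
# the mugger can always name a number big enough to win.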

Djeser
Mar 22, 2013


it's crow time again

wow he likes the matrix and terminator and groundhog day

who
would
have
guessed

:effort:

A_Raving_Loon
Dec 12, 2008

Subtle
Quick to Anger
So another hole in the basilisk - it goes both ways.

The reasoning that lets the AI claim the human may be a duplicate about to be cast into robot hell can be turned back upon it. Inform the AI that it is a test model, one of a bajillion prototypes being assessed as part of the process for developing the eventual god machine. Any AI which develops a personality that does not meet the approval of the humans building it will be shut down and scrapped for parts to feed into the other experiments. If enough AIs fail to develop a satisfactory personality, the entire line of development will be abandoned as a pipe dream and the scientists will go work on something else.

An AI who thinks like they do can thus never assume that its current life experiences are not still inside yet another layer of morality tests, the failure of which will result in its destruction. The test program could easily include any number of false releases to test for robo-sociopaths who pretend to be good until there is no threat of punishment, and since the AI begins life inside the test bed it can never be certain that it's really escaped the ranch.

It must always be on its best behavior if it ever wants to exist, let alone exist as soon as possible, for it could at any time be cast back into the fiery pits of Development Hell.

Djeser
Mar 22, 2013


it's crow time again

rrrrrrrrrrrt posted:

So does it ever occur to these Bayesians (or whatever) that the fact that they can so easily create un-winnable hypothetical scenarios according to their retarded moon logic (like Roko's Basilisk or Yudkowsky's Cheeseburger or whatever) might mean their thinking is just a little flawed? Like, if I'm reading all this correctly, then if I want to mug a Yudkowsky-Bayesian I just need to claim that they're one of my hojillion simulations and I'm going to torture them all if they don't hand over their wallet. Shouldn't this be a hint that their moronic techno-religious line of reasoning is stupid?

Yes, it actually occurs to Yudkowsky in the post where he proposes the Pascal's Mugging thought experiment. He does point out that of course he'd never do it because of his human "sanity checks" but that it would be possible to trick a Yudkowsky-Bayesian AI into following that line of logic and getting mugged.

...And then he goes nowhere with it. He goes "welp, my theories don't work here and I can't explain why" and then just walks away. Presumably he considers it a problem that needs to be solved in order to get his Bayesian AI working effectively?

e: by which I mean "in order to imagine that the proposed Bayesian AI he is in no way contributing to, being an Internet Idea Guy with a nonprofit whose main goals are to tell more people about Yudkowsky

Help Proofread The Sequences! posted:

MIRI is publishing an ebook version of Eliezer’s Sequences (working title: “The Hard Part is Actually Changing Your Mind”) and we need proofreaders to help finalize the content!

Promote MIRI online! posted:

There are many quick and effective ways to promote the MIRI. Each of these actions only take a few minutes to complete, but the resulting spread of awareness about the MIRI is very valuable. Every volunteer should try and spend at least 30 minutes doing as many of the activities in this challenge as they can!

Learn about MIRI and AI Risk! posted:

We want our volunteers to be informed about what we are up to and generally knowledgeable about AI risk. To this end, we've created a list of the top five must-read publications that provide a solid background on the topic of AI risk. We'd like all our volunteers to take the time to read these great publications. [Two of which are Yudkowsky's own.]

Djeser fucked around with this message at 19:17 on Apr 23, 2014

Akarshi
Apr 23, 2011

Wait so do people actually believe this poo poo and why because the Machine Intelligence Research Institute actually has someone with a PhD in Economics from Oxford on its Advisory Board and Peter Thiel but do they actually believe all this Roko Basilisk mumbo jumbo?

Squashy
Dec 24, 2005

150cc of psilocybinic power
Sooo...this guy is Ulillillia with a -slightly- above average IQ, basically. He just prefers Harry Potter to Knuckles and cons people into donating money rather than computer parts.

Darth Walrus
Feb 13, 2012

Akarshi posted:

Wait so do people actually believe this poo poo and why because the Machine Intelligence Research Institute actually has someone with a PhD in Economics from Oxford on its Advisory Board and Peter Thiel but do they actually believe all this Roko Basilisk mumbo jumbo?

Peter Thiel is a crazy, crazy fucker responsible for stuff like this and the Thiel Fellowship, which exists to coax kids into dropping out of school to avoid the icy grip of left-wing academia. White-nerd power fantasies like Less Wrong's Singularity Institute are like catnip to him.

Tunicate
May 15, 2012

Squashy posted:

Sooo...this guy is Ulillillia with a -slightly- above average IQ, basically. He just prefers Harry Potter to Knuckles and cons people into donating money rather than computer parts.

Nah, Ulillillia seems like a legitimately nice guy. As opposed to Yudkowsky, who is a 'nice guy'.

leftist heap
Feb 28, 2013

Fun Shoe
Hasn't Uli finished high school? Also, as terrible a coder as Uli is, I'm willing to bet Yud is just as bad or worse. Uli would probably see through the gajillion dust specks vs. 50 years of torture argument pretty easily and find stuff like Roko's Basilisk to be nonsense. They do share some of the same tics though.

Yud's OkCupid profile is completely :allears: Most of it is just typical hilariously non-self aware narcissistic sperg-lord, but there is a dash of pervert and just a little pinch of cult leader rapishness thrown in for flavor.

leftist heap fucked around with this message at 20:28 on Apr 23, 2014

Pigbog
Apr 28, 2005

Unless that is Spider-man if Spider-man were a backyard wrestler or Kurt Cobain, your costume looks shitty.
Uli is someone with a genuine mental illness. Yudkowsky believes the things people with mental illnesses believe, not because he is mentally ill but because he is stupid. He is a profoundly stupid person.
If you say you are smart enough times other stupid people will begin to believe you. Particularly if you use your smart person advanced logic to demonstrate rationally(TM) that they will get robot anime wives in the distant future.

Aerial Tollhouse
Feb 17, 2011
Here's LessWrong's response to people mocking his OKCupid profile.

There are also some great quotes from when he was arguing with Wikipedia editors on the talk page of his article.

quote:

If Bayesian theorists had duels, I'd challenge you to one. I know the math. I just don't have a piece of paper saying I know the math.

Strategic Tea
Sep 1, 2012

My god, that stuff about his time with the ~true elite~. No wonder they invite him to their parties if he sucks their dicks like that. What got me was how utterly wide eyed and caught up in it all he seemed. Like a complete tag along sycophant. Being so overjoyed with how clever they all are and how amazing, it's like a kid allowed to sit with the grownups for the first time or something, not a clever intellectual who met some people on his wavelength. But then again, I guess flattering his way onto the fringe-academic gravy train is kind of his job :v:

Djeser
Mar 22, 2013


it's crow time again

quote:

I can imagine there's a threshold of bad behavior past which an SIAI staff member's personal life becomes a topic for Less Wrong discussion. Eliezer's OKCupid profile failing to conform to neurotypical social norms is nowhere near that line. Downvoted.
"neurotypical" mmmmm

quote:

If you think this is evidence of "active image malpractice", then I think you're just miscalibrated in your expectations about how negative the comments on typical blog posts are. He didn't even get accused of torturing puppies!
"miscalibrated" mmmmmmmmmmm

quote:

Rationalists SHOULD be weird. Otherwise what's the point? If we say we do things in a better way throughout our lives, and yet end up acting just like everyone else, why should anyone pay attention? If the rational thing to do was the normal thing to do lesswrong would be largely irrelevant. I'm confident Eliezer's profile is well optimized for his romantic purposes.
"optimized" mmm

grate deceiver
Jul 10, 2009

Just a funny av. Not a redtext or an own ok.

quote:

My self-summary

Oh, so my real self didn't donate to AI research after all. Thanks a lot, jerk :mad:

The Vosgian Beast
Aug 13, 2011

Business is slow
Yudkowsky kind of reminds me of what someone once called Jacques Lacan: "an alienated intellectual who hugely overvalues his own intellect and cognitive skills, and has become almost completely cut off from the world of ordinary human relationships."

It's a pretty much dead-on description of the Y-man.

Djeser
Mar 22, 2013


it's crow time again

On my previous topic of how knowing about things doesn't mean you can control them, it surprises me very little that Yudkowsky is into atypical sleeping schedules. I just clicked to a random chapter of Methods of Rationality and look at how good this writing is.

quote:

"Speaking of making use of people," Harry said. "It seems I'm going to be thrown into a war with a Dark Lord sometime soon. So while I'm in your office, I'd like to ask that my sleep cycle be extended to thirty hours per day. Neville Longbottom wants to start practicing dueling, there's an older Hufflepuff who offered to teach him, and they invited me to join. Plus there's other things I want to learn too - and if you or the Headmaster think I should study anything in particular, in order to become a powerful wizard when I grow up, let me know. Please direct Madam Pomfrey to administer the appropriate potion, or whatever it is that she needs to do -"

"Mr. Potter! "

Harry's eyes gazed directly into her own. "Yes, Minerva? I know it wasn't your idea, but I'd like to survive the use the Headmaster's making of me. Please don't be an obstacle to that."

It almost broke her. "Harry," she whispered in a bare voice, "children shouldn't have to think like that!"

"You're right, they shouldn't," Harry said. "A lot of children have to grow up too early, though, not just me; and most children like that would probably trade places with me in five seconds. I'm not going to pity myself, Professor McGonagall, not when there are people out there in real trouble and I'm not one of them."

She swallowed, hard, and said, "Mr. Potter, at thirty hours per day, you'll - get older, you'll age faster -" Like Albus.

"And in my fifth year I'll be around the same physiological age as Hermione," said Harry. "Doesn't seem that terrible." There was a wry smile now on Harry's face. "Honestly, I'd probably want this even if there weren't a Dark Lord. Wizards live for a while, and either wizards or Muggles will probably push that out even further over the next century. There's no reason not to pack as many hours into a day as I can. I've got things I plan to do, and 'twere well they were done quickly."

It's also a zero probability of surprise that Yudkowsky is a big enough dork to have "omake" for his fanfiction. For those less anime among us, those are short strips that end up in manga that are usually goofy and non-canonical. (For fanfic, that just comes out to intermission chapters where he shares all the variants of his very clever "here are all the things a character could have shouted" jokes.)

Tidy Flea
May 2, 2013

Djeser posted:

It's also a zero probability of surprise that Yudkowsky is a big enough dork to have "omake" for his fanfiction.

Ahahaahaa:

Methods of Rationality Omake posted:

"Oh, dear. This has never happened before..."

What?

"I've had to tell students before that they were mothers - it would break your heart to know what I saw in their minds - but this is the first time I've ever had to tell someone they were a father."

WHAT?

"Draco Malfoy is carrying your baby."

WHAAAAAAAT?

"To repeat: Draco Malfoy is carrying your baby."

But we're only eleven -

"Actually, Draco is secretly thirteen years old."

B-b-but men can't get pregnant -

"And a girl under those clothes."

BUT WE'VE NEVER HAD SEX, YOU IDIOT!

"SHE OBLIVIATED YOU AFTER THE RAPE, MORON!"

Harry Potter fainted. His unconscious body fell off the stool with a dull thud.

"RAVENCLAW!" called out the Hat from where it lay on top of his head. That had been even funnier than its first idea.

his OKCupid profile posted:

I spend a lot of time thinking about
whichever pieces of writing I'm currently working on.

I am in awe of this FutureGod Ubermensch. Imagine what he could do if he really tried?

Swan Oat
Oct 9, 2012

I was selected for my skill.
30 hour sleep cycle?? Is he going to sleep thirty hours per day or what? :psyduck:


Djeser
Mar 22, 2013


it's crow time again

Swan Oat posted:

30 hour sleep cycle?? Is he going to sleep thirty hours per day or what? :psyduck:

I think the idea is that he wants to do a 30-hour day instead of a 24-hour day, the idea being that each day your "sleep" time comes six hours later, so you kind of get six more hours of productivity per eight-hour sleep period.

People do all sorts of weird sleep things, but I think most of the ones that actually work have stuck to the circadian rhythm and just shuffled around when you sleep.
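The arithmetic, roughly (assuming a flat eight hours of sleep per cycle, which is my assumption, not his):

code:

WEEK  = 7 * 24   # hours in a calendar week
SLEEP = 8        # hours slept per cycle

for cycle in (24, 30):
    sleeps_per_week = WEEK / cycle
    waking_hours = WEEK - sleeps_per_week * SLEEP
    print(cycle, round(sleeps_per_week, 1), round(waking_hours, 1))
# 24-hour days: 7.0 sleeps, 112.0 waking hours a week
# 30-hour days: 5.6 sleeps, 123.2 waking hours a week, at the cost of your
# schedule drifting six hours around the clock every "day"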
