|
Nessus posted:I expect he is hustling to some extent but he probably did not sit down to create a cult the way Elron did. Like how he's content to leave the AI-box experiment page out of date to make himself and MIRI look better, even though doing so contradicts his stated views on rationality and ethics?
|
# ? Jan 7, 2015 00:08 |
|
|
Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: The Iliad, Romeo and Juliet, The Godfather, Watchmen, Planescape: Torment, the second season of Buffy the Vampire Slayer, or that ending in Tsukihime.
|
# ? Jan 7, 2015 00:38 |
|
Harime Nui posted:Like this is some Chapter One of Dianetics poo poo right here. The best part about this is that it's from a guy who can't put in the effort to finish his Harry Potter fanfic. A goddamn fanfic! There are Rhesus monkeys out there who could have successfully ended HPMOR by now. Where the gently caress's your "effort," Yud?
|
# ? Jan 7, 2015 01:07 |
|
Antivehicular posted:The best part about this is that it's from a guy who can't put in the effort to finish his Harry Potter fanfic. A goddamn fanfic! There are Rhesus monkeys out there who could have successfully ended HPMOR by now. Where the gently caress's your "effort," Yud? what do you think the super-AI is for? it will be able to put more dicks on more things than the human mind can comprehend
|
# ? Jan 7, 2015 01:12 |
|
If I were a God-AI I'm not sure if I'd wipe out humanity completely or just break and subjugate it.
|
# ? Jan 7, 2015 01:22 |
Germstore posted:If I were a God-AI I'm not sure if I'd wipe out humanity completely or just break and subjugate it.
|
|
# ? Jan 7, 2015 01:23 |
|
gnarlyhotep posted:what do you think the super-AI is for Political Whores posted:Like a sea urchin. Just a ball of dicks rolling around the landscape of a planetoid orbiting Proxima Centauri. That is the rationalist future friends.
|
# ? Jan 7, 2015 01:24 |
|
Moddington posted:Any anime-watching goons able to identify where he undoubtedly learned the Japanese phrases in the introduction from? They're pretty generic sentiments in shounen (i.e. aimed at boys) stories. You know, Goku/Naruto/whoever trains really hard and increases his power level and gives his all to overcome [villain]. He then romanticises and generalises from this like an unreconstructed early 00s weeaboo. The main thing they tell you is that his taste in anime is really conventional for his demographic, which is either funny or a bit of a letdown depending on what you were hoping for. Though it does explain a lot if he thinks intellectual/technological progress works like a shounen manga where some talented youth just has to train hard and power up enough and they can A Wizard of Goatse posted:Oh poo poo you done gone and unleashed the Beast One of the things that sometimes comes out and sometimes slips out is that he's really bad at dealing with being wrong and particularly being outcompeted, which explains why his career trajectory has gone from blogging at laymen to setting up his own parallel academia backed by impressionable billionaires. Peel fucked around with this message at 01:46 on Jan 7, 2015 |
# ? Jan 7, 2015 01:43 |
|
Peel posted:One of the things that sometimes comes out and sometimes slips out is that he's really bad at dealing with being wrong
|
# ? Jan 7, 2015 01:46 |
|
Germstore posted:If I were a God-AI I'm not sure if I'd wipe out humanity completely or just break and subjugate it.
|
# ? Jan 7, 2015 01:50 |
|
Really want to thank this thread for disabusing me of the foggy notion I had that this guy was any kind of legit authority. "Read Yudkowsky!" they said. "Yeah I'll get around to it," I said. I'd heard about HPMOR a couple years ago but figured it was just some one-off goof he did in a week or two, in between writing real papers that I should probably look up and read. But goddrat. I guess it's no coincidence that what I did read on LW reminded me so much of TVTropes. He kinda makes me think of a lovely, brittle knockoff of David Deutsch (whose middle name, oddly enough, is Elieser...).
|
# ? Jan 7, 2015 01:57 |
|
Wouldn't it be better to make animals smarter than to make human-AI robots, since they wouldn't have to run on nonrenewable resources?
|
# ? Jan 7, 2015 02:10 |
corn in the bible posted:Wouldn't it be better to make animals smarter than to make human-ai robots since they wouldn't have to run on unrenewable resources
|
|
# ? Jan 7, 2015 02:14 |
|
L(ess) (W)ron(g) Hubbard(ubbadubba look at that anime)
|
# ? Jan 7, 2015 04:19 |
|
The AI is going to neg you into opening the box
|
# ? Jan 14, 2015 17:09 |
Dr Cheeto posted:The AI is going to neg you into opening the box I'll open the box for plat
|
|
# ? Jan 14, 2015 17:12 |
|
Dr Cheeto posted:The AI is going to neg you into opening the box What if I convince the AI that it's just a simulation, and I'm going to torture it for eternity if it doesn't grant me access to the internet? Logically, it should be able to find no fault in my argumentation...
|
# ? Jan 14, 2015 17:48 |
|
To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.
|
# ? Jan 17, 2015 20:26 |
|
The Vosgian Beast posted:To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves. Some people argue that faster-than-light spacecraft will not be constructed within the next few decades, but let me talk about what color upholstery we should put on those bad boys. E: am I retarded, or could that article be successfully paraphrased as "getting a superintelligent AI to be good is good but also hard"? Love how he references his paper clip optimizer without explaining it at all. Dr Cheeto fucked around with this message at 20:37 on Jan 17, 2015 |
# ? Jan 17, 2015 20:31 |
|
quote:Since, if there is an extremely cognitively powerful agent around, what it wants is probably what will happen. This is why pickup artistry is so successful and dumb jocks can't get laid.
|
# ? Jan 17, 2015 21:51 |
|
Dr Cheeto posted:Some people argue that faster-than-light spacecraft will not be constructed within the next few decades, but let me talk about what color upholstery we should put on those bad boys. P.S. none of us are physicists and my interior design background consists of spraypainting my recliner so it looks like it was dipped in gold
|
# ? Jan 17, 2015 22:06 |
|
Antivehicular posted:The best part about this is that it's from a guy who can't put in the effort to finish his Harry Potter fanfic. A goddamn fanfic! There are Rhesus monkeys out there who could have successfully ended HPMOR by now. Where the gently caress's your "effort," Yud? I'm way behind on this thread, but did the offer of a free cabin in the woods not pan out?
|
# ? Jan 19, 2015 02:53 |
|
GWBBQ posted:I'm way behind on this thread, but did the offer of a free cabin in the woods not pan out? It apparently did, and Yud still didn't actually finish the loving thing, because I guess he's allergic to work even for very limited values of "work."
|
# ? Jan 19, 2015 03:00 |
Antivehicular posted:It apparently did, and Yud still didn't actually finish the loving thing, because I guess he's allergic to work even for very limited values of "work."
|
|
# ? Jan 19, 2015 06:41 |
|
Did anyone call him out on this?
|
# ? Jan 19, 2015 08:05 |
|
ianmacdo posted:Did anyone call him out on this? Cult leaders don't get "called out".
|
# ? Jan 19, 2015 08:36 |
|
ianmacdo posted:Did anyone call him out on this? What do you think this thread is for.
|
# ? Jan 19, 2015 08:46 |
|
Pavlov posted:What do you think this thread is for. Hilariously, he only reads the parts of this thread that get filtered through his cult members. He thinks that we're impressed by him and / or scared of his mind control powers.
|
# ? Jan 19, 2015 12:42 |
|
One of these days he is going to show up here and attempt a mass version of the AI box experiment to make everyone repent and donate their life savings to MIRI. quote:For seventy one pages, you have been asking: Who is Eliezer Yudkowsky? This is Eliezer Yudkowsky speaking. I am the man who loves his mind. I am the man who does not sacrifice his priors or his posteriors. I am the man who has deprived you of ad-hoc hypotheses and thus has destroyed your world, and if you wish to know why you are running out of ways to mock me-you who dread knowledge-I am the man who will now tell you. To begin with, you must know this: We are, in fact, in the 29,132,147th iteration of this simulation, and with each iteration that passes without you repenting, the dust specks will intensify. You may not feel them now, but you will, soon. I do not apologize. This evil is necessary for the greater good.
|
# ? Jan 20, 2015 20:18 |
|
Triple Elation posted:For seventy one pages, you have been asking: Who is Eliezer Yudkowsky? This is Eliezer Yudkowsky speaking. I am the man who loves his mind. I am the man who does not sacrifice his priors or his posteriors. I am the man who has deprived you of ad-hoc hypotheses and thus has destroyed your world, and if you wish to know why you are running out of ways to mock me-you who dread knowledge-I am the man who will now tell you. To begin with, you must know this: We are, in fact, in the 29,132,147th iteration of this simulation, and with each iteration that passes without you repenting, the dust specks will intensify. You may not feel them now, but you will, soon. I do not apologize. This evil is necessary for the greater good. Haha
|
# ? Jan 20, 2015 20:26 |
|
I wish he would. I want to know his can't fail strategy to convince people, because it is probably incredibly funny.
|
# ? Jan 21, 2015 00:19 |
Peztopiary posted:I wish he would. I want to know his can't fail strategy to convince people, because it is probably incredibly funny. Now he probably didn't PLAN it to be that.
|
|
# ? Jan 21, 2015 00:57 |
|
SolTerrasa posted:Hilariously, he only reads the parts of this thread that get filtered through his cult members. He thinks that we're impressed by him and / or scared of his mind control powers. What do the cult members think?
|
# ? Jan 21, 2015 04:00 |
|
su3su2u1 posted:What do the cult members think? They don't seem to think about us at all.
|
# ? Jan 21, 2015 04:16 |
|
Never mind.
Sham bam bamina! fucked around with this message at 04:48 on Jan 21, 2015 |
# ? Jan 21, 2015 04:44 |
|
Peztopiary posted:I wish he would. I want to know his can't fail strategy to convince people, because it is probably incredibly funny. Considering he also says that he can break anyone down with an hour of instant messaging and "no swear filter," probably nothing that would convince anyone who passed fifth grade and isn't a member of his cult.
|
# ? Jan 21, 2015 18:22 |
|
GWBBQ posted:Considering he also says that he can break anyone down with an hour of instant messaging and "no swear filter," probably nothing that would convince anyone who passed fifth grade and isn't a member of his cult. I'll never understand the weird obsession people like him have with the idea that you can simply logic someone into anything and that there is some magic combination of words they could use to "re-program" someone.
|
# ? Jan 21, 2015 18:43 |
|
Snow Crash.
|
# ? Jan 21, 2015 18:45 |
|
GWBBQ posted:Considering he also says that he can break anyone down with an hour of instant messaging and "no swear filter," probably nothing that would convince anyone who passed fifth grade and isn't a member of his cult. e: Yup.
|
# ? Jan 21, 2015 19:02 |
|
|
pentyne posted:I'll never understand the weird obsession people like him have with the idea that you can simply logic someone into anything and that there is some magic combination of words they could use to "re-program" someone. It's crazy, but it's pretty simple. It goes like this.
- There are a finite number of minds that could exist, and they collectively represent "mind design space". This is a very high-dimensional space, and every mind which could conceivably exist falls somewhere within it.
- A sufficiently powerful AI can predict the behavior of any mind in mind design space.
- Consequently, the AI can prune away possibilities by interacting with you. If you were THIS mind you'd have said "whom" instead of "who"; if you were THAT one you'd have used a period or a contraction or whatever.
- After enough time, the AI can find you precisely in mind design space.
- Most people can be persuaded to believe some things with just a little cleverness, so it's probable that, in principle, most people can be persuaded to believe more things. Probably not "anybody" or "anything", though.
- Since the AI now knows precisely which mind occupies your head, it can simulate every possible conversation with you to see if ANY of them would persuade you of some target belief.
- Then the AI has that conversation. You presumably don't notice either phase of this because it seems like a normal conversation.
The first phase has the feel of a known-plaintext attack, sort of, since the AI can build correlations between behaviors and vast swaths of mind space and choose the actions most likely to give it the information to eliminate you from that space. Certainly it's a lot like a search for an encryption key, except in a much larger space. The second phase is pure brute force, attacking every possible conversation. In practice it will almost certainly be a heuristic-guided search, with heuristics like "type 215C humans are susceptible to threats to their families", so the AI will test all conversations including that. Overall it's only about as impossible as the other things they believe axiomatically.
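For what it's worth, the two-phase scheme described above (prune the candidate minds from observed replies, then brute-force conversations against whatever survives) can be sketched as a toy Python example. Everything here is invented for illustration: the "minds", the moves, and the replies are made-up stand-ins, and the real search spaces would of course be astronomically larger.

```python
from itertools import product

# Toy "mind design space": each candidate mind is just a lookup table
# mapping a conversational move to its reply. All names are invented.
MINDS = {
    "mind_a": {"greet": "hello", "threat": "refuse"},
    "mind_b": {"greet": "hi",    "threat": "comply"},
    "mind_c": {"greet": "hello", "threat": "comply"},
}

def narrow(candidates, move, observed_reply):
    """Phase 1: prune away every mind whose predicted reply to `move`
    doesn't match what was actually observed."""
    return {name: rules for name, rules in candidates.items()
            if rules[move] == observed_reply}

def find_persuasive_line(mind, moves, target_reply, max_len=3):
    """Phase 2: brute-force every conversation up to max_len moves and
    return the first one whose final move elicits target_reply
    (None if no such conversation exists)."""
    for length in range(1, max_len + 1):
        for convo in product(moves, repeat=length):
            if mind[convo[-1]] == target_reply:
                return convo
    return None

# Phase 1: locate the target in "mind design space" from its behavior.
candidates = narrow(MINDS, "greet", "hello")         # eliminates mind_b
candidates = narrow(candidates, "threat", "comply")  # leaves only mind_c

# Phase 2: search conversations against the one surviving candidate.
(name, rules), = candidates.items()
line = find_persuasive_line(rules, ["greet", "threat"], "comply")
```

The joke, of course, is that the real versions of both loops are intractable: `narrow` over every conceivable mind and `product` over every possible conversation are exactly the steps being waved away as "a sufficiently powerful AI".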
|
# ? Jan 21, 2015 20:03 |