|
The Unholy Ghost posted:an awesome metaphor
|
# ? Sep 26, 2014 08:51 |
|
|
|
I read a Harry Potter alternate universe fanfiction devoted to promoting rationalism and found it not all that bad. Actually, quite good in fact. Parts of it were quite epic and mindblowing if you must know. Really aligned with the things I want to know about, and informed me in a satisfying way. Anyway, having said that, I think I have a lot to contribute, in general.
|
# ? Sep 26, 2014 08:56 |
|
The Unholy Ghost posted:Okay, so I posted earlier in here about how I didn't understand the hate for this guy, and now I understand even less. Everyone here told me that HPMOR goes completely bonkers or whatever and that Harry summons Carl Sagan as a patronus, but... Well, if you only ever read other fanfiction, then as a basis for comparison I can see why you'd think HPMOR is "mindblowing", but this thread has already broken it down and explained just how retarded the writing is, in addition to a host of other problems with Yud in general: the way he presents arguments, claims to have perfect solutions, discourages dissent, ridicules "published academia", etc. The list is far too long to cover in one post, and the majority of this thread is dedicated to people discussing how stupid everything Yud does is, with people who work in said fields on a professional level savaging all of what Yud thinks are his brilliant ideas. Yud is literally a 21st-century internet charlatan, making all sorts of nebulous promises, drawing a cult of personality around him, and getting a lot of money from dumb people to produce research which he claims definitely exists but is too dangerous for "the others" (a byword for people too stupid and ignorant to appreciate Yud's work and likely to pervert it and ruin the world) to read. Also, Yud wants to "have fun" in the same way that Ayn Rand wanted to be a fiction writer. Literally everything he does in some way, shape, or form pushes his intellectual agenda and is rife with the kinds of mistakes a first-year English major would spot and cross out during an editorial process. There's a reason Yud has such a following, and at a first, uninformed glance he seems like a smart guy, but when you peel back the layers you realize he's nothing more than a smug self-educated "autodidact" styling himself as the next Socrates or Hume, pretending to swim in the intellectual ocean when he's really floating with waders in the kiddie pool.
|
# ? Sep 26, 2014 09:54 |
|
The Unholy Ghost posted:Everyone here told me that HPMOR goes completely bonkers You misunderstand. Yudkowsky is bonkers. HPMOR is mostly just long-winded with a mediocre writing style. That is of course except for chapter 19, where it briefly and suddenly becomes a torture fic. I have no idea what the gently caress was up with that.
|
# ? Sep 26, 2014 14:08 |
|
The Unholy Ghost posted:Okay, so I posted earlier in here about how I didn't understand the hate for this guy, and now I understand even less. Everyone here told me that HPMOR goes completely bonkers or whatever and that Harry summons Carl Sagan as a patronus, but... So I'm blogging my experience of reading HPMOR here: http://su3su2u1.tumblr.com/tagged/Hariezer-Yudotter/chrono Basically none of the science mentioned in HPMOR is correct. I'm 27 chapters in and have yet to encounter anything eye-opening, just a lot of cloying elitism and really poor science.
|
# ? Sep 26, 2014 15:50 |
|
su3su2u1 posted:So I'm blogging my experience of reading HPMOR here: http://su3su2u1.tumblr.com/tagged/Hariezer-Yudotter/chrono
|
# ? Sep 26, 2014 16:05 |
|
In which a blogger takes a long time to say "Less Wrong is good because we care about the truth, unlike everyone else"
|
# ? Sep 26, 2014 16:32 |
|
Okay, I misspoke: it was not really the science in the story that surprised me but the logical arguments and the way Harry manipulates people. From the perspective of someone who just wants to enjoy a story, it's quite entertaining. From the thread's perspective you're watching a guy entrance people into his "cult" and give him money. If he's convincing people he can find a way to immortality, that really is stupid. I think if you look at the story as Harry taking over the magical world with both charlatan and legitimate methods, the story becomes incredibly interesting. (Also: Dementors are kind of complete mysteries in Harry Potter, so it's not that big of a deal if Yudkowsky wants to attribute some noncanon meaning to them. It is a fanfiction, after all [isn't it more about the overall metaphor than the exact details?])
|
# ? Sep 26, 2014 16:35 |
|
The Unholy Ghost posted:Okay, I misspoke, it was not really the science in the story that surprised me but the logical arguments and the way Harry manipulates people. From the perspective of someone who just wants to enjoy a story, it's quite entertaining. They represent depression. This is reinforced by everything they do in the story, everything Rowling has said, and everything the stories are trying to accomplish. They represent death because Yudkowsky either doesn't know what the point of Harry Potter was, or doesn't care.
|
# ? Sep 26, 2014 16:39 |
|
The Unholy Ghost posted:Okay, I misspoke, it was not really the science in the story that surprised me but the logical arguments and the way Harry manipulates people. From the perspective of someone who just wants to enjoy a story, it's quite entertaining. Maybe you'd be interested in this thread, where a whole lot of people 'manipulate' fictional characters just like Yudotter does! It's called shit_that_didn't_happen.txt for a reason. Maybe if you're lucky your favourite fictional smartybrains will marry an eternally applauding Albert Einstein-Dumbledore at the end of the fic.
|
# ? Sep 26, 2014 16:44 |
|
Also you could just watch House of Cards. Or read/watch Count of Monte Cristo. Or The Talented Mr. Ripley. Many things you could do that don't involve reading Big Yud
|
# ? Sep 26, 2014 16:49 |
|
The Vosgian Beast posted:Also you could just watch House of Cards. I've taken to thinking of him as 'Yud the Spud', as time goes on.
|
# ? Sep 26, 2014 16:54 |
|
I can actually say without reservation that HPMOR is one of the best fanfics I've ever read! I do not intend this as an endorsement of HPMOR.
|
# ? Sep 26, 2014 16:55 |
|
Okay, so you know how I said I'd read Yud's Fun Theory sequence like 3 weeks ago? Well, school got into full swing and tbh I've really been trying to avoid reading it, so I didn't. I'm starting on it today. edit: I haven't even started the actual sequence and it already hurts Fuckin Yud posted:Fun Theory is also the fully general reply to religious theodicy (attempts to justify why God permits evil). Our present world has flaws even from the standpoint of such eudaimonic considerations as freedom, personal responsibility, and self-reliance. Fun Theory tries to describe the dimensions along which a benevolently designed world can and should be optimized, and our present world is clearly not the result of such optimization - there is room for improvement. Fun Theory also highlights the flaws of any particular religion's perfect afterlife - you wouldn't want to go to their Heaven. Moatman fucked around with this message at 17:46 on Sep 26, 2014 |
# ? Sep 26, 2014 17:07 |
|
Squidster posted:Incidentally, I'm enjoying your summaries! Keep on posting, good sir. The Unholy Ghost posted:I think if you look at the story as Harry taking over the magical world with both charlatan and legitimate methods, the story becomes incredibly interesting.
|
# ? Sep 26, 2014 17:15 |
|
The Unholy Ghost posted:Okay, I misspoke, it was not really the science in the story that surprised me but the logical arguments and the way Harry manipulates people. From the perspective of someone who just wants to enjoy a story, it's quite entertaining. It's not that every word he speaks is horrible; this thread has plenty of posts saying "I enjoyed such-and-such of his." Most everyone is quick to say they don't like him overall, though. This is because he is rather deranged and something of a cult leader. If he had nothing of interest to say, no one would go to his website and he wouldn't have anyone following him. Keep seeking outside info on what you read there, and maybe look at how cults work in general. Hopefully you won't get sucked in too deep.
|
# ? Sep 26, 2014 17:43 |
|
Epitope posted:It's not that every word he speaks is horrible, this thread has plenty of posts saying- I enjoyed such and such of his. Most everyone is quick to say they don't like him overall though. This is because he is rather deranged and something of a cult leader. If he had nothing of interest to say no one would go to his website and he wouldn't have anyone following him.
|
# ? Sep 26, 2014 17:47 |
|
Not Strictly Wrong would be a pretty good name for Yudkowsky's organization.
|
# ? Sep 26, 2014 17:51 |
|
The problem is that Yud has just enough intellect to superficially mimic a scholar, but no more than that. He is the quintessential internet smart guy, who gleans his knowledge from skimming the Wikipedia article in another window. Just look at his awful butchering of Bayesian statistics.
|
# ? Sep 26, 2014 18:21 |
|
Okay, post about Fun Theory stuff may take a while. It's making me irrationally angry. e: This was linked from the first fun theory post. Emphasis Yud's quote:If this was an attempt to focus the young Eliezer on intelligence uber alles, it was the most wildly successful example of reverse psychology I've ever heard of. Moatman fucked around with this message at 18:38 on Sep 26, 2014 |
# ? Sep 26, 2014 18:34 |
|
Moatman posted:Okay, post about Fun Theory stuff may take a while. It's making me irrationally angry. Please get rid of that avatar.
|
# ? Sep 26, 2014 18:49 |
|
Political Whores posted:The problem is that Yud has just enough intellect to superficially mimic a scholar, but no more than that. He is the quintessential internet smart guy, who gleans his knowledge from skimming the Wikipedia article in another window. Just look at his awful butchering of Bayesian statistics. Wait wait wait. I use bayesian stats on a daily basis, but I learned about them from a real person after I learned about them from Yud the Spud. I'm aware he misapplies them to situations where using real numbers would be ludicrous (see anything that has Knuth's Up Arrow Notation), but I'm not aware of anything he gets actually wrong. Is there something?
|
# ? Sep 26, 2014 18:52 |
|
SolTerrasa posted:Wait wait wait. I use bayesian stats on a daily basis, but I learned about them from a real person after I learned about them from Yud the Spud. I'm aware he misapplies them to situations where using real numbers would be ludicrous (see anything that has Knuth's Up Arrow Notation), but I'm not aware of anything he gets actually wrong. Is there something? He doesn't understand that you can/are supposed to update priors. Also his hateboner for any other type of statistics is pretty ridiculous. I'll switch the av once I figure out what to replace it with and/or have enough money to Scroogeproof it.
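Since the stats chat keeps coming up: here's a minimal sketch of what "updating priors" actually means mechanically. The coin-flip numbers are invented purely for illustration; nothing here comes from any LessWrong post.

```python
# Toy Bayesian update: two hypotheses about a coin, fair (P(H) = 0.5)
# or biased toward heads (P(H) = 0.9). Each flip's posterior becomes
# the prior for the next flip.

def update(prior_fair, flip):
    p_heads_fair, p_heads_biased = 0.5, 0.9
    like_fair = p_heads_fair if flip == "H" else 1 - p_heads_fair
    like_biased = p_heads_biased if flip == "H" else 1 - p_heads_biased
    # Bayes' rule: P(fair | flip) = P(flip | fair) P(fair) / P(flip)
    evidence = like_fair * prior_fair + like_biased * (1 - prior_fair)
    return like_fair * prior_fair / evidence

prior = 0.5  # start undecided
for flip in "HHHHH":
    prior = update(prior, flip)  # yesterday's posterior is today's prior
print(prior)  # after five heads in a row, belief in "fair" has dropped well below 0.5
```

The whole point of the machinery is that the prior moves every time evidence arrives, which is exactly the step the posts above are saying Yud skips.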
|
# ? Sep 26, 2014 18:58 |
|
The Sequences: Digression One. This has taken way too long to assemble because I am too easily angered by Internet Smart Guys. A long, long time ago, LessWrong was part of another website, called Overcoming Bias. OB is run by someone named Robin Hanson, a legitimately intelligent professor of economics at what appears to be a real school. Hanson thinks that the singularity will probably happen, but he does not think that friendly AI is important. In this, he is just a more optimistic version of me. I hope that the singularity will happen, but consider it ludicrous that we would need Friendly AI as a concept before we have consistently self-modifying AI. He works at the least crazy of the MIRI-clones, the Future of Humanity Institute. He also (being older and wiser than Big Yud) has a much better grasp on the whole "systematic human bias" thing. Basically he's a better version of Big Yud minus about thirty percent of the crazy. Well, you know how Big Yud wants to misapply the Agreement Theorem to make sure no one ever has a difference of opinion? They had a debate. Robin Hanson (then Yud's mentor-figure) is uniquely suited to this debate. Anyone who's had an economics class knows: there's always that one kid who thinks he knows so, so much better than everyone else, including the professor. Yudkowsky is that kid, and Hanson treats him like that, over and over and over. This debate goes so badly for Big Yud that it seems to be the reason that LessWrong exists at all; he left OB because of irreconcilable differences. There is the slight problem of . There are seven hundred pages of this debate. So I have distilled what I think are the most interesting parts, but please ask me if it turns out that some context is missing or unclear. Here we go. The posts begin innocuously enough. Hanson postulates something which he calls UberTool. UberTool is a tool which aids in task X, which aids in task Y, which aids in task Z, where task Z includes building better UberTools.
Hanson asks "would you fund a venture capitalist who said they had discovered such a cycle, and whose plan was to use the subsequent inventions to dominate the world markets for X, Y, and Z?" Yudkowsky posted:You’ve got to be doing something that’s the same order of Cool as the invention of “animal Yudkowsky thinks this is bigger than farming, because he drew a direct parallel to AI, which is (in his head) capable of direct self improvement in this way. Yudkowsky is like that, he likes extremes. AI is either the savior or destroyer of humanity. An idea is either better than farming or so dangerous as to be banned speech. Hanson replies, effectively, "slow down there, Eliezer, I actually meant something more like Douglas Engelbart". Douglas Engelbart is the guy who invented GUIs. For the programmers among us, now it makes sense. He also invented hypertext (the HT in HTTP), the mouse, and networking. These are all things that make you better at using computers, thus better at programming, thus better at making new cool inventions like that. He notably did not take over the world. Hanson is saying "I would not fund UberTool. I don't think they'd take over the world, because other people who have accomplished their mission didn't." He is also saying "Yudkowsky is wrong". It's a call-out that he is not trying very hard to disguise. This was Hanson's point all along; self-improvement is not a magical device that will fix everything. His first post was a trap: get Yudkowsky to say that self-improvement *is* a magical device that will fix everything. (Never answer an economist's rhetorical questions; they are always traps ), then he explains that UberTool is really just computers and we already have them, then in post three he explains why self-improvement is not magical: Hanson posted:It is not so much that Engelbart missed a few key insights about This is a rhetorical question. Never answer an economist's rhetorical questions, they are always traps. 
Yudkowsky is invested in his argument, though. He's already started the Singularity Institute by this time, based on the idea that one such feedback loop (AI self-improvement) is so dangerous that it must be countered this instant, else we will all die a quick death at the hands of a cruel and uncaring god. So it is necessary in his head that all instances of feedback loops build immediately to world-domination levels. So Yudkowsky starts telling his side of the story, why AI is inherently different from all other cases. A normal person would say "Aha! Special pleading!" But Robin Hanson is a polite sort of person, and instead lets Big Yud go on and on. They go back and forth for a little while. Yudkowsky asserts that there are three kinds of predictability for the future. There is the "Strong Inside View", where every component of a system is predictable and easy to understand. This means that most of the work is engineering effort and it's easy to know where things will go. Then there's the "Outside View", where you're basically just making poo poo up. Then there's the "Weak Inside View", where you're combining the two; you can predict some elements of the system but not all of them. He asserts that Hanson is using the Outside View, and that he is using the Weak Inside View, and consequently he is just more reliable and we shall all have to trust him. Hanson, for reference, states that he believes that the next major innovation (which may be AI, or may not) will come soon, but slowly, and be adopted worldwide over the course of years or decades, if not centuries, much like the concept of "industry". He believes this based on some complicated-but-not-obviously-wrong economic analysis; I wouldn't notice if it was wrong, I'm not an economist. Yudkowsky is constantly dodging and diving around Hanson's increasingly direct requests for him to go ahead and state his position for real. 
Hanson says: Hanson posted:I suspect it is near time for you to reveal to us your “weak inside view,” i.e., Basically, Hanson is saying "then say why you think this, don't just assert it without evidence". This goes on, and on, and on... Yudkowsky continues never to provide evidence, or even to back up his theories at all. After one more Yudkowsky post which is "background" (read: does not state his position), Hanson starts being more direct with his "get to the point": Hanson posted:Eliezer, I can’t imagine you really think I disagree with anything important Then Hanson talks for a while about how he thinks Yudkowsky's Inside vs Outside views theory is bullshit, but can't know for sure until Yudkowsky states his position. And Yudkowsky posts again with many, many , without stating his position. Hanson posted:Eliezer, have I completely failed to communicate here? And so Eliezer starts getting rude: Eliezer posted:Well . . . it shouldn’t be surprising if you’ve communicated less than Seriously, man, get to the point. There's some more meaningless back and forth, and and . The shortest possible version is that Yudkowsky is sticking to his guns on having secret knowledge that he can't share but no really we need to trust him. Hanson tries to summarize what he assumes Yudkowsky's points are (a technique called 'steelmanning', the opposite of strawmanning, where you formulate an argument for your opponent as strongly as you possibly can). Eventually Hanson just gets fed up and posts the best economics-professor-tired-of-arrogant-student post I have ever seen: Hanson posted:It seems that the basis for Eliezer's claim that my analysis is based on At this point I'm going to stop posting, because I have hit the point where I realize that this may actually only be interesting to me. We are through 60 pages of 700, and if this is boring you to tears, well, it's not worth continuing.
My notes don't start turning into "oh jesus christ gently caress you gently caress you gently caress you" until page 150 or so.
|
# ? Sep 27, 2014 04:16 |
|
Please continue. I'll just note that 'steelmanning' is also known as the 'principle of charity' and has been common in philosophy for decades if not longer, if it's something LWers like to lay claim to. I've heard of it a couple times. Peel fucked around with this message at 04:40 on Sep 27, 2014 |
# ? Sep 27, 2014 04:29 |
|
SolTerrasa posted:Wait wait wait. I use bayesian stats on a daily basis, but I learned about them from a real person after I learned about them from Yud the Spud. I'm aware he misapplies them to situations where using real numbers would be ludicrous (see anything that has Knuth's Up Arrow Notation), but I'm not aware of anything he gets actually wrong. Is there something? Moatman posted:He doesn't understand that you can/are supposed to update priors. Also his hateboner for any other type of statistics is pretty ridiculous. Priors are a part of it, but what really gets me is that Yudkowsky is obviously coming at the situation from the opposite end of someone actually interested in scientific enquiry or prediction, i.e., a real scientist. Where does Yudkowsky pull his prior probability distributions from? From his own imagination; they have basically no ties to observable reality, and the events he references (a hypothetical god AI coming into existence, for instance) are all extremely contingent on any number of other variables, if they are possible at all. It's not like he even justifies using some variant of reference prior for his beliefs, and in practice it doesn't really matter, because choosing a probability distribution is just a smokescreen for concealing his extremely unscientific certainty of his own wild imagination. That's ultimately what the gigantic up arrow notation numbers represent: his certainty of his prediction, masked as a hypothetical about the number of people he could save. He already came to the conclusion he wants (that his stupid "research" is worth funding) and uses Bayesian statistics to create a scenario which basically validates it, with the bonus of it being ~scientific~. The numbers he uses are "correct", but it's basically just cargo cult probability calculations at that point.
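For anyone who hasn't run into the up-arrow notation being mocked above: it's trivially easy to define and absurdly fast-growing, which is the whole rhetorical trick. A quick sketch (it will recurse forever or overflow the stack for anything beyond tiny inputs):

```python
# Knuth's up-arrow notation: a ↑ b is a**b, and each additional arrow
# iterates the level below it (a ↑↑ b is a tower of b copies of a, etc).

def up_arrow(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^(3^3) = 7625597484987
# 3 ↑↑↑ 3 is already a power tower of 7625597484987 threes: hopeless.
```

One extra arrow makes the numbers astronomically large, which is why plugging them into an expected-value calculation lets you "justify" pretty much any conclusion you started with.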
|
# ? Sep 27, 2014 04:52 |
|
The Unholy Ghost posted:Okay, I misspoke, it was not really the science in the story that surprised me but the logical arguments and the way Harry manipulates people. From the perspective of someone who just wants to enjoy a story, it's quite entertaining. And yeah, you could write a Harry Potter fanfic (or parody) where the dementors represented death instead of depression, but you'd have to change them to make that work, and Yudkowsky hasn't done that. SolTerrasa posted:At this point I'm going to stop posting, because I have hit the point where I realize that this may actually only be interesting to me. We are through 60 pages of 700, and if this is boring you to tears, well, it's not worth continuing.
|
# ? Sep 27, 2014 05:13 |
|
quote:The idea of "Harry Potter, but with a protagonist smart enough to see how dumb it all is" could certainly make for a pretty good parody.
|
# ? Sep 27, 2014 05:41 |
|
The reason HPMOR is worth reading is because of the interest the HP books spark in people. It's the same reason The Wind Done Gone or Wide Sargasso Sea is worth reading. It's the same reason Weird Al Yankovic songs are worth listening to. It's because the original source material had an impact (for good or ill) so the parody is also interesting because it explores some of the "but what if?" ideas people might have had while reading the original. Yud confuses this with him being a philosopher/novelist in his own right, instead of a fan fictioneer/parodist.
|
# ? Sep 27, 2014 05:52 |
|
I'm marginally involved in the bay area biohacking community. I'm continually frustrated by how most of these people are know-nothing shitheads. I went to a party on the marotol and some guy tried to talk to me about less wrong. I blew it off as some random atheist community but this thread has opened my eyes to the absurdity beneath. The dumbest thing I heard recently was some chick at noisebridge talking about how transcranial direct current stimulation was the best thing ever.
|
# ? Sep 27, 2014 07:09 |
Spazzle posted:I'm marginally involved in the bay area biohacking community. I'm continually frustrated by how most of these people are know nothing shitheads.
|
|
# ? Sep 27, 2014 08:20 |
|
Nessus posted:What the gently caress are all of these things you just mentioned? This sounds interesting. The last bit sounds like Niven's wireheading, has that been invented then? It supposedly makes you more intelligent. This means it's a good idea to buy a kit from random people on the internet, attach electrodes to your head and fire it up.
|
# ? Sep 27, 2014 08:42 |
|
In case it's not obvious, the whole thing is nonsense. More or less harmless, but about as useful as homeopathy. Biohacking is cool, though. Ever since I heard about them, I've wanted some of these magnetic implants that let you feel electromagnetic fields like you're touching them. Too bad they don't work long-term.
|
# ? Sep 27, 2014 09:06 |
|
Cardiovorax posted:In case it's not obvious, the whole thing is nonsense. More or less harmless, but about as useful as homeopathy. You get most of the effects by supergluing them on. Just shorter term, and with less surgery.
|
# ? Sep 27, 2014 09:21 |
|
Tunicate posted:Andrew Hussie actually did that. It's hilarious. Is this a joke post, or do you have a link to the actual thing? As SA's resident person-who-likes-fanfic-too-much I'd like to see this!
|
# ? Sep 27, 2014 09:33 |
|
Tunicate posted:You get most of the effects by supergluing them on. Just shorter term, and with less surgery.
|
# ? Sep 27, 2014 09:41 |
ungulateman posted:Is this a joke post, or do you have a link to the actual thing? As SA's resident person-who-likes-fanfic-too-much I'd like to see this!
|
|
# ? Sep 27, 2014 09:43 |
|
Tunicate posted:You get most of the effects by supergluing them on. Just shorter term, and with less surgery. Get a magnetic loop hearing aid and you can hear them too. And see them if they're large enough
|
# ? Sep 27, 2014 11:22 |
|
Moatman posted:He doesn't understand that you can/are supposed to update priors. Also his hateboner for any other type of statistics is pretty ridiculous. Also, he refuses to acknowledge that something could have a probability of 1 or 0 because a lot of his dumb pseudo-Bayesian arguments rely on there being a non-zero possibility for something ridiculous. Of course this means he has no way of describing p(a|a) or p(a|!a).
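The 0-and-1 point is easy to see mechanically. A toy sketch (likelihood numbers invented for illustration, just the arithmetic):

```python
# Under Bayes' rule a prior of exactly 0 or 1 can never move, no matter
# how strong the evidence, because the prior multiplies every term.

def posterior(prior, like_if_true, like_if_false):
    # P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ~H) P(~H)]
    evidence = like_if_true * prior + like_if_false * (1 - prior)
    return like_if_true * prior / evidence

print(posterior(0.0, 0.99, 0.01))  # stays 0.0 forever
print(posterior(1.0, 0.01, 0.99))  # stays 1.0 forever
print(posterior(0.5, 0.99, 0.01))  # an ordinary prior actually moves
```

That said, p(a|a) = 1 and p(a|!a) = 0 hold by definition of conditional probability, so a framework that bans 0 and 1 outright can't even state them, which is the complaint.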
|
# ? Sep 27, 2014 11:51 |
|
|
|
Can anyone explain to me why Yudkowsky believes humans could even control his "Friendly AI" to begin with? Surely if we built something (true AI) that was so amazing at self-improvement that it vastly outpaces the need for humans, it would likely improve itself right out of whatever initial box we build it in; isn't he just building blinders that would eventually get torn off either by accident or on purpose anyway? Does he believe that an AI we couldn't comprehend would still use whatever he thinks is "human logic"? I understand, to a point, wanting to start it off "Friendly", but surely that concept is going to be discarded like an old husk at some stage. (I'm assuming that he thinks this AI would grow rapidly and incomprehensibly powerful and uncontrollable, since otherwise who cares. I'm not sure this is a realistic scenario but even if it was, why does he believe he can do something about it?)
|
# ? Sep 27, 2014 12:16 |