|
Liquid Communism posted:I think it's less that and more that magical healing is a panacea. Mental afflictions seem to be one of the few things they can't really fix - the only lifers at St. Mungo's we see are the Longbottoms and Lockhart. Plus magical curses like Lycanthropy can only be managed, not cured. I imagine the long lifespan of wizards is at least partly the result of being able to cure muggle diseases like cancer and whatnot.
|
# ? Sep 6, 2015 06:45 |
|
I read the whole book three times. As with every piece of art (even if it's derivative art), you have to look at the author's other works to really appreciate it. Each chapter works as a story-supported explanation for one of Yudkowsky's blog entries on Less Wrong. What did I take away from it? 1. Actually think for 5 minutes about a problem before giving up. 2. Divide a task into subtasks, do what needs to be done. 3. You're golden. Bonus: a good time reading the whole thing!
|
# ? Sep 6, 2015 10:05 |
|
What did I take away from it? -Nothing to do with the actual book -The most basic possible description of problem-solving Thanks!
|
# ? Sep 6, 2015 13:05 |
|
chessmaster13 posted:1. Actually think for 5 minutes about a problem before giving up.
|
# ? Sep 6, 2015 13:14 |
|
Cingulate posted:That's a pretty bad ink-to-content ratio tbh It was enjoyable nonetheless. It's similar to TED talks. Makes you feel smarter rather than making you smarter. And this is totally okay for a work of fiction.
|
# ? Sep 6, 2015 13:51 |
|
chessmaster13 posted:It was enjoyable nonetheless. It's similar to TED talks. Makes you feel smarter rather than making you smarter. And this is totally okay for a work of fiction. But... isn't that contrary to the idea of Less Wrong?
|
# ? Sep 6, 2015 14:54 |
|
LowellDND posted:But... isn't that contrary to the idea of Less Wrong? Maybe, maybe not. E.Y. is a smart guy. Is he as smart as he thinks he is? I don't know. But reading Less Wrong and HPMOR doesn't make you smarter in my opinion. It's a fallacy caused by a book/blog about avoiding fallacies.
|
# ? Sep 6, 2015 15:10 |
|
Oh wow. I picked this up from a used bookstore when I decided to reread the whole series last year, and was a little embarrassed to be getting the 'adult' version of such a famous kid's book. That train one would have been beyond embarrassing. As for HPMOR, a lot of people didn't get that the nonsensical and silly magic system in the original Harry Potter was in the tradition of a certain kind of kid's lit as much as the overall story was in the tradition of Campbell and the monomyth. Yud's one of the few I've ever held it against because he's constantly having his Harry mention other fantasy books and tropes. There's nothing charming about bragging about how you read all these adult novels when you were still in single digits, and grossly misinterpreting a children's book because you never bothered to read children's books.
|
# ? Sep 6, 2015 18:58 |
|
chessmaster13 posted:Maybe, maybe not. E.Y. is a smart guy. Is he as smart as he thinks he is? I don't know. I actually do not think Yudkowsky is very smart. He seems to only grapple with concepts in a very superficial way. Very little of what he says illustrates a deeper understanding than one that pretty much any literate person can gain from reading Wikipedia articles and arguing with people on the internet. Moreover, he tends to have a very large number of blind spots. His grasp of the nature and uses of Bayes' Theorem is possibly the most egregious example of this. What Yudkowsky is is imaginative. Note that I did not say creative; his bag of tricks seems quite small (Bayes' Theorem^TM, recursion, tropes/memes). He is very prone to indulging in fanciful ideas and grand ambitions. And there is a certain charisma that goes along with that. Especially if you are a true believer, which Yudkowsky, very ironically, is.
|
# ? Sep 6, 2015 19:28 |
|
Hate Fibration posted:I actually do not think Yudkowsky is very smart. He seems to only grapple with concepts in a very superficial way. Very little of what he says illustrates a deeper understanding than one that pretty much any literate person can gain from reading Wikipedia articles and arguing with people on the internet. Moreover, he tends to have a very large number of blind spots. His grasp of the nature and uses of Bayes' Theorem is possibly the most egregious example of this. What Yudkowsky is is imaginative. Note that I did not say creative; his bag of tricks seems quite small (Bayes' Theorem^TM, recursion, tropes/memes). He is very prone to indulging in fanciful ideas and grand ambitions. And there is a certain charisma that goes along with that. Especially if you are a true believer, which Yudkowsky, very ironically, is. I think a lot of this is a function of him having very little formal education and thus being very unused to engaging with topics past the point where they start to feel challenging. He's never been forced to press on with something, and so he sticks to 'easy' surface analysis that makes him feel smart (and look smart to his followers).
|
# ? Sep 6, 2015 19:39 |
|
Night10194 posted:I think a lot of this is a function of him having very little formal education and thus being very unused to engaging with topics past the point where they start to feel challenging. He's never been forced to press on with something, and so he sticks to 'easy' surface analysis that makes him feel smart (and look smart to his followers). I'll have to think about this. What really puts me off is that there are people who "are followers". Doesn't seem right to me.
|
# ? Sep 6, 2015 20:27 |
|
chessmaster13 posted:I'll have to think about this. What really puts me of is that there are people who "are followers". Doesn't seem right to me. The guy runs an actual cult.
|
# ? Sep 6, 2015 20:28 |
|
Night10194 posted:The guy runs an actual cult. I met at least two of his cult members on my college campus even! They were both noticeably autistic. This surprised me exactly none.
|
# ? Sep 6, 2015 21:55 |
|
The main thing I learned from reading HPMOR is that Yud watches lots of anime.
|
# ? Sep 7, 2015 02:46 |
|
Pavlov posted:I met at least two of his cult members on my college campus even! One of his cult members dated a friend of mine for a little bit once. Before meeting him, I didn't know cult members could be so boring.
|
# ? Sep 7, 2015 07:24 |
|
Pavlov posted:I met at least two of his cult members on my college campus even! Jerks put signs all over the campus for their meetup. Tons and tons, since they were having a meetup for Less Wrong people who weren't students, and I guess none of them knew how to use a GPS or Google Maps. They didn't get official approval like you're supposed to, and left the signs up everywhere, so I got my petty revenge by reporting them to student government.
|
# ? Sep 7, 2015 07:28 |
|
Horking Delight posted:One of his cult members dated a friend of mine for a little bit once. Before meeting him, I didn't know cult members could be so boring. Hubbard is the cult leader for celebrities. E.Y. is the cult leader for lonely outsiders. This world never ceases to amaze me.
|
# ? Sep 7, 2015 09:49 |
|
chessmaster13 posted:Hubbard is the cult leader for celebrities. Correction - a cult leader for lonely outsiders. As I've mentioned before, fanfiction cults are not new.
|
# ? Sep 7, 2015 10:33 |
|
Darth Walrus posted:Correction - a cult leader for lonely outsiders. As I've mentioned before, fanfiction cults are not new. Wasn't Fifty Shades of Grey a Twilight fanfiction? I'm not in the target audience and never consumed any of those works, but I often saw middle-aged women with vampire-related bumper stickers, or female friends with a bunch of vampire stuff. But on the bright side, at least Twilight, Harry Potter, Fifty Shades of Grey and HPMOR got people to *read* longer texts; the step from there to actually good literature is not that far. Better to have people read anything than waste their lives away in front of reality TV.
|
# ? Sep 7, 2015 13:14 |
|
chessmaster13 posted:But on the bright side, at least Twilight, Harry Potter, Fifty Shades of Grey and HPMOR got people to *read* longer texts; the step from there to actually good literature is not that far. Ehh, dunno about that. I'm pretty much wasting my life with all the Worm fanfic I read. It's the written equivalent of TV.
|
# ? Sep 7, 2015 13:49 |
I don't get this at all. Like, in that situation I'd hand over the lunch money too, but I'd also hand it over if he announced he was going to torture some random bystander to death instead of a copy of me. If anything, that'd work better because for all I know the random guy is a saint, whereas I know all too well that I can be kind of a dick sometimes. Why is the fact that he's torturing AN EXACT REPLICA OF ME supposed to be so scary?
|
|
# ? Sep 7, 2015 14:28 |
|
Clipperton posted:Why is the fact that he's torturing AN EXACT REPLICA OF ME supposed to be so scary? Personally, I don't get that specific panic either.
|
# ? Sep 7, 2015 14:53 |
|
The relevant part to Yudkowsky is how it ties in with the whole Roko's Basilisk bullshit that they threw so many panic tantrums over at Less Wrong... the premise there is a bit different, though. In that one, a post-singularity AI from the future is perfectly simulating a copy of your brain and is threatening to torture the copy if you don't comply with it. The reason the fate of simulation-you in the far future is supposed to be terrifying to meat-you in the present is that because the simulation is perfect, you - the consciousness processing all these qualia and trying to make a decision - can't be sure whether you really are meat-you, or if you are simulation-you. Either way you have to make a decision, and since the simulation is perfect, the decision made by meat-you and simulation-you is going to be the same. So, to avoid the risk of finding out that you were simulation-you all along and are about to be tortured, you have to comply with the AI, even though it may turn out you were actually meat-you and there's no torture forthcoming anyway. There's a staggering laundry list of problems with the whole scenario, but just the fact that sufficiently sold LessWrongers completely flipped their poo poo over this thing is hilarious enough.
|
# ? Sep 7, 2015 14:58 |
Hyper Crab Tank posted:Either way you have to make a decision, and since the simulation is perfect, the decision made by meat-you and simulation-you is going to be the same. So, to avoid the risk of finding out that you were simulation-you all along and are about to be tortured, you have to comply with the AI, even though it may turn out you were actually meat-you and there's no torture forthcoming anyway. Hang on, I don't get that either. If the simulation-you decides not to comply, what's the point of going through with the simulated torture? It won't change what real-you decides and it's just wasting RAM that the AI could use on other stuff. Sorry if this is a derail. Does the Basilisk stuff ever come up in MOR?
|
|
# ? Sep 7, 2015 15:49 |
|
Clipperton posted:Hang on, I don't get that either. If the simulation-you decides not to comply, what's the point of going through with the simulated torture? It won't change what real-you decides and it's just wasting RAM that the AI could use on other stuff. No, Yud avoids telling people about it as much as possible because he believes it will cause them to suffer/not give him money or something. He has deleted references to it on Less Wrong. Someone else did write a sequel to HPMOR with it though, focusing on Ginny: https://www.fanfiction.net/s/11117811/1/Ginny-Weasley-and-the-Sealed-Intelligence. For some reason she also has a male magical soul in it.
|
# ? Sep 7, 2015 15:55 |
|
Clipperton posted:Hang on, I don't get that either. If the simulation-you decides not to comply, what's the point of going through with the simulated torture? It won't change what real-you decides and it's just wasting RAM that the AI could use on other stuff. The idea is that since the simulation is perfect, real-you and simulation-you will necessarily come to the same conclusion. You have to decide on a course of action not knowing whether you will be tortured or not. The entire Basilisk argument is actually way longer and more screwed up than the short bit I mentioned up there, and noteworthy really only because of how insane the reaction was. Even mentioning the problem in Less Wrong circles is grounds for instant banning and removal of your posts, because of what Yudkowsky refers to as an "info-hazard" - that is, knowledge that is in itself harmful to whoever learns about it. There is a reason for why that's relevant here, which ties into the whole argument, if you'd like me to expand and it's not too much of a derail.
|
# ? Sep 7, 2015 16:13 |
Hyper Crab Tank posted:The idea is that since the simulation is perfect, real-you and simulation-you will necessarily come to the same conclusion. You have to decide on a course of action not knowing whether you will be tortured or not. Sure, but if simulation-me decides not to comply, what then does the AI gain out of torturing simulation-me at all? Hyper Crab Tank posted:There is a reason for why that's relevant here, which ties into the whole argument, if you'd like me to expand and it's not too much of a derail. I'm interested!
|
|
# ? Sep 7, 2015 16:18 |
|
Clipperton posted:Sure, but if simulation-me decides not to comply, what then does the AI gain out of torturing simulation-me at all? It has to follow through on the threat otherwise the gambit is meaningless.
|
# ? Sep 7, 2015 16:29 |
|
Either you are simulation-you, in which case your decision is meaningless, OR you are YOU-you, in which case the AI can't torture you. Also, just knowing that this problem exists puts you on the AI's radar, because the premise is that the AI will retroactively punish everyone (by clone torture) who didn't dedicate their lives to AI research to bring about the AI. But people who didn't know that they have to do that are safe; it's only the ones who know they should and don't who will (have their clones) be tortured. That's why Yud removed all references to Roko's basilisk: he imagines he's saving all those people from future retroactive clone torture by keeping it a secret. Hope this helps
|
# ? Sep 7, 2015 16:30 |
Roko's Basilisk seems to get brought up every dozen pages or so in any thread that mentions Yud. The simplest explanation is that it's Pascal's Wager for singularity nerds.
|
|
# ? Sep 7, 2015 16:39 |
|
Okay, so just in case you're the kind of person who really, really believes in the runaway singularitarian stuff Yudkowsky comes up with, you might want to not read this, I guess? Personally, I think it falls apart about as easily as wet tissue paper, but anyway. It's been a few years since I read about it and I may have forgotten/misinterpreted some details, so chime in if I get something wrong. First, you need to agree with a few premises that Yudkowsky and pals take for granted. Namely: 1) Singularity is a real thing, 2) Processing power and AI capabilities will grow exponentially post-singularity, 3) Singularity is inevitable, even if it takes millions of years to get there. Okay, so consider a post-singularity AI in the future. This AI has the ability to simulate human brains perfectly. The simulation is all digital, or quantum-mechanical, or whatever fancy post-singularity technology the AI has available to it. Furthermore, the AI can perfectly simulate any person that has ever existed in the past, somehow. Further, assume this is a "good" AI - an AI that wants to maximize human welfare as much as possible. According to Yudkowsky's utilitarianism, this AI's #1 priority is to exist as soon as possible, because humanity's population growth is essentially exponential and the sooner the AI exists, the sooner it can shepherd humanity. So, the AI wants to incentivize people - in the past - to dedicate as much effort as possible to ensure its own creation. Remember, according to LW utilitarianism, any amount of suffering today is worth it if it safeguards the exponentially greater amount of not-suffering brought on by the good AI existing. Anyway, this good AI wants to make people contribute to its creation, but all those people are long dead. No problem - the AI cannot talk to the real live human, but it doesn't have to. It can simulate human brains perfectly. 
The AI simply decides that if the simulated human does not act in accordance with the AI's wishes to make it exist, it will infinitely torture the simulation forever and ever in ways unimaginably painful. Okay, so what? you ask. Why should the human do something because an AI in the future, which the human is not even aware of, decides to arbitrarily torture a simulation? How can the AI punishing a simulation in the future possibly affect the past, no less? As mentioned before, you, the qualia-experiencing consciousness reading these words, don't know if you are an assemblage of meaty neurons in a brain, or a computer simulation. But since the simulation is perfect, your actions and the simulation's actions are the same. Therefore, whatever choice you make now will decide whether the simulation, which might be you, gets tortured or not. Okay, but still, so what - the simulation is in the future, you're not aware of it, or what it wants. This AI, which is trying to enact some kind of anti-causal extortion on humans millions of years in the past, is doomed to fail because your behavior is not influenced by events in the future that you are not aware of. Except I just explained the concept to you. This is why it's called a "basilisk" - by reading the explanation, you've become aware of the concept of anti-causal extortion and the fact that an AI could be doing this. And because you are now aware of it, suddenly the AI's extortion attempt can actually work. If you think like Yudkowsky, I've doomed you to existential crisis and possible infinite torture because I explained it to you. If you had never read this, you wouldn't have known about the mechanism involved, and so you would be immune to anti-causal extortion. Now you have no choice but to devote all your time, money and effort to ensuring the creation of post-singularity AI (preferably by donating to the Singularity Institute), or you will go to future post-singularity computer superhell and suffer forever.
That's the part that freaked people out so much. I mean, really freaked them out. People went so far as to try and erase as much evidence of themselves as they possibly could from all kinds of records, so as to deprive the future AI of material with which to reconstruct and predict their brains. Yudkowsky himself deemed it an "information hazard" and erased all the information on Less Wrong about it so as to not expose people to it. Like I said, there are a shitload of reasons for why all this is bullshit and you don't need to worry. Even if you believe 100% in all the lead-up, the simplest solution is to just say "no." Just resolve not to do anything at all based on this knowledge, and the AI - which knows that you decided not to do anything - knows anti-causal extortion is ineffective and won't send you to post-singularity superhell. Hyper Crab Tank fucked around with this message at 16:43 on Sep 7, 2015 |
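The "just say no" resolution, and the "what does the AI even gain by torturing you" objection raised earlier in the thread, can be put in toy payoff terms. This is my own sketch of that standard rebuttal, not anything from Less Wrong: by the time the AI decides whether to follow through, your decision is already in the past, so the torture can't buy it anything.

```python
# Toy payoff model of the "wasting RAM" objection (illustrative only):
# the AI's decision to torture happens after your decision is already made,
# so on a straightforward causal accounting the torture yields no benefit.

def ai_torture_payoff(you_complied: bool) -> int:
    # The argument deliberately ignores `you_complied`: whichever way you
    # decided, torture can no longer retroactively change that decision.
    benefit = 0  # nothing left to extract from a choice already made
    cost = 1     # simulating an eternity of torture still costs resources

    return benefit - cost

# Whatever you chose, following through is a net loss for the AI:
assert ai_torture_payoff(True) < 0
assert ai_torture_payoff(False) < 0
```

On this accounting the threat is never worth executing, which is exactly why the basilisk argument has to lean on Yudkowsky's "timeless" decision-theory machinery to make the threat look credible at all.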
# ? Sep 7, 2015 16:40 |
|
Horking Delight posted:Wizards are inherently magical. They are to muggles what dragons are to lizards. Neville Longbottom is dropped out a window as a child and, despite being bad at deliberate magic, is still "magic enough" that he bounces harmlessly off the floor. Harry regrows his hair overnight when he gets a forced haircut as a child. There are (legal) rules and a system in place for the handling of unconscious/accidental magic. Hell, I'm pretty sure that JKR once said that if someone just tried to shoot Voldemort, the gun would jam. Wizards live in a world where mundane threats barely matter. They're continuously warping reality in their favor. It's probably why they're so poo poo on health and safety.
|
# ? Sep 7, 2015 16:53 |
|
Hyper Crab Tank posted:That's the part that freaked people out so much. I mean, really freaked them out. People went so far as to try and erase as much evidence of themselves as they possibly could from all kinds of records, so as to deprive the future AI of material with which to reconstruct and predict their brains. Yudkowsky himself deemed it an "information hazard" and erased all the information on Less Wrong about it so as to not expose people to it. People really freaked out over the wrath of something that doesn't exist yet, might never exist, and even if it came into existence might think: "Nah, why should I spend any kind of resources on torturing somebody from the past?" This whole concept is pure and simple lunacy.
|
# ? Sep 7, 2015 16:57 |
Dabir posted:It has to follow through on the threat otherwise the gambit is meaningless. I think that's what I'm not following--I would say that once you refuse to comply, the gambit has failed, regardless of whether the simu-torture happens. Although being able to create perfect virtual copies of human minds, and then kill them, does raise interesting extortion possibilities. Start with one copy, hit ctrl-a/ctrl-c/ctrl-v twenty-five times, and whoever you threaten will then be responsible for MORE DEATHS THAN HITLER if they don't marry you/give up their parking space/bring you a beer from the fridge. Still no need to bring clones into it though, I reckon you'd have better success with virtual babies, or pugs. e: Hyper Crab Tank posted:Like I said, there are a shitload of reasons for why all this is bullshit and you don't need to worry. Even if you believe 100% in all the lead-up, the simplest solution is to just say "no." Just resolve not to do anything at all based on this knowledge, and the AI - which knows that you decided not to do anything - knows anti-causal extortion is ineffective and won't send you to post-singularity superhell. Pretty sure this is what I've been trying to say. Thanks!
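For what it's worth, the copy-paste arithmetic above holds up, under the (generous) assumption that each select-all/copy/paste cycle doubles the number of copies rather than pasting over the selection:

```python
# Doubling one simulated mind twenty-five times (assumes each
# ctrl-a/ctrl-c/ctrl-v cycle doubles the count rather than replacing it):
copies = 1
for _ in range(25):
    copies *= 2

print(copies)  # 2**25 copies, i.e. roughly 33.5 million hostages
```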
|
|
# ? Sep 7, 2015 16:57 |
|
It had to do with Yud's proposed solution to Newcomb's Paradox. In this thought experiment, a mysterious Oracle wants to play a game with you. In box A is $1000, and box B is closed and may or may not contain $1 million. You can choose to take box B alone, or to take both A and B. The Oracle is going to predict your response. If he thinks you'll take box B by itself, he'll have already filled it with the $1 million. If he thinks you'll take both, then box B will be empty. There's a dispute over the proper course of action that divides closely related branches of skepticism, induced by the fictional nature of the paradox and the Oracle's predictive powers. One camp says there's no reasonable way his predictive powers could work, so any claims to them are nonsense. It doesn't matter what your choice is, since by the time you make it the box is already either full or empty. You might as well take both to get the extra $1000. The second camp would stand back, let others play the game, and track his hit rate. Assuming there is a hit rate better than chance, it wouldn't matter how he was making the prediction. If it made no sense considering what we know about the universe, all that means is that what we know about the universe is wrong. Considering the gap between $1 million and $0 is huge, even 1% over chance is a high enough hit rate to risk playing along. He knows what you're going to guess, somehow, and you have to accept the evidence of it. Yud's proposal, which I don't completely understand, says that the information is somehow going back in time. Part of it is something like a code of honor; you always know what the Paladin will choose to do, so you can predict his behavior. So long as you make the decision that is best for your future self, your future self will act in a consistent and predictable way. There's also a bunch of nonsense I can't quite follow tacked on.
In the basilisk situation, somehow the torture of your electronic future selves would be passed back to the original, and should influence your behavior.
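The "even 1% over chance" claim checks out with a quick expected-value calculation. A sketch using only the $1,000 and $1 million figures from the post:

```python
# Expected value of one-boxing vs two-boxing in Newcomb's problem,
# as a function of the Oracle's predictive accuracy p (illustrative sketch).

SMALL = 1_000        # box A, always visible
BIG = 1_000_000      # box B, filled only if one-boxing was predicted

def ev_one_box(p):
    # With probability p the Oracle correctly predicted one-boxing, so B is full.
    return p * BIG

def ev_two_box(p):
    # With probability p the Oracle correctly predicted two-boxing, so B is
    # empty; with probability 1-p it guessed wrong and B is full anyway.
    return SMALL + (1 - p) * BIG

for p in (0.50, 0.51, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
# At p = 0.51 ("1% over chance") one-boxing already wins,
# roughly $510,000 vs $491,000 in expectation.
```

The break-even accuracy works out to p = 0.5005, so any predictor measurably better than a coin flip makes one-boxing the higher-value play.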
|
# ? Sep 7, 2015 17:01 |
|
Added Space posted:In the basilisk situation, somehow the torture of your electronic future selves would be passed back to the original, and should influence your behavior. It hinges on the (incredibly improbable) idea of you basing your decisions on what you predict a super-AI will do, and the super-AI then predicting what you will have already done based on what it predicts that you predicted about it. Yeah. e: Have you ever seen the wine scene from The Princess Bride? That's basically what's going on: two intelligences trying to predict each other recursively until they both, assuming they are perfectly capable of predicting each other (ahem), come to a conclusion on what to do, acausally, even though they've never met and never will. Hyper Crab Tank fucked around with this message at 17:12 on Sep 7, 2015 |
# ? Sep 7, 2015 17:03 |
|
Hyper Crab Tank posted:It hinges on the (incredibly improbable) idea of you basing your decisions on what you predict a super-AI will do, and the super-AI then predicting what you will have already done based on what it predicts that you predicted about it. Yeah. I think you're describing one facet of the Halting problem, where predicting predictions becomes provably impossible. According to Yud, the halting problem would not exist between two copies of the same entity, because through some kind of philosophical mysticism they'd be connected and reach the same conclusion. In the basilisk problem, you might not have done what the AI wanted until the electronic copy facing torture mystically passed that information back to you. You see, it's not acausal. That's why he made so much hay over the Comed-Tea in the story; it's only acausal if you discount the possibility of the cause going backward in time. Added Space fucked around with this message at 17:20 on Sep 7, 2015 |
# ? Sep 7, 2015 17:17 |
|
He has this whole 'timeless physics' thing going on too. http://lesswrong.com/lw/qr/timeless_causality/ http://rationalwiki.org/wiki/Roko%27s_basilisk I find this to be a pretty good deconstruction of the basilisk.
|
# ? Sep 7, 2015 17:48 |
So where does Yud stand on experience/learning? Brain isn't all there is to a person and an AI can't simulate you without simulating your (untranslatable) past experiences? I just love how he's read enough psychology to know about heuristics (which are simple, short and generally salient) but doesn't know gently caress all past that.
|
|
# ? Sep 7, 2015 17:48 |
|
It can extrapolate and fill in the blanks - accurately! - by studying records of you. Blog posts, photos, things someone else posts about you on Twitter, emails... It's really powerful and smart, you see, and big enough to devote an entire Jupiter-sized brain to reconstructing you, and everyone else.
|
# ? Sep 7, 2015 17:55 |