|
Yeah, that's the thing that gets me every time. For all his anti-religion, anti-deity stance, he is HIGHLY religious and simply thinks that the "singularity AI" is a god. He uses every theological argument that monks have devised over the past 2,000-odd years, even (or especially) the most discredited ones, and just applies it to a "sufficiently advanced AI". It's a strange hypocrisy, and it baffles me that it inspires no cognitive dissonance in him; but then, since he disparages "religion", he's never actually studied any of it, so ironically he doesn't understand the weaknesses he claims to assault.
|
# ? Aug 8, 2014 13:53 |
|
|
|
Iunnrais posted:Yeah, that's the thing that gets me every time. For all his anti-religion, anti-deity stance, he is HIGHLY religious and simply thinks that the "singularity AI" is a god. He uses every theological argument that monks have devised over the past 2,000-odd years, even (or especially) the most discredited ones, and just applies it to a "sufficiently advanced AI". It's a strange hypocrisy, and it baffles me that it inspires no cognitive dissonance in him; but then, since he disparages "religion", he's never actually studied any of it, so ironically he doesn't understand the weaknesses he claims to assault. Because once you try to argue anything tangentially related to religion or politics, the LW community just smugly smiles and treats you like a precocious child. You'd think the unofficial ban on any political discussion would be a major red flag that LW is incapable of serious debate that doesn't include "Well, you're completely wrong because you don't use Bayes, and if you did, clearly you'd agree with me."
|
# ? Aug 8, 2014 14:04 |
|
pentyne posted:Because once you try to argue anything tangentially related to religion or politics the LW community just smugly smiles and treats you like a precocious child. Speaking of, this article was posted earlier, but seems relevant. quote:I'm no longer a skeptic, but still I can't resist the old skeptic urge to do a bit of debunking. After all, there are a lot of crackpots out there. There are people, for example, who believe that a superintelligent computer will arise in the next twenty years and then promptly either destroy humanity or cure death and set us free. There are people who believe that one of the best works of English literature is an unfinished Harry Potter fanfic by someone who can barely write a comprehensible English sentence. There are even people who believe the best thing you can do to help the poor and the starving is become a city stockbroker or Silicon Valley entrepreneur! And more often than not, the same people believe all these crazy things!
|
# ? Aug 8, 2014 16:32 |
|
I always end up asking myself, "Am I giving this idea a fair shake? I mean, it'd be bad if I dogmatically dismissed their ideas like they dismissed others'." Then I remember that everyone's beliefs are based on assumptions, and theirs are pretty crazy.
|
# ? Aug 9, 2014 00:29 |
|
Darth Walrus posted:Speaking of, this article was posted earlier, but seems relevant. This makes it seem like sooner or later Yudkowsky will post some seriously racist stuff and the LW fans will fracture as some charge to defend him furiously and others who to this point have bought his Cult of Reason go "holy gently caress man you are a worthless human being". Granted, the racist material will be something along the lines of quote:the reason most CEOs and tech visionaries are white is because the crucible that forges such talents is unknown to the Nubian, the Slav, the Oriental, and the Tribal. If they wholly abandon the misguided trappings of their mother culture then they too can reach their true potential, but it will be some time before they can be considered equals So everyone already buying his bullshit will claim there's nothing inherently wrong with the statement, but anyone not a spoiled well off white libertarian will just give up.
|
# ? Aug 9, 2014 01:48 |
The most racist of the less-wrongers already split off and went to a site called More Right, where they are known as "Neo Reactionaries."
|
|
# ? Aug 9, 2014 02:00 |
|
A quote from one of the Less Wrong members on aging:quote:Mr. Mowshowitz calls [advancing technology] escape velocity. “That’s where medicine is advancing so fast that I can’t age fast enough to die,” he explained. “I can’t live to 1,000 now, but by the time I’m 150, the technology will be that much better that I’ll live to 300. And by the time I’m 300, I’ll live to 600 and so on,” he said, a bit breathlessly. “So I can just . . . escape, right? And now I can watch the stars burn out in the Milky Way and do whatever I want to do.” On Roko's Basilisk: quote:The Observer tried to ask the Less Wrong members at Ms. Vance’s party about it, but Mr. Mowshowitz quickly intervened. “You’ve said enough,” he said, squirming. “Stop. Stop.” Someone else, on polyamory: quote:Asked whether polyamory was part of the New York scene as well, Ms. Vance said it was uncommon. “I’d certainly say that we don’t think ‘poly’ is morally wrong or anything,” she noted, adding that the California contingent had taken the idea quite a bit further. “In one of those [co-]houses, I saw a big white board on the board with a ‘poly-graph,’ a big diagram of who was connected to whom,” she said. “It was a pretty big graph." That's the sort of poo poo that happens if you run goonhouse, I guess.
|
# ? Aug 9, 2014 02:51 |
"Actuarial escape velocity" is a concept from a Larry Niven story, and even he didn't extend it out to Literally Forever; the way he put it was that for every year you live, the average predicted lifespan goes up by more than a year. This does not necessarily mean YOU PERSONALLY will be immortal, of course.
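The arithmetic behind "escape velocity" is easy to sketch. A toy model (the function and every number below are invented purely for illustration): survival only becomes open-ended if each year lived adds more than one year of predicted lifespan.

```python
# Toy model of "actuarial escape velocity". Each year survived adds
# `gain` years to the predicted lifespan; escape requires gain > 1.
def simulate(age, lifespan, gain, horizon=10_000):
    while age < lifespan and age < horizon:
        age += 1          # live one more year...
        lifespan += gain  # ...while medicine pushes the horizon out
    return age, age >= horizon  # (age at death or cutoff, "escaped?")

print(simulate(30, 80, 0.5))  # gain < 1: the gap closes, you die at 130
print(simulate(30, 80, 1.5))  # gain > 1: lifespan outruns you to the cutoff
```

With gain below one year per year, the horizon still catches up with you eventually; Niven's "more than a year per year" condition is exactly the divergence threshold.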
|
|
# ? Aug 9, 2014 03:24 |
|
pentyne posted:This makes it seem like sooner or later Yudkowsky will post some seriously racist stuff and the LW fans will fracture as some charge to defend him furiously and others who to this point have bought his Cult of Reason go "holy gently caress man you are a worthless human being". This already happened, and it didn't fracture LW at all; they all agreed with him. Here's an excerpt from Harry Potter and the Methods of Rationality. The context is Harry's thoughts as Draco Malfoy, who is eleven years old, idly chats about his plans to rape a ten-year-old girl. Yudkowsky posted:And in the slowed time of this slowed country, here and now as in the darkness-before-dawn prior to the Age of Reason, the son of a sufficiently powerful noble would simply take for granted that he was above the law. At least when it came to a little rape here and there. There were places in Muggle-land where it was still the same way, countries where that sort of nobility still existed and still thought like that, or even grimmer lands where it wasn't just the nobility. It was like that in every place and time that didn't descend directly from the Enlightenment. A line of descent, it seemed, which didn't quite include magical Britain, for all that there had been cross-cultural contamination of things like pop-top soda cans. That's the revised, de-racist-ified version after someone outside his cult complained about the original version. Here's what was once there: Yudkowsky posted:There had been ten thousand societies over the history of the world where this conversation could have taken place. Even in Muggle-land it was probably still happening, somewhere in Saudi Arabia or the darkness of the Congo. It happened in every place and time that didn't descend directly from the Enlightenment. But hey, if he changed it, he must have realized he was being racist as hell, right?
Nope, this was his explanation for the change: Yudkowsky posted:"Most readers, it’s pretty clear, didn’t take that as racist, but it now also seems clear that if someone is told to *expect* racism and pointed at the chapter, that’s what they’ll see. Aren’t preconceptions lovely things?" If you see any racist connotations in talking about the backwardness of "the darkness of the congo" and how the only true civilization comes from modern white western europe then maybe you're the real racist and your brain is being clouded by your mind-killing political biases Oh, and while we're at it, here's what Yudkowsky thinks of the backstory between Lily and Snape: Harry, Yudkowsky's mouthpiece posted:Not that I've ever been through high school myself, but my books give me to understand that there's a certain kind of teenage girl who'll be outraged by a single insult if the boy is plain or poor, yet who can somehow find room in her heart to forgive a rich and handsome boy his bullying. She was shallow, in other words. Tell whoever it was that she wasn't worthy of him and he needs to get over it and move on and next time date girls who are deep instead of pretty. Yudkowsky himself posted:Those of you who are all like "Lily Evans was a good and virtuous woman and she broke off her friendship with Snape because he was studying Dark Arts, and she didn't go anywhere near James Potter until he stopped bullying", remember, Harry only knows the information Severus gave him. But also... a plain and poor boy is a good friend to a beautiful girl for years, and loves her, and pursues her fruitlessly; and then when the rich and handsome bully cleans up his act a bit, she goes home with him soon after... considering just how normal that is, I don't think you're allowed to write it and not have people at least wonder. Remember that Snape was doing the wizard equivalent of hanging out at KKK rallies and calling Lily a friend of the family. 
But he was a ~nice guy~ so obviously she was a shallow vapid bitch for not sleeping with him.
|
# ? Aug 9, 2014 03:48 |
|
I find it amazing that he openly admits to not having read the books. HPMOR isn't fanfiction, it's just copyright infringement. Also "I've never been through high school but I have studied human interaction from fiction books, and these realistic accounts clearly show..."
|
# ? Aug 9, 2014 05:05 |
|
I've been rewatching the Harry Potter movies this summer, so I decided to read through HPMOR. There are points where something happens that directly contradicts the whole point of the books, like when Lily tries to use Avada Kedavra on Voldemort rather than putting down her wand and dying for Harry. Also, there's a scene where Harry makes fun of Atlas Shrugged, but I gave up reading it around chapter 50 because I was getting sick of the many "Who is John Galt?" speeches. I feel like there were a couple of genuinely clever moments, but they weren't worth all the crap. There's a theme I noticed in another one of Yudkowsky's writings, Three Worlds Collide, where scientists hide a terribly destructive secret (nuclear weapons, or a new technology that can blow up a star) because the government can't be trusted with it. It's an interesting idea, and I think Less Wrong hopes that's how they'll stop a theoretical evil AI from being built. I don't think Yudkowsky understands how hard it is to keep a secret like that, especially now that science is so interconnected; during the Manhattan Project, interested parties figured out independently that the Americans were working on some kind of project involving uranium and theoretical physics, even if they didn't have the details.
|
# ? Aug 9, 2014 06:32 |
|
AATREK CURES KIDS posted:There's a theme I noticed in another one of Yudkowsky's writings, Three Worlds Collide, where scientists hide a terribly destructive secret (nuclear weapons, or a new technology that can blow up a star) because the government can't be trusted with it. It's an interesting idea, and I think Less Wrong hopes that's how they'll stop a theoretical evil AI from being built. I don't think Yudkowsky understands how hard it is to keep a secret like that, especially now that science is so interconnected; during the Manhattan Project, interested parties figured out independently that the Americans were working on some kind of project involving uranium and theoretical physics, even if they didn't have the details. Guess who has an AI research organisation that hasn't produced any useful AI research, but which occasionally alludes to research they totally have, honest, but it's so advanced it would be dangerous to let outsiders see? Eliezer's institute's director posted:The question of whether to keep research secret must be made on a case-by-case basis. In fact, next week I have a meeting (with Eliezer and a few others) about whether to publish a particular piece of research progress.
|
# ? Aug 9, 2014 08:21 |
|
Well, thank God no one else on this planet is capable of doing research, and they are therefore able to keep the lid on it indefinitely instead of presenting their findings and working with others on a solution.
|
# ? Aug 9, 2014 08:42 |
|
Huh. Seeing as how Bayesian statistics has its roots in the work of a Calvinist minister, it makes an awful lot of sense if you look at the Less Wrongers as seeing themselves as the unconditional elect for when Judgement Day comes. The Terminator version, not the Biblical one.
|
# ? Aug 9, 2014 08:55 |
|
Qwertycoatl posted:Guess who has an AI research organisation that hasn't produced any useful AI research, but which occasionally alludes to research they totally have, honest, but it's so advanced it would be dangerous to let outsiders see? This would be hilarious if it were coming from a young child.
|
# ? Aug 9, 2014 10:50 |
|
Qwertycoatl posted:Guess who has an AI research organisation that hasn't produced any useful AI research, but which occasionally alludes to research they totally have, honest, but it's so advanced it would be dangerous to let outsiders see?
|
# ? Aug 9, 2014 13:02 |
Anticheese posted:Huh. Seeing as how Bayesian statistics has its roots in a Calvinist minister, it makes an awful lot of sense if you look at the Less Wrongers as seeing themselves as the unconditional elect for when Judgement Day comes. But their reinvention of religion will never stop being funny!
|
|
# ? Aug 9, 2014 19:45 |
|
ArchangeI posted:I find it amazing that he openly admits to not having read the books. HPMOR isn't fanfiction, it's just copyright infringement. They actually do this often enough that there's a word for it. Well, a phrase, but they use it like a single lexical token; they call it "generalizing from fictional evidence", and here's why they think it's legit: One of the things that really sucks about Bayes' Theorem is that you can (justifiably) make it arbitrarily hard to convince you of something just by being sufficiently sure that you were right the whole time. In order to use Bayes' Theorem for anything, you need to have a prior, which is your P(X) beginning just from your assumptions, which essentially represents how easy it is to convince you otherwise. Most people who have no information about something will choose what's called a "non-informative" prior, which is (1 / number of possible outcomes). So the non-informative prior for a coin flip is 0.5, the non-informative prior for an outcome of a d20 roll is 0.05, etc. (it'd be dumb to have priors for those but I just woke up) I had never heard of a non-informative prior until I left LessWrong and got a real AI education, because they never use them. They prefer to go based on their instincts. Big Yud has a number of posts about refining instincts because they're essential to being a good Bayesian. So according to the God of Math it's fine to use fiction as the source of your gut instinct, but the thing is that they never assign reasonable confidence values to anything. They use these stupid numbers like 1/3^^^3 instead. As a Bayesian, you can say "this coin flip will be heads with P(1-1/3^^^3)", so that you aren't really obligated to be convinced when the flip turns up tails, because the odds that your eyes deceive you are substantially greater than 1/3^^^3, so, well, it must have been heads, that's just the way the math works out.
Combining these two things, evidence from fiction and absurdly high numbers, the followers of big yud have invented a mathematically sound way to live in a fantasy land.
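The trick with absurd priors is easy to demonstrate in a few lines. A sketch (all numbers are invented, and 3**27 is just a finite stand-in for the unwritable 3^^^3): with a non-informative prior, evidence moves you; with a near-certain prior, Bayes' theorem dutifully reports that you were right all along.

```python
# Bayes' theorem for a binary hypothesis H vs not-H:
# P(H | data) = P(H) P(data|H) / (P(H) P(data|H) + P(~H) P(data|~H))
def posterior(prior_h, p_data_given_h, p_data_given_not_h):
    num = prior_h * p_data_given_h
    return num / (num + (1 - prior_h) * p_data_given_not_h)

# Non-informative prior (1 / number of possible outcomes = 0.5):
# data favouring not-H at 4:1 odds drags the posterior down to 0.2.
print(posterior(0.5, 0.2, 0.8))

# A prior of 1 minus one part in 3**27: the same data barely
# registers, and the posterior stays indistinguishable from 1.
print(posterior(1 - 1 / 3**27, 0.2, 0.8))
```

The second call is the "must have been heads" move: once the prior is extreme enough, no realistic amount of evidence can dislodge it, which is exactly how a gut instinct sourced from fiction gets laundered into "math".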
|
# ? Aug 9, 2014 20:31 |
|
Yud also seems to forget that you're allowed to update your priors based on observation. Hell, you're outright meant to update them based on observation. Yud on the other hand just takes that arbitrary number and declares that it's completely right forever because reasons. For someone who treats Bayes like some sort of prophet, he sure doesn't understand how to apply the guy's work.
|
# ? Aug 9, 2014 20:52 |
|
Slime posted:Yud also seems to forget that you're allowed to update your priors based on observation. Hell, you're outright meant to update them based on observation. Yud on the other hand just takes that arbitrary number and declares that it's completely right forever because reasons. I am going to reiterate that what he and his followers really don't understand more fundamentally is conditional probabilities. That's why they think they can keep the probability that the Bayesian robber is going to kill some innocents constant while raising the stakes until you have to give him money. Well, guess what, either the robber constantly upping the number of innocents means your posterior for his being genuine gets lower and lower each time, or to begin with, each order of magnitude of people tortured without additional proof makes your prior against them stronger and stronger, because you are not a dumb.
|
# ? Aug 9, 2014 22:03 |
Absurd Alhazred posted:because you are not a dumb.
|
|
# ? Aug 10, 2014 01:16 |
|
Absurd Alhazred posted:I am going to reiterate that what he and his followers really don't understand more fundamentally is conditional probabilities. That's why they think they can keep the probability that the Bayesian robber is going to kill some innocents constant while raising the stakes until you have to give him money. Well, guess what, either the robber constantly upping the number of innocents means your posterior for his being genuine gets lower and lower each time, or to begin with, each order of magnitude of people tortured without additional proof makes your prior against them stronger and stronger, because you are not a dumb. Yudkowsky never comes out and says it, but I think his reasoning internally is something like this: if the robber can torture ten trillion people for fifty years, then the robber must be something very close to a god, and he already believes gods like that are possible. I think it really comes back to his insane fear of death. He hates death so much that he assumes it must be possible to never die. And if life can be extended to infinity, then so can anything else, including the psychotic AI's torture capabilities. His picture of the future is a cyberboot stamping on a human face forever, and he refuses to learn anything that might contradict that because he finds that picture comforting.
|
# ? Aug 10, 2014 02:32 |
|
Lottery of Babylon posted:Yudkowsky never comes out and says it, but I think his reasoning internally is something like this: But again, you need to conditionalize that poo poo. The crazier the AI, the stronger the prior against. He's treating these two things as if they're independent; there's no going around it.
|
# ? Aug 10, 2014 02:37 |
|
Where does Yudkowsky say that he hates infinity? I don't doubt that he does because that's just the kind of stupid thing he would fixate on, but I just assumed he used 3^^^^3 as a big number because he thinks it makes him look smarter than just saying infinity. SubG posted:At least from here their research institution looks a lot like a bunch of assholes doing Paranoia LARPing. So is Friend Computer their best case or worst case scenario? Don Gato fucked around with this message at 03:01 on Aug 10, 2014 |
# ? Aug 10, 2014 02:58 |
|
Don Gato posted:So is Friend Computer their best case or worst case scenario? That sounds like a question a mutant would ask.
|
# ? Aug 10, 2014 03:05 |
|
Absurd Alhazred posted:But again, you need to conditionalize that poo poo. The crazier the AI, the stronger the prior against. He's treating these two things as if they're independent, there's no going around it. I know, but I think Yudkowsky would argue that lim_(n→∞) P(robber's claim is credible | robber claims to be able to torture n people) = p > 0, because he believes in the possibility (and therefore nonzero probability, because he thinks probabilities can't be zero) of immortal deusmachinas with infinite computing power. He won't actually phrase it that way because that would involve learning a bit of math, but that's how his thought process works. Don Gato posted:Where does Yudkowsky say that he hates infinity? I don't doubt that he does because that's just the kind of stupid thing he would fixate on, but I just assumed he used 3^^^^3 as a big number because he thinks it makes him look smarter than just saying infinity. He always uses oddly specific incredibly large numbers like 3^^^^3 instead of infinity, he says that human minds aren't capable of working with infinity, he says that 0 and 1 aren't actually probabilities (and that anyone who uses them is an idiot) on the grounds that if you convert them to log odds they go to infinity and infinity isn't a number, and he brings up the finite size and divisibility of the observable universe occasionally. I don't think he's ever said he hates infinity in so many words, but he certainly doesn't seem to get along with it very well.
|
# ? Aug 10, 2014 03:57 |
|
Lottery of Babylon posted:I know, but I think Yudkowsky would argue that lim_(n→∞) P(robber's claim is credible | robber claims to be able to torture n people) = p > 0, because he believes in the possibility (and therefore nonzero probability, because he thinks probabilities can't be zero) of immortal deusmachinas with infinite computing power. He won't actually phrase it that way because that would involve learning a bit of math, but that's how his thought process works. p(cred | claim n) < 1/(3^^^3 · n^(1 + 1/3^^^^^^^^3)) (I mean, we're starting with a small probability for "will torture random person", right?), so as n goes to "infinity", n · p(cred | claim n) becomes negligible - say, of less negative utilons than a speck of dust in the eye. Surely you can gain more utility from $20 than minus a speck in the eye. QED, ffs.
|
# ? Aug 10, 2014 04:04 |
|
Absurd Alhazred posted:But the limit we care about isn't the probability at "infinity", but the expectation value at "infinity". As long as the probability "at infinity" (or rather the limit as n→∞) is greater than zero, the expected value of the number of people tortured goes to infinity. Absurd Alhazred posted:I think that we should safely be able to say that: See, I agree with you, but Yudkowsky doesn't accept your premise that the probability the claim is credible goes to zero as the number of people threatened goes to infinity. And without that premise, he reaches the opposite conclusion. This is what he said addressing your solution: Yudkowsky posted:Should we penalize computations with large space and time requirements? This is a hack that solves the problem, but is it true? Are computationally costly explanations less likely? Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is exponentially cheaper than real quantum physics? Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute? "If you think the guy threatening to torture 3^^^^3 people is just some crazy guy and not a cunningly-disguised god, then you must also believe all of reality is a lie in order to be logically consistent" - a thing Yudkowsky literally believes. This quote happens to also demonstrate nicely his failure to understand that your beliefs need to be updated in response to evidence. Yes, before looking at any evidence, it does make sense to start out thinking that stars are likely fixed lights on a painted backdrop - but then once you examine the evidence more closely and find that, for example, stellar parallax does measurably occur, you should update your beliefs and conclude that no, they're not fixed points on a flat backdrop.
He makes the same mistake in his argument for why immortality is probably possible. Simple physics are more likely (per Occam's razor) and a universal prior should be biased towards them; Conway's Game of Life has simple physics; Conway's Game of Life permits immortality; therefore immortality is compatible with simple physics; therefore, he argues, immortality is possible with nontrivial probability. Yes, if you're presented with an arbitrary universe about which you know nothing, you might initially think immortality is plausible. But once you've examined the universe more closely, you need to update your beliefs in response to the evidence. We've examined our universe quite extensively and made a lot of observations, and we've seen a lot of evidence that immortality is not possible. But Yudkowsky wants us to discard all of our accumulated evidence and stick with our initial prior forever. Lottery of Babylon fucked around with this message at 07:18 on Aug 10, 2014 |
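The disagreement in the last few posts fits in a few lines of code. In this sketch (both credibility functions are invented for illustration), the mugger's expected harm n · P(credible | threat of n) stays negligible when credibility falls faster than 1/n, and diverges when it has a constant floor, which is the whole fight:

```python
# Expected harm of a Pascal's-mugging threat against n people:
# E(n) = n * P(claim is credible | threat names n victims).
def expected_harm(n, credibility):
    return n * credibility(n)

decaying = lambda n: 1 / (1000 * n**2)  # prior punishes bigger claims
floored = lambda n: 1e-12               # credibility never falls below a floor

for n in (10**6, 10**12, 10**18):
    print(n, expected_harm(n, decaying), expected_harm(n, floored))
# With `decaying`, E(n) shrinks toward 0 as the threat inflates:
# ignore the mugger. With `floored`, E(n) grows without bound: pay up.
```

The `floored` branch is the "limit is p > 0" position attributed to Yudkowsky above; the `decaying` branch is the conditionalize-your-prior rebuttal.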
# ? Aug 10, 2014 04:43 |
|
Lottery of Babylon posted:As long as the probability "at infinity" (or rather the limit as n-->infinity) is greater than zero, the expected value of the number of people tortured goes to infinity. I see what you mean. The prior is the posterior, forever after. Amen.
|
# ? Aug 10, 2014 04:51 |
|
I don't normally like these comparisons, but it is remarkable how much this looks like rationalist metaphysics: why, if you just make these obvious* assumptions, we can prove from first principles the existence of God and the immortality of the soul. But I'd be astonished if these guys didn't denigrate philosophy as unscientific and irrational and certainly not something they're doing, so like Ayn Rand before them (who did think she was doing philosophy, but that she didn't need to study anyone else's work) they make a storm of elementary errors the rest of the world hashed out decades or centuries ago. *highly contestable
|
# ? Aug 10, 2014 12:45 |
Peel posted:I don't normally like these comparisons but it is remarkable how much this looks like rationalist metaphysics. Why if you just make these obvious* assumptions we can prove from first principles the existence of God and the immortality of the soul. But I'd be astonished if these guys didn't denigrate philosophy as unscientific and irrational and certainly not something they're doing, so like Ayn Rand before them (who did think she was doing philosophy but that she didn't need to study anyone else's work) they make a storm of elementary errors the rest of the world hashed out decades or centuries ago.
|
|
# ? Aug 10, 2014 17:16 |
|
Peel posted:I don't normally like these comparisons but it is remarkable how much this looks like rationalist metaphysics. Why if you just make these obvious* assumptions we can prove from first principles the existence of God and the immortality of the soul. But I'd be astonished if these guys didn't denigrate philosophy as unscientific and irrational and certainly not something they're doing, so like Ayn Rand before them (who did think she was doing philosophy but that she didn't need to study anyone else's work) they make a storm of elementary errors the rest of the world hashed out decades or centuries ago. They're closer to Rand than they would ever admit. Rand's shining triumph, in her eyes, was her solution to the classic is-ought problem, where she ignored centuries of discussion and more or less declared "it is this way because I say it is. MORAL RELATIVISM WINS AGAIN!" Kind of similar to the "Well, Bayes logic (their version of it) answers this question completely, so why debate or discuss it?"
|
# ? Aug 11, 2014 23:38 |
|
I looked at one of their "rationality quotes" threads to see what sort of things resonate with the general LessWrong community. #1 was from a 'filk' song. #2 was from My Little Pony fanfiction. About what I expected.
|
# ? Aug 16, 2014 03:13 |
|
ArchangeI posted:I find it amazing that he openly admits to not having read the books. HPMOR isn't fanfiction, it's just copyright infringement. It's like that one time on 4chan when I saw someone recommend Nano to Kaoru - a manga about a frog boy who creepily manipulates a submissive girl into doing kinky things for his own pleasure - as a good way to see how healthy BDSM worked "in real life." Oh god, Yudkowsky is into BDSM and animoo, that was him wasn't it Full disclosure: I'm the target audience for this crap, and for a few months I was buying into it hardcore. But after a while a lot of the things others have brought up in this thread - the lack of actual output or empirical evidence, the grandiose verbiage in lieu of a concrete plan of action, and the cult-like "us against the world" mentality that's front and center in many of his posts - drove me away. And I have only a basic understanding of programming (and zero formal training), so if my ignorant rear end could figure out the scam then there's hope for the people posting there today. (still don't think death brings meaning to life though, that's dumb post-hoc rationalization)
|
# ? Aug 18, 2014 11:34 |
|
Mr. Horrible posted:It's like that one time on 4chan I saw someone recommend Nano to Kaoru - a manga about a frog boy who creepily manipulates a submissive girl into doing kinky things for his own pleasure - was a good way to see how healthy BDSM worked "in real life." Oh god, Yudkowsky is into BDSM and animoo, that was him wasn't it hey! NtK isn't about only one side getting something outta it, it's a touching love story of a dom and a sub who---- never mind, it's pretty drat creepy, on reflection.
|
# ? Aug 18, 2014 21:15 |
|
Mr. Horrible posted:(still don't think death brings meaning to life though, that's dumb post-hoc rationalization) "Death brings meaning to life" may be post-hoc rationalization, but it's not like death is a lovely tech product we can pull off the market if the marketing guys can't find a good line to justify it -- death is reality, and at least right now, death is inevitable. Does death suck? Sure it does. Would it be awesome to eradicate death? Probably (although doing so would require us to radically alter the function of human society, but that's an aside). The problem is that, at our current level of technology, the LessWrong dude's answers to death don't work, and being an emotionally healthy adult means coming to terms with that, not hiding under a rock and waiting for the singularity to save us. Coming to terms with death means coming to post-hoc rationalizations of it as a method of acceptance. Of course, typing this out, it occurs to me that cryonics is basically the LW crowd's pseudo-religious version of an afterlife myth. "Future scientists can, and will, resurrect your loved ones' frozen corpses and allow them to live in the future techno-utopia" is about as purely faith-based as "your loved ones' souls have departed their bodies to dwell in a spirit realm of perfect happiness," but I can see it providing the same comfort, and it even has substantial doctrinal resemblances to religions that dictate certain bodily conditions for the dead to be successfully resurrected.
|
# ? Aug 18, 2014 22:26 |
Antivehicular posted:Of course, typing this out, it occurs to me that cryonics is basically the LW crowd's pseudo-religious version of an afterlife myth. "Future scientists can, and will, resurrect your loved ones' frozen corpses and allow them to live in the future techno-utopia" is about as purely faith-based as "your loved ones' souls have departed their bodies to dwell in a spirit realm of perfect happiness," but I can see it providing the same comfort, and it even has substantial doctrinal resemblances to religions that dictate certain bodily conditions for the dead to be successfully resurrected. As for death justifying life, I feel a bit as if these guys are pleading that their horror of death is SO STRONG that the fact that others might, for instance, out of ancient weary compassion, say "euthanasia of a terminal cancer patient is perhaps acceptable", or even move into another stage of grieving, is taken as a failure, as opposed to, you know, reconciling with an unhappy reality.
|
|
# ? Aug 18, 2014 22:50 |
|
Roko's Basilisk was probably the first time I've seen LW's intellectual commitments lead them to an unpalatable conclusion. Everything else exists to make them feel good about their fate, place in the world, and their own intelligence.
|
# ? Aug 18, 2014 23:14 |
|
Keep in mind I don't think they actually advocate the basilisk as part of their ideology. Big Yud is terrified of the thing and tries to make sure it's never discussed. Because if an AI heard it, it might give it ideas
|
# ? Aug 19, 2014 00:09 |
|
|
|
All I know is that my next villain for an Eclipse Phase campaign is going to be a resimulated version of Yudkowsky as straight-up Roko's Basilisk. I'll see how many of my players want to punch me when that comes down the pipe.
|
# ? Aug 19, 2014 00:20 |