|
Not sure it's worth breaking the safari rule to send him the equivalent of "can God make a rock big enough that he cannot lift it? checkmate, theist". Infinite life = infinite bad things and infinite good things happening. As a utilitarian, he just needs to assume that the average good thing outweighs the average bad thing, and infinite life works out to infinite value. Telarra fucked around with this message at 02:24 on May 19, 2014 |
# ? May 19, 2014 02:16 |
|
|
|
Yeah, you have to understand how hard it is to get the community to change its collective mind about its core beliefs. And I'm not sure I've seen Yudkowsky visibly change his mind about anything, ever. One-liners aren't going to do anything. This is actually expected for a community of Bayes freaks: they've arbitrarily set their confidence in their beliefs quite high, and so it requires a lot of evidence to change their minds. I'm not sure if that's a flaw.
|
# ? May 19, 2014 02:18 |
|
Patter Song posted:http://www.reddit.com/r/HPMOR/comments/23wmr4/repost_from_askreddit_because_i_figure_the/ Ah, they link to one of the best illustrations of their mindset I've ever seen. http://www.reddit.com/r/AskReddit/comments/23j3zo/you_wake_up_as_hitler_in_the_middle_of_ww2_what/cgxnom9 As a /r/badhistory poster pointed out, this guy's hyper-rational plan is to do what the Nazis tried to do but succeed this time. And then it appears he added on that, after conquering the rest of the world, he would disband most of the army and usher in an age of enlightenment.
|
# ? May 19, 2014 02:57 |
|
SF misses that the soulless robots everyone fears are already here.
|
# ? May 19, 2014 08:43 |
|
Oh. And try very hard, if I do lose WW2, to make sure Eugenics is not tainted quite so strongly by association with Nazi Germany. -- a yudkowsky fan
|
# ? May 19, 2014 10:06 |
|
As someone completely ignorant in CS, I thought the conversations here on AI and mathematics were very interesting. Some of his stuff (like all of HPMOR) is blatantly trash but, as others have said, he's good at pretending to be knowledgeable to laymen. Mostly, I imagine that if someone ran into him in person he would act/argue/sound exactly like Brad Pitt's character at the end of 12 Monkeys.
|
# ? May 19, 2014 22:03 |
|
Is the Less Wrong crowd focusing any attention on addressing any of the myriad problems that need to be solved if/when we achieve clinical immortality? Like, say that in the next few years scientists discover a way to keep telomeres intact during cell division and humans stop dying of old age. Let's ignore that this does not in any way protect you from cancer, disease or accidents, and that the likelihood of dying of one of these rapidly approaches 1 as your age increases. What about overpopulation? How do we keep a population that increases by 15000 each hour fed and clothed? How do we ensure an acceptable living standard for even a sizable minority of these people? What are the social and economic consequences of people not dying? Parents, grandparents and great-grandparents will remain with you forever. There is no such thing as retirement. Jobs will be held indefinitely, while the workforce grows exponentially. Positions of power are occupied by immortals whose values and opinions grow more and more disconnected from the surrounding society with every passing century. Or is this something that... I mean, the Yud measures intelligence by how willing you are to sign up for cryofreeze.

Thinking that an unknown future society will be able to restore your body and mind from whatever frost-damaged lump of meat the underpaid janitors at the cryofacility left behind is just slightly more realistic than thinking that they'll be able to do the same using only your decomposing body dug up from the ground.

Also, I still can't wrap my head around how they can cling to their special brand of Bayesianism in the face of common loving empiricism. If something is statistically unlikely, then multiplying it by some huge factor X and claiming it's suddenly statistically likely is meaningless if you can't show that we have some reason to consider X likely or even possible.

E: Yeah, and props to su3su2u1, SolTerrasa and the other sciencechatters in the thread. Fascinating stuff! Mr. Sunshine fucked around with this message at 12:39 on May 20, 2014 |
# ? May 20, 2014 12:36 |
|
Mr. Sunshine posted:Is the Less Wrong crowd focusing any attention on addressing any of the myriad problems that need to be solved if/when we achieve clinical immortality? Like, say that in the next few years scientists discover a way to keep telomeres intact during cell division and humans stop dying of old age. Let's ignore that this does not in any way protect you from cancer, disease or accidents, and that the likelihood of dying of one of these rapidly approaches 1 as your age increases.
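The "rapidly approaches 1" bit quoted above is just compound probability, and it's easy to check yourself. A quick sketch (the 0.1% annual hazard rate is an invented illustrative number, not an actuarial figure):

```python
# Survival under a small constant yearly hazard: P(alive after n years)
# is (1 - p)**n, which heads to zero no matter how small p is.
# The 0.1% annual risk is purely illustrative, not an actuarial figure.

def survival_probability(years, annual_risk=0.001):
    """Chance of dodging every fatal accident and disease for `years` years."""
    return (1 - annual_risk) ** years

for years in (100, 1000, 5000):
    print(f"{years:>5} years: {survival_probability(years):.4f}")
# Curing aging flattens one hazard; all the others still compound.
```

Even at one-in-a-thousand odds per year, almost nobody makes it to their five-thousandth birthday.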
|
# ? May 20, 2014 17:58 |
|
I've always wondered about the academic background of the LW crowd. While it's known that Yudkowsky is "self-taught" and associates himself with people with degrees, what about the average LW poster? It's hard to believe that there are qualified people sticking around Yudkowsky when he's making outrageous claims that, according to the scienceposters here, can be easily refuted. Let alone science grads, who must be dumbfounded by his "Bayesian rejection of empiricism".
|
# ? May 20, 2014 18:10 |
|
Mr. Sunshine posted:Also, I still can't wrap my head around how they can cling to their special brand of Bayesianism in the face of common loving empiricism. If something is statistically unlikely, then multiplying it by some huge factor X and claiming it's suddenly statistically likely is meaningless if you can't show that we have some reason to consider X likely or even possible.

That's actually an excellent point! The Less Wrong crowd loves to talk about SUPER HUGE numbers like 3 ^^^ 3, but they don't stop to think about the consequences of those numbers. Consider this: the width of the visible universe (which is all the universe we have access to unless we find a way to break the light barrier) is on the order of 10^26 meters. The Planck length - which is about the smallest length that's theoretically possible to measure - is around 10^-35 meters. Divide the former by the latter, and the width of the greatest conceivable distance in the smallest conceivable units is 10^61, give or take an order of magnitude or two. Just look at that pathetic number. Its exponent has a mere two digits in it! It does manage to beat out 3 ^^ 3, which is about 7.6 trillion (or about 10^13), but 3 ^^ 4 is on the order of 10^(3.6 trillion) (that's a one followed by about 3.6 trillion zeroes) and by the time we hit 3 ^^ 5 we're already past the number of digits I can reasonably transcribe. Yudkowsky's number is 3 ^^^ 3, or 3 ^^ (3 ^^ 3), or 3 ^^ (7.6 trillion).

The numbers generated by arrow notation are ungodly huge. They are infeasibly huge. If you are multiplying by 3 ^^^ 3, the answer you get will never represent anything that has any significance in the real world. The prospect of having 3 ^^^ 3 people all get dust in their eye is utterly absurd because there will never be 3 ^^^ 3 people in existence. There will never even be 3 ^^ 4 people. Another example: a computer cannot run 3 ^^^ 3 simulations no matter how fast it is and no matter how few particles it needs for each one. If it needed only a single particle for each simulation (total particles in the universe: ~10^80) and could run the entire thing in the smallest measurable interval (Planck time: 10^-44 s), then by the time entropy claimed the last black holes in the universe the AI could have run about 10^232 simulations. (That's 10^80 sims per interval, 10^44 intervals per second, 10^8 seconds per year, and 10^100 years until universal heat death.) That doesn't even beat 3 ^^ 4, for fuck's sake.

EDIT: You want to hear something funny? While I was putting this post together, it occurred to me to ask why this notation was invented in the first place. The guy who created arrow notation was Donald Knuth, a professor of computer science. Wikipedia led me to the article where he first introduced the concept of arrow notation. Let me quote to you from the abstract:

Donald Knuth posted:Finite numbers can be really enormous, and the known universe is very small. Therefore the distinction between finite and infinite is not as relevant as the distinction between realistic and unrealistic.

Alien Arcana fucked around with this message at 18:44 on May 20, 2014 |
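If anyone wants to poke at these numbers themselves: Knuth's arrows have a one-line recursive definition (one arrow is exponentiation; more arrows recurse), and a sketch in Python shows how fast it leaves the computable range. Only the tiny cases terminate, which is rather the point:

```python
import math

def arrow(a, n, b):
    """Knuth's up-arrow: arrow(a, 1, b) = a**b; more arrows recurse.
    a ^^ b is arrow(a, 2, b), a ^^^ b is arrow(a, 3, b), and so on."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

assert arrow(3, 1, 3) == 27                 # 3^3
assert arrow(3, 2, 3) == 7625597484987      # 3^^3 = 3^27, ~7.6 trillion

# 3^^4 = 3^(3^^3) is far too big to evaluate, but its digit count is
# just 3^^3 * log10(3): about 3.6 trillion digits.
print(f"3^^4 has about {arrow(3, 2, 3) * math.log10(3):.2e} digits")

# For scale: the observable universe (~10^26 m) measured in Planck
# lengths (~10^-35 m) spans a mere ~10^61 units.
print(f"universe in Planck lengths: ~10^{26 + 35}")
```

Anything past arrow(3, 2, 4) is hopeless on any physical computer; the recursion blowing up is itself the argument.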
# ? May 20, 2014 18:41 |
|
Mr. Sunshine posted:Is the Less Wrong crowd focusing any attention on addressing any of the myriad problems that need to be solved if/when we achieve clinical immortality? Like, say that in the next few years scientists discover a way to keep telomeres intact during cell division and humans stop dying of old age. Let's ignore that this does not in any way protect you from cancer, disease or accidents, and that the likelihood of dying of one of these rapidly approaches 1 as your age increases.

We will not need to fear cancer or disease, for the AI will cure everything. We will not need to fear accidents, for the AI will protect us. We will not need to fear overpopulation, for the AI will feed and clothe us. We will not need to have jobs, for the AI will take care of everything. We will not need to fear entrenched positions of power, for the only position of power shall be the AI. There will be no loyalty, except loyalty towards the AI. There will be no love, except the love of Bayesian purity. There will be no laughter, except the laugh of triumph over the defeated Deathists. There will be no art, no literature, no science.

Alien Arcana posted:and 10^100 years until universal heat death.

Heat deathism is still deathism. Yudkowsky admits that this is impossible:

The Pascal's Wager Fallacy Fallacy posted:But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.

By his own admission, the laws of physics make it clear that immortality is impossible, you will eventually die, you can't actually run 3 ^^^^^^^ 3 simulations, and for all his handwringing science ultimately sides with the deathists. Or does it?

All italics are his posted:On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".

His entire worldview is literally based on the assumption that the laws of physics as we know them will change in precisely the way he prefers. As long as he believes hard enough in immortality, the universe will transform itself into one in which immortality is possible, and at the end of days Multivac will discover a way to reverse entropy and proclaim "LET THERE BE LIGHT" and there will be light.

Yuddites don't actually like science. When science points out that their entire philosophy is wrong and impossible, they decide it's science that must be wrong, stick their fingers in their ears, and decide to literally reject our reality and substitute their own. This is why it's impossible to argue with Yudkowsky. Even when you prove that everything he says is dumb and wrong and impossible, and even if you get him to acknowledge that, he still won't care.
|
# ? May 20, 2014 20:26 |
|
I love that his argument literally boils down to "I reject your reality and substitute my own!"
|
# ? May 20, 2014 20:31 |
|
Alien Arcana posted:EDIT: Generally Yudkowsky becomes much funnier if you know what the actual experts say about the topics that Yudkowsky tinkers in. For example, back to life extension. The man who made the Singularity famous, Ray Kurzweil, also thinks that there is a reasonable chance that medical science will eventually progress far enough to make people effectively immortal. And he also thinks that one should try to increase one's chances to live to see this, however marginal the improvement is. For this he recommends working out and eating healthy, to extend your life.
|
# ? May 20, 2014 20:35 |
|
Kurzweil also is into 'clustered water' 'water alkalinization' and homeopathy. He also hired an assistant solely to track the "180 to 210 vitamin and mineral supplements a day" that he takes. Dude's a total crank as well. Tunicate fucked around with this message at 20:40 on May 20, 2014 |
# ? May 20, 2014 20:37 |
|
Tunicate posted:Kurzweil also is into 'clustered water' 'water alkalinization' and homeopathy. He also hired an assistant solely to track the "180 to 210 vitamin and mineral supplements a day" that he takes. That does not surprise me. Singularity folks are pretty much the worst; they've ended up believing one crazy thing for which evidence is literally impossible, how unlikely could it be that they'd believe two? And yet his advice is *better* than Yudkowsky's, who recommends a ketogenic diet to lose weight and live longer and yet hasn't shed a pound of his frame (which could be described as anywhere from "bulky" to "goonish") in all the time he's been recommending it. Also, you'd be hard-pressed to use the literature on the topic to support a firm belief that it'll even work. Of course experimental science is just a special case of Bayesian math and if you set your priors high enough and keep your sample size low you can keep believing whatever you want and
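The "set your priors high enough" jab is easy to make concrete with Bayes' rule: a near-certain prior shrugs off years of contrary evidence. A sketch with invented numbers (the 0.999999 prior and the likelihoods are illustrative, not anyone's actual stated credences):

```python
def posterior(prior, p_obs_if_true, p_obs_if_false, n):
    """P(hypothesis) after n independent observations, each of which is
    genuine evidence *against* it when p_obs_if_true < p_obs_if_false."""
    p = prior
    for _ in range(n):
        num = p * p_obs_if_true
        p = num / (num + (1 - p) * p_obs_if_false)
    return p

# Illustrative numbers only: a fanatic's prior of 0.999999, and each
# disappointing year twice as likely if the belief is false.
p = posterior(0.999999, 0.4, 0.8, 10)
print(f"after 10 years of contrary evidence: {p:.4f}")  # still ~0.999
```

A decade of halving the odds only knocks the prior's million-to-one odds down to about a thousand to one, so the true believer stays north of 99.8% confident. Perfectly valid math, garbage-in-garbage-out priors.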
|
# ? May 20, 2014 20:45 |
|
tonberrytoby posted:There is a difference between unrealistic and useless. Graham's Number is called the largest useful number. And it is large enough that it can't be expressed directly by the Knuth's arrow notation. Graham's number is the solution to a problem that has no significance - IIRC it's the upper bound to the smallest number of dimensions in which some property holds. We could assign every Planck-granular space-time index its own dimension and not even scratch the surface of the surface of that number. So if we had a genuine, real-world question related to that property, Graham's number is totally useless as an upper bound because a solution with that many dimensions is not going to be relevant to our situation, no matter what that situation is. Still I suppose there are cases where the existence or non-existence of a number might be relevant to some proof that is useful. So I'll retract my statement that such numbers are entirely useless. They just aren't numbers you should be throwing around as if they have any direct relation to the real world.
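One genuinely fun corollary, for the record: even though Graham's number can't be written down, its trailing digits are computable, because tall power towers of 3 stabilize modulo 10^k. A sketch using the standard iterated-totient trick:

```python
def totient(m):
    """Euler's phi by trial division (plenty for these tiny moduli)."""
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tower_of_3_mod(height, m):
    """(3^3^...^3 with `height` threes) mod m, using the identity
    3^E mod m = 3^((E mod phi(m)) + phi(m)) mod m for huge E."""
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    e = tower_of_3_mod(height - 1, totient(m))
    return pow(3, e + totient(m), m)

assert tower_of_3_mod(3, 1000) == 987     # 3^27 = 7625597484987
# Tall towers have already stabilized mod 1000, and Graham's number is,
# at the top of its construction, just such a tower of 3s:
print(tower_of_3_mod(100, 10))    # last digit: 7
print(tower_of_3_mod(100, 1000))  # last three digits: 387
```

So Graham's number ends in ...387, which is about the only concrete thing anyone will ever know about its decimal expansion.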
|
# ? May 20, 2014 21:01 |
|
Monocled Falcon posted:Ah, they link to one of the best illustrations of their mindset I've ever seen. I remember that thread. Link for those that are interested.
|
# ? May 20, 2014 21:06 |
|
Today I got an eyelash in my eye, and thanks to you guys I now know that this is worse than if I were being tortured. Thanks, less wrong mock thread
|
# ? May 20, 2014 22:47 |
|
Solomonic posted:Today I got an eyelash in my eye and thanks to you guys I now know that this is worse than if I were being tortured Only because the 3,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 simulations of you running inside the mind of a future AI also got an eyelash in their eyes
|
# ? May 20, 2014 23:40 |
|
I do like how Yudkowsky just assigns numbers and probabilities at random. Anything is possible when you say it has a 0.65 chance over a period of Pi years.
|
# ? May 21, 2014 05:01 |
|
tonberrytoby posted:There is a difference between unrealistic and useless. Graham's Number is called the largest useful number. And it is large enough that it can't be expressed directly by the Knuth's arrow notation. I won't pretend to understand what any of that means, but I'm willing to bet it's an outlier. What about the second, third, or fourth largest useful numbers, are they even remotely as big?
|
# ? May 21, 2014 05:14 |
|
Wanamingo posted:I won't pretend to understand what any of that means, but I'm willing to bet it's an outlier. What about the second, third, or fourth largest useful numbers, are they even remotely as big? That's not really the point. There's no real formal ordering of "important numbers". The point is that there can never be as many things as would be required for these situations to make sense; there's an upper bound on the number of things that can exist in the universe, and it's pretty low compared to the big numbers that Yudkowsky argues about.
|
# ? May 21, 2014 05:42 |
|
My favorite thing from the proof is that while Graham's Number is the upper bound, the lower bound is 13.
|
# ? May 21, 2014 05:47 |
|
Soylent Pudding posted:My favorite thing from the proof is that while Graham's Number is the upper bound, the lower bound is 13. 13 is actually an improvement from the original 6. I'm pretty sure the current upper bound is a fair bit lower than the original Graham's number, as well (though still absurdly enormous). I love how after mentioning what the upper and lower bounds are the proof literally says "Clearly, there is some room for improvement here." Wanamingo posted:I wont pretend to understand what any of that means, but I'm willing to bet it's an outlier. What about the second, third, or fourth largest useful numbers, are they even remotely as big? Well, I mean, the article mentions that other, even larger numbers have shown up in connection with other proofs since then.
|
# ? May 21, 2014 14:17 |
|
Is Transcendence the Least Wrong film of the year?
|
# ? May 22, 2014 10:53 |
|
It's loving terrible and made me (a risk analyst) and my boyfriend (an electrical engineer) rant about all the problems with it all the way home from the cinema, so I would say "yes". We decided that the fact that we even noticed all the daft problems with it meant that the storytelling probably hadn't really hooked us in the first place.

There was a drama/horror/sci-fi show here in the UK called Black Mirror, which is a bit like the Twilight Zone but it's all about social media and our relationship with computers. One episode called "Be Right Back" is basically the same idea as Transcendence and the whole "AI simulation of people" thing, except it's more realistic, if that makes sense. I showed it to the boyfriend straight after watching Transcendence because I figured it was really the movie we'd wanted and expected to see. It's about a young man who posts poo poo on his smartphone all the time and dies in a car crash (it's implied he was using the phone while driving), and his widow buys some software that creates a simulation of him based on his social media history and e-mail archives. Except it's not really like him at all - for a start it looks much smoother, because it's based on a composite of pictures he uploaded and people only keep nice photos of themselves, and it doesn't know all the intimate details of their relationship that they never shared with the world, or the stories behind the photos. It's around online, not sure if posting it counts, though. Food for thought for Yudkowsky and his fans.
|
# ? May 22, 2014 22:30 |
|
Oh god. I didn't realize LessWrong was this much of a thing. This dude I went to college with is always posting about their poo poo. When he was in college, he started out as a super gung-ho genetics major. I remember this guy making GBS threads on social science/liberal arts majors at a dinner party, and thinking he was being a total philistine fuckhead. A few months later he washed out (or quit?) the genetics program and became a philosophy major, and then promptly went on to not finish a graduate degree. Now he posts lots of vaguely wrong poo poo about computer science that I can't really be arsed to correct because, holy gently caress, I'm too busy actually being a computer scientist to care. Except when I complain about him on SA. DrankSinatra fucked around with this message at 23:09 on May 22, 2014 |
# ? May 22, 2014 23:05 |
|
DrankSinatra posted:Now he posts lots of vaguely wrong poo poo about computer science that I can't really be arsed to correct because, holy gently caress, I'm too busy actually being a computer scientist to care. Oh man, my favorite. Can you post some of it?
|
# ? May 23, 2014 02:24 |
|
This is amazing- some new-comer to LessWrong, not realizing Harry of Yud's fanfic is a total self-insert, offers some arm-chair psychology involving Harry being a narcissist: http://lesswrong.com/lw/jc8/harry_potter_and_the_methods_of_rationality/axjq Algernoq posted:Grandiose sense of self-importance? Check. He wants to “optimize” the entire Universe
|
# ? May 23, 2014 06:53 |
|
tonberrytoby posted:For example back to life extension. The man who made the Singularity famous, Ray Kurzweil, also thinks that there as a reasonable chance that medically science will eventually progress far enough to make people effectively immortal. I suspect that in the short to medium term, the best prospects for substantial extension of human life aren't in some crazy brain-uploading trick, but in medical technology. Researchers have already found a variety of different ways to retard and even reverse the aging process in mice, and while making this work for humans isn't trivial, it seems much more likely than computer scientists developing some weird AI-god who fixes everything. (One of the anti-aging treatments is already scheduled for human testing.)
|
# ? May 23, 2014 18:27 |
|
Today's SMBC addresses Bayesianism:
|
# ? May 24, 2014 09:11 |
|
Beyond the Reach of God

Yudkowsky posted:So it's not necessarily an attempt to avoid falsification, to say that God does not grant all prayers. Even a Friendly AI might not respond to every request.

Yudkowsky posted:The obvious example of a horror so great that God cannot tolerate it, is death—true death, mind-annihilation. I don't think that even Buddhism allows that. So long as there is a God in the classic sense—full-blown, ontologically fundamental, the God—we can rest assured that no sufficiently awful event will ever, ever happen. There is no soul anywhere that need fear true annihilation; God will prevent it.

Yudkowsky posted:What if you build your own simulated universe? The classic example of a simulated universe is Conway's Game of Life. I do urge you to investigate Life if you've never played it—it's important for comprehending the notion of "physical law". Conway's Life has been proven Turing-complete, so it would be possible to build a sentient being in the Life universe, albeit it might be rather fragile and awkward. Other cellular automata would make it simpler.

Yudkowsky posted:But suppose that instead you ask the question:

At long last, Yudkowsky remembers that Hitler exists and brings him in for his ultimate disproof of God:

Yudkowsky posted:Belief in a fair universe often manifests in more subtle ways than thinking that horrors should be outright prohibited: Would the twentieth century have gone differently, if Klara Pölzl and Alois Hitler had made love one hour earlier, and a different sperm fertilized the egg, on the night that Adolf Hitler was conceived?

Does anyone better-versed in theology know of any doctrine that says that large events must always have large causes, and that a small change now cannot bring about a large change ten years from now? I don't know of anything in Christianity - or, for that matter, in any other religion - that says small events can't have large consequences.
I can think of plenty of theological counterexamples, though (for example, that one man dying two thousand years ago could produce a globe-spanning religion with two billion followers much later). As with the "mathematician says at least one of the children is male" problem, Yudkowsky can't actually answer the question at hand, so he uses sleight of hand to substitute his own easier-but-less-interesting question. Instead of answering the question of whether God exists, Yudkowsky answers the question of whether a God-who-forbids-seemingly-minor-events-to-have-major-consequences exists. Even if we accept his particular understanding of history in which events have singular causes, he's still only disproven a very narrow, particular type of God who shares an idiosyncratic view of his. But most conceptions of God, such as a mainstream Christian one, don't have that idiosyncrasy. He's disproven a version of God nobody believed in. And he can't do any better than that. Ultimately, his argument is one of revulsion: "X; I don't like X; therefore not God." Such arguments against God are basic, common, and ancient. The only difference is that for most people, X is "evil," or "suffering," or "injustice," or "Hitler." But for Yudkowsky, it's "Hitler's dad's sperm." Lottery of Babylon fucked around with this message at 01:48 on Aug 11, 2014 |
# ? May 27, 2014 05:08 |
|
Lottery of Babylon posted:Does anyone better-versed in theology know of any doctrine that says that large events must always have large causes, and than a small change now cannot bring about a large change ten years from now? I don't know of anything in Christianity - or, for that matter, in any other religion - that says small events can't have large consequences. I can think of plenty of theological counterexamples, though (for example, that one man dying two thousand years ago could produce a globe-spanning religion with two billion followers much later). Christianity has plenty of places where big things come from small events. quote:Then He said, “To what shall we liken the kingdom of God? Or with what parable shall we picture it? It is like a mustard seed which, when it is sown on the ground, is smaller than all the seeds on earth; but when it is sown, it grows up and becomes greater than all herbs, and shoots out large branches, so that the birds of the air may nest under its shade.”
|
# ? May 27, 2014 06:13 |
|
Djeser posted:Christianity has plenty of places where big things come from small events.

Thanks, I figured there had to be something like that in there. In the comments, someone calls out Yudkowsky's poor understanding of history and points out that WWII was widely predicted twenty years in advance. Yudkowsky counters that it wouldn't have turned out exactly the same with a different German leader. After that the comments somehow turn into an argument about cryonics. Not a single person points out that "big things come from little things" doesn't come anywhere close to proving God doesn't exist.

"Beyond the Reach of God" is a follow-up to a previous article, and great death metal band name, "The Magnitude of His Own Folly", which ends thus:

Yudkowsky posted:I saw that others, still ignorant of the rules, were saying "I will go ahead and do X"; and that to the extent that X was a coherent proposal at all, I knew that would result in a bang; but they said, "I do not know it cannot work". I would try to explain to them the smallness of the target in the search space, and they would say "How can you be so sure I won't win the lottery?", wielding their own ignorance as a bludgeon.

Which means it's time for our regular reminder that Yudkowsky's entire belief system is based on blind faith that the universe's laws of physics will magically change in a way that permits immortality, and that his justification for believing this is "You can't prove it won't!"
|
# ? May 27, 2014 06:56 |
|
Yudkowsky posted:I would try to explain to them the smallness of the target in the search space, and they would say "How can you be so sure I won't win the lottery?", wielding their own ignorance as a bludgeon.
|
# ? May 27, 2014 07:14 |
|
Wouldn't "escaping the reach of God" by making a really really good implementation of Conway's Game of Life be a dumb idea from the outset, since the simulation still exists in our universe and is (theologically, theoretically) still within God's domain? It seems simpler to just argue that free will precludes God, since horrible things like torture, Hitler, and the butterfly effect exist?
|
# ? May 27, 2014 07:42 |
|
Anticheese posted:Wouldn't "escaping the reach of God" by making a really really good implementation of Conway's Game of Life be a dumb idea from the outset, since the simulation still exists in our universe and is (theologically theoretically) still within God's domain? He has some lines dancing around that objection: Yudkowsky posted:But if God is watching everywhere, then trying to build an unfair Life just results in the God stepping in to modify your computer's transistors. If the physics you set up in your computer program calls for a sentient Life-entity to be endlessly tortured for no particular reason, the God will intervene. God being omnipresent, there is no refuge anywhere for true horror: Life is fair. Anticheese posted:It seems simpler to just argue that free will precludes God, since horrible things like torture, Hitler, and the butterfly effect exist? What he ends up arguing is basically the free will thing, except where most people look at "a man can do horrible things" and get upset at the "horrible things" part, Yudkowsky gets upset at the "a man" part because obviously it ought to take several men to do horrible things or else it's just not fair. In Yudkowsky's worldview, if more Germans agreed that the Holocaust was the correct course of action then the Holocaust would in fact have been completely okay. Lottery of Babylon fucked around with this message at 07:52 on May 27, 2014 |
# ? May 27, 2014 07:50 |
|
I can't get over this dude's obsession with simulated life. Yeah, sure, Conway's game is Turing complete. That just means "can be used as a computer". Claiming it can thus simulate sentient life is theoretically correct, but no more useful than claiming that you can use your iPhone to simulate sentient life - given enough time, memory, correct algorithms etc. Then his entire disproof hinges on the idea that a real God would intervene in the suffering of simulated life. What's the purpose? We know for a fact that God, if he exists, doesn't even intervene in the suffering of real life.

In fact, the Yud's obsession with suffering, torture and death just obfuscates what could be a valid point - if we could set up a simulation of the universe which runs entirely on physical laws without divine intervention, how would it differ from our own? If there are no significant differences, what does this say about the possibility of God existing? Of course, this still hinges on being able to accurately simulate the entire universe in goddamn Conway's Game of Life. It would be more useful as a thought experiment, but that could be summed up in a few sentences and wouldn't let the Yud bloviate about death and torture and throw around computer science terms like some cult leader hopped up on William Gibson novels.
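For the record, the "Turing complete" toy in question is about a dozen lines of code. A minimal sketch, with the classic glider (which is about as far from a sentient being as you can get):

```python
from collections import Counter

def step(cells):
    """One generation of Conway's Life; `cells` is a set of live (x, y).
    A dead cell with exactly 3 live neighbours is born; a live cell
    survives with 2 or 3 live neighbours."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# The classic glider: after four generations it is itself, shifted (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for x, y in glider}
```

The glider crawls one diagonal cell every four generations; the gap between that and "a sentient being in the Life universe" is the part Yudkowsky waves his hands over.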
|
# ? May 27, 2014 07:56 |
|
Yuddles posted:But suppose that instead you ask the question: Isn't that what you ask by running a goddamn simulation?
|
# ? May 27, 2014 07:59 |
|
|
|
Mr. Sunshine posted:In fact, the Yud's obsession with suffering, torture and death just obfuscates what could be a valid point - if we could set up a simulation of the universe which runs entirely on physical laws without divine intervention, how would it differ from our own? If there are no significant differences, what does this say about the possibility of God existing?

That's what it seems like he's going for in the middle section. The trouble is that:

1) He never actually explains why the simulation would turn out exactly like our own. He just says "Hey, maybe in the simulation 'cows' evolve and are eaten by 'wolves'. Or maybe not. But maybe they would, heh, wolves eating cows, wouldn't that sound familiar, nudge nudge wink wink" and moves on from there assuming that any simulation would in fact necessarily be an identical copy of our world. He never justifies this assertion.

2) He frames it less in terms of "The world would be much like our own" and more in terms of "Noted heartless supervillain Genghis Khan tortures puppies for no reason and gets away with it because he's just that evil and the world is just that bleak and uh I don't actually know anything about Genghis Khan".

3) After setting up this argument, he takes a sharp turn into a completely different argument about how small causes having big effects is just wrong.

Mr. Sunshine posted:Of course, this still hinges on being able to accurately simulate the entire universe in goddamn Conway's Game of Life.

Most of his references are designed to make him look smart rather than to actually be useful. Conway's Game of Life is a cool thing cool nerds have heard of, therefore it needs to be shoehorned into everything even if it's so impractical it's actually counterproductive to bring it up. Knuth's up-arrow notation is how you write Graham's Number* and is therefore a cool thing cool nerds have heard of, therefore it needs to be shoehorned into everything even if it forces him to waste several paragraphs explaining the notation when all that really matters is "it's a big number". They're pointless references designed only to high-five other people who make the same pointless references; they're the computer science equivalent of quoting Monty Python. Since Yudkowsky is in the business of trying to look smart, not of doing useful things, this shouldn't be surprising.

*nb: Graham's Number can't actually be written out in up-arrow notation because it's too big, but the way it's defined involves up-arrow notation and that's all people remember

Anticheese posted:Isn't that what you ask by running a goddamn simulation?

No, because if you actually ran the simulation that pesky God feller might interrupt it. If you asked a computer to compute 2+2, you couldn't be sure the computer wouldn't spit out 5 because of God interfering in its circuits to make it give the wrong answer. But if you consider what you would logically have to get if you added 2 plus 2, the answer would be 4. It's a long-winded way of saying, "Okay, if we make a simulation we still can't escape the reach of God. But who cares, let's pretend we could escape God's reach anyhow, then what?"
|
# ? May 27, 2014 08:22 |