Telarra
Oct 9, 2012

Not sure it's worth breaking the safari rule to send him the equivalent of "can God make a rock big enough that he cannot lift it? checkmate, theist".

Infinite life = infinite bad things and infinite good things happening. As a utilitarian, he just needs to assume that the average good thing outweighs the average bad thing, and infinite life works out to infinite value.

Telarra fucked around with this message at 02:24 on May 19, 2014


SolTerrasa
Sep 2, 2011

Yeah, you have to understand how hard it is to get the community to change its collective mind about its core beliefs. And I'm not sure I've seen Yudkowsky visibly change his mind about anything ever. One liners aren't going to do anything.

This is actually expected for a community of Bayes freaks: they've arbitrarily set their confidence in their beliefs quite high, and so it requires a lot of :words: to change their minds. I'm not sure if that's a flaw.

Monocled Falcon
Oct 30, 2011

Patter Song posted:

http://www.reddit.com/r/HPMOR/comments/23wmr4/repost_from_askreddit_because_i_figure_the/

So, on the subreddit discussing Harry Potter and the Methods of Rationality, someone posed the question: if you were to wake up and find it was June 1st, 1942, and that you'd taken the place of Adolf Hitler, what would you do?

I have never seen people fail a morality test so loving hard before. A good two-thirds to three-quarters of the responses are attempts to minmax Nazi Germany into an unstoppable juggernaut fueled by 21st-century science, and the people who suggest sabotaging the war effort are laughed off.

Ah, they link to one of the best illustrations of their mindset I've ever seen.

http://www.reddit.com/r/AskReddit/comments/23j3zo/you_wake_up_as_hitler_in_the_middle_of_ww2_what/cgxnom9

As a /r/badhistory poster pointed out, this guy's hyper-rational plan is to do what the Nazis tried to do, but succeed this time. He then apparently added that, after conquering the rest of the world, he would disband most of the army and usher in an age of enlightenment.

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

SF misses that the soulless robots everyone fears are already here. :ohdear:

90s Cringe Rock
Nov 29, 2006
:gay:
Oh. And try very hard, if I do lose WW2, to make sure Eugenics is not tainted quite so strongly by association with Nazi Germany. -- a yudkowsky fan

Lenisto
Nov 1, 2012
As someone completely ignorant of CS, I thought the conversations here on AI and mathematics were very interesting. Some of his stuff (like all of HPMOR) is blatant trash but, as others have said, he's good at pretending to be knowledgeable to laymen.

Mostly, I imagine that if someone ran into him in person he would act/argue/sound exactly like Brad Pitt's character at the end of 12 Monkeys.

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe
Is the Less Wrong crowd focusing any attention on addressing any of the myriad problems that need to be solved if/when we achieve clinical immortality? Like, say that in the next few years scientists discover a way to keep telomeres intact during cell division and humans stop dying of old age. Let's ignore that this does not in any way protect you from cancer, disease or accidents, and that the likelihood of dying of one of these rapidly approaches 1 (:smug:) as your age increases.

What about overpopulation? How do we keep a population that increases by 15000 each hour fed and clothed? How do we ensure an acceptable living standard for even a sizable minority of these people?
What are the social and economic consequences of people not dying? Parents, grandparents and great-grandparents will remain with you forever. There is no such thing as retirement. Jobs will be held indefinitely, while the workforce grows exponentially. Positions of power are occupied by immortals whose values and opinions grow more and more disconnected from the surrounding society with every passing century.

Or is this something that the Lord our God Benevolent AI will solve through magic super-science?
I mean, the Yud measures intelligence by how willing you are to sign up for cryofreeze. Thinking that an unknown future society will be able to restore your body and mind from whatever frost-damaged lump of meat that the underpaid janitors at the cryofacility left behind is just slightly more realistic than thinking that they'll be able to do the same using only your decomposing body dug up from the ground.


Also, I still can't wrap my head around how they can cling to their special brand of Bayesianism in the face of common loving empiricism. If something is statistically unlikely, then multiplying it by some huge factor X and claiming it's suddenly statistically likely is meaningless if you can't show that we have some reason to consider X likely or even possible.

E: Yeah, and props to su3su2u1, SolTerrasa and the other sciencechatters in the thread. Fascinating stuff!

Mr. Sunshine fucked around with this message at 12:39 on May 20, 2014

GWBBQ
Jan 2, 2005


Mr. Sunshine posted:

Is the Less Wrong crowd focusing any attention on addressing any of the myriad problems that need to be solved if/when we achieve clinical immortality? Like, say that in the next few years scientists discover a way to keep telomeres intact during cell division and humans stop dying of old age. Let's ignore that this does not in any way protect you from cancer, disease or accidents, and that the likelihood of dying of one of these rapidly approaches 1 (:smug:) as your age increases.

What about overpopulation? How do we keep a population that increases by 15000 each hour fed and clothed? How do we ensure an acceptable living standard for even a sizable minority of these people?
What are the social and economic consequences of people not dying? Parents, grandparents and great-grandparents will remain with you forever. There is no such thing as retirement. Jobs will be held indefinitely, while the workforce grows exponentially. Positions of power are occupied by immortals whose values and opinions grow more and more disconnected from the surrounding society with every passing century.
As has been said a few times, they don't care about immortality for anyone but rich white people.

Egregious Offences
Jun 15, 2013
I've always wondered about the academic background of the LW crowd. While it's known that Yudkowsky is "self-taught" and associates himself with people with degrees, what about the average LW poster? It's hard to believe that there are qualified people sticking around Yudkowsky when he's making outrageous claims that, according to the scienceposters here, can be easily refuted. Let alone science grads, who must be dumbfounded by his "Bayesian rejection of empiricism" :jerkbag:

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.

Mr. Sunshine posted:

Also, I still can't wrap my head around how they can cling to their special brand of Bayesianism in the face of common loving empiricism. If something is statistically unlikely, then multiplying it by some huge factor X and claiming it's suddenly statistically likely is meaningless if you can't show that we have some reason to consider X likely or even possible.

That's actually an excellent point! The Less Wrong crowd loves to talk about SUPER HUGE numbers like 3 ^^^ 3, but they don't stop to think about the consequences of those numbers.

Consider this: the width of the visible universe (which is all the universe we have access to unless we find a way to break the light barrier) is on the order of 10^26 meters. The Planck length - about the smallest length that's theoretically possible to measure - is around 10^-35 meters. Divide the former by the latter, and the greatest conceivable distance expressed in the smallest conceivable units is 10^61, give or take an order of magnitude or two.

Just look at that pathetic number. Its exponent has a mere two digits in it! It does manage to beat out 3 ^^ 3, which is about 7.6 trillion (or about 10^13), but 3 ^^ 4 is on the order of 10^3638334640024 (that's a one followed by about 3.6 trillion zeroes) and by the time we hit 3 ^^ 5 we're already past the number of digits I can reasonably transcribe. Yudkowsky's number is 3 ^^^ 3, or 3 ^^ (3 ^^ 3), or 3 ^^ (7.6 trillion).

The numbers generated by arrow notation are ungodly huge. They are infeasibly huge. If you are multiplying by 3 ^^^ 3, the answer you get will never represent anything that has any significance in the real world. The prospect of having 3 ^^^ 3 people all get dust in their eye is utterly absurd because there will never be 3 ^^^ 3 people in existence. There will never even be 3 ^^ 4 people.

Another example: a computer cannot run 3 ^^^ 3 simulations no matter how fast it is and no matter how few particles it needs for each one. If it needed only a single particle for each simulation (total particles in the universe: ~10^80) and could run the entire thing in the smallest measurable interval (Planck time: 10^-44 s), then by the time entropy claimed the last black holes in the universe the AI could have run about 10^232 simulations. (That's 10^80 sims per interval, 10^44 intervals per second, 10^8 seconds per year, and 10^100 years until universal heat death.) That doesn't even beat 3 ^^ 4, for fuck's sake.
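
If you want to poke at these magnitudes yourself, here is a minimal Python sketch of the recursive definition of up-arrow notation (mine, not anything from LW; the cosmology figures are the same order-of-magnitude guesses as above). Only the tiny cases are directly computable - 3 ^^ 4 already has trillions of digits - so anything bigger gets compared through its log10.

code:

import math

def arrow(a, n, b):
    # a followed by n up-arrows, then b, per the recursion:
    # a ^1 b = a**b; a ^n 1 = a; a ^n b = a ^(n-1) (a ^n (b-1))
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 2, 3))  # 3 ^^ 3 = 3^(3^3) = 7625597484987, about 10^13

# 3 ^^ 4 = 3^(3 ^^ 3), so its digit count is (3 ^^ 3) * log10(3):
print(f"3 ^^ 4 has about {arrow(3, 2, 3) * math.log10(3):.2e} digits")  # ~3.64e12

# The order-of-magnitude bookkeeping from above:
print("universe width / Planck length ~ 10^%d" % (26 + 35))                 # 10^61
print("one-particle sims before heat death ~ 10^%d" % (80 + 44 + 8 + 100))  # 10^232

Which bears out the arithmetic: 10^61 and 10^232 are rounding errors next to 3 ^^ 4, never mind 3 ^^^ 3.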


EDIT:
You want to hear something funny? While I was putting this post together, it occurred to me to ask why this notation was invented in the first place. The guy who created arrow notation was Donald Knuth, a professor of computer science. Wikipedia led me to the article where he first introduced the concept of arrow notation. Let me quote to you from the abstract:

Donald Knuth posted:

Finite numbers can be really enormous, and the known universe is very small. Therefore the distinction between finite and infinite is not as relevant as the distinction between realistic and unrealistic.
That's right - Knuth created this notation specifically to describe unrealistically large numbers. He wanted to demonstrate that just because numbers go up forever doesn't mean that useful numbers go up forever.

Alien Arcana fucked around with this message at 18:44 on May 20, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Mr. Sunshine posted:

Is the Less Wrong crowd focusing any attention on addressing any of the myriad problems that need to be solved if/when we achieve clinical immortality? Like, say that in the next few years scientists discover a way to keep telomeres intact during cell division and humans stop dying of old age. Let's ignore that this does not in any way protect you from cancer, disease or accidents, and that the likelihood of dying of one of these rapidly approaches 1 (:smug:) as your age increases.

What about overpopulation? How do we keep a population that increases by 15000 each hour fed and clothed? How do we ensure an acceptable living standard for even a sizable minority of these people?
What are the social and economic consequences of people not dying? Parents, grandparents and great-grandparents will remain with you forever. There is no such thing as retirement. Jobs will be held indefinitely, while the workforce grows exponentially. Positions of power are occupied by immortals whose values and opinions grow more and more disconnected from the surrounding society with every passing century.

Or is this something that the Lord our God Benevolent AI will solve through magic super-science?

We will not need to fear cancer or disease, for the AI will cure everything. We will not need to fear accidents, for the AI will protect us. We will not need to fear overpopulation, for the AI will feed and clothe us. We will not need to have jobs, for the AI will take care of everything. We will not need to fear entrenched positions of power, for the only position of power shall be the AI. There will be no loyalty, except loyalty towards the AI. There will be no love, except the love of Bayesian purity. There will be no laughter, except the laugh of triumph over the defeated Deathists. There will be no art, no literature, no science.

Alien Arcana posted:

and 10^100 years until universal heat death.

Heat deathism is still deathism. :smug:

Yudkowsky admits that this is impossible:

The Pascal's Wager Fallacy Fallacy posted:

But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.

The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.

By his own admission, the laws of physics make it clear that immortality is impossible, you will eventually die, you can't actually run 3 ^^^^^^^ 3 simulations, and for all his handwringing science ultimately sides with the deathists.

Or does it?

All italics are his posted:

On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".

So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.

His entire worldview is literally based on the assumption that the laws of physics as we know them will change in precisely the way he prefers. As long as he believes hard enough in immortality, the universe will transform itself into one in which immortality is possible, and at the end of days Multivac will discover a way to reverse entropy and proclaim "LET THERE BE LIGHT" and there will be light.

Yuddites don't actually like science. When science points out that their entire philosophy is wrong and impossible, they decide it's science that must be wrong, stick their fingers in their ears, and decide to literally reject our reality and substitute their own.

This is why it's impossible to argue with Yudkowsky. Even when you prove that everything he says is dumb and wrong and impossible and even if you get him to acknowledge that, he still won't care.

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.
I love that his argument literally boils down to "I reject your reality and substitute my own!"

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Alien Arcana posted:

EDIT:
You want to hear something funny? While I was putting this post together, it occurred to me to ask why this notation was invented in the first place. The guy who created arrow notation was Donald Knuth, a professor of computer science. Wikipedia led me to the article where he first introduced the concept of arrow notation. Let me quote to you from the abstract:

That's right - Knuth created this notation specifically to describe unrealistically large numbers. He wanted to demonstrate that just because numbers go up forever doesn't mean that useful numbers go up forever.
There is a difference between unrealistic and useless. Graham's Number has been called the largest number ever used in a serious proof. And it is so large that it can't be written out directly even in Knuth's arrow notation.


Generally, Yudkowsky becomes much funnier once you know what the actual experts say about the things he tinkers with.
For example, back to life extension. The man who made the Singularity famous, Ray Kurzweil, also thinks that there's a reasonable chance that medical science will eventually progress far enough to make people effectively immortal.
And he also thinks that one should try to increase one's chances of living to see this, however marginal the improvement is.
For this he recommends working out and eating healthy, to extend your life.

Tunicate
May 15, 2012

Kurzweil is also into 'clustered water', 'water alkalinization' and homeopathy. He also hired an assistant solely to track the "180 to 210 vitamin and mineral supplements a day" that he takes.

Dude's a total crank as well.

Tunicate fucked around with this message at 20:40 on May 20, 2014

SolTerrasa
Sep 2, 2011

Tunicate posted:

Kurzweil is also into 'clustered water', 'water alkalinization' and homeopathy. He also hired an assistant solely to track the "180 to 210 vitamin and mineral supplements a day" that he takes.

Dude's a total crank as well.

That does not surprise me. Singularity folks are pretty much the worst; they've ended up believing one crazy thing for which evidence is literally impossible, so how unlikely could it be that they'd believe two?

And yet his advice is *better* than Yudkowsky's, who recommends a ketogenic diet to lose weight and live longer, and yet hasn't shed a pound off his frame (which could be described as anywhere from "bulky" to "goonish") in all the time he's been recommending it. Also, you'd be hard-pressed to use the literature on the topic to support a firm belief that it'll even work.

Of course experimental science is just a special case of Bayesian math and if you set your priors high enough and keep your sample size low you can keep believing whatever you want and :words:
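
To see how far priors alone can carry you, here is a toy Python sketch (every number in it is invented for illustration): start a belief at million-to-one odds in its favor, then update on observations that each favor the opposite conclusion 3-to-1. With a small sample, the posterior barely budges.

code:

prior_odds = 1e6           # made-up prior: a million to one in favor of the belief
likelihood_ratio = 1 / 3   # made-up evidence: each observation is 3:1 against

for n in (1, 5, 10, 30):
    posterior_odds = prior_odds * likelihood_ratio ** n
    p = posterior_odds / (1 + posterior_odds)
    print(f"after {n:>2} contrary observations, P(belief) = {p:.6f}")

# It takes 13 contrary observations just to get back to even odds,
# since 3^13 = 1594323 is the first power of 3 above a million.

The updating itself is perfectly valid Bayes; the punchline is that nothing in the math stops you from choosing the prior that lets you keep the conclusion.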

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.

VictualSquid posted:

There is a difference between unrealistic and useless. Graham's Number has been called the largest number ever used in a serious proof. And it is so large that it can't be written out directly even in Knuth's arrow notation.

Graham's number is the solution to a problem that has no significance - IIRC it's an upper bound on the smallest number of dimensions in which some property holds. We could assign every Planck-granular space-time index its own dimension and not even scratch the surface of the surface of that number. So if we had a genuine, real-world question related to that property, Graham's number is totally useless as an upper bound, because a solution with that many dimensions is not going to be relevant to our situation, no matter what that situation is.

Still I suppose there are cases where the existence or non-existence of a number might be relevant to some proof that is useful. So I'll retract my statement that such numbers are entirely useless. They just aren't numbers you should be throwing around as if they have any direct relation to the real world.

FouRPlaY
May 5, 2010

Monocled Falcon posted:

Ah, they link to one of the best illustrations of their mindset I've ever seen.

http://www.reddit.com/r/AskReddit/comments/23j3zo/you_wake_up_as_hitler_in_the_middle_of_ww2_what/cgxnom9

As a /r/badhistory poster pointed out, this guy's hyper-rational plan is to do what the Nazis tried to do, but succeed this time. He then apparently added that, after conquering the rest of the world, he would disband most of the army and usher in an age of enlightenment.

I remember that thread. Link for those that are interested.

Solomonic
Jan 3, 2008

INCIPIT SANTA
Today I got an eyelash in my eye and thanks to you guys I now know that this is worse than if I were being tortured

thanks, less wrong mock thread

Djeser
Mar 22, 2013


it's crow time again

Solomonic posted:

Today I got an eyelash in my eye and thanks to you guys I now know that this is worse than if I were being tortured

Only because the 3,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 simulations of you running inside the mind of a future AI also got an eyelash in their eyes

Maxwell Lord
Dec 12, 2008

I am drowning.
There is no sign of land.
You are coming down with me, hand in unlovable hand.

And I hope you die.

I hope we both die.


:smith:

Grimey Drawer
I do like how Yudkowsky just assigns numbers and probabilities at random. Anything is possible when you say it has a 0.65 chance over a period of Pi years.

Wanamingo
Feb 22, 2008

by FactsAreUseless

VictualSquid posted:

There is a difference between unrealistic and useless. Graham's Number has been called the largest number ever used in a serious proof. And it is so large that it can't be written out directly even in Knuth's arrow notation.

I won't pretend to understand what any of that means, but I'm willing to bet it's an outlier. What about the second, third, or fourth largest useful numbers - are they even remotely as big?

SolTerrasa
Sep 2, 2011

Wanamingo posted:

I won't pretend to understand what any of that means, but I'm willing to bet it's an outlier. What about the second, third, or fourth largest useful numbers - are they even remotely as big?

That's not really the point. There's no real formal ordering of "important numbers". The point is that there can never be as many things as would be required for these situations to make sense; there's an upper bound on the number of things that can exist in the universe, and it's pretty low compared to the big numbers that Yudkowsky argues about.

Soylent Pudding
Jun 22, 2007

We've got people!


My favorite thing from the proof is that while Graham's Number is the upper bound, the lower bound is 13.

Basil Hayden
Oct 9, 2012

1921!

Soylent Pudding posted:

My favorite thing from the proof is that while Graham's Number is the upper bound, the lower bound is 13.

13 is actually an improvement on the original 6. :woop: I'm pretty sure the current upper bound is a fair bit lower than the original Graham's number, as well (though still absurdly enormous).

I love how, after mentioning what the upper and lower bounds are, the proof literally says "Clearly, there is some room for improvement here."

Wanamingo posted:

I won't pretend to understand what any of that means, but I'm willing to bet it's an outlier. What about the second, third, or fourth largest useful numbers - are they even remotely as big?

Well, I mean, the article mentions that other, even larger numbers have shown up in connection with other proofs since then.

J. Alfred Prufrock
Sep 9, 2008
Is Transcendence the Least Wrong film of the year?

Stottie Kyek
Apr 26, 2008

fuckin egg in a bun
It's loving terrible and made me (a risk analyst) and my boyfriend (an electrical engineer) rant about all the problems with it all the way home from the cinema, so I would say "yes". We decided that the fact that we even noticed all the daft problems with it meant that the storytelling probably hadn't really hooked us in the first place.

There was a drama/horror/sci-fi show here in the UK called Black Mirror, which is a bit like the Twilight Zone but it's all about social media and our relationship with computers. One episode called "Be Right Back" is basically the same idea as Transcendence and the whole "AI simulation of people" thing except it's more realistic, if that makes sense. I showed it to the boyfriend straight after watching Transcendence because I figured it was really the movie we'd wanted and expected to see.
It's about a young man who posts poo poo on his smartphone all the time and dies in a car crash (it's implied he was using the phone while driving), and his widow buys some software that creates a simulation of him based on his social media history and e-mail archives. Except it's not really like him at all - for a start it looks much smoother because it's based on a composite of pictures he uploaded, and people only keep nice photos of themselves, and it doesn't know all the intimate details of their relationship that they never shared with the world, or the stories behind the photos. It's around online, not sure if posting it counts as :filez: though. Food for thought for Yudkowsky and his fans.

DrankSinatra
Aug 25, 2011
Oh god. I didn't realize LessWrong was this much of a thing. This dude I went to college with is always posting about their poo poo. When he was in college, he started out as a super gung-ho genetics major.

I remember this guy making GBS threads on social science/liberal arts majors at a dinner party, and thinking he was being a total philistine fuckhead. A few months later he washed out of (or quit?) the genetics program and became a philosophy major, and then promptly went on to not finish a graduate degree. Now he posts lots of vaguely wrong poo poo about computer science that I can't really be arsed to correct because, holy gently caress, I'm too busy actually being a computer scientist to care.

Except when I complain about him on SA.

DrankSinatra fucked around with this message at 23:09 on May 22, 2014

SolTerrasa
Sep 2, 2011

DrankSinatra posted:

Now he posts lots of vaguely wrong poo poo about computer science that I can't really be arsed to correct because, holy gently caress, I'm too busy actually being a computer scientist to care.

Except when I complain about him on SA.

Oh man, my favorite. Can you post some of it?

su3su2u1
Apr 23, 2014
This is amazing - a newcomer to LessWrong, not realizing that the Harry of Yud's fanfic is a total self-insert, offers some armchair psychology about Harry being a narcissist:

http://lesswrong.com/lw/jc8/harry_potter_and_the_methods_of_rationality/axjq

Algernoq posted:

Grandiose sense of self-importance? Check. He wants to “optimize” the entire Universe

Obsessed with himself? Check. He appears to only care about people who are smarter or more powerful than him -- people who can help him
...
Goals are selfish? Check. Harry claims to want to save everyone, but in practice he tries to increase his own power most quickly
...
Troubles with normal relationships? Check.
...
Becomes furious if criticized? Check.
..
Has fantasies of unbound success, power, intelligence, etc.? Check
..
Feels entitled - has unreasonable expectations of special treatment? Check.
..
Takes advantage of others to further his own need? Check.

JDG1980
Dec 27, 2012

VictualSquid posted:

For example, back to life extension. The man who made the Singularity famous, Ray Kurzweil, also thinks that there's a reasonable chance that medical science will eventually progress far enough to make people effectively immortal.
And he also thinks that one should try to increase one's chances of living to see this, however marginal the improvement is.
For this he recommends working out and eating healthy, to extend your life.

I suspect that in the short to medium term, the best prospects for substantial extension of human life aren't in some crazy brain-uploading trick, but in medical technology. Researchers have already found a variety of different ways to retard and even reverse the aging process in mice, and while making this work for humans isn't trivial, it seems much more likely than computer scientists developing some weird AI-god who fixes everything. (One of the anti-aging treatments is already scheduled for human testing.)

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe
Today's SMBC addresses Bayesianism:

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Beyond the Reach of God

Yudkowsky posted:

So it's not necessarily an attempt to avoid falsification, to say that God does not grant all prayers. Even a Friendly AI might not respond to every request.

But clearly, there exists some threshold of horror awful enough that God will intervene. I remember that being true, when I believed after the fashion of a child...

Where exactly is the boundary of sufficient awfulness? Even a child can imagine arguing over the precise threshold. But of course God will draw the line somewhere. Few indeed are the loving parents who, desiring their child to grow up strong and self-reliant, would let their toddler be run over by a car.

The obvious example of a horror so great that God cannot tolerate it, is the Holocaust. That's the obvious example of something unimaginably awful, right? He's making a "God let the Holocaust happen, therefore God is a lie" argument. That must be it, right?

Yudkowsky posted:

The obvious example of a horror so great that God cannot tolerate it, is death—true death, mind-annihilation. I don't think that even Buddhism allows that. So long as there is a God in the classic sense—full-blown, ontologically fundamental, the God—we can rest assured that no sufficiently awful event will ever, ever happen. There is no soul anywhere that need fear true annihilation; God will prevent it.
Hahaha nope, it's just him projecting his insane death-phobia onto everything else again. Not every religion sees death as a bad thing, but Christianity does have an eternal afterlife, so I suppose he doesn't disagree with that particular god here. Alright, so where's he going with this?

Yudkowsky posted:

What if you build your own simulated universe? The classic example of a simulated universe is Conway's Game of Life. I do urge you to investigate Life if you've never played it—it's important for comprehending the notion of "physical law". Conway's Life has been proven Turing-complete, so it would be possible to build a sentient being in the Life universe, albeit it might be rather fragile and awkward. Other cellular automata would make it simpler.

Could you, by creating a simulated universe, escape the reach of God? Could you simulate a Game of Life containing sentient entities, and torture the beings therein?
Oh, of course, he's going to the same place he always goes - "Let's simulate people and then torture them."

Yudkowsky posted:

But suppose that instead you ask the question:

Given such-and-such initial conditions, and given such-and-such cellular automaton rules, what would be the mathematical result? ...What does Life look like, in this imaginary world where every step follows only from its immediate predecessor? Where things only ever happen, or don't happen, because of the cellular automaton rules? Where the initial conditions and rules don't describe any God that checks over each state? What does it look like, the world beyond the reach of God? ...

In the what-if world where every step follows only from the cellular automaton rules, the equivalent of Genghis Khan can murder a million people, and laugh, and be rich, and never be punished, and live his life much happier than the average. Who prevents it? God would prevent it from ever actually happening, of course; He would at the very least visit some shade of gloom in the Khan's heart. But in the mathematical answer to the question What if? there is no God in the axioms. So if the cellular automaton rules say that the Khan is happy, that, simply, is the whole and only answer to the what-if question. There is nothing, absolutely nothing, to prevent it.

And if the Khan tortures people horribly to death over the course of days, for his own amusement perhaps? They will call out for help, perhaps imagining a God. And if you really wrote that cellular automaton, God would intervene in your program, of course. But in the what-if question, what the cellular automaton would do under the mathematical rules, there isn't any God in the system. Since the physical laws contain no specification of a utility function—in particular, no prohibition against torture—then the victims will be saved only if the right cells happen to be 0 or 1. And it's not likely that anyone will defy the Khan; if they did, someone would strike them with a sword, and the sword would disrupt their organs and they would die, and that would be the end of that. So the victims die, screaming, and no one helps them; that is the answer to the what-if question.

Could the victims be completely innocent? Why not, in the what-if world? If you look at the rules for Conway's Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), then the rules are really very simple. Cells with three living neighbors stay alive; cells with two neighbors stay the same, all other cells die. There isn't anything in there about only innocent people not being horribly tortured for indefinite periods.

Is this world starting to sound familiar?
Sound familiar? Checkmate theist :smug:

At long last, Yudkowsky remembers that Hitler exists and brings him in for his ultimate disproof of God:

Yudkowsky posted:

Belief in a fair universe often manifests in more subtle ways than thinking that horrors should be outright prohibited: Would the twentieth century have gone differently, if Klara Pölzl and Alois Hitler had made love one hour earlier, and a different sperm fertilized the egg, on the night that Adolf Hitler was conceived?

For so many lives and so much loss to turn on a single event, seems disproportionate. The Divine Plan ought to make more sense than that. You can believe in a Divine Plan without believing in God—Karl Marx surely did. You shouldn't have millions of lives depending on a casual choice, an hour's timing, the speed of a microscopic flagellum. It ought not to be allowed. It's too disproportionate. Therefore, if Adolf Hitler had been able to go to high school and become an architect, there would have been someone else to take his role, and World War II would have happened the same as before.

But in the world beyond the reach of God, there isn't any clause in the physical axioms which says "things have to make sense" or "big effects need big causes" or "history runs on reasons too important to be so fragile". There is no God to impose that order, which is so severely violated by having the lives and deaths of millions depend on one small molecular event.

The point of the thought experiment is to lay out the God-universe and the Nature-universe side by side, so that we can recognize what kind of thinking belongs to the God-universe. Many who are atheists, still think as if certain things are not allowed. They would lay out arguments for why World War II was inevitable and would have happened in more or less the same way, even if Hitler had become an architect. But in sober historical fact, this is an unreasonable belief; I chose the example of World War II because from my reading, it seems that events were mostly driven by Hitler's personality, often in defiance of his generals and advisors. There is no particular empirical justification that I happen to have heard of, for doubting this. The main reason to doubt would be refusal to accept that the universe could make so little sense—that horrible things could happen so lightly, for no more reason than a roll of the dice.

But why not? What prohibits it?

In the God-universe, God prohibits it. To recognize this is to recognize that we don't live in that universe. We live in the what-if universe beyond the reach of God, driven by the mathematical laws and nothing else.
This is Yudkowsky's grand disproof of God. Not "really unimaginably terrible things happen," not "bad things happen to good people," not "the Problem of Evil," but "small events can eventually have larger, distant consequences." Yudkowsky reckons that a butterfly flapping its wings ought not to have the slightest effect on the weather, because weather is big and butterflies are small. And because he reckons that effects ought to be no more significant than their causes, he assumes that God would obviously agree with him.

Does anyone better-versed in theology know of any doctrine that says that large events must always have large causes, and that a small change now cannot bring about a large change ten years from now? I don't know of anything in Christianity - or, for that matter, in any other religion - that says small events can't have large consequences. I can think of plenty of theological counterexamples, though (for example, that one man dying two thousand years ago could produce a globe-spanning religion with two billion followers much later).

As with the "mathematician says at least one of the children is male" problem, Yudkowsky can't actually answer the question at hand, so he uses sleight of hand to substitute his own easier-but-less-interesting question. Instead of answering the question of whether God exists, Yudkowsky answers the question of whether a God-who-forbids-seemingly-minor-events-to-have-major-consequences exists. Even if we accept his particular understanding of history in which events have singular causes, he's still only disproven a very narrow, particular type of God who shares an idiosyncratic view of his. But most conceptions of God, such as a mainstream Christian one, don't have that idiosyncrasy. He's disproven a version of God nobody believed in. And he can't do any better than that.

Ultimately, his argument is one of revulsion: "X; I don't like X; therefore not God." Such arguments against God are basic, common, and ancient. The only difference is that for most people, X is "evil," or "suffering," or "injustice," or "Hitler." But for Yudkowsky, it's "Hitler's dad's sperm."

Lottery of Babylon fucked around with this message at 01:48 on Aug 11, 2014

Djeser
Mar 22, 2013


it's crow time again

Lottery of Babylon posted:

Does anyone better-versed in theology know of any doctrine that says that large events must always have large causes, and that a small change now cannot bring about a large change ten years from now? I don't know of anything in Christianity - or, for that matter, in any other religion - that says small events can't have large consequences. I can think of plenty of theological counterexamples, though (for example, that one man dying two thousand years ago could produce a globe-spanning religion with two billion followers much later).

Christianity has plenty of places where big things come from small events.

quote:

Then He said, “To what shall we liken the kingdom of God? Or with what parable shall we picture it? It is like a mustard seed which, when it is sown on the ground, is smaller than all the seeds on earth; but when it is sown, it grows up and becomes greater than all herbs, and shoots out large branches, so that the birds of the air may nest under its shade.”

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Djeser posted:

Christianity has plenty of places where big things come from small events.

Thanks, I figured there had to be something like that in there.

In the comments, someone calls out Yudkowsky's poor understanding of history and points out that WWII was widely predicted twenty years in advance. Yudkowsky counters that it wouldn't have turned out exactly the same with a different German leader. After that the comments somehow turn into an argument about cryonics. Not a single person points out that "big things come from little things" doesn't come anywhere close to proving God doesn't exist.

"Beyond the Reach of God" is a followup to previous article and great death metal band name "The Magnitude of His Own Folly", which ends thus:

Yudkowsky posted:

I saw that others, still ignorant of the rules, were saying "I will go ahead and do X"; and that to the extent that X was a coherent proposal at all, I knew that would result in a bang; but they said, "I do not know it cannot work". I would try to explain to them the smallness of the target in the search space, and they would say "How can you be so sure I won't win the lottery?", wielding their own ignorance as a bludgeon.

And so I realized that the only thing I could have done to save myself, in my previous state of ignorance, was to say: "I will not proceed until I know positively that the ground is safe." And there are many clever arguments for why you should step on a piece of ground that you don't know to contain a landmine; but they all sound much less clever, after you look to the place that you proposed and intended to step, and see the bang.

Which means it's time for our regular reminder that Yudkowsky's entire belief system is based on blind faith that the universe's laws of physics will magically change in a way that permits immortality, and that his justification for believing this is "You can't prove it won't!"

Runcible Cat
May 28, 2007

Ignoring this post

Yudkowsky posted:

I would try to explain to them the smallness of the target in the search space, and they would say "How can you be so sure I won't win the lottery?", wielding their own ignorance as a bludgeon.
AhaHAHAha! x 3^3^3

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

Wouldn't "escaping the reach of God" by making a really really good implementation of Conway's Game of Life be a dumb idea from the outset, since the simulation still exists in our universe and is (theologically theoretically) still within God's domain? It seems simpler to just argue that free will precludes God, since horrible things like torture, Hitler, and the butterfly effect exist?

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Anticheese posted:

Wouldn't "escaping the reach of God" by making a really really good implementation of Conway's Game of Life be a dumb idea from the outset, since the simulation still exists in our universe and is (theologically theoretically) still within God's domain?

He has some lines dancing around that objection:

Yudkowsky posted:

But if God is watching everywhere, then trying to build an unfair Life just results in the God stepping in to modify your computer's transistors. If the physics you set up in your computer program calls for a sentient Life-entity to be endlessly tortured for no particular reason, the God will intervene. God being omnipresent, there is no refuge anywhere for true horror: Life is fair.

But suppose that instead you ask the question:

Given such-and-such initial conditions, and given such-and-such cellular automaton rules, what would be the mathematical result?

Not even God can modify the answer to this question, unless you believe that God can implement logical impossibilities. Even as a very young child, I don't remember believing that. (And why would you need to believe it, if God can modify anything that actually exists?)

What does Life look like, in this imaginary world where every step follows only from its immediate predecessor?


Anticheese posted:

It seems simpler to just argue that free will precludes God, since horrible things like torture, Hitler, and the butterfly effect exist?

What he ends up arguing is basically the free will thing, except where most people look at "a man can do horrible things" and get upset at the "horrible things" part, Yudkowsky gets upset at the "a man" part because obviously it ought to take several men to do horrible things or else it's just not fair.

In Yudkowsky's worldview, if more Germans agreed that the Holocaust was the correct course of action then the Holocaust would in fact have been completely okay.

Lottery of Babylon fucked around with this message at 07:52 on May 27, 2014

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe
I can't get over this dude's obsession with simulated life. Yeah, sure, Conway's game is Turing complete. That just means "can be used as a computer". Claiming it can thus simulate sentient life is theoretically correct, but no more useful than claiming that you can use your iPhone to simulate sentient life - given enough time, memory, correct algorithms etc.
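
For scale, here is the entire "physics" he proposes to build sentient beings in - a minimal, standard Game of Life stepper in Python (my sketch, not anything from Yudkowsky's writing):

code:

from collections import Counter

def step(live):
    # One Life generation. `live` is a set of (x, y) cells. Standard rules:
    # a dead cell with exactly 3 live neighbours is born; a live cell with
    # 2 or 3 live neighbours survives; everything else dies or stays dead.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider, the classic self-propagating pattern:
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider, shifted one cell diagonally

That's the whole rule set; the Turing-completeness results stack enormous, fragile constructions of gliders and guns on top of it, which is exactly why "it could in principle host sentient life" is doing all the heavy lifting.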

Then his entire disproof hinges on the idea that real God would intervene in the suffering of simulated life. What's the purpose? We know for a fact that God, if he exists, doesn't even intervene in the suffering of real life. In fact, the Yud's obsession with suffering, torture and death just obfuscates what could be a valid point - if we could set up a simulation of the universe which runs entirely on physical laws without divine intervention, how would it differ from our own? If there are no significant differences, what does this say about the possibility of God existing?

Of course, this still hinges on being able to accurately simulate the entire universe in goddamn Conway's Game of Life. It would be more useful as a thought experiment, but that could be summed up in a few sentences and wouldn't let the Yud bloviate about death and torture and throw around computer science terms like some cult leader hopped up on William Gibson novels.

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

Yuddles posted:

But suppose that instead you ask the question:

Given such-and-such initial conditions, and given such-and-such cellular automaton rules, what would be the mathematical result?

Isn't that what you ask by running a goddamn simulation?


Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Mr. Sunshine posted:

In fact, the Yud's obsession with suffering, torture and death just obfuscates what could be a valid point - if we could set up a simulation of the universe which runs entirely on physical laws without divine intervention, how would it differ from our own? If there are no significant differences, what does this say about the possibility of God existing?

That's what it seems like he's going for in the middle section. The trouble is that:

1) He never actually explains why the simulation would turn out exactly like our own. He just says "Hey, maybe in the simulation 'cows' evolve and are eaten by 'wolves'. Or maybe not. But maybe they would, heh, wolves eating cows, wouldn't that sound familiar, nudge nudge wink wink" and moves on from there assuming that any simulation would in fact necessarily be an identical copy of our world. He never justifies this assertion.

2) He frames it less in terms of "The world would be much like our own" and more in terms of "Noted heartless supervillain Genghis Khan tortures puppies for no reason and gets away with it because he's just that evil and the world is just that bleak and uh I don't actually know anything about Genghis Khan"

3) After setting up this argument, he takes a sharp turn into a completely different argument about how small causes having big effects is just wrong.

Mr. Sunshine posted:

Of course, this still hinges on being able to accurately simulate the entire universe in goddamn Conway's Game of Life.

Most of his references are designed to make him look smart rather than to actually be useful. Conway's Game of Life is a cool thing cool nerds have heard of, therefore it needs to be shoehorned into everything even if it's so impractical it's actually counterproductive to bring it up. Knuth's up-arrow notation is how you write Graham's Number* and is therefore a cool thing cool nerds have heard of, therefore it needs to be shoehorned into everything even if it forces him to waste several paragraphs explaining the notation when all that really matters is "it's a big number". They're pointless references designed only to high-five other people who make the same pointless references; they're the computer science equivalent of quoting Monty Python. Since Yudkowsky is in the business of trying to look smart, not of doing useful things, this shouldn't be surprising.

*nb: Graham's Number can't actually be written in up-arrow notation because it's too big, but the way it's written involves up-arrow notation and that's all people remember

Anticheese posted:

Isn't that what you ask by running a goddamn simulation?

No, because if you actually ran the simulation, that pesky God feller might interrupt it. If you ask a computer to compute 2+2, you can't be sure it won't spit out 5 because of God interfering in its circuits to make it give the wrong answer. But if you consider what you would logically have to get by adding 2 plus 2, the answer is 4.

It's a long-winded way of saying, "Okay, if we make a simulation we still can't escape the reach of God. But who cares, let's pretend we could escape God's reach anyhow, then what?"
