Tiggum
Oct 24, 2007

Your life and your quest end here.


Runcible Cat posted:

The Hell of the Imperfect Internet Connections.

The worst hell imaginable.


Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
Hey, could be worse. I mean, the AI could be loving with any of us right now. For all we really know, it might have figured out some way of tricking us into paying money - real money, money that could be spent on milkshakes or chocolate - to be allowed to post on a forum full of misanthropes.

But that's crazy.

ol qwerty bastard
Dec 13, 2005

If you want something done, do it yourself!
I got a bit of dust in my eye today, which, remember, is equivalent to a person being tortured for decades as long as it happens to a lot of people, so it's clear that I must be a simulation in e-Hell. drat! Why didn't the real original-me donate all his money to MIRI so that I would be spared this agony! Woe is me!

Olanphonia
Jul 27, 2006

I'm open to suggestions~

And here I was thinking that if he had something to be mad about, it would be Gould's Non-Overlapping Magisteria (NOMA) principle. Gould's most recent contribution to evo-biology (before his death, of course) was collaborating on punctuated equilibrium as an alternative theory to gradualism, not whatever Yudkowsky is raving about there.

The Vosgian Beast
Aug 13, 2011

Business is slow

Alternate universe Pascal sounds a lot like a parody of a LessWrongite.

Let's compare our universe's Pascal to Yudkowsky by seeing them express similar sentiments

Blaise Pascal posted:

Man is but a reed, the most feeble thing in nature; but he is a thinking reed. The entire universe need not arm itself to crush him. A vapour, a drop of water suffices to kill him. But, if the universe were to crush him, man would still be more noble than that which killed him, because he knows that he dies and the advantage which the universe has over him; the universe knows nothing of this.
All our dignity consists, then, in thought. By it we must elevate ourselves, and not by space and time which we cannot fill. Let us endeavour, then, to think well; this is the principle of morality.

Eliezer Yudkowsky posted:

Me? Strive to be Superman? Pffft. The human species did not become what it is by lifting heavier weights than other species. There is only one superpower that exists in this universe, and those who seek to master it are called Bayesians.

Well I can't tell the difference.

Froolow
Dec 2, 2010

The Vosgian Beast posted:

Alternate universe Pascal sounds a lot like a parody of a LessWrongite.

I wrote the article you're talking about primarily because of a comment in this thread saying (basically) people on Less Wrong treated cryonics more like a religion than a rational attempt to extend their life - I thought people might be interested in an extended discussion of that. Pascal wasn't supposed to sound like a parody, but I guess I must have had the SA mock thread in my head when I was drafting it!

Edit: Sorry, didn't mean to break rule one of the thread; I'd been posting on Less Wrong for a while before this thread came along so I thought it was OK, but I can see how it wasn't a smart thing to do now The Vosgian Beast points it out. Really sorry, won't happen again.

Froolow fucked around with this message at 23:18 on May 3, 2014

Numerical Anxiety
Sep 2, 2011

Hello.

The Vosgian Beast posted:

Alternate universe Pascal sounds a lot like a parody of a LessWrongite.

Let's compare our universe's Pascal to Yudkowsky by seeing them express similar sentiments


I like this game.

Blaise Pascal posted:

All of our reasoning reduces itself to a ceding to sentiment. But fantasy is similar and contrary to sentiment, such that one cannot distinguish between these contraries. One says that my sentiment is fantasy, the other that fantasy is sentiment. It is necessary to have a rule. Reason offers one, but it is pliable in every sense; and thus there is no rule.

Lesswrong posted:

Rationality is the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality; and acts on these beliefs in such a manner as to maximize its chances of achieving whatever goals it has. For humans, this means mitigating (as much as possible) the influence of cognitive biases

Which is already, in a way, to say that their reading of Pascal's Wager is the gullible one - reading Pascal beyond just the Wager, one realizes that the terms of the Wager itself are no less gambled upon, and there is no sound criterion anywhere. One just recurses, wager upon wager, and there's no place where one could actually calculate.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
Blaise Pascal writes like the world's most grandfatherly lecturer. You can almost see him touching his fingertips together as he writes. It's bizarrely wonderful.

The Vosgian Beast
Aug 13, 2011

Business is slow

Froolow posted:

I wrote the article you're talking about primarily because of a comment in this thread saying (basically) people on Less Wrong treated cryonics more like a religion than a rational attempt to extend their life - I thought people might be interested in an extended discussion of that. Pascal wasn't supposed to sound like a parody, but I guess I must have had the SA mock thread in my head when I was drafting it!

See Rule 1 of this thread. Doing this stuff is bad. Please don't do this.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

WHY ISN'T GOOGOLOGY A RECOGNIZED FIELD OF MATH

THE STRANGE, VAST THOUGHTS OF ELIEZER YUDKOWSKY posted:

Why isn’t googology (the study of large numbers) a recognized subfield of mathematics with its own journal? There’s all these different ways of getting large numbers, and different mathematical questions that yield large numbers; and yet all those vast structures are comparable, being either greater, less, or the same.

Yes, the real numbers have a total ordering. That's a useful property, but not exactly one that mandates the existence of an entire field of math to handle big numbers.

Of course, plenty of other fields of math already compare the sizes of large numbers. When examining the limit behavior of functions, it is natural and useful to ask how quickly they grow, and which goes to infinity more quickly. This has direct practical applications in, say, computer science (which Yudkowsky ought to know about given his self-proclaimed expertise); algorithms that run in polynomial time are much more desirable than algorithms that run in exponential time, because polynomials grow less quickly than exponential functions. But these are tools in various other fields, not a field unto themselves.
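
(A concrete aside, not from the post: the polynomial-versus-exponential point is easy to see in a few lines of Python. The exponents below are arbitrary, chosen only to show the shape of the comparison.)

code:

# Quick illustration of the growth-rate point above: a cubic-time step count
# next to an exponential one. The specific exponents are arbitrary.
for n in (10, 20, 40, 80):
    poly = n ** 3        # e.g. steps of a cubic-time algorithm
    expo = 2 ** n        # e.g. steps of a brute-force exponential algorithm
    print(f"n={n:>2}  n^3={poly:>9,}  2^n={expo:>27,}  ratio={expo / poly:.3g}")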

THE STRANGE, VAST THOUGHTS OF ELIEZER YUDKOWSKY posted:

The process of considering how to construct the largest possible computable numbers naturally yields the recursive ordinals and the concept of ordinal analysis. All mathematical knowledge is in a sense contained in the Busy Beaver series of huge numbers.

The Busy Beaver series sequence (dammit Yudkowsky) represents the maximum possible "score" (number of 1's printed on a blank tape) of 2-symbol n-state Turing machines. It's an interesting problem with some neat applications. But to say that the sequence contains all mathematical knowledge? Are you joking? Does the sequence contain within itself a proof that the square root of 2 is irrational? The diagonal argument for the existence of different cardinalities of infinity? All the theorems you learned in high school geometry and forgot? Anything at all of significance about group cohomology? This is like saying that all artistic knowledge is in a sense contained in the Hunger Games.
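
(Another aside, purely illustrative: the "score" definition is easy to see in code. The sketch below simulates the known 2-state, 2-symbol busy beaver champion on a blank tape; the transition table is the standard published one, and the expected output is 6 steps with four 1's written.)

code:

# Minimal Turing machine simulator (illustrative sketch). The transition table
# below is the known 2-state, 2-symbol busy beaver champion: started on a blank
# tape it halts after 6 steps with four 1's on the tape, i.e. a "score" of 4.
from collections import defaultdict

# (state, symbol) -> (write, move, next_state); 'H' means halt.
CHAMPION = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),
}

def run(table, max_steps=10_000):
    tape = defaultdict(int)      # blank tape of 0's, unbounded in both directions
    state, pos, steps = 'A', 0, 0
    while state != 'H' and steps < max_steps:
        write, move, state = table[(state, tape[pos])]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())   # (steps taken, number of 1's left on the tape)

print(run(CHAMPION))   # expected: (6, 4)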

I think it's easy for someone who knows nothing about a field to pick up one piece of information and assume that that's the only thing of significance in that field. Yudkowsky famously does this with probability theory and decision theory, deciding that they can pretty much be reduced to Bayes' rule. (Naturally, Yudkowsky's precious Bayes' rule is another piece of mathematical knowledge absent from the Busy Beavers.) Someone linked him to a Wikipedia article about the Busy Beaver series sequence once, therefore he is an expert on it and knows all there is to know about all of mathematics.

THE STRANGE, VAST THOUGHTS OF ELIEZER YUDKOWSKY posted:

You’d think there’d be more Math done on that, rather than there just being a recently-formed Googology Wikia.

It's like asking why there isn't a mathematical field of "variabology" to study variables. After all, variables can take on all sorts of different values, surely they must be of interest to mathematicians! And they are - but as aspects of other mathematical fields, not as an isolated field unto themselves. As I said before, people have done research that involves the behavior of large numbers, but that doesn't make large numbers its own field of research.

I did a Google search for Googology and every hit was from the Wikia. I then searched for "Googology -wikia" and got a subreddit with one post, a defunct Geocities page, and a Facebook page with 3 likes. The Wikia main page's "list of googolists" is just a random list of mathematicians who did anything involving a large number (usually alternate notations) in the process of investigating another problem.

THE STRANGE, VAST THOUGHTS OF ELIEZER YUDKOWSKY posted:

Three hypotheses come to mind:

1) The process of determining which two large numbers is larger, is usually just boring tedious legwork and doesn’t by itself produce new interesting insights.

2) By Friedman’s Grand Conjecture, most proofs about numbers can be formalized in a system with an ordinal no greater than ω^3 (omega cubed). Naturally arising huge numbers like Skewes’ Number or Graham’s Number are tiny and easily analyzed by googological standards. Few natural math problems are intricate enough or recursive enough to produce large numbers that would be difficult to analyze.

3) Nobody’s even thought of studying large numbers, or it seems like a ‘silly’ subject to mathematicians and hence is not taken seriously. (This supposes Civilizational Incompetence.)

What does it even mean to be tiny by "googological standards"? Because the real numbers go on forever, any given real number is dwarfed by countless larger real numbers. If you zoom out far enough (and a "googologist" presumably would if that were actually a thing), there is nothing that doesn't eventually look small. Surely, then, every number is tiny by "googological standards"? (How can a field that doesn't exist even have standards?)

Yudkowsky proposes three solutions to why there are no googologists: either googology is stupid and boring; or googology is stupid and boring; or those dumb plebe mathematicians only think googology is stupid and boring because they are all collectively incompetent and only Yudkowsky sees the strings that control the system.

The question Yudkowsky never addresses is burden of proof. It is easy to ask why there is no mathematical journal of googology, or variabology, or fractionology (the study of small numbers), or eggsaladology (the study of mathematical egg salad), or thousands of other made-up fields. The default answer is "Why should there be such a journal?", and it is up to the challenger to explain why their made-up field is more valid and worthy than variabology. Yudkowsky never really gives an argument for why googology should be a thing, and when determining why it isn't he jumps straight to "maybe everyone else is dumb".

To put it in terms you could understand, Yudkowsky: my Bayesian prior says that it's more likely that googology isn't real than that it is, and you have presented no evidence to shift that. And my Bayesian prior says that rather than every other mathematician being collectively incompetent, it's probably just you.

Ineffable
Jul 4, 2012
That's amazing. What would researching "googology" even mean? Eliezer seems to think that ordering large numbers merits an entire field of mathematics. :psyduck:

I'm sort of surprised he's actually heard of the ordinals. Although I'm fairly sure they weren't created to "construct the largest possible computable numbers".

CROWS EVERYWHERE
Dec 17, 2012

CAW CAW CAW

Dinosaur Gum
Why doesn't he just shut up and take a loving Maths course at university/college :psyduck:

Oh right, because he's awful at it and only likes his own bullshit fantasy maths.

potatocubed
Jul 26, 2012

*rathian noises*

CROWS EVERYWHERE posted:

Why doesn't he just shut up and take a loving Maths course at university/college :psyduck:

Oh right, because he's awful at it and only likes his own bullshit fantasy maths.

Behold:

Yudkowsky posted:

I remember (dimly, as human memories go) the first time I self-identified as a "Bayesian". Someone had just asked a malformed version of an old probability puzzle, saying:

quote:

If I meet a mathematician on the street, and she says, "I have two children, and at least one of them is a boy," what is the probability that they are both boys?

In the correct version of this story, the mathematician says "I have two children", and you ask, "Is at least one a boy?", and she answers "Yes". Then the probability is 1/3 that they are both boys.

But in the malformed version of the story—as I pointed out—one would common-sensically reason:

quote:

If the mathematician has one boy and one girl, then my prior probability for her saying 'at least one of them is a boy' is 1/2 and my prior probability for her saying 'at least one of them is a girl' is 1/2. There's no reason to believe, a priori, that the mathematician will only mention a girl if there is no possible alternative.

So I pointed this out, and worked the answer using Bayes's Rule, arriving at a probability of 1/2 that the children were both boys. I'm not sure whether or not I knew, at this point, that Bayes's rule was called that, but it's what I used.

And lo, someone said to me, "Well, what you just gave is the Bayesian answer, but in orthodox statistics the answer is 1/3. We just exclude the possibilities that are ruled out, and count the ones that are left, without trying to guess the probability that the mathematician will say this or that, since we have no way of really knowing that probability—it's too subjective."

I responded—note that this was completely spontaneous—"What on Earth do you mean? You can't avoid assigning a probability to the mathematician making one statement or another. You're just assuming the probability is 1, and that's unjustified."

To which the one replied, "Yes, that's what the Bayesians say. But frequentists don't believe that."

And I said, astounded: "How can there possibly be such a thing as non-Bayesian statistics?"

ol qwerty bastard
Dec 13, 2005

If you want something done, do it yourself!

Lottery of Babylon posted:

The Busy Beaver series sequence (dammit Yudkowsky) represents the maximum possible "score" (number of 1's printed on a blank tape) of 2-symbol n-state Turing machines. It's an interesting problem with some neat applications. But to say that the sequence contains all mathematical knowledge? Are you joking? Does the sequence contain within itself a proof that the square root of 2 is irrational? The diagonal argument for the existence of different cardinalities of infinity? All the theorems you learned in high school geometry and forgot? Anything at all of significance about group cohomology? This is like saying that all artistic knowledge is in a sense contained in the Hunger Games.

This seems to be, again, an area in which he's come upon an idea that isn't exactly wrong, but when you think about it for more than five seconds, it turns out to be trivial or meaningless.

What he's saying is that if you knew the busy beaver numbers (which we don't, because they're uncomputable, duh - I think the biggest one we know is BB4, and it's unlikely we'll ever figure out the fifth, much less the sixth), you could then very carefully set up a Turing machine for any given mathematical theorem which would halt if the theorem is true or continue to infinity if not. This would then reduce all mathematical proofs to simply running your n-state Turing machine for BBn steps and checking whether it has halted.

But there's a pretty glaring flaw in this: getting that busy beaver number in the first place would necessarily involve proving the halting behaviour of all those Turing machines. So what it comes down to is that he's saying "if you prove a theorem (along with 2^n other theorems), you can later go and prove that theorem by referring to your original proof." Wow, what a groundbreaking result.
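
(Toy sketch of the reduction being described, with made-up numbers since the real busy beaver values are unknown: encode a statement as a search for a counterexample, so "the machine never halts" means "the statement is true". If you somehow knew the busy beaver bound for a machine of the right size, running the search that many steps without a halt would count as a proof. Goldbach's conjecture is used here only as a familiar example, and the step budget is an arbitrary stand-in for the unknowable bound.)

code:

# Toy illustration only: a "machine" that searches for a counterexample to
# Goldbach's conjecture and halts if it finds one. If you knew the busy beaver
# bound for a Turing machine encoding this search, running it that many steps
# without a halt would settle the conjecture. Here the step budget is just an
# arbitrary, made-up cutoff.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample_search(step_budget=2_000):
    """Return the first even n > 2 that is not a sum of two primes (halt),
    or None if the step budget runs out before a counterexample appears."""
    n = 4
    for _ in range(step_budget):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n        # counterexample found: the "machine" halts
        n += 2
    return None             # budget exhausted; nothing settled, just out of steps

print(goldbach_counterexample_search())   # prints None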

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

potatocubed posted:

Behold:

quote:

And I said, astounded: "How can there possibly be such a thing as non-Bayesian statistics?"

Holy poo poo, he's basically just stated that he's willing to make poo poo up to justify his own answers. "This math problem must be wrong because I didn't get the right answer. Therefore, the problem was with the person who determined what the right answer was and with the assumptions that THEY made. Therefore, Bayes."

E: To clarify, this is a problem from the well-known, well-documented set known as "A mathematician says:". The answer is ALWAYS the counterintuitive, quirky-mindset one. Yes, there is probably a fifty-fifty on the gender of the remaining child, but that's not the point. This is an "A mathematician says" problem. The point of them is to make you think about the actual question being presented and the maths behind it, not to roleplay the part of the mathematician and think through WHY they would say that to a random person on the street, or why their drink order is so peculiar, or why they're being so evasive in answering your 'or'-based proposition.

Saying "Well the mathematician wouldn't say that if they didn't mean X" is the equivalent to claiming that the box actually collapses after impact and therefore doesn't move anywhere in a physics puzzler. It's changing the facts and adding information to allow for your answer to be right.

Somfin fucked around with this message at 15:09 on May 4, 2014

Basil Hayden
Oct 9, 2012

1921!
Now I want to know what these guys would make of, say, the St. Petersburg paradox.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Basil Hayden posted:

Now I want to know what these guys would make of, say, the St. Petersburg paradox.

It's a waste of money compared to putting more money toward Yudkowsky's living expenses AI research.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

potatocubed posted:

If the mathematician has one boy and one girl, then my prior probability for her saying 'at least one of them is a boy' is 1/2 and my prior probability for her saying 'at least one of them is a girl' is 1/2. There's no reason to believe, a priori, that the mathematician will only mention a girl if there is no possible alternative.

And therefore the probability that she says one of those two things is 1... which Yudkowsky has previously established isn't a probability. He can't even be internally consistent.

The problem with this argument is that the priors aren't really justified. Yudkowsky's priors assume that the mathematician can only make statements of the form "at least one of them is a [gender]", and that the mathematician always makes exactly one statement at random chosen uniformly between true statements of that form. But why should either of those be the case? There's no reason for the mathematician to speak that way.

It makes more sense to me that if the mathematician tells you "at least one of them is a boy", it's because she has some reason to want you to know whether or not at least one of them is a boy, not because her brain is spewing random Markov chains and made a random statement chosen uniformly from among true statements of a certain form. Taken in that light, the correct priors are: "at least one of them is a boy" has probability 3/4 and "neither is a boy" has probability 1/4, and in the BG case she would make the former statement with probability 1.

Of course, maybe that's not right either. Maybe racist grandpa is present and really really wants grandsons, so she says whatever makes it sound like she has as many sons as possible. Then the priors would be "both are boys" with probability 1/4, "at least one is a boy" with probability 1/2, and "[she says nothing at all]" with probability 1/4. Or maybe in the BG case she'd just come out and say "One is a boy and one is a girl" with probability .5. Or maybe she has any number of other possible statements she could make and various non-intuitive probabilities assigned to each of them when deciding what to say. We don't know!

One of the major vulnerabilities of Bayes' rule is that it requires you to choose your priors intelligently, and Bayes' rule itself doesn't tell you anything about how to come up with your initial priors. The world is complicated, situations are complicated, psychology is complicated - coming up with sensible priors is really hard! But Yudkowsky never seems to care about that. So he asserts that his AI will be able to handle everything perfectly because it will be ~Bayesian~, but never explains how the AI will come up with prior probabilities for every conceivable event. (He occasionally name-drops that one "universal prior", but never deals with the inconvenient fact that it's not computable in any useful time.) Similarly, here he asserts that he's right because Bayes' rule says he must be, but it only says that because of the weird priors he pulls from the ether.

The interpretation Yudkowsky dismisses makes no assumptions about the mathematician's psychology, the motivation behind the statement, other statements the mathematician could have made in this universe, what statement the mathematician would have made in parallel universes in which the children had different genders, what probability the mathematician assigns to each of the true statements that could be made, or anything else of that sort. It simply says, "Given that what the mathematician says is true, we must be in one of these three universes (GB, BG, BB), therefore the probability that we are in GB or BG is 2/3". Yudkowsky is basically making up new unjustified information with his priors by asserting without proof the process by which the mathematician determines what to say. He doesn't get a different answer because he's the only one making use of helpful available information, he gets a different answer because he's the only one making up information that isn't actually there.
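
(For anyone who'd rather simulate than argue about priors in the abstract: the sketch below, which is purely illustrative, draws random two-child families and applies the two readings side by side - conditioning on the fact itself versus conditioning on a mathematician who announces one true "at least one ..." statement, chosen at random in the mixed case. The first comes out near 1/3, the second near 1/2.)

code:

# Purely illustrative simulation. Two ways the information can arrive:
#   "fact"   - you learn the fact "at least one is a boy" whenever it is true
#              (condition on the fact itself)                       -> about 1/3
#   "speech" - Yudkowsky's added assumption: with one boy and one girl, the
#              mathematician says "at least one boy" or "at least one girl"
#              with equal probability                               -> about 1/2
import random

def simulate(trials=200_000):
    fact_heard = fact_both = speech_heard = speech_both = 0
    for _ in range(trials):
        kids = [random.choice("BG"), random.choice("BG")]
        boys = kids.count("B")
        # Reading 1: condition directly on the fact being true.
        if boys >= 1:
            fact_heard += 1
            fact_both += (boys == 2)
        # Reading 2: the mathematician announces one true "at least one ..." line.
        if boys == 2:
            announced = "boy"
        elif boys == 0:
            announced = "girl"
        else:
            announced = random.choice(["boy", "girl"])
        if announced == "boy":
            speech_heard += 1
            speech_both += (boys == 2)
    return fact_both / fact_heard, speech_both / speech_heard

print(simulate())   # roughly (0.333, 0.5)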

Basil Hayden posted:

Now I want to know what these guys would make of, say, the St. Petersburg paradox.

They'd say "Shut up and multiply." Remember their love of Pascal's Wager and all their variations on it, whether it be cyber-hell or cryonics or 8-lives-per-dollar. They would insist that the correct answer is in fact to pay any price for one chance at the game, because the expected value says so.

Here's a snippet from one of their articles trying to refute the idea that "rationalists" signing up for cryonics more than normal people do proves that they're pretty gullible:

quote:

Imagine a lottery run by an incompetent official who accidentally sets it up so that the average payoff is far more than the average ticket price. For example, maybe the lottery sells only ten $1 tickets, but the jackpot is $1 million, so that each $1 ticket gives you a 10% chance of winning $1 million.

Goofus hears about the lottery and realizes that his expected gain from playing the lottery is $99,999. "Huh," he says, "the numbers say I could actually win money by playing this lottery. What an interesting mathematical curiosity!" Then he goes off and does something else, since everyone knows playing the lottery is what stupid people do.

Gallant hears about the lottery, performs the same calculation, and buys up all ten tickets.

The relevant difference between Goofus and Gallant is not skill at estimating the chances of winning the lottery. We can even change the problem so that Gallant is more aware of the unlikelihood of winning than Goofus - perhaps Goofus mistakenly believes there are only five tickets, and so Gallant's superior knowledge tells him that winning the lottery is even more unlikely than Goofus thinks. Gallant will still play, and Goofus will still pass.

The relevant difference is that Gallant knows how to take ideas seriously.

Of course, they can only keep themselves from looking foolish by constructing their example so that they can 100% guarantee victory by buying all the tickets. But that's not how long-odds bets usually work. You can't guarantee a win on Pascal's Wager by worshipping all gods (and all non-gods) simultaneously. You can't guarantee a win in the St. Petersburg lottery by playing it infinitely many times (you'd go broke too quickly). You can't guarantee immortality by donating to every internet AI crank and signing up for every cryonics lab.
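
(A rough sketch of the St. Petersburg point, using the standard pot-starts-at-$2-and-doubles version of the game: the expected value is infinite, but the average payout over any affordable number of plays stays small.)

code:

# Rough sketch of the St. Petersburg game: the pot starts at $2 and doubles for
# each consecutive head; you are paid the pot when the first tail comes up.
# The expected value is infinite, but the average payout over any number of
# plays you could realistically afford grows painfully slowly (roughly with
# the logarithm of the number of plays).
import random

def one_play():
    pot = 2
    while random.random() < 0.5:   # heads: pot doubles, keep flipping
        pot *= 2
    return pot

for n in (100, 10_000, 1_000_000):
    total = sum(one_play() for _ in range(n))
    print(f"{n:>9,} plays: average payout ${total / n:.2f}")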

Lottery of Babylon fucked around with this message at 15:49 on May 4, 2014

HEY GUNS
Oct 11, 2012

FOPTIMUS PRIME

Somfin posted:

Blaise Pascal writes like the world's most grandfatherly lecturer. You can almost see him touching his fingertips together as he writes. It's bizarrely wonderful.

And then when he died they found this sewn into his jacket. I love that story, but there's no such thing as a bad Pascal story.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
I honestly can't tell what Yudowsky's argument is in his solution to the logic puzzle posed to him earlier.

Is it that he can pick any priors he wants based on specific details of the situation he's examining, even if those priors are wrong or don't make sense, and that the Bayesian way of doing things is to pick your priors arbitrarily and make your justifications that way?

Because I'm pretty sure that no matter how you slice the problem, you either get the 1/3 answer or you make a weird unreasonable assumption about how the woman acts. Does the woman always enumerate the number of boys? Then you have 1/4 "both", 1/2 "one boy", and 1/4 "no boys", and eliminating that last 1/4 still leaves 1/4 both, 1/2 "one boy" out of a remaining whole of 3/4.

What am I failing to understand here?

Brofessor Slayton
Jan 1, 2012

Lottery of Babylon posted:

Of course, they can only keep themselves from looking foolish by constructing their example so that they can 100% guarantee victory by buying all the tickets.

Funnily enough, there's an actual historical example of that happening. It still doesn't really work for cryonics, as we don't know that the odds of being revived afterwards are even non-zero (there have been no successful unfreezings of live humans).

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Krotera posted:

I honestly can't tell what Yudowsky's argument is in his solution to the logic puzzle posed to him earlier.

Is it that he can pick any priors he wants based on specific details of the situation he's examining, even if those priors are wrong or don't make sense, and that the Bayesian way of doing things is to pick your priors arbitrarily and make your justifications that way?

Because I'm pretty sure that no matter how you slice the problem, you either get the 1/3 answer or you make a weird unreasonable assumption about how the woman acts. Does the woman always enumerate the number of boys? Then you have 1/4 "both", 1/2 "one boy", and 1/4 "no boys", and eliminating that last 1/4 still leaves 1/4 both, 1/2 "one boy" out of a remaining whole of 3/4.

What am I failing to understand here?

Yudkowsky believes that the mathematician always decides what statement to make as follows:

a) With two boys, the mathematician will always say "At least one is a boy."
b) With two girls, the mathematician will always say "At least one is a girl."
c) With one boy and one girl, the mathematician has a 50% probability of saying "At least one is a boy" and a 50% probability of saying "At least one is a girl."

Given this extra assumption, his answer of 1/2 is mathematically correct, since the answer of "At least one is a boy" eliminates not only the case where both are girls but also half of the cases (probability mass-wise) where one is a boy and one is a girl. The problem is that he pulls this extra assumption out of his rear end.
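
(The same conclusion in exact arithmetic, as an illustrative sketch: write down which event you condition on and Bayes' rule hands you either answer.)

code:

# Exact arithmetic for the two readings (illustrative sketch, nothing rounded).
from fractions import Fraction

p_bb, p_mixed, p_gg = Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)

# Reading 1: condition on the *fact* "at least one is a boy".
p_fact = p_bb + p_mixed                                    # P(at least one boy) = 3/4
print(p_bb / p_fact)                                       # 1/3

# Reading 2 (Yudkowsky's added assumption): condition on the mathematician
# *choosing to say* "at least one is a boy", picking at random in the mixed case.
p_say_boy = p_bb * 1 + p_mixed * Fraction(1, 2) + p_gg * 0  # = 1/2
print((p_bb * 1) / p_say_boy)                               # 1/2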

Lottery of Babylon fucked around with this message at 19:06 on May 4, 2014

potatocubed
Jul 26, 2012

*rathian noises*
According to the comments on that post ("My Bayesian Enlightenment" if you want to look it up) you need to also consider the possibility that the woman is a liar, an alien, or 100% incapable of basic human social interaction.

...I'll leave this space here for you to insert your own jokes.

Anyway, I went comment trawling and found some gold:

quote:

That's the simplest set of assumptions consistent with the problem. But the quote itself is inconsistent with the normal rules of social interaction. Saying "at least one is a boy" takes more words to convey less information than saying "both boys" or "one of each". I think it's perfectly reasonable to draw some inference from this violation of normal social rules, although it is not clear to me what inference should be drawn.

:smug: posted:

I think a more reasonable conclusion is: yes indeed it is malformed, and the person I am speaking to is evidently not competent enough to notice how this necessarily affects the answer and invalidates the familiar answer, and so they may not be a reliable guide to probability and in particular to what is or is not "orthodox" or "bayesian." What I think you ought to have discovered was not that you were Bayesian, but that you had not blundered, whereas the person you were speaking to had blundered.

Fake Edit: One of the commenters actually poked a neat hole in his argument, thus:

quote:

Let A = both boys, B = at least one boy. The prior P(B) is 3/4, while P(A) = 1/4. The mathematician's statement instructs us to find P(A|B), which by Bayes is equal to 1/3.

Under Eliezer's interpretation, however, the question is to find P(A|C), where C = the mathematician says at least one boy (*as opposed to saying at least one girl).

In other words, yes, Eliezer deliberately introduced obfuscation into his formula in order to get the outcome he wanted.

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.

Krotera posted:

I honestly can't tell what Yudowsky's argument is in his solution to the logic puzzle posed to him earlier.

Is it that he can pick any priors he wants based on specific details of the situation he's examining, even if those priors are wrong or don't make sense, and that the Bayesian way of doing things is to pick your priors arbitrarily and make your justifications that way?

Because I'm pretty sure that no matter how you slice the problem, you either get the 1/3 answer or you make a weird unreasonable assumption about how the woman acts. Does the woman always enumerate the number of boys? Then you have 1/4 "both", 1/2 "one boy", and 1/4 "no boys", and eliminating that last 1/4 still leaves 1/4 both, 1/2 "one boy" out of a remaining whole of 3/4.

What am I failing to understand here?

Yudowski doesn't seem to distinguish between thought experiments, in which you can set the parameters yourself, and exercises, in which the parameters are fixed by someone else.

The purpose of an exercise is to familiarize you with a new or difficult concept. In order to illustrate the intended concept without distracting the reader with unrelated matters, an exercise has specific parameters that are to be taken as unconditionally true. In the one-boy-two-boys problem, the mathematician is a vehicle for the parameters. Her message could just as easily have been given by the narrator of the problem directly. Yudowski, however, wants to treat the mathematician as part of the exercise, which, of course, defeats the entire purpose of the exercise. It's like a physics student saying "But cows aren't spherical or frictionless!"

Once you make the mathematician part of the exercise, though, the problem is unsolvable (too many variables) unless you add additional assumptions. Which of course is exactly what he does - he assumes that the mathematician will either say "I have at least one girl" or "I have at least one boy," choosing randomly if both are true. He thinks he's analyzing the experiment and proving its shortcomings, but what he's actually done is taken it apart, put it back together in a new, less useful form, solved it, and then used that solution as proof that the original solution was incorrect.

So really it's more like a physics student saying "But cows are actually cubical, so the solution you've given me is wrong."

I highly doubt the incident he's describing ever happened - at the very least, his "friend" is a moron for saying that frequentists and Bayesians get different answers for the problem. The exercise is specifically meant to illustrate the frequentist principle of conditional probability. It makes no sense to apply Bayesian methods to it. Yudowski had to completely reinterpret the exercise to get things to come out the way he wanted them to, and he thinks it's the frequentists who don't understand the exercise.


EDIT: As usual, someone else said it better than me while I was typing:

Lottery of Babylon posted:

The problem is that he pulls this extra assumption out of his rear end.

Wales Grey
Jun 20, 2012
I'm more curious why Bayes would apply at all to this exercise. You could use the ratio of men to women as a prior to determine a more "precise" probability, but doing so would defeat the purpose of the exercise, which I assume is to illustrate frequentist probability. I didn't know anything about frequentist probability prior to reading this exercise, and I came to the conclusion that frequentist probability has something to do with sets of outcomes.

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.

Wales Grey posted:

I'm more curious why Bayes would apply at all to this exercise. You could use the ratio of men-to-women as a prior to determine a more "precise" probability but doing so would defeat the purpose of the exercise, which I assume is to illustrate frequentist probability. I didn't know anything about freqentist probability prior to reading this exercise, and I came to the conclusion that frequentist probability has something to do with sets of outcomes.

"Frequentist" basically just means "not-Bayesian"; it's the school of thought which equates "Event E occurs with probability P in scenario S" with "if we repeated scenario S a very large number of times, the frequency with which event E occurred would approach P.

And yes, Bayes doesn't apply at all to this exercise. Actually, now that I've looked up the definition of frequentist inference, I realize that that doesn't apply to the exercise either. Both Bayesian and frequentist inference are methods of defining "the probability of an event". However, in this scenario, the probabilities can be calculated from first principles - the odds of boy vs. girl are assumed to be even, and everything else is simple combinatorics. It's a demonstration of the formula for conditional probability, P(A|B) = P(A^B)/P(B). Which, if I'm not mistaken, is something that is required for Bayes' formula to make any sense at all.
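
(A minimal sketch of that formula applied to the exercise, assuming the equal-probability outcomes the exercise stipulates: straight enumeration, no inference framework needed.)

code:

# Direct enumeration of the exercise's sample space (illustrative sketch).
# P(A|B) = P(A and B) / P(B), with every birth-order outcome equally likely.
from itertools import product
from fractions import Fraction

outcomes = list(product("BG", repeat=2))                 # BB, BG, GB, GG
at_least_one_boy = [o for o in outcomes if "B" in o]
both_boys = [o for o in at_least_one_boy if o == ("B", "B")]

print(Fraction(len(both_boys), len(at_least_one_boy)))   # 1/3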

Yudowski wanted to create a scenario where frequentist inference gave a "wrong" answer and Bayesian inference gave the "right" answer, so he mutilated the problem into a new form and found the (correct) answer to his new exercise. He then claims that his solution applies to the original problem (or rather, he pretends that he didn't change the problem at all), and in a colossal case of projection suggests that the only reason "frequentists" (read: every mathematician who doesn't kowtow to his brilliance) give the answer of 1/3 is that frequentist inference doesn't work.

SamDabbers
May 26, 2003



Apparently even the illustrious Stephen Hawking thinks we should be thinking about the ramifications of a potential scary evil AI. Could Yudkowsky actually be on to something here? Is Hawking actually crazy? Stay tuned; more at 11.

Wales Grey
Jun 20, 2012

SamDabbers posted:

Apparently even the illustrious Stephen Hawking thinks we should be thinking about the ramifications of a potential scary evil AI. Could Yudkowsky actually be on to something here? Is Hawking actually crazy? Stay tuned; more at 11.

Congrats, you've managed to find a headline that is a question, and the answer isn't "no".

Tinestram
Jan 13, 2006

Excalibur? More like "Needle"

Grimey Drawer

Slime posted:

These are people who have a very naive view of utilitarianism. They seem to forget not only that it would be hilariously impossible to quantify suffering, but that even if you could, 50,000 people suffering at a magnitude of 1 is better than 1 person suffering at a magnitude of 40,000. Getting a speck of dust in your eye is momentarily annoying, but a minute later you'll probably forget it ever happened. Torture a man for 50 years and the damage is permanent, assuming he's still alive at the end of it. Minor amounts of suffering distributed equally among the population would be far easier to soothe and heal.

Basically, even if you could quantify suffering and reduce it to a mere mathematical exercise like they seem to think you can, they'd still be loving wrong.

I know this is from way back on page one, and pardon me for such a long callback, but I think I have another decent response to this (speaking of equal distribution):

You can lie on a bed of 50,000 nails, or one large spike. Which would you choose?

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

runupon cracker posted:

I know this is from way back on page one, and pardon me for such a long callback, but I think I have another decent response to this (speaking of equal distribution):

You can lie on a bed of 50,000 nails, or one large spike. Which would you choose?

More importantly, are the wounds from those two events remotely comparable? Is a slight indentation in the skin really comparable to complete impalement?

GIANT OUIJA BOARD
Aug 22, 2011

177 Years of Your Dick
All
Night
Non
Stop

Krotera posted:

I honestly can't tell what Yudowsky's argument is in his solution to the logic puzzle posed to him earlier.

Is it that he can pick any priors he wants based on specific details of the situation he's examining, even if those priors are wrong or don't make sense, and that the Bayesian way of doing things is to pick your priors arbitrarily and make your justifications that way?

Because I'm pretty sure that no matter how you slice the problem, you either get the 1/3 answer or you make a weird unreasonable assumption about how the woman acts. Does the woman always enumerate the number of boys? Then you have 1/4 "both", 1/2 "one boy", and 1/4 "no boys", and eliminating that last 1/4 still leaves 1/4 both, 1/2 "one boy" out of a remaining whole of 3/4.

What am I failing to understand here?

The problem is you know about math. As someone who, like Yudkowsky, doesn't know about math, I can actually answer this for you. We have four possible outcomes: BB, BG, GB, and GG. We remove GG and we are left with the one in three chance. This is apparently the proper way to do it, but as Somfin said, it's counterintuitive. Yudkowski is looking at these three results we're left with and saying "BG and GB aren't meaningfully distinct answers." He's looking at it like this: we know that at least one child is a boy, so we're not trying to determine the probability considering both genders anymore. We're just trying to figure out the gender of the remaining child, who has a 50/50 chance of being a boy, therefore the probability that both children are boys is 50/50.

A Man With A Plan
Mar 29, 2010
Fallen Rib

SamDabbers posted:

Apparently even the illustrious Stephen Hawking thinks we should be thinking about the ramifications of a potential scary evil AI. Could Yudkowsky actually be on to something here? Is Hawking actually crazy? Stay tuned; more at 11.

What's hilarious about this article is that it mentions four institutes/centers that are working on the problem of dangerous AIs. Sounds impressive; one is even at Cambridge. Then you go look at their staff pages and see that they have a ridiculous amount of overlap. One of the article's authors, Max Tegmark, is involved with all four, and there are probably a dozen people involved in at least two of them.

Also funny is looking at MIRI's staff page. Out of 20 people or so, one has a PhD in computer science and another has a PhD in math. Maybe half the total have college degrees. Truly a bunch of world-changers.

http://thefutureoflife.org
http://cser.org/about/who-we-are/
http://www.fhi.ox.ac.uk/about/staff/
http://intelligence.org/team/

In case you're too lazy to click through to them.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

A Man With A Plan posted:

Out of 20 people or so, one has a PhD in computer science and another has a PhD in math. Maybe half the total have college degrees. Truly a bunch of world-changers.

Ah, but you see, these aren't just normal people. These are Yudkowskyites, well versed in the True Way of Yud, privy to the Secrets of Bayes, and therefore far more potent, mentally, than normal college-educated morons who care about politics (the mindkiller!) or [REDACTED] (category-5 basilisk-class memetic virus!) They will bring about the Good AI and save the world from the Evil AI, and then billions upon billions of simulations will retroactively have suffered!

Hate Fibration
Apr 8, 2013

FLÄSHYN!

SamDabbers posted:

Apparently even the illustrious Stephen Hawking thinks we should be thinking about the ramifications of a potential scary evil AI. Could Yudkowsky actually be on to something here? Is Hawking actually crazy? Stay tuned; more at 11.

I always feel intensely embarrassed on behalf of scientists who venture outside of their expertise and hold forth in public. It's always especially bad with physicists too.

Why is it always physicists?

Also, with regard to the specks of dust thing, there are an overwhelming number of potential problems with the utility function. Why is the amount of suffering incurred by dust specks not something that approaches a horizontal asymptote? Why is the utility function single-valued? Why can't it output an n-tuple with a dictionary ordering? Etc., etc.
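
(A toy sketch of the first of those questions, with invented constants: if speck disutility saturates toward an asymptote below the disutility assigned to torture, then no number of specks ever outweighs the torture.)

code:

# Toy sketch: a saturating (asymptotic) disutility for dust specks never
# crosses a fixed disutility assigned to torture, no matter how many specks.
# The constants here are invented purely for illustration.
import math

TORTURE_DISUTILITY = 1_000_000.0
SPECK_CAP = 1_000.0          # total speck suffering approaches this asymptote

def speck_disutility(n_specks):
    # Bounded above by SPECK_CAP for every finite number of specks.
    return SPECK_CAP * (1 - math.exp(-n_specks / 1e9))

for n in (1, 10**6, 10**12, 10**100):
    print(f"{n:.0e} specks -> {speck_disutility(n):.1f} (torture = {TORTURE_DISUTILITY})")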

Hate Fibration fucked around with this message at 03:43 on May 5, 2014

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

GIANT OUIJA BOARD posted:

The problem is you know about math. As someone who, like Yudkowsky, doesn't know about math, I can actually answer this for you. We have four possible outcomes: BB, BG, GB, and GG. We remove GG and we are left with the one in three chance. This is apparently the proper way to do it, but as Somfin said, it's counterintuitive. Yudkowski is looking at these three results we're left with and saying "BG and GB aren't meaningfully distinct answers." He's looking at it like this: we know that at least one child is a boy, so we're not trying to determine the probability considering both genders anymore. We're just trying to figure out the gender of the remaining child, who has a 50/50 chance of being a boy, therefore the probability that both children are boys is 50/50.

To give him credit, if he's making the assumption Lottery of Babylon's making, it's not an "I don't know math" situation but an "I make very specific assumptions about human behavior without evidence, but I'm allowed to be certain they're right because Bayes" situation. The question is whether it's worse to be bad at math or to be the kind of person who makes things up out of the blue. (Although he's both.)

To Lottery of Babylon, by the way -- thanks for taking the time to explain very stupid things in a clear fashion so that unlike LWites we can at least be intellectually honest about them before we come to conclusions.

Wales Grey
Jun 20, 2012

Hate Fibration posted:

I always feel intensely embarrassed on behalf of scientists who venture outside of their expertise and hold forth in public. It's always especially bad with physicists too.

Why is it always physicists?

Because everything is just applied physics when you get down to it, plus I am like super smart and therefore my opinions on things are more informed than non-science people even on subjects I am not particularly well versed in or trained to consider practical issues in.*

*I actually enjoy reading knowledgeable and reasoned opinions by outsiders on a subject. What I enjoy less is when a "smart person" is presented as an expert on the subject explicitly or implicitly, by the publisher or the publishee, when the person isn't particularly qualified on the topic.

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe

GIANT OUIJA BOARD posted:

The problem is you know about math. As someone who, like Yudkowsky, doesn't know about math, I can actually answer this for you. We have four possible outcomes: BB, BG, GB, and GG. We remove GG and we are left with the one in three chance. This is apparently the proper way to do it, but as Somfin said, it's counterintuitive. Yudkowski is looking at these three results we're left with and saying "BG and GB aren't meaningfully distinct answers." He's looking at it like this: we know that at least one child is a boy, so we're not trying to determine the probability considering both genders anymore. We're just trying to figure out the gender of the remaining child, who has a 50/50 chance of being a boy, therefore the probability that both children are boys is 50/50.

I'm really confused here, because I only see three possibilities: Either both children are boys, both are girls, or there's one boy and one girl. Why would BG and GB be distinct answers?

Chamale
Jul 11, 2010

I'm helping!



Mr. Sunshine posted:

I'm really confused here, because I only see three possibilities: Either both children are boys, both are girls, or there's one boy and one girl. Why would BG and GB be distinct answers?

Birth order. If someone has two children, assuming a 50% gender split, there's a 25% chance of two boys, 25% chance of a boy then a girl, 25% chance of a girl then a boy, and 25% chance of two girls.

Slime
Jan 3, 2007
The odds of someone having two children who are both boys are 25%. However, in this example we're given more information and can discount the possibility of two girls, leaving three possible results with equal odds. The absolute odds of having two boys are still 25%, but we've narrowed down the odds of this particular mathematician having two boys by being able to discount one of the possibilities.

What I think Diaz has done is see one boy and determine the odds of getting another boy, which is 50%. As far as he's concerned, the confirmed boy doesn't matter anymore; in order to get the result of two boys, it's now a 50% chance.


Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe
But the birth order isn't relevant to the question, is it? I mean, we have three distinct sets of possibilities: two boys, two girls, and one of each. We can dismiss the two-girls possibility, leaving us with a 50/50 split between two boys or one of each. What am I missing?
