 
Pidmon
Mar 18, 2009

NO ONE risks painful injury on your GREEN SLIME GHOST POGO RIDE.

No one but YOU.

Mr. Sunshine posted:

But the birth order isn't relevant for the question, is it? I mean, we have three distinct sets of possibilities - Two boys, two girls and one of each. We can dismiss the two girls possibility, leaving us with a 50/50 split between two boys or one of each. What am I missing?

The fact that there are two instances with a boy and a girl IS significant, and it's the point of the whole exercise.

Also I might be weird but I thought of our hypothetical mathematician as a lady.


Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Mr. Sunshine posted:

But the birth order isn't relevant for the question, is it? I mean, we have three distinct sets of possibilities - Two boys, two girls and one of each. We can dismiss the two girls possibility, leaving us with a 50/50 split between two boys or one of each. What am I missing?

Because this is a "Mathematician" question and not a normal question. There are four possible scenarios, based on the two children- GG, GB, BG, BB. When you remove GG from the equation, you are left with three outcomes- GB, BG, and BB- which means that there are three equally-likely scenarios to choose from. Two of them might well be identical, but that doesn't decrease the likelihood of them happening. The probability of a single boy is still double the probability of two boys, even if you ignore birth order. Since the three remaining scenarios are equally likely and only one of them fits the question, the probability is 1/3.

To put it another way, you are 25% likely to have GG, 25% likely to have BB, and 50% likely to have either BG or GB.

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe
Right, I think I got it. It just seems counter-intuitive, since on the face of it the question boils down to "What is the gender of one of my children?". But when you consider the probability of a set of two children having a certain gender combination, it does indeed produce a 50/25/25 split.

E: And now I get why Yudkowsky hosed up. He assigned probabilities according to what the mathematician did or didn't say, not according to the gender-distribution of a two-child set.

E2: What the hell does a Bayesian prior even do, apart from letting you prove that you were right all along? The way the yuddists use them to arrive at conclusions (8 lives saved per dollar! You are infinitesimally likely to be real! You'll win the lottery!), it seems they just pull numbers out of their rear end, and then use those numbers to prove that the numbers were correct.

Mr. Sunshine fucked around with this message at 13:11 on May 5, 2014

ol qwerty bastard
Dec 13, 2005

If you want something done, do it yourself!
These sorts of statistical questions are great because even if they're counterintuitive, you can easily check them with a Monte Carlo simulation (which is also a great beginner's programming exercise because it's probably the absolute simplest thing that you can write a program for that's actually useful). Generate 10,000 (or however many you want) random families, throw out the ones that are both girls, and then count how many have two boys. You'll find it's 1/3.

However, the Monte Carlo method (i.e. the Actually Trying It A Bunch Of Times And Seeing What Happens method) is pretty much the definition of frequentist, and so would naturally be eschewed by Yudkowsky in favour of his "Bayesian" Pull-Numbers-Out-Of-Your-rear end method.
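The simulation described above really is a few lines of Python (the family count and the seed are arbitrary choices for the sketch):

```python
import random

def simulate(n_families=10_000, seed=0):
    # Generate random two-child families; throw out the all-girl ones,
    # and count how often the rest are two boys.
    rng = random.Random(seed)
    kept = two_boys = 0
    for _ in range(n_families):
        family = [rng.choice("BG"), rng.choice("BG")]
        if family == ["G", "G"]:
            continue  # both girls: excluded by the problem statement
        kept += 1
        if family == ["B", "B"]:
            two_boys += 1
    return two_boys / kept

# The ratio settles near 1/3 as n_families grows.
```

The sample proportion converges on 1/3, not 1/2, as the frequentist would predict.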

Slime
Jan 3, 2007

ol qwerty bastard posted:

These sorts of statistical questions are great because even if they're counterintuitive, you can easily check them with a Monte Carlo simulation (which is also a great beginner's programming exercise because it's probably the absolute simplest thing that you can write a program for that's actually useful). Generate 10,000 (or however many you want) random families, throw out the ones that are both girls, and then count how many have two boys. You'll find it's 1/3.

However, the Monte Carlo method (i.e. the Actually Trying It A Bunch Of Times And Seeing What Happens method) is pretty much the definition of frequentist, and so would naturally be eschewed by Yudkowsky in favour of his "Bayesian" Pull-Numbers-Out-Of-Your-rear end method.

The dumbest thing is that his "Bayesian" method should be taking samples in order to ensure that the model is accurate. Bayes Theorem doesn't just pull numbers out of its rear end, it's all about using real world data to create a model.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Mr. Sunshine posted:

E2: What the hell does a Bayesian prior even do, apart from letting you prove that you were right all along? The way the yuddists use them to arrive at conclusions (8 lives saved per dollar! You are infinitesimally likely to be real! You'll win the lottery!), it seems they just pull numbers out of their rear end, and then use those numbers to prove that the numbers were correct.

The idea behind Bayes' rule is that it helps you use evidence to update your initial beliefs (your priors) into new beliefs in light of the new data. It's a very useful and versatile tool, and without priors it doesn't do anything at all. The problem is that there's a bit of a "garbage in, garbage out" element: if you choose your priors badly, the output isn't going to be very good. The way Yudkowsky uses it, "Bayesian" is synonymous with "math" and "priors" are synonymous with "assumptions". A lot of the things that he claims are based on Bayes' rule really aren't.

For example, the eight lives per dollar thing doesn't use Bayes' rule or priors at all. It takes two assumptions (a one-in-a-zillion chance of saving a zillion lives), runs a simple expected-value calculation, and then presents the result as something meaningful. This has two flaws: expected values aren't always useful (see the St. Petersburg Lottery), and any answer is worthless if it is derived from blatantly false assumptions. But it also has nothing to do with Bayes' rule. Bayes' rule is about updating your beliefs in response to evidence, but the lesswrong dream factory never actually does that. Eight lives per dollar isn't the refined value obtained after many trials, it's just made up out of whole cloth.
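The St. Petersburg point is easy to make concrete. In the standard game the payoff is 2^(k-1) if the first tail arrives on flip k, so every term of the expected-value sum contributes exactly 1/2 and the "expected value" grows without bound, even though almost every actual play pays out pocket change. A sketch (the flip cap is just for illustration):

```python
def truncated_expected_value(max_flips):
    # P(first tail on flip k) = 2**-k, payoff 2**(k-1): each flip adds 1/2.
    return sum((2 ** (k - 1)) * (0.5 ** k) for k in range(1, max_flips + 1))

# truncated_expected_value(10) is 5.0, truncated_expected_value(100) is 50.0;
# the sum diverges, which is why "just take the expected value" fails here.
```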

In the mathematician's-children argument, Yudkowsky describes his model of the mathematician's psychological behavior as "priors". This is wrong. (And infectiously so; he got me to misuse the term too when I first quoted him.) His model is used to compute the probabilities involved in the Bayes' rule formula, which is a separate factor from the priors. The priors are answers (and associated probabilities) to the thing you're trying to learn, e.g. "How many of the children are boys?". He used "priors" to describe "What will the mathematician say?", which is incorrect because the mathematician's statement is not what he is ultimately trying to learn.

To use Bayes' rule to learn about something, you're basically using the scientific method: start with a hypothesis, run an experiment, modify the hypothesis based on the experiment's outcome, and repeat. But as we've seen, Yudkowsky considers himself "above" the scientific method, and for all his fellating of "Bayesianism", he would dismiss actual experimentation and the collection of actual data as an evil, dirty frequentist act.
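That update loop is the whole of Bayes' rule. A toy version, with made-up hypotheses (a fair coin versus a two-headed coin) and observed flips standing in for experiments:

```python
def bayes_update(prior, likelihood):
    # One application of Bayes' rule across a dict of hypotheses:
    # posterior is proportional to prior * P(observation | hypothesis).
    unnormalized = {h: p * likelihood[h] for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

beliefs = {"fair coin": 0.5, "two-headed coin": 0.5}
p_heads = {"fair coin": 0.5, "two-headed coin": 1.0}
for _ in range(5):  # observe five heads in a row
    beliefs = bayes_update(beliefs, p_heads)
# beliefs["two-headed coin"] is now 32/33, about 0.97
```

The posterior odds double with every observed head; the evidence, not the prior, is doing the work. That is the step the lesswrong version skips.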

If you want to understand Bayes' rule and how priors work, you should really avoid reading anything Yudkowsky says on the subject. He's really bad at it, misuses the terms constantly, is unlikely to actually know what they mean, abuses the terminology to suit his own ends, and outright makes things up when it suits him. (Also this is true of pretty much any subject, not just Bayes' rule.)

e: beaten much more succinctly:

Slime posted:

The dumbest thing is that his "Bayesian" method should be taking samples in order to ensure that the model is accurate. Bayes Theorem doesn't just pull numbers out of its rear end, it's all about using real world data to create a model.




Yudkowsky doesn't like it when we point out that he's just pulling Pascal's Wager, so he's decided to tell us why we're wrong:
The Pascal's Wager Fallacy Fallacy

Eliezer Yudkowsky posted:

So I observed that:

1. Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it's possible that our model of physics is mistaken. (I go into some detail on this possibility below the cutoff.)

2. If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.

And the one said, "Isn't that a form of Pascal's Wager?"

I'm going to call this the Pascal's Wager Fallacy Fallacy.

You see it all the time in discussion of cryonics. The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life." And the other one says, "Isn't that a form of Pascal's Wager?"

The original problem with Pascal's Wager is not that the purported payoff is large. This is not where the flaw in the reasoning comes from. That is not the problematic step. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will drat you for believing in the Christian God).

However, what we have here is the term "Pascal's Wager" being applied solely because the payoff being considered is large - the reasoning being perceptually recognized as an instance of "the Pascal's Wager fallacy" as soon as someone mentions a big payoff - without any attention being given to whether the probabilities are in fact small or whether counterbalancing anti-payoffs exist.

Yudkowsky believes that the only problem with Pascal's Wager is that it unfairly singles out the Christian God from among other gods, and that the positive utility-probability of choosing the correct god is negated by the negative utility-probability of choosing the wrong one. Already this seems like a spurious claim. Pascal's Wager has a lot of holes, and remains an unconvincing argument even if that particular hole were smoothed over.

(Imagine an alien planet on which only one god is worshipped: Xaxaxar. A very large chunk of the population worships Xaxaxar, and there are also many who do not; but none of them have any knowledge of any civilization worshipping any other god. There are no serious social consequences to declining to worship Xaxaxar (no burning of heretics), but proper worship of Xaxaxar requires a tithe of 100 space-dollars per space-week. Would it be unjustified, then, for the aliens to single out Xaxaxar above all other hypothetical gods? That he alone has worshippers and a thriving religion makes it much more probable from their perspective for him to exist than for other gods, and even if a different god did exist, they would be unlikely to jealously punish Xaxaxar-worshippers, as if they really cared so much about worship they could have used their power to ensure a thriving religion of their own. "Xaxaxar's Wager" does not seem to fall into the one pit that Yudkowsky thinks Pascal's Wager does - and yet there are still many good arguments against Xaxaxar's Wager.)

Even were it not for the problems Yudkowsky singles out, Pascal's Wager would still fail in much the same way as the St. Petersburg Lottery. But for argument's sake, let's see how Yudkowsky argues that cryonics doesn't fall into this particular pitfall:

Eliezer Yudkowsky posted:

But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.

The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.

On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".

So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.

Here's Yudkowsky's argument: physics as we understand it doesn't actually admit the possibility of the infinite-power immortal computers he always imagines. But, it is possible to write down laws, such as those of Conway's Game of Life, in which machines can keep on computing forever. Therefore, there's a decent chance that we'll discover that our understanding of physics is incorrect, the Second Law of Thermodynamics is outright false, and immortality is possible. And since the rules of Conway's Game of Life are simple, the set of physical laws in which immortality is possible has low ~Kolmogorov complexity~, so the probability of a deity-AI being physically possible is not low, whereas the probability of a deity-God is low.

There are a lot of obvious objections to this. One is that this doesn't deal with the entire cryonics problem, it just deals with arguing that one facet of the ideal scenario is not quite as improbable as it seems. But it doesn't even do that properly. Conway's Game of Life has low Kolmogorov complexity... but we already know that our universe is not Conway's Game of Life. The real question is not the complexity of laws in which immortality is possible, but rather the complexity of laws in which immortality is possible that do not disagree with our observations of our universe. What's the Kolmogorov complexity of those laws? Not so low anymore. And without further investigating that, we have no particular reason to think that such physics is any more likely than, well, God.

Having spent most of his article failing to defend one fraction of his point, Yudkowsky has only one paragraph left:

Eliezer Yudkowsky posted:

And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes you are the same person). There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.

There's a lot wrong with this. You can't assume that the fundamental laws of physics will be found to be different, then turn around and say that everything else must continue along its default path. You can't treat Moore's Law as an actual physical law that will continue without bound. You can't pretend we actually know as much about neuroscience as he implies here. You can't pretend that a damaged, long-dead brain will necessarily have its data intact. You can't treat "someday cryonics will work" as equivalent to "the lovely cryonics lab suckering me out of money will totally do it right". You can't make "counterbalancing anti-payoffs don't exist here" the core of your argument for why this isn't Pascal's Wager, then dismiss it in a throwaway remark without justification. You can't say that the positive outcome is good enough to outweigh the negatives without saying anything about why the positive outcome is so good.

But the most glaring problem is that Eliezer "3^^^^3 copies of you are simulated and tortured for eternity" Yudkowsky thinks that the potential negative outcomes of an AI scanning your brain are trivial and vanishingly unlikely.

Lottery of Babylon fucked around with this message at 13:33 on May 5, 2014

Wrestlepig
Feb 25, 2011

my mum says im cool

Toilet Rascal

Swan Oat posted:

The funniest thing about Roko's Basilisk is that when Yudkowski finally did discuss it on reddit a couple years ago he tried to make people call it THE BABYFUCKER, for some reason.

I am grateful the Something Awful dot com forums are able to have a rational discussion of The Babyfucker.

This is from a while back but did he mean that name for the theory or the website?

Edit: Somebody already made this joke, which is strange because it's really unusual and out there.

Wrestlepig fucked around with this message at 14:31 on May 5, 2014

ol qwerty bastard
Dec 13, 2005

If you want something done, do it yourself!

Slime posted:

The dumbest thing is that his "Bayesian" method should be taking samples in order to ensure that the model is accurate. Bayes Theorem doesn't just pull numbers out of its rear end, it's all about using real world data to create a model.

Yeah, this. Bayes theorem is incredibly powerful if you're using it right. Yudkowsky is not using it right.

He clearly knows, at some level, that to do science right you have to get data from the real world - at least, he has his Harry Potter quote Feynman about observation having the final say in science - so it's really strange that he then has no trouble just inventing whatever ludicrous numbers he wants in order to justify his predetermined beliefs and then calling it "rationality". And his numbers really are absolutely ludicrous sometimes. He'll assign near-infinitesimal probabilities to an event but then multiply them by idiotic things like 3^^^^3 or whatever, as if it's even meaningful to talk about a number so large in the context of any physical thing that can happen ever in the universe. The biggest numbers any real scientist ever runs into are on the order of 10^(10^n), in the context of statistical mechanics, which is such a minuscule amount in comparison that it might as well be equal to 0.
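For reference, 3^^^^3 is Knuth up-arrow notation, and a direct recursive definition shows how fast it explodes (anything past the smallest inputs won't fit in the universe, let alone in memory, so the sketch is only evaluable for tiny arguments):

```python
def up_arrow(a, n, b):
    # Knuth's up-arrow a (n arrows) b: one arrow is plain exponentiation,
    # each extra arrow iterates the previous operation b times.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# up_arrow(3, 2, 3) = 3^(3^3) = 7625597484987 already; 3^^^^3 (four
# arrows) dwarfs stat-mech numbers like 10**(10**23) beyond comparison.
```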

Chamale
Jul 11, 2010

I'm helping!



The thing about Bayesian math is that depending on the initial priors, you can end up with wildly different conclusions but a lot of evidence will eventually lead you in the right direction.

Suppose a woman finds a pair of panties that aren't hers in her husband's dresser. Let's assume that out of all the reasons a man would have those panties in his dresser, 90% are that he's cheating on his wife and 10% are perfectly innocent - perhaps the panties are a present for her next birthday. If the woman initially thought there was only a 1% chance her husband was unfaithful, she now assesses that there is an 8% chance he is cheating on her. If she already had doubts and thought there was a 25% chance of infidelity, she now determines there is a 75% chance her husband is cheating.

When you set a probability of something as 0 or 1, you'll continue to have irrational beliefs no matter what evidence arises. However, setting the probability of something to a level like 1/(3^^^^3) also makes you very slow to accept evidence.
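Plugging that setup into Bayes' rule in odds form, with the 90%/10% split read as a 9:1 likelihood ratio: a 1% prior lands at roughly 8%, and a 25% prior at 75%. A quick check:

```python
def posterior(prior, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# posterior(0.01, 9) is about 0.083 (8%); posterior(0.25, 9) is 0.75
```

Note also that a prior of exactly 0 or 1 produces posterior 0 or 1 regardless of the likelihood ratio, which is the "no evidence can ever move you" failure mode described above.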

grate deceiver
Jul 10, 2009

Just a funny av. Not a redtext or an own ok.

quote:

1. Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it's possible that our model of physics is mistaken.

This does not sound like a religious apologist's argument, no sir, not at all. Totally different, guys, I swear.

Dylan16807
May 12, 2010

ol qwerty bastard posted:

These sorts of statistical questions are great because even if they're counterintuitive, you can easily check them with a Monte Carlo simulation (which is also a great beginner's programming exercise because it's probably the absolute simplest thing that you can write a program for that's actually useful). Generate 10,000 (or however many you want) random families, throw out the ones that are both girls, and then count how many have two boys. You'll find it's 1/3.

However, the Monte Carlo method (i.e. the Actually Trying It A Bunch Of Times And Seeing What Happens method) is pretty much the definition of frequentist, and so would naturally be eschewed by Yudkowsky in favour of his "Bayesian" Pull-Numbers-Out-Of-Your-rear end method.

You're still injecting an assumption when you generalize from an example to a population, which is the real point of the exercise, not proving that one kind of statistics is better than another.

If you generalize the example into the population of 2-children mathematicians with a boy, then GG is excluded and the odds are 1/3.

If you generalize the example into the population of 2-children mathematicians naming a random child's gender, then GG is not excluded and the odds are 1/2.
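Both readings can be checked exhaustively rather than by sampling; a sketch using exact fractions:

```python
from fractions import Fraction
from itertools import product

families = list(product("BG", repeat=2))  # BB, BG, GB, GG, equally likely

# Reading 1: condition on "at least one boy" (GG excluded outright).
with_boy = [f for f in families if "B" in f]
p_two_boys_1 = Fraction(sum(f == ("B", "B") for f in with_boy), len(with_boy))

# Reading 2: a random child is named; condition on that child being a boy.
# Enumerate (family, which child is named) pairs, all equally likely.
named = [(f, i) for f in families for i in (0, 1)]
named_boy = [(f, i) for f, i in named if f[i] == "B"]
p_two_boys_2 = Fraction(sum(f == ("B", "B") for f, _ in named_boy),
                        len(named_boy))

# p_two_boys_1 is 1/3; p_two_boys_2 is 1/2
```

The two answers differ only in which population the example is generalized into, which is exactly the assumption being injected.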

Runcible Cat
May 28, 2007

Ignoring this post

Hate Fibration posted:

I always feel intensely embarrassed on behalf of scientists who venture outside of their expertise and hold forth in public. It's always especially bad with physicists too.

Why is it always physicists?

It's not - Richard Dawkins and James Watson are notably cringeworthy counter-examples.

Mors Rattus
Oct 25, 2007

FATAL & Friends
Walls of Text
#1 Builder
2014-2018

It is almost always 'hard' sciences, though. And almost always from folks with a certain disdain for 'soft' sciences, because they involve people and qualitative analysis and often cannot be handled with pure math problems and a set of "right" answers.

GWBBQ
Jan 2, 2005


Mr. Sunshine posted:

E2: What the hell does a Bayesian prior even do, apart from letting you prove that you were right all along? The way the yuddists use them to arrive at conclusions (8 lives saved per dollar! You are infinitesimally likely to be real! You'll win the lottery!), it seems they just pull numbers out of their rear end, and then use those numbers to prove that the numbers were correct.

Every so often, a new study comes out that purports to flip everything we know on its head, like proof of telepathy, remote viewing, or some other psychic power. In these cases, it's more reasonable to assume that there was a flaw in methodology, conscious or unconscious bias on the part of the researcher, or an unlikely but possible set of results, than it is to take the study at face value and discard the entire body of previous research into the same phenomena that showed no result.

Another real-life example: Andrew Wakefield published a study with the conclusion that the MMR vaccine could cause autism in children. Is it more likely that a doctor with a small sample size discovered an effect that had not been seen in larger studies, or that a doctor who stood to make enormous amounts of money by "proving" the MMR vaccine was unsafe and introducing his alternative had reached a biased conclusion?

For a more rigorously mathematical one, this section of the Wikipedia article for Bayes' Theorem demonstrates why taking prior probability into account for drug testing is important because there is not an equal chance that a randomly selected individual does or does not use drugs.
http://en.wikipedia.org/wiki/Bayes%27_theorem#Drug_testing
Leonard Mlodinow wrote in The Drunkard's Walk: How Randomness Rules our Lives (a great book that pretty much anyone in this thread would enjoy) about testing positive for HIV and the math behind a positive result on a single test corresponding to only a 1 in 11 chance that he was actually infected.
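The arithmetic behind that kind of result is the same base-rate calculation as the drug-testing example. With illustrative numbers (a prevalence of 1 in 10,000 and a false-positive rate of 1 in 1,000 are assumptions for the sketch, not Mlodinow's exact figures):

```python
def p_infected_given_positive(prevalence, sensitivity, false_positive_rate):
    # Bayes' rule with the base rate made explicit: true positives
    # competing against false positives from the uninfected majority.
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# With these assumed rates, a positive test corresponds to only about
# a 1-in-11 chance of actual infection, because the uninfected majority
# generates roughly ten false positives for every true one.
```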

Hate Fibration
Apr 8, 2013

FLÄSHYN!

Runcible Cat posted:

It's not - Richard Dawkins and James Watson are notably cringeworthy counter-examples.

Point. I think it's just that physicists stand out the most in my mind though because they're the closest to what I'm interested in. So I tend to feel a more acute sense of embarrassment.

quote:

It is almost always 'hard' sciences, though. And almost always from folks with a certain disdain for 'soft' sciences, because they involve people and qualitative analysis and often cannot be handled with pure math problems and a set of "right" answers.

I think that's really interesting though, because one of the things that I got, at least, when studying math was an appreciation for the scope of mathematics and how formal deductive methods can fail, or more likely, not be able to give you useful answers. And it's something that I see the people on Less Wrong fail to grasp, a lot. I'm pretty sure it actually comes from a weak grasp of the material in question, the brushing off of the problem of selecting proper priors in Bayesian analysis being the most egregious example. I actually think that STEM people's disdain for the 'soft' fields arises from a lack of appreciation of this fact too. It ties in quite nicely with how Less Wrongers' disdain for said areas of study causes them to do ridiculous things (like comparing anime and Shakespeare).

And now, here's Less Wrongers talking about how ART IMPACTS PEOPLE EMOTIONALLY OH MY GOD

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Robin Hanson posted:

President Bush just spoke of "income inequality" for the first time, Tyler Cowen (the most impressive mind I’ve met) said last week that "inequality as a major and chronic American problem has been overstated," while Brad DeLong just said that "on the level of individual societies, I believe that inequality does loom as a serious political-economic problem."

I find it striking that these discussions focus almost entirely on the smallest of these seven kinds of inequality:

1. Inequality across species
2. Inequality across the eras of human history
3. Non-financial inequality, such as of popularity, respect, beauty, sex, kids
4. Income inequality between the nations of a world
5. Income inequality between the families of a nation
6. Income inequality between the siblings of a family
7. Income inequality between the days of a person’s life

That's a coincidence because I find it striking that you could possibly be such a loving idiot.

His explanation for why "between the siblings of a family" is a greater form of inequality consists of half a line:

Robin Hanson posted:

Consider that "sibling differences [within each family] account for three-quarters of all differences between individuals in explaining American economic inequality"
with a hyperlink going to the Amazon purchase page for a pop science book. Well, if that's not persuasive I don't know what is.

Robin Hanson posted:

Clearly, we do not just have a generic aversion to inequality; our concern is very selective. The best explanation I can think of is that our distant ancestors got into the habit of complaining about inequality of transferable assets with a tribe, as a way to coordinate a veiled threat to take those assets if they were not offered freely. Such threats would have been far less effective regarding the other forms of inequality.

Hmm yes quite an interesting theory. Or maybe it's because we focus on a type of inequality that wouldn't be completely stupid to focus on. Want to eliminate inequality between eras of human history? Better return to a state of pre-civilizational subsistence, so we don't end up better off than our ancestors, and halt all scientific and technological progress forever, so our descendants don't live any better than we do. Want to eliminate inequality of sex and kids? Better institute state-enforced rape breeding programs.

But yes I'm sure income disparity complaints are just a leftover primitive whiny tribal war cry :supaburn: CLASS WARFARE AGAINST THE DEFENSELESS RICH :supaburn:

Robin Hanson posted:

Added 5/7/07: There is also a huge ignored inequality between actual and possible siblings.

:psyduck:

Lottery of Babylon fucked around with this message at 06:13 on May 6, 2014

Pidmon
Mar 18, 2009

NO ONE risks painful injury on your GREEN SLIME GHOST POGO RIDE.

No one but YOU.

Bet you they're talking about vaginas and not cocks there. Millions of 'potential dead siblings' every time some game designer starts designing a female character's outfit is A-OK to these nerds.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
All right, let's talk about these seven kinds. I know how these lists happen, by the way. Someone comes up with a number, and then desperately tries to fill the list.

1. Inequality across species
Clear bullshit. We've won the species war so far. The guy who wrote this doesn't care about this issue. Not about income anyway. Next.

2. Inequality across the eras of human history
We're supposed to do better with every generation. That's the point of advancing technology. Not about income, anyway. This is a non-issue. Next.

3. Non-financial inequality, such as of popularity, respect, beauty, sex, kids
gently caress off. If I have sufficient money, I can have any of these things, easily. This is a legit point, but it does not outrank actual inequality. Also not about income.

4. Income inequality between the nations of a world
Actual legit point. A lot of people, though, do talk about this kind of inequality. Not in terms of income, though. Nice save there, guy. :thumbsup:

5. Income inequality between the families of a nation
This is a huge part of inequality discussion- inheritance law and dynasty building is massive when it comes to inequality. This is usually what is talked about. Again, though, not in terms of income- in terms of hoardings.

6. Income inequality between the siblings of a family
Okay, this is one of those cases where a guy lists off all the problems with women and it becomes increasingly obvious that he has specific problems with one specific woman.

7. Income inequality between the days of a person’s life
And finally this one, which makes no loving sense at all.

NOT MENTIONED: Inequality between planets in a system / systems in the galaxy / galaxies. Income inequality between pets. Income inequality between working periods of the day and non-working periods of the day. Income inequality between men and women.

HEY GUNS
Oct 11, 2012

FOPTIMUS PRIME

Somfin posted:

2. Inequality across the eras of human history
We're supposed to do better with every generation. That's the point of advancing technology. Not about income, anyway. This is a non-issue. Next.
There's historians in this thread, and we don't like it when you say that a development is "supposed to" happen. You'd like technological improvements to mean that peoples' lives get better, I would too, but (1) that still doesn't mean we can talk about what historical processes are "supposed to" do and (2) it's not always the case. For instance, living standards for almost everyone in England plummeted during the early 19th century until the 1840s.

HEY GUNS fucked around with this message at 10:31 on May 6, 2014

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

HEY GAL posted:

There's historians in this thread, and we don't like it when you say that a development is "supposed to" happen. You'd like technological improvements to mean that peoples' lives get better, I would too, but (1) that still doesn't mean we can talk about what historical processes are "supposed to" do and (2) it's not always the case. For instance, living standards for almost everyone in England plummeted during the early 19th century until the 1840s.

By supposed to, I meant that that was what people are (for the most part) trying to achieve through technological innovation. People, in general, want their children to have better, easier, richer lives than they did. I didn't mean some sort of deterministic goal-oriented thing. Sorry about the phrasing.

potatocubed
Jul 26, 2012

*rathian noises*
I would object to considering inequality between historical eras as a subject for discussion because history is in the past. It is literally impossible to redistribute the wealth from 2014 to Victorian England, for all the good it might do.

Although I suppose you could consider redistributing the wealth from 2014 to 2020 or something.

Djeser
Mar 22, 2013


it's crow time again

I'm assuming the argument is that incomes are higher now than they were in the glorious [Holy Roman Empire/Third Reich/Mussolini's Italy/Classical Sparta/Classical Athens/Roman Empire/Byzantine Empire/chosen political fetish period].

And the "income equality between years of your life" is asking why young people don't make as much money as older people.

(yes this is dumb)

HEY GUNS
Oct 11, 2012

FOPTIMUS PRIME

potatocubed posted:

I would object to considering inequality between historical eras as a subject for discussion because history is in the past. It is literally impossible to redistribute the wealth from 2014 to Victorian England, for all the good it might do.
Well, historians don't discuss this stuff in order to change it. It's interesting to talk about if you want to talk about the economic effects of the Black Death, the Industrial Revolution, the 17th Century Crisis, etc, either because it'll help us understand something about our own era through comparison, or just because we want to know more about these periods.

In his mouth, though, it sounds an awful lot like the claim that because women have it rough in many Islamic countries, American feminists have no ground to complain. He's throwing chaff.

Strategic Tea
Sep 1, 2012

potatocubed posted:

I would object to considering inequality between historical eras as a subject for discussion because history is in the past. It is literally impossible to redistribute the wealth from 2014 to Victorian England, for all the good it might do.

You can if you have an AI simulate it perfectly from facebook's data! :eng99:

fade5
May 31, 2012

by exmarx

Lottery of Babylon posted:

Yudkowsky posted:

The process of considering how to construct the largest possible computable numbers naturally yields the recursive ordinals and the concept of ordinal analysis. All mathematical knowledge is in a sense contained in the Busy Beaver series of huge numbers.
The Busy Beaver series sequence (dammit Yudkowsky)

fade5 posted:

Make sure you don't confuse sequences with series though.:v:
It is a calculus joke, so Yudkowsky wouldn't get it.
Hey, look at that, I was right earlier in the thread: Yudkowsky can't distinguish between the two.:laugh: Next you'll tell me he doesn't know that there are both finite series and infinite series, or that there are multiple types of series (including arithmetic series and geometric series), or how to use basic tests to determine whether a series diverges or converges. I just had sequences and series in Calculus II, so this is all very recent to me.

To explain the joke a bit, and to make sure I actually learned this stuff correctly in Calculus:

A sequence is a list of numbers, and the order in which the numbers are listed is important.
1, 2, 3, 4, 5, ... (this is an infinite arithmetic sequence)
4, 40, 400, 4000, 40,000, ... (this is an infinite geometric sequence)
Sequences are usually based on a mathematical formula.

A series is a sum of numbers, usually of a given sequence, so using the previous examples,
1 + 2 + 3 + 4 + 5 + ...
4 + 40 + 400 + 4000 + 40,000 + ···
are examples of series.

Or, written in series notation (this was harder to type than I thought it would be):

∑_(K=1)^∞ K = 1 + 2 + 3 + 4 + 5 + ⋯

∑_(K=1)^∞ 4(10)^(K−1) = 4 + 40 + 400 + 4000 + 40,000 + ⋯

So, in summation, (more Calculus jokes:v:) you don't know poo poo Yudkowsky.
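The distinction above can be sketched in a few lines of Python (the function names and term counts are just for illustration): a sequence is an ordered list of terms, and the matching series is the sum of those terms.

```python
# A minimal sketch of the sequence/series distinction described above.

def arithmetic_sequence(n):
    """First n terms of the sequence 1, 2, 3, ..."""
    return [k for k in range(1, n + 1)]

def geometric_sequence(n):
    """First n terms of the sequence 4, 40, 400, ..."""
    return [4 * 10 ** (k - 1) for k in range(1, n + 1)]

seq = geometric_sequence(5)
print(seq)       # the sequence: [4, 40, 400, 4000, 40000]
print(sum(seq))  # the (partial) series: 44444
```

Summing more and more terms gives the partial sums whose limit, when it exists, is what the infinite series converges to.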

fade5 fucked around with this message at 21:13 on May 6, 2014

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Strategic Tea posted:

You can if you have an AI simulate it perfectly from facebook's data! :eng99:

Well, you just have to get a perfect simulation of Earth going from Google Maps data and people's webcams, and then just wind the clock back. The AI will simulate the fall of Troy, the Mongols, that one day Homer stubbed his toe on a rock, everything. Hell, we'll probably be able to see human evolution happening!

Dr Pepper
Feb 4, 2012

Don't like it? well...

I really like how everything comes down to magical AIs, like that stupid post listed in the OP dismissing one of the oldest intellectual disciplines on the planet because of, among other things, this:

quote:

Many naturalists aren't trained in cognitive science or AI. Cognitive science is essential because the tool we use to philosophize is the brain, and if you don't know how your tool works then you'll use it poorly. AI is useful because it keeps you honest: you can't write confused concepts or non-natural hypotheses in a programming language.

Yes that's right, in order to be a good Philosopher you must be a computer programmer.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Dr Pepper posted:

Yes that's right, in order to be a good Philosopher you must be a computer programmer.
And no programmer has ever produced bad, confused code that doesn't compile. :rolleye:

SolTerrasa
Sep 2, 2011

Sham bam bamina! posted:

And no programmer has ever produced bad, confused code that doesn't compile. :rolleye:

And of course there's no CS equivalent for a concept which appears valid on its surface but is surprisingly difficult to evaluate for full correctness.

E: for the non-CS people in the room, the answer is literally all code, and it was proven by Turing.
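The Turing result being gestured at is the halting problem, and the diagonal argument behind it fits in a few lines of Python. This is a sketch, not a real analyzer: `claimed_halts` stands in for a hypothetical oracle that decides whether a function halts, and the point is that any such oracle can be forced to be wrong.

```python
# Sketch of Turing's diagonal argument: given any claimed halting oracle,
# build a program that does the opposite of whatever the oracle predicts.

def make_paradox(claimed_halts):
    def paradox():
        if claimed_halts(paradox):
            while True:  # oracle said "halts", so loop forever
                pass
        # oracle said "loops forever", so halt immediately
    return paradox

# If the oracle answers "never halts", the paradox program promptly halts,
# contradicting the oracle:
p = make_paradox(lambda f: False)
p()  # returns immediately
```

If the oracle instead answered "halts", calling the resulting function would loop forever, so no oracle can be right about every program.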

SolTerrasa fucked around with this message at 18:26 on May 8, 2014

Mors Rattus
Oct 25, 2007

FATAL & Friends
Walls of Text
#1 Builder
2014-2018

Man, it's not like philosophers have been studying logic and the proper method of forming a logical argument since before computers existed. Nope, totally need to learn to code.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
The only real advantage I can think of is that a computer can quickly evaluate your ideas for obvious stupidity.

Like, if you're trying to write some proof in a programming language, say that the number of animals on Noah's Ark is k, and later define k to be a kind of fruit, the computer will automatically complain at you and then you won't have to go to a person to find out your idea was stupid.

That's actually a pretty significant advantage if you ask me, because not all mistakes are as stupid and obvious as that and it's nice to have your PC check in advance whether you're shooting yourself in the foot, but it's not a magic bullet.

(Short version: it's far easier to prove a program wrong than to prove it right, but eliminating obviously wrong programs has its perks.)
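Krotera's Ark example can be shown concretely; in a statically typed language the compiler would reject it outright, and even Python's interpreter objects the moment you do arithmetic on the redefined name (the variable names here are invented for the illustration):

```python
# Formalizing an argument in code lets the machine catch category errors.

ark_animals = 8            # "the number of animals on the Ark is k"
pairs = ark_animals // 2   # fine: arithmetic on a number

ark_animals = "fruit"      # later redefined as a kind of fruit
try:
    pairs = ark_animals // 2
except TypeError as err:   # the interpreter complains automatically
    caught = str(err)

print("caught:", caught)
```

The machine only catches this class of mistake, though; an argument can type-check perfectly and still be wrong, which is Krotera's "not a magic bullet" caveat.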

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

Brains are not computers! Jesus loving Christ, do we need to pass a law requiring a poster in every Computer Science classroom explaining that brains and modern personal computers have only the most superficial of similarities, despite both being computation devices? Just because you can run both on potatoes and both break if you overheat them doesn't mean you can draw useful metaphors between the two of them.

The Vosgian Beast
Aug 13, 2011

Business is slow
People everywhere rail against the idea that being an expert in one thing does not make you an expert in all things.

meat sweats
May 19, 2011

The Vosgian Beast posted:

People everywhere rail against the idea that being an expert in one thing does not make you an expert in all things.

Or being an expert at no things. Has this guy ever held a job besides basically starting a cult that believes giving him money will prevent you from dying?

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe
Has this guy actually written a single line of actual code? The only philosophy that programming makes you good at is nihilism.

Numerical Anxiety
Sep 2, 2011

Hello.

The Vosgian Beast posted:

People everywhere rail against the idea that being an expert in one thing does not make you an expert in all things.

With one exception, that being lying.

SolTerrasa
Sep 2, 2011

Mr. Sunshine posted:

Has this guy actually written a single line of actual code? The only philosophy that programming makes you good at is nihilism.

Not that he's published, not that I know of, but I personally wouldn't doubt him on it. It's hard to explain, but he... Ugh, I only know how to talk about this using the terminology I learned over there. He signals his experience in a way that CS people recognize? Or... He writes like we talk, when he writes about code.

I don't think that he has anything of note done on his actual AI project, but I'd be really surprised if he wasn't fluent and practiced in a language or two, minimum.

Hate Fibration
Apr 8, 2013

FLÄSHYN!

SolTerrasa posted:

Not that he's published, not that I know of, but I personally wouldn't doubt him on it. It's hard to explain, but he... Ugh, I only know how to talk about this using the terminology I learned over there. He signals his experience in a way that CS people recognize? Or... He writes like we talk, when he writes about code.

He has some coauthor credits with a mathematician crony of his that works in formal logic and theoretical computer science, I know that much.

I remember when I was looking through his older site, sysops or whatever on the wayback machine, he said that he programmed in some language, and that he even helped develop a programming language. Unfortunately I cannot remember the URL at all.

SolTerrasa
Sep 2, 2011

Hate Fibration posted:

He has some coauthor credits with a mathematician crony of his that works in formal logic and theoretical computer science, I know that much.

I remember when I was looking through his older site, sysops or whatever on the wayback machine, he said that he programmed in some language, and that he even helped develop a programming language. Unfortunately I cannot remember the URL at all.

Yeah, it wouldn't shock me at all if he "designed a language" because he used to be one of those people who thought that the problem with creating a really smart AI was that current programming languages just aren't ~expressive~ enough. I see it all the time in college first-years and it's common enough even in real programmers that Randall Munroe makes fun of it sometimes. Of course I don't know for sure, but it strikes me as the sort of self-deception he'd be vulnerable to: self-aggrandizing and pointless.

The language you end up with is almost always either nonfunctional, useless, or exactly the same as something that already exists but slower. Unsurprisingly, it turns out that compilers are *hard*.


su3su2u1
Apr 23, 2014

SolTerrasa posted:

Not that he's published, not that I know of, but I personally wouldn't doubt him on it.

He has never contributed to any open-source projects, nor is there any publicly available code that he has written. He has never worked as a programmer, or taken a CS course. He has learned to 'signal' competence in order to bilk money out of CS people- it's how he gets speaking engagements and how he solicits money.

When he talks about math, he manages to keep it together for long stretches, but occasionally a howler slips in and you realize he doesn't understand the first thing of what he is talking about. The same for physics. Similarly, he routinely confuses CS concepts (it's very clear from his discussions of the Busy Beaver function and Solomonoff induction that he doesn't understand what it means for something to be computable).

Here is a direct quote from Yudkowsky, where he very clearly has no idea about computational complexity (but that doesn't stop him from drawing sweeping conclusions about physics!)

BigYud posted:

Nothing that has physically happened on Earth in real life, such as proteins folding inside a cell, or the evolution of new enzymes, or hominid brains solving problems, or whatever, can have been NP-hard. Period. It could be a physical event that you choose to regard as a P-approximation to a theoretical problem whose optimal solution would be NP-hard, but so what, that wouldn't have anything to do with what physically happened. It would take unknown, exotic physics to have anything NP-hard physically happen. Anything that could not plausibly have involved black holes rotating at half the speed of light to produce closed timelike curves, or whatever, cannot have plausibly involved NP-hard problems. NP-hard = "did not physically happen". "Physically happened" = not NP-hard

As a simple, obvious counterexample, I once solved a 3-stop traveling salesman problem in my head. If any non-CS people want an explanation of how incredibly wrong this is, let me know and I'll try to go into more detail.

The proper model of Yudkowsky is con-man with a decent vocabulary. He has learned to fake it well enough to bilk money from the rubes, but nothing he says holds up to any real scrutiny.

Hate Fibration posted:

He has some coauthor credits with a mathematician crony of his that works in formal logic and theoretical computer science, I know that much.

Only one unpublished but submitted manuscript (having read it, I'd be incredibly surprised if it gets through review; most academic papers don't repeatedly use the phrase 'going meta'). His only actual (non-reviewed) publications have been through "transhumanist" vanity presses and through his own organization. The thing that kills me- if I donated money to a research institute and MORE THAN A DECADE LATER it had only even submitted one paper to review (ONE! EVER!) but the lead investigator had been able to write hundreds of pages of the worst Harry Potter fanfic ever, I'd be outraged. Instead, the Lesswrong crowd seems grateful.

su3su2u1 fucked around with this message at 05:18 on May 14, 2014
