sweeperbravo
May 18, 2012

AUNT GWEN'S COLD SHAPE (!)

Razorwired posted:

You left out the part of the thread read where a 12-year-old Draco literally plots a violent rape to knock Luna Lovegood down a peg.

Didn't namtab or somebody do a chapter-by-chapter tour of this travesty?

Gen. Ripper
Jan 12, 2013


sweeperbravo posted:

Didn't namtab or somebody do a chapter-by-chapter tour of this travesty?

I think he gave up at the part where a 12-year-old boy plots literal rape

LaughMyselfTo
Nov 15, 2012

by XyloJW

Razorwired posted:

You left out the part of the thread read where a 12-year-old Draco literally plots a violent rape to knock Luna Lovegood down a peg.

Harry's solution to this ethical predicament is, of course, to claim that he intends to marry her and wants her to be a virgin at the time, because that's another thing twelve-year-old boys do.

GIANT OUIJA BOARD
Aug 22, 2011

177 Years of Your Dick
All
Night
Non
Stop

LaughMyselfTo posted:

Harry's solution to this ethical predicament is, of course, to claim that he intends to marry her and wants her to be a virgin at the time, because that's another thing twelve-year-old boys do.

Of course, because they're all sensible, rational actors :reject:

Overminty
Mar 16, 2010

You may wonder what I am doing while reading your posts..

paradoxGentleman posted:

I know, right? You'd think it would be straight up their alley. Maybe the ones with that kind of fetish didn't make it that far in the book.

Wasn't that right at the beginning though, as in the opening?

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

e: misread that, nevermind

Perfidia
Nov 25, 2007
It's a fact!

Venusian Weasel posted:

Gold from the Atlas Shrugged review comment section:


Uh...

I prefer the 1964 review in New Worlds, of the UK paperback edition:

Michael Moorcock posted:

Never has such terrible old rubbish appeared between the covers of a book. If you want a good laugh (if a slightly horrified one) from start to finish, try this. At times it reads like Goebbels writing in the style of Marie Corelli.
(That's the whole review.)

Arc Hammer
Mar 4, 2013

Got any deathsticks?
That was it? It was dull! It was obvious!... it was short.... WE LOVED IT.

Akett
Aug 6, 2012

Overminty posted:

Wasn't that right at the beginning though, as in the opening?

It was pretty early in the book, but it wasn't the opening.

e X
Feb 23, 2013

cool but crude

Little Blackfly posted:

I love Roko's Basilisk because it proves how Yudkowsky's (and people like him, and there are quite a few) masturbatory love of the Enlightenment and the triumph of reason over superstition is full of bullshit. They sit and glory about how much smarter they are, and their civilization is, than all those stupid morons in the past, then promptly have an existential crisis because their pseudonymous, autistic poor man's version of Augustine decided to pontificate on transhumanist metaphysics.


There's this whole insane catechism required to even make it work, not the least of which is the crazy belief that the suffering of a computer simulation of you, derived from logical deductions based on whatever traces of information survive about you, is functionally equivalent to you suffering for eternity. It's literally belief in something functionally equivalent to an immortal soul, only with technology taking the place of god. Yet they internalize this crap 100% because one of their logicians (priests) said so during his thought experiment (sermon). But they still sit there :smug:ing about how much better they are than those stupid Christians. They're like logic fundamentalists: they don't actually understand their own philosophy; all they know is that it makes them better than other people.

Yeah, but you see, they are right!

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

ArchangeI posted:

Did he...did he just censor a post on an Internet message board because it might give a hypothetical AI or other intelligent being that may or may not exist yet ideas?

The worry wasn't that it would give the AI ideas (as the AI is obviously so smart it doesn't need your help), it was that you would only be vulnerable to the AI's temporal machinations if you were willing to think about this hypothetical threat a lot and take his bizarre line of reasoning seriously. The argument, as I understand it, goes something like this:

1. You cannot be certain whether you are real or just a simulated copy of you created in the future by an AI.
2. If you are in a simulation and simulation-you does something the AI doesn't like, the AI might torture you.
3. Since you can't be sure you're not in a simulation and you don't want to be tortured, you should avoid upsetting the AI so that it won't torture you.
4. Since all versions of you (including the real you) are altering your actions in response to the threat of AI-simulation-bullshit-torture, future AIs will have an incentive to create and torture simulated versions of you, as doing so can influence you in the past and therefore the AI in the future. If you didn't think about this sort of thing at all, the real you wouldn't base your actions on temporal simulated torture threats, so the AI would have no reason to copy and torture you, because it wouldn't affect the real you's actions in any way.

Even once you get over the fact that this is basically just an autistic version of Pascal's Wager, there's still the minor issue that it's incredibly stupid. However, if you do accept its absurd premises, it is an attack to which you only become vulnerable by thinking about it and realizing that you're vulnerable to the attack, which means that shutting down all discussion of the threat would actually be a reasonable way to contain the threat. The only minor snag is that the premises are dumb and the reasoning is dumb and the conclusion is dumb and everyone involved is dumb.

It's basically a "logical" consequence of Yudkowsky's Timeless Decision Theory, which is designed to fix the obvious glaring problem that standard decision theories "fail to choose the winning decision in some scenarios and so there is a need to develop a more successful theory". The scenarios in question, of course, all take the form "You encounter an AI so adept at simulating and predicting human behavior that it knows what you will do in the future so accurately that your actions in the future can directly influence the AI's actions in the past, and the AI deliberately confronts you with tests (despite already knowing the outcome from its simulations) in which if you will choose what appears to be the obvious right answer you are punished by the AI secretly having already harmed you in the past, but if you will choose what appears to be the obvious wrong answer you are rewarded by the AI secretly having already rewarded you in the past due to your future action." For some reason, this incredibly coherent and useful decision theory with many practical applications has not seen widespread acceptance.

Yudkowsky also believes that 0 and 1 are not probabilities, and that mathematicians are cheating by admitting the numbers 0 and 1 in their proofs and would be better served creating new theorems that avoid those obvious fake numbers. He's quite an expert on not having a goddamn clue what he's talking about but telling all the actual experts that they're idiots while wanking himself off to singularities and Bayes' rule.

Egregious Offences
Jun 15, 2013
Yudkowsky is pretty much troperdom incarnate.

Djeser
Mar 22, 2013


it's crow time again

The idea is that if you don't donate everything you can to AI research, you are hampering the development of a benevolent sentient all-powerful AI. A benevolent sentient all-powerful AI will be able to solve all the world's problems. Therefore, if you aren't donating everything you can to AI research, you are prolonging suffering.

The benevolent sentient all-powerful AI is aware of this. Therefore, it decides that in order to help prevent suffering, it needs to ensure that it was created as soon as possible. (Being created any later than that would mean there was suffering it could have prevented, but was unable to, which this AI does not want.) So what the AI does is create simulations of people. The benevolent sentient all-powerful AI is good enough that its simulation is exact. Therefore, for all intents and purposes, this simulation IS you. If this simulation is tortured, you, personally, are being tortured.

So, since the AI has a way to apply punishment to people in the past, it starts simulating everyone in the past. That way, it learns which people could have donated more to AI research, and then it tortures their simulations. By torturing their simulations, it provides incentive for the real person in the past to optimally donate to AI research.

The reason this is SO DANGEROUS is because by being informed of this, you are basically singling yourself out for torture by the AI.



It's Pascal's wager with dumb metaphysics swapped out for God. The whole every-possible-instance-of-you thing is massively dumb. Even taking all of its (dumb) base assumptions as givens, it doesn't even stand up on its own. If you simply refuse to respond to the AI's torture/blackmail, then the AI has no reason to torture your simulation. It wouldn't waste time trying to convince someone who's not going to budge.

Penny Paper
Dec 31, 2012

Arcsquad12 posted:

That was it? It was dull! It was obvious!... it was short.... WE LOVED IT.

Speaking of "dull" and "obvious," here are some inane tropes (and a Troper page) to pick apart:

http://tvtropes.org/pmwiki/pmwiki.php/Tropers/Deboss (I know he's been talked about a lot, but I must have missed the memo that he had a trope page)

http://tvtropes.org/pmwiki/pmwiki.php/Main/KarmaHoudini

http://tvtropes.org/pmwiki/pmwiki.php/Main/TheBadGuyWins

ArchangeI
Jul 15, 2010

Djeser posted:

The idea is that if you don't donate everything you can to AI research, you are hampering the development of a benevolent sentient all-powerful AI. A benevolent sentient all-powerful AI will be able to solve all the world's problems. Therefore, if you aren't donating everything you can to AI research, you are prolonging suffering.

The benevolent sentient all-powerful AI is aware of this. Therefore, it decides that in order to help prevent suffering, it needs to ensure that it was created as soon as possible. (Being created any later than that would mean there was suffering it could have prevented, but was unable to, which this AI does not want.) So what the AI does is create simulations of people. The benevolent sentient all-powerful AI is good enough that its simulation is exact. Therefore, for all intents and purposes, this simulation IS you. If this simulation is tortured, you, personally, are being tortured.

So, since the AI has a way to apply punishment to people in the past, it starts simulating everyone in the past. That way, it learns which people could have donated more to AI research, and then it tortures their simulations. By torturing their simulations, it provides incentive for the real person in the past to optimally donate to AI research.

The reason this is SO DANGEROUS is because by being informed of this, you are basically singling yourself out for torture by the AI.



It's Pascal's wager with dumb metaphysics swapped out for God. The whole every-possible-instance-of-you thing is massively dumb. Even taking all of its (dumb) base assumptions as givens, it doesn't even stand up on its own. If you simply refuse to respond to the AI's torture/blackmail, then the AI has no reason to torture your simulation. It wouldn't waste time trying to convince someone who's not going to budge.

You left out the best part: Yudkowsky conveniently runs an organization that is aiming to create a benevolent, all-powerful AI. Thus, by creating this scenario, he has basically proven that you have to donate everything you have to him, personally, so that you may be saved not tortured for all eternity by a malevolent, all-powerful being.

WickedHate
Aug 1, 2013

by Lowtax

ArchangeI posted:

You left out the best part: Yudkowsky conveniently runs an organization that is aiming to create a benevolent, all-powerful AI. Thus, by creating this scenario, he has basically proven that you have to donate everything you have to him, personally, so that you may be saved not tortured for all eternity by a malevolent, all-powerful being.

Gives a new twist of irony on his "all this magic stuff is silly and dumb" fanfiction.

Strategic Tea
Sep 1, 2012

I don't have a clue about Logic-with-a-capital-L, but it always looked like Yudkowsky was just cherrypicking obscure bits of theory and throwing them into an Iain M. Banks-themed blender. Nice to see that yes, he is just completely delusional :psylon:

Nuclear Pogostick
Apr 9, 2007

Bouncing towards victory
I just burst out in incredulous laughter. What a bunch of horse poo poo. :laugh:

I mean, for a second there I could see why the idea is distressing to some people, but you guys are right: it's basically an idiotic variant of Pascal's Wager.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'


In Dr. Seuss' "The Sneetches," Star-Bellied Sneetches look down on Plain-Bellied Sneetches until a Con Man named Sylvester McMonkey McBean shows up with a machine that can give the latter a Race Lift. The Star-Bellied Sneetches then pay to have their stars removed (so they can still tell who the "better" Sneetches are), and soon everyone gets all mixed up about who is who. The Sneetches eventually learn their lesson, but McBean makes off with all their money, laughing.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Strategic Tea posted:

I don't have a clue about Logic-with-a-capital-L, but it always looked like Yudkowsky was just cherrypicking obscure bits of theory and throwing them into an Iain M. Banks-themed blender. Nice to see that yes, he is just completely delusional :psylon:

Forget Logic-with-a-capital-L; Yudkowsky can't even handle basic logic-with-a-lowercase-l. To go into more depth about the "0 and 1 are not probabilities" thing, his argument is based entirely on analogy:

LessWrong? More like MoreWrong, am I right? haha owned posted:

1, 2, and 3 are all integers, and so is -4. If you keep counting up, or keep counting down, you're bound to encounter a whole lot more integers. You will not, however, encounter anything called "positive infinity" or "negative infinity", so these are not integers.

Positive and negative infinity are not integers, but rather special symbols for talking about the behavior of integers. People sometimes say something like, "5 + infinity = infinity", because if you start at 5 and keep counting up without ever stopping, you'll get higher and higher numbers without limit. But it doesn't follow from this that "infinity - infinity = 5". You can't count up from 0 without ever stopping, and then count down without ever stopping, and then find yourself at 5 when you're done.

From this we can see that infinity is not only not-an-integer, it doesn't even behave like an integer. If you unwisely try to mix up infinities with integers, you'll need all sorts of special new inconsistent-seeming behaviors which you don't need for 1, 2, 3 and other actual integers.

He's committing a very basic logical fallacy here: false analogy. The reason infinity breaks the nice properties of the integers isn't that you can't count up to it, it's that it has no additive inverse, so including it means that you no longer have a mathematical group under addition, and so basic expressions like "x-y" cease to be well-defined when infinity is allowed. The fact that you can't "count up to it" isn't what makes it a problem; after all, the way infinity works with respect to addition is basically the same as the way 0 works with respect to multiplication, but nobody thinks 0 shouldn't be considered a real number just because of that (except, I suppose, Yudkowsky).

Naturally, this argument doesn't translate to probability. Expressions that deal with probability handle 0 and 1 no differently than they handle any other probability, and including 0 and 1 doesn't break any mathematical structure. (On the other hand, prohibiting their use breaks a shitload of things.)

Yudkowsky then goes on to argue that probabilities are inherently flawed and inferior and that odds are superior. This is enough to make any student with even a cursory exposure to probability raise an eyebrow, especially since Yudkowsky's reasoning is "The equation in Bayes' rule looks slightly prettier when expressed in terms of odds instead of probability". However, when you convert probability 1 into odds, you get infinity. And since we've seen before that infinity breaks the integers, infinite odds must be wrong too, and so we conclude that 1 isn't a real probability. A similar but slightly more asinine argument deals with 0.

The problem, of course, is that allowing "infinity" as an odds value doesn't break anything. Since odds are always non-negative, they don't have additive inverses and are not expected to be closed under subtraction; unlike the integers, infinity actually plays nice with odds. Yudkowsky ignores this in favor of a knee-jerk "Infinity bad, me no like".
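
(For the curious, here's a minimal sketch of the conversions he's complaining about; the function names and numbers are made up for illustration, not anything from his posts.)

code:

# Probability <-> odds, plus Bayes' rule in both forms.
# p = 1 corresponds to infinite odds, which is a perfectly usable odds value.

def prob_to_odds(p):
    return float("inf") if p == 1.0 else p / (1.0 - p)

def odds_to_prob(o):
    return 1.0 if o == float("inf") else o / (1.0 + o)

# Bayes' rule, probability form:
#   P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
def bayes_prob(prior, likelihood_h, likelihood_not_h):
    num = likelihood_h * prior
    return num / (num + likelihood_not_h * (1.0 - prior))

# Bayes' rule, odds form: posterior odds = prior odds * likelihood ratio.
def bayes_odds(prior_odds, likelihood_h, likelihood_not_h):
    return prior_odds * (likelihood_h / likelihood_not_h)

print(bayes_prob(0.0, 0.9, 0.1))     # 0.0 -- certain-false stays certain-false
print(bayes_prob(1.0, 0.9, 0.1))     # 1.0 -- certain-true stays certain-true
print(bayes_prob(0.5, 0.9, 0.1))     # 0.9
print(odds_to_prob(bayes_odds(prob_to_odds(0.5), 0.9, 0.1)))  # ~0.9, same answer in odds form
print(prob_to_odds(1.0))             # inf -- and nothing breaks
print(float("inf") - float("inf"))   # nan -- THIS is the integer-arithmetic problem, not an odds problem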

He concludes his proof that 0 and 1 aren't probabilities with this one-line argument:

quote:

Furthermore, all sorts of standard theorems in probability have special cases if you try to plug 1s or 0s into them—like what happens if you try to do a Bayesian update on an observation to which you assigned probability 0.

This is, of course, the equivalent of arguing that 0 isn't an integer because when you try to do addition with it you get a "special case" of 0+a=a. It's not a special case in the "What happens if you subtract infinity from infinity?" sense, it's a special case in the "Remark: if you happen to plug in this value, you'll notice that the equation simplifies really nicely into an especially easy-to-remember form" sense.

Why is Yudkowsky so upset about 0 and 1? It's his Bayes' rule fetish. Bayes' rule doesn't let you get from 0.5 to 1 in a finite number of trials, just like you can't count up from 1 to infinity by adding a finite number of times, therefore 1 is a fake probability just like infinity is a fake integer. And since to him probability theory is literally Bayes' rule and nothing else, that's all the evidence he needs.
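
(That much checks out, for what it's worth. A quick illustration with exact rationals, so floating point doesn't quietly round the answer to 1; the 9:1 likelihood ratio is arbitrary.)

code:

from fractions import Fraction

# One Bayesian update with evidence that favours H by 9:1.
def update(prior, l_h=Fraction(9, 10), l_not=Fraction(1, 10)):
    num = l_h * prior
    return num / (num + l_not * (1 - prior))

p = Fraction(1, 2)
for _ in range(20):
    p = update(p)

print(p)        # an enormous fraction just shy of 1...
print(p == 1)   # ...but False: finitely many updates never get you to exactly 1
print(update(Fraction(0)))   # 0 -- a prior of exactly 0 never moves
print(update(Fraction(1)))   # 1 -- neither does a prior of exactly 1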

It's also his singularity fetish. He believes that the problem with probability theory is that in the real world people usually don't properly consider cases like "OR reality is a lie because you're in a simulation being manipulated by a malevolent AI". What this has to do with the way theorems are proved in probability theory (which is usually too abstract for this argument against them to even be coherent, let alone valid) is a mystery.

He does not explain what probability he would assign to statements like "All trees are trees" or "A or not A". It would presumably be along the lines of ".99999, because there's a .00001 probability that an AI is altering my higher brain functions and deceiving me into believing lies are true."

He concludes:

quote:

The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1.

If you made a magical symbol to stand for "all possibilities I haven't considered", then you could marginalize over the events including this magical symbol, and arrive at a magical symbol "T" that stands for infinite certainty.

But I would rather ask whether there's some way to derive a theorem without using magic symbols with special behaviors. That would be more elegant. Just as there are mathematicians who refuse to believe in double negation or infinite sets, I would like to be a probability theorist who doesn't believe in absolute certainty.

For someone who has such strong opinions about probability theory, he doesn't seem to have the slightest idea what probability theory looks like. Literally the first thing you do in probability theory is construct "probability spaces" (or more generally measure spaces), which by definition comprise all the possible outcomes within the model you're using, regardless of whether those outcomes are "you roll a 6 on a die" or "an AI tortures your grandmother". It's difficult to explain in words why what he's demanding doesn't make sense because what he's demanding is so incoherent that it's virtually impossible to parse, let alone obey. At best, he's upset that his imagined version of the flavor text of the theorems doesn't match his preferred flavor text; at worst, this is the mathematical equivalent of marching into a literature conference and demanding that everyone discuss The Grapes of Wrath without assuming that it contains text.
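
(For anyone who hasn't seen one, a toy probability space looks something like this; the outcomes and numbers are invented for illustration.)

code:

from fractions import Fraction

# A discrete probability space: a set of outcomes plus a measure assigning
# each outcome a non-negative probability, with the total equal to 1.
# "All possibilities I haven't considered" is just another outcome, no magic symbol required.
space = {
    "you roll a 6 on a die": Fraction(1, 6),
    "an AI tortures your grandmother": Fraction(1, 10**12),
    "everything else": 1 - Fraction(1, 6) - Fraction(1, 10**12),
}

assert all(p >= 0 for p in space.values())
assert sum(space.values()) == 1   # the certain event has probability exactly 1
print(sum(space.values()))        # 1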

(And yes, I know, there isn't a large enough :goonsay: in the world.)

90s Cringe Rock
Nov 29, 2006
:gay:
His charity has literally claimed "8 lives saved for every dollar donated." Somehow.

ArchangeI
Jul 15, 2010

Lottery of Babylon posted:


For someone who has such strong opinions about probability theory, he doesn't seem to have the slightest idea what probability theory looks like. Literally the first thing you do in probability theory is construct "probability spaces" (or more generally measure spaces), which by definition comprise all the possible outcomes within the model you're using, regardless of whether those outcomes are "you roll a 6 on a die" or "an AI tortures your grandmother". It's difficult to explain in words why what he's demanding doesn't make sense because what he's demanding is so incoherent that it's virtually impossible to parse, let alone obey. At best, he's upset that his imagined version of the flavor text of the theorems doesn't match his preferred flavor text; at worst, this is the mathematical equivalent of marching into a literature conference and demanding that everyone discuss The Grapes of Wrath without assuming that it contains text.

(And yes, I know, there isn't a large enough :goonsay: in the world.)

I think the argument is that reality is so hideously complex, with so many possible outcomes to a given situation, that humans cannot possibly be certain they understand the entirety of possible outcomes. Which may well be true, but a) it's equivalent to acknowledging that a frictionless plane does not actually exist in reality, and b) the sum total probability of all possible outcomes is still 1, whether humans are able to perceive all possible outcomes or not.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

ArchangeI posted:

I think the argument is that reality is so hideously complex, with so many possible outcomes to a given situation, that humans cannot possibly be certain they understand the entirety of possible outcomes. Which may well be true, but a) it's equivalent to acknowledging that a frictionless plane does not actually exist in reality, and b) the sum total probability of all possible outcomes is still 1, whether humans are able to perceive all possible outcomes or not.

Yeah, basically this. But in particular, he wants theorems reproven without admitting 0 or 1 as probabilities, which basically doesn't make sense given the way mathematics and proofs work. The universe may be finite in size and finitely divisible, but you'd be mad to demand that mathematicians treat the integers as a finite set.

It reminds me of Conservapedia's weird fixation on "elementary proofs" (proofs which don't use complex numbers, because complex = imaginary = fake liberal conspiracy). Except elementary proofs at least exist and are valid, even if they're not really useful or interesting except as a game to see how far you can go without complex numbers.

He's basically trying to say "Life is complicated!" except he needs to make his cult think he's a genius so he borrows a bunch of terminology he doesn't understand and spends pages and pages making arguments that are fundamentally unsound and then tries to apply his conclusion in ways that don't make sense while telling everyone else that they're idiots.

e:

The French people from Monty Python and the Holy Grail are prime examples; they taunt King Arthur and his knights with offensive insults and catapult animals (and a trojan bunny) at them. And they have reached the Holy Grail at the Castle of Aaaarrrggghhh (however you spell that) before King Arthur and Sir Bedevere do, and prevent them from entering, thus directly defying God, whom King Arthur made clear was the one who set them on their quest. If only King Arthur hadn't killed that famous historian and gotten arrested for it at the end, he and his knights could have brought justice upon them. The frustrating part is, if the grail does give eternal life as it does in Indiana Jones, these guys won't even burn in hell until the machines take over the world. On the other hand, if they happened to have the wrong Grail (one of them did mention earlier that they already had one) and the writer of the Grail's real location had a different castle in mind, they'd most likely age to dust faster than you can say "How do you like them apples, you silly French kniggets?".

Lottery of Babylon fucked around with this message at 15:27 on Apr 7, 2014

Egregious Offences
Jun 15, 2013
What's interesting/concerning is that he admits that he has no formal education about anything he talks about, but that he was "self-taught", and people still think that he's some amazing AI prophet.
He's like Ray Kurzweil, just without a degree and fond of applying Bayesian theory to everything instead of Computer Science.

vaguely
Apr 29, 2013

hot_squirting_honey.gif

Egregious Offences posted:

What's interesting/concerning is that he admits that he has no formal education about anything he talks about, but that he was "self-taught", and people still think that he's some amazing AI prophet.
He's like Ray Kurzweil, just without a degree and fond of applying Bayesian theory to everything instead of Computer Science.

From what people have posted here, he has that impenetrable writing style that people use to look intelligent when they're not actually very bright, and other people look at it and think 'if I can't understand what this guy's talking about, he must be really clever!'

He knows the buzzwords and he can put them together in a form that looks impressive. If you actually know what he's trying and failing to talk about, you realise that he has no clue about anything, but to somebody who wants to think of themselves as a Science Nerd but doesn't actually know a thing about science, it's very impressive.

The Vosgian Beast
Aug 13, 2011

Business is slow
Right-wing members of Less Wrong have also formed their own sister site:

http://www.moreright.net/

Joshlemagne
Mar 6, 2013

Wait a pinball section? A pinball section!? Web comics and fan fiction were bad enough but I think something just broke in my brain. I think I'm going to find an appropriate trope and add a "stuff scribbled on the back of a bar napkin" section. No such thing as notability.

quote:

Players have mixed feelings about Road Show. On the one hand, it's a solid pinball game, with lots of depth and humor to keep players entertained. On the other hand, it feels like an evolutionary amalgamation of Pat Lawlor's earlier games, especially FunHouse — but following blockbusters like The Addams Family and The Twilight Zone is a challenge for anyone. Regardless, Red & Ted's blue-collar humor resonated with a lot of players, and the table remains highly popular in truck stops and rural establishments.

I think by "players" the person who wrote this meant "me".

The Vosgian Beast
Aug 13, 2011

Business is slow

Joshlemagne posted:

Wait a pinball section? A pinball section!? Web comics and fan fiction were bad enough but I think something just broke in my brain. I think I'm going to find an appropriate trope and add a "stuff scribbled on the back of a bar napkin" section. No such thing as notability.

http://tvtropes.org/pmwiki/pmwiki.php/Literature/TheUglyBarnacle

This is a joke page.

Afraid of Audio
Oct 12, 2012

by exmarx

The Baby Trap posted:

Someone in a relationship deliberately causes a pregnancy without their partner's consent, usually by lying about or sabotaging birth control, in order to bind their partner to them. The character's motivations run the gamut from understandable to reprehensible. Sometimes they're just clingy and/or desperate to get hitched or have a child; other times they feel the relationship is on the rocks and believe that babies make everything better. In accordance with the law of inverse fertility, attempting this even once will invariably result in pregnancy, with all attendant drama.

Oh hey there's a real life section!

quote:

In a recent case in Texas, a man was required to pay child support after his girlfriend performed oral sex on him and then inseminated herself with the gathered sperm after he left.

Spermjacking, yay!

quote:

Does happen in Real Life, tragically more often with abusive couples who believe that a baby will force their partner to stay. Sadly, they're sometimes right, especially when their partner would have financial troubles or lose custody of the child if they left. Worse still, some women will deliberately get pregnant to get Child Support off of the man, since it tends to also qualify her for other forms of government assistance (assuming she's not too wealthy). While laws are slowly becoming more gender neutral, the man will probably lose anyway if the girl was someone he just had a fling with, and so this is exploited by some women. The CDC's 2010 Intimate Partner and Sexual Violence Survey found that approximately 4.8% of women and 8.7% of men had a sexual partner who tried to get them pregnant or get pregnant against their will.

quote:

Possibly related to this, there is an increasing number of men who have chosen to get vasectomies in their early twenties. Since the childfree are still something of an Acceptable Target, many doctors will outright refuse to perform them under the belief that the man will always change his mind. Men, who have to move hell and earth to convince the doctor otherwise, will often cite this in their reasons, amongst other medical, financial, and social reasons.

quote:

It's not easy for a woman who hasn't had children to convince a doctor to permanently sterilise her either. Many doctors are very insistent that everyone really wants babies, even if they don't know it.

Childfree hardcore!

quote:

Unfortunately common in the Visual Kei and Japanese rock/metal scene and with hosts, where fangirls and mitsu (women paying hosts/musicians for sex) have found out that under Japanese law, they have 100 percent rights to the child (even if the father actually wants the child and/or the mother is abusive), can often force a Shotgun Wedding to the artist or at the very least demand ongoing child support payments (which often itself forces a wedding and short marriage, if the man was dependent on them to begin with), and because many men in Japan, including hosts and musicians, prefer condomless sex in the first place. Smarter cisgender men who work as hosts/do mitsu/sleep around do keep their own condoms and use them, or get vasectomies, for this very reason.

What the gently caress!

Penny Paper
Dec 31, 2012

Joshlemagne posted:

Wait a pinball section? A pinball section!? Web comics and fan fiction were bad enough but I think something just broke in my brain.

Really? Then don't look here http://tvtropes.org/pmwiki/pmwiki.php/Radar/Pinball

quote:

I think I'm going to find an appropriate trope and add a "stuff scribbled on the back of a bar napkin" section.

I think the only things that would go in there would be "phony phone numbers given to sleazy men in bars" and "story notes J.K. Rowling did when she first came up with Harry Potter."

Wheat Loaf
Feb 13, 2012

by FactsAreUseless

Penny Paper posted:

I think the only things that would go in there would be "phony phone numbers given to sleazy men in bars" and "story notes J.K. Rowling did when she first came up with Harry Potter."

And "Bartlett For America".

Joshlemagne
Mar 6, 2013

Penny Paper posted:

I think the only things that would go in there would be "phony phone numbers given to sleazy men in bars" and "story notes J.K. Rowling did when she first came up with Harry Potter."

C'mon now. My masterpiece "Crudely drawn woman with big boobies" has at least a dozen tropes that could easily apply to it.

Runcible Cat
May 28, 2007

Ignoring this post

Djeser posted:

So, since the AI has a way to apply punishment to people in the past, it starts simulating everyone in the past. That way, it learns which people could have donated more to AI research, and then it tortures their simulations. By torturing their simulations, it provides incentive for the real person in the past to optimally donate to AI research.

Why wouldn't it just be really really nice to its simulations to make up for their originals' suffering before it sorted out the world's problems? What with being super-benevolent and that?

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Runcible Cat posted:

Why wouldn't it just be really really nice to its simulations to make up for their originals' suffering before it sorted out the world's problems? What with being super-benevolent and that?

Because by torturing simulations it is encouraging the people in the past to have already donated more money towards its creation so that it could already have been created sooner and therefore have already solved all the world's problems sooner and reduced suffering accordingly.

If that sounds stupid to you then you obviously don't understand Timeless Decision TheoryTM. Let me use this example: imagine I come up to you and say "Hey, I just flipped a coin and it came up tails, if it had come up heads then I would have given you $15 on the condition that you agree that you would have given me $5 if it had come up tails and according to my simulations you would have accepted that offer, therefore you owe me $5". If you decline to pay me $5 upon demand, then my simulations in the alternate timeline would have revealed that you would have reneged on your deal and I would not have given you $15, which means that you are foolishly reducing the expected value of your profit by not giving me money in this timeline in exchange for hypothetical money in an alternate timeline that already didn't happen. It is therefore in your best interest to give me $5 for nothing now, because being the sort of person who would do so increases the expected value of your wealth across all timelines.

Timeless Decision Theory works the same way.
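
(If you want the arithmetic spelled out, here's a toy version of that $15/$5 coin-flip "deal"; the payoffs are the ones from the example above, everything else is made up.)

code:

# "Pay up if asked" gets +15 on heads (you'd have been rewarded) and -5 on tails.
# "Refuse" gets 0 either way.
pay_up = {"heads": 15, "tails": -5}
refuse = {"heads": 0, "tails": 0}

def expected_value(policy, p_heads=0.5):
    return p_heads * policy["heads"] + (1 - p_heads) * policy["tails"]

print(expected_value(pay_up))   # 5.0 -- "across all timelines" the payer comes out ahead
print(expected_value(refuse))   # 0.0

# But the coin already came up tails. Conditioned on the world you actually live in:
print(pay_up["tails"])          # -5
print(refuse["tails"])          #  0

Which is the whole joke: the "expected value" only looks like an improvement if you insist on averaging over a timeline that has already failed to happen.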

WickedHate
Aug 1, 2013

by Lowtax
That was really confusing, so I don't know if it was meant to be confusing to get across how stupid it is or if it actually did explain why an AI thinks torturing people in the present will change the minds of people who already made up their mind and are now long dead.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

WickedHate posted:

That was really confusing, so I don't know if it was meant to be confusing to get across how stupid it is or if it actually did explain why an AI thinks torturing people in the present will change the minds of people who already made up their mind and are now long dead.

It's both. Some things are stupid enough that the best argument against them is stating their position accurately.

The general idea in Timeless Decision Theory is that you personally should make your decisions in a way that improves your average expected outcome across all timelines, including those that have already been rendered impossible because they contradict something that happened in the past, because you might actually be in a simulation the AI is using to determine how to treat the real you (or, if you are the real you, because you want simulations of you to make predictions that will cause the AI to treat you favorably).

Here's how it applies to the case of the AI torture puppet show: if you believe there is a high probability that you are in a computer simulation and will be tortured for not donating, then you should donate. Or, if you believe that if you do not donate then a future AI will torture simulated copies of you, then you should donate due to the "simulations of you are you" pseudo-immortal soul stuff. So if the AI will torture you in the future, you should donate.

Torturing simulated people seems stupid and has no immediate benefit in the current timeline, so one might expect that the AI wouldn't bother. But if nobody expects the AI to torture anyone, then there will be no fear-based motivation to donate, so fewer people will donate and the AI won't exist for longer, causing more suffering to occur before the singularity solves all the world's problems. So the AI wants to be the sort of AI that would torture people so that people in the past will expect it to be torturous and will donate out of fear. Since the AI obviously subscribes to Yudkowsky's "Timeless Decision Theory" (because Yudkowsky is a genius and a superior rational being will recognize his logic as correct), it needs to torture in the future to create fear in the past.

This, of course, assumes that the people in the past have a Laplace's demon-style perfect simulation of the future AI and can predict its actions accurately. Which, you know, we can't. All of Yudkowsky's TDT arguments presuppose the existence of an AI who can perfectly simulate and predict your behavior playing against you - but here ordinary humans are expected to be taking on the role of perfectly predicting a hypothetical AI's hypothetical future behavior. I don't think Yudkowsky has an explanation for this beyond "Well we're smart and know a lot about computers so obviously our predictions about AI are correct, so the AI would need to actually torture people in the future in order to have made us think it would torture people."

If what I've written doesn't make any sense to you then you've probably understood it perfectly.

WickedHate
Aug 1, 2013

by Lowtax
But...the AI would already be created. Even if it tortures simulations, that won't make it have been created any faster. The best it can possibly hope for is that people will have believed this theory. Actually going through with it will accomplish nothing!

LeastActionHero
Oct 23, 2008
That's because you're thinking with time.
Basically, it's applying 'logic' based on a couple of classic paradoxes about applying logic to things that happen over time. Here's one example:

quote:

A prisoner is sentenced to death "by surprise hanging", to be carried out one morning next week. The prisoner thinks to himself: "Hey, if it's Thursday and I haven't been hanged, then I know I'll be killed Friday, and it won't be a surprise. Therefore I can't be hanged on Friday. And if it's Wednesday and I haven't been hanged, they must be planning on hanging me Thursday (because Friday is right out). But then it's not a surprise, so I can't be hanged on Thursday."

Going through the days, the prisoner concludes that there is no day when he might be hanged, and therefore he cannot be killed by the judge's orders.

They forget the last line of this paradox, which illustrates what would actually happen:

quote:

The prisoner is quite surprised when he is hanged on Wednesday morning.

Bear Sleuth
Jul 17, 2011

The way I understood it was that the "simulations of you are you" thing isn't due to mystical soul sharing or anything. It reasons (?) that if you accept that this future AI is powerful enough to make a simulation so realistic (based on the traces you left about yourself on the internet, of course) that the simulated "you" inside it can't tell whether it's real life or a simulation, then perhaps the you who is reading this isn't the real-life you but the simulated one. Suppose the AI makes one simulation; then the chance that you are the simulated one is 50%. But suppose the AI makes a million simulations, or a billion, or 10 billion. Then the chance of you being the real "safe from torture" you is significantly smaller. Like, more likely to get hit by lightning while winning the lottery smaller. In fact, you being the real, non-simulated one would be just about the most unlikely thing in the universe. Probability-wise, you'd better start acting like you're in a simulation about to be tortured by a benevolent AI. Of course, there is still that one version of you that is the original, but you have no way of knowing whether that's you or not, so even the real you should act like a simulation anyway, just in case.

Which means giving all your money to LessWrong.
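
(The arithmetic behind the "hit by lightning while winning the lottery" bit, granting the premise that every simulation counts as a "you": with N simulations plus one original, the chance that you're the original is 1/(N+1). The numbers below are arbitrary.)

code:

# Chance of being the one original among N exact simulations of you,
# granting the (dumb) premise that each simulation counts equally as "you".
def p_original(n_simulations):
    return 1 / (n_simulations + 1)

print(p_original(1))        # 0.5
print(p_original(10**6))    # ~1e-06
print(p_original(10**10))   # ~1e-10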

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

WickedHate posted:

But...the AI would already be created. Even if it tortures simulations, that won't make it have been created any faster. The best it can possibly hope for is that people will have believed this theory. Actually going through with it will accomplish nothing!

TDT relies heavily on the idea that an AI can predict a human's actions so accurately that your own future actions can directly influence the AI's actions in the past. Here, they reverse it and seem to believe that we can predict the AI's actions so accurately that the AI's future actions can directly influence our actions in the past. Yes, the AI has no incentive to torture people, and we can see that, so we shouldn't donate - therefore the AI should torture people so that we will have incentive to donate. And it needs to actually go through with the torture, even once it exists and the torture will no longer actually be useful, because if it doesn't actually go through with it then those of us who understand AI very well (i.e. Yudkowsky, obviously) wouldn't truly believe in the threat of torture.

It will only accomplish nothing in this timeline. But if all AIs in all timelines adopt a policy of torture, and we humans in the present realize that they will do so and can predict this perfectly, then the world will be probabilistically weighted toward timelines in which more money is donated and the AI arises sooner, which is favorable to the AI, which means that the AI should torture simulated people.

Basically, they believe that all parties are so acutely aware of all pasts, futures, possible futures, alternate presents, alternate pasts, and so on that non-iterated "games" are virtually identical to iterated games and should be played as if they are iterated. This isn't actually the case; we humans do not have that ability, nor does any extant computer. Yudkowsky thinks he's being very smart by creating a superior decision theory that takes into account alternate timelines; instead, he's being very stupid by failing to distinguish non-iterated games from iterated games and throwing away the information gained from events that have already occurred.
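
(A toy one-shot version of the game, to make the "throwing away information from events that have already occurred" point concrete; the payoff numbers are invented.)

code:

# One-shot game: the human has ALREADY chosen how much to donate by the time
# the AI exists. The AI then chooses whether to torture simulations.
# Torture burns resources and, in a one-shot game, changes nothing about
# the donation decision that was already made.
AI_PAYOFF = {
    # (human_donated, ai_tortures): AI's payoff
    (True, False): 10,
    (True, True): 9,
    (False, False): 0,
    (False, True): -1,
}

def best_response(human_donated):
    # The AI moves last and the human's choice is fixed, so it just picks
    # whichever action pays more now.
    return max([False, True], key=lambda torture: AI_PAYOFF[(human_donated, torture)])

print(best_response(True))    # False -- no torture
print(best_response(False))   # False -- still no torture

# "Torture" is strictly dominated in the one-shot game; it only looks attractive
# if you pretend the game is iterated, or perfectly predicted in advance.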
