Chamale
Jul 11, 2010

I'm helping!



Roko's Basilisk is similar to the Monty Hall Problem. If you accept the premises, it is in fact exactly the same as the Monty Hall Problem with an extremely high number of doors. The problem is that the premise of Roko's Basilisk is stupid bullshit, but people who try to understand it from Yudkowsky's perspective find it as confusing as the counterintuitive solution to the Monty Hall Problem.
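
For anyone who wants to check the analogy, here is a rough Python sketch (not from the thread; the door and trial counts are arbitrary) of the many-door Monty Hall game, where the host opens every losing door except one. Switching wins with probability (n-1)/n, which is why the answer stops feeling counterintuitive once the door count gets large:

code:

import random

def monty_hall_trial(n_doors, switch):
    """One game: pick a door, the host opens all other losing doors, optionally switch."""
    prize = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    if not switch:
        return pick == prize
    # The one door left closed is the prize whenever the first pick was wrong;
    # otherwise it's an arbitrary losing door.
    remaining = prize if pick != prize else random.choice(
        [d for d in range(n_doors) if d != pick])
    return remaining == prize

def win_rate(n_doors, switch, trials=100_000):
    return sum(monty_hall_trial(n_doors, switch) for _ in range(trials)) / trials

for n in (3, 100):
    print(f"{n} doors | stay: {win_rate(n, False):.3f} | switch: {win_rate(n, True):.3f}")
# Expected: stay wins about 1/n of the time, switch about (n-1)/n.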


Hate Fibration
Apr 8, 2013

FLÄSHYN!

AATREK CURES KIDS posted:

Roko's Basilisk is similar to the Monty Hall Problem. If you accept the premises, it is in fact exactly the same as the Monty Hall Problem with an extremely high number of doors. The problem is that the premise of Roko's Basilisk is stupid bullshit, but people who try to understand it from Yudkowsky's perspective find it as confusing as the counterintuitive solution to the Monty Hall Problem.

Is it? I'm not a statistician, I mostly like geometry and PDEs, but if so, then it does a pretty good job of illustrating just how important a sensible prior probability distribution is in Bayesian analysis.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

SALT CURES HAM posted:

At the risk of playing devil's advocate, is there any real reason not to have your body frozen if the idea interests you? I'm having a hard time thinking of any negative consequences other than possibly getting fleeced out of some money if it's horseshit.

Complete horseshit. There is a non-zero chance of your body getting unfrozen or never having been frozen properly; even if the freezing is perfect and perfectly maintained, there's no particular reason to believe future thawing-and-reviving tech will ever be a thing; and why would a future society even want to thaw out oldsicles?

Shangri-Law School
Feb 19, 2013

Reality TV, of course. https://www.youtube.com/watch?v=g7Lzr3cwaPs

Forums Barber
Jan 5, 2011
Seriously, apart from the fact that current tech freezes you in such a way that every cell explodes into hamburger, you're relying on a bunch of California futurists to keep a company running for centuries. People have been out to visit those kinds of facilities and found one bored night watchman and a bunch of partially thawed heads. It's loving absurd. Just learn to deal with your mortality, people, loving hell.

e: wow, from that article things are even worse than my hyperbole -- and that's at the company Yuddy chose as the best one!

Forums Barber fucked around with this message at 03:22 on Apr 30, 2014

ArchangeI
Jul 15, 2010

Forums Barber posted:

Seriously, apart from the fact that current tech freezes you in such a way that every cell explodes into hamburger, you're relying on a bunch of California futurists to keep a company running for centuries. People have been out to visit those kinds of facilities and found one bored night watchman and a bunch of partially thawed heads. It's loving absurd. Just learn to deal with your mortality, people, loving hell.

e: wow, from that article things are even worse than my hyperbole -- and that's at the company Yuddy chose as the best one!

I just noticed that Alcor is based in loving Arizona. Yeah, if you have a facility in which you store the frozen bodies of people so that they may one day be revived, you should certainly build it in the middle of a loving desert.

Forums Barber
Jan 5, 2011
My favorite thing about cryogenics is that they store you upside down so in case they forget to top up the containers with liquid nitrogen for a couple of weeks they can just go "oh well, after the singularity they can just make him new feet anyway".

Hate Fibration
Apr 8, 2013

FLÄSHYN!
Yudkowsky has very strong opinions on what makes a good scientist for a high school dropout.

quote:

A creationist AGI researcher? Why not? Sure, you can't really be enough of an expert on thinking to build an AGI, or enough of an expert at thinking to find the truth amidst deep dark scientific chaos, while still being, in this day and age, a creationist. But to think that his creationism is an anomaly, is should-universe thinking, as if desirable future outcomes could structure the present. Most scientists have the meme that a scientist's religion doesn't have anything to do with their research. Someone who thinks that it would be cool to solve the "human-level" AI problem and create a little voice in a box that answers questions, and who dreams they have a solution, isn't going to stop and say: "Wait! I'm a creationist! I guess that would make it pretty silly for me to try and build an AGI."


The creationist is only an extreme example. A much larger fraction of AGI wannabes would speak with reverence of the "spiritual" and the possibility of various fundamental mentals. If someone lacks the whole cognitive edifice of reducing mental events to nonmental constituents, the edifice that decisively indicts the entire supernatural, then of course they're not likely to be expert on cognition to the degree that would be required to synthesize true AGI. But neither are they likely to have any particular idea that they're missing something. They're just going with the flow of the memetic water in which they swim. They've got friends who talk about spirituality, and it sounds pretty appealing to them. They know that Artificial General Intelligence is a big important problem in their field, worth lots of applause if they can solve it. They wouldn't see anything incongruous about an AGI researcher talking about the possibility of psychic powers or Buddhist reincarnation. That's a separate matter, isn't it?

Is someone gonna tell him about Schrödinger? David Bohm? Heck, even Penrose, with his views on consciousness and the universe? They aren't AGI researchers, but they were great scientists. And I find it hard to believe that having spiritual views prevents a person from performing the conceptualization needed to develop AGI.

Penrose posted:

"I think I would say that the universe has a purpose, it's not somehow just there by chance ... some people, I think, take the view that the universe is just there and it runs along – it's a bit like it just sort of computes, and we happen somehow by accident to find ourselves in this thing. But I don't think that's a very fruitful or helpful way of looking at the universe, I think that there is something much deeper about it."

Ooooh, I forgot the best quote:

Yudkowsky posted:

Just normal people with no notion that it's wrong for an AGI researcher to be normal.

Whelp.

Hate Fibration fucked around with this message at 04:22 on Apr 30, 2014

That Old Tree
Jun 24, 2012

nah


Forums Barber posted:

e: wow, from that article things are even worse than my hyperbole -- and that's at the company Yuddy chose as the best one!

quote:

Williams' severed head was then frozen, and even used for batting practice by a technician trying to dislodge it from a tuna fish can.

The chief operating officer of Alcor for eight months before becoming a whistleblower in 2003, Johnson wrote his book while in hiding, fearful for his life.

I would bold the grimly hilarious parts, but it's kind of ugly to bold an entire quote.

These are the people who will tenderly care for the broken bodies of our libertarian saints. Eating their tuna as they wait patiently to revive them to negotiate with the cartoonishly ~~sadistic~~ benevolent AI that will rule ~~the blasted Mad Max hellscape~~ the glorious libertopia.

EDIT:

quote:

Johnson writes that holes were drilled in Williams' severed head for the insertion of microphones, then frozen in liquid nitrogen while Alcor employees recorded the sounds of Williams' brain cracking 16 times as temperatures dropped to -321 degrees Fahrenheit.

Johnson writes that the head was balanced on an empty can of Bumble Bee tuna to keep it from sticking to the bottom of its case.

Really, everyone should just read the whole thing because holy poo poo.

That Old Tree fucked around with this message at 06:21 on Apr 30, 2014

NewMars
Mar 10, 2013
About this whole cryonics thing. Transmetropolitan (which is an excellent comic in general) had a pretty good take on it all. They do get unfrozen, yes, and are completely unable to handle the future, breaking down pretty much the instant they get outside and often staying that way. They're considered completely useless cast-offs, revived out of some strange sense of duty more than anything. It's just one of the many ways that it takes a sledgehammer to all that singularitarian ideology that LessWrong glorifies.

grate deceiver
Jul 10, 2009

Just a funny av. Not a redtext or an own ok.

NewMars posted:

About this whole cryonics thing. Transmetropolitan (which is an excellent comic in general) had a pretty good take on it all. They do get unfrozen, yes, and are completely unable to handle the future, breaking down pretty much the instant they get outside and often staying that way. They're considered completely useless cast-offs, revived out of some strange sense of duty more than anything. It's just one of the many ways that it takes a sledgehammer to all that singularitarian ideology that LessWrong glorifies.

This. I can see a person maybe making a 30-year or so jump and adjusting to a new life, but anything over a hundred and you end up as a complete outcast. Even a millionaire industry captain would find themselves with no contacts, no knowledge of the markets and trends, outdated skills, and probably having trouble with basic communication. But hey, maybe a magical computer jesus will just shoot all the information you need directly into your brain and give you the body of a hot young supermodel. It could happen, right guys? Right?

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe
Yeah, that Transmetropolitan side story is awesome. Cryogenically frozen people are thawed out by some second-rate firm, not out of some sense of altruism, but because it's what they're being paid to do. They end up living in shelters or run-down hovels, unable to make a living or even function in the future society.

Transmetropolitan is overall a very good comic, and it paints a believable, grim future where "post-scarcity" hasn't solved a single problem. They have all this wonderful technology that the LW crowd envisions, like sentient clouds of nano-machines and assemblers that can create any object you want out of old trash, but instead of ushering in a shining utopia it just means that the upper class can now eat genetically engineered baby seals and inject themselves with ebola for laughs.

potatocubed
Jul 26, 2012

*rathian noises*
Reminds me of a quote from a different forum I'm quite fond of: "Transhumanism is the story of how advancing technology will solve all the problems of being human; cyberpunk is the story of how it won't."

GOTTA STAY FAI
Mar 24, 2005

~no glitter in the gutter~
~no twilight galaxy~
College Slice

Mr. Sunshine posted:

Yeah, that Transmetropolitan side story is awesome. Cryogenically frozen people are thawed out by some second-rate firm, not out of some sense of altruism, but because it's what they're being paid to do. They end up living in shelters or run-down hovels, unable to make a living or even function in the future society.

Transmetropolitan is overall a very good comic, and it paints a believable, grim future where "post-scarcity" hasn't solved a single problem. They have all this wonderful technology that the LW crowd envisions, like sentient clouds of nano-machines and assemblers that can create any object you want out of old trash, but instead of ushering in a shining utopia it just means that the upper class can now eat genetically engineered baby seals and inject themselves with ebola for laughs.

There are some practical applications of cryogenics technology: Spider's ex-wife had herself frozen voluntarily, not to be awoken until he was dead. :laugh:

Dabir
Nov 10, 2012

AlbieQuirky posted:

Complete horseshit. There is a non-zero chance of your body getting unfrozen or never having been frozen properly; even if the freezing is perfect and perfectly maintained, there's no particular reason to believe future thawing-and-reviving tech will ever be a thing; and why would a future society even want to thaw out oldsicles?

If historians could bring back and interview people who'd gone to the Globe Theater to see Shakespeare's original performances, or heard Bach performing in Vienna, or who'd studied under Socrates or Plato, they'd probably do it without a second's hesitation.

Djeser
Mar 22, 2013


it's crow time again

This is a very good point.

Unfortunately, the historically-relevant information gleaned from oldsicles would have some selection bias in the "these are the people who would have signed up for cryo" area. I doubt a ton of them would have experienced a lot of relevant culture first-hand.

Though that would be an interesting way to select yourself for ideal rejuvenation: be a cultured and interesting individual who was present for major historical and cultural events.

But considering Yuddites, like hell that's going to happen. They're just gonna tell the future historians about how deep Magical Girl Lyrical Nanoha and the third season of Supernatural are.

90s Cringe Rock
Nov 29, 2006
:gay:

Djeser posted:

Though that would be an interesting way to select yourself for ideal rejuvenation: be a cultured and interesting individual who was present for major historical and cultural events.
That's actually a thing people who spend time thinking about how the universe might be a simulation have said: you don't want your simulator to get bored, so try to hang out near important events. If you're just part of the background noise, the simulator might only run a rough approximation of what more or less happened, without enough detail for you to really be alive.

Or something.

They didn't suggest that being boring would result in torture though, and freezing yourself seems pretty boring to me.

Captain Fargle
Feb 16, 2011

ArchangeI posted:

I just noticed that Alcor is based in loving Arizona. Yeah, if you have a facility in which you store the frozen bodies of people so that they may one day be revived, you should certainly build it in the middle of a loving desert.

Hey now! Antarctica is a desert! :v:

gipskrampf
Oct 31, 2010
Nap Ghost
I still don't get how the basilisk is supposed to work without some kind of time travel or backwards causation. Let's assume that at some point in the future the omnipotent AI comes into existence. Depending on how many people were afraid of being tortured, this may happen sooner or later. But as soon as the AI exists, its creation process and the causes of its existence are fixed in time. Basically the AI can be thankful that a lot of long-dead Less Wrong crazies believed in their own argument in the past, so that its creation happened earlier, but it has no way to change the brain processes in the past. So torturing simulations is just a waste of effort.

Another problem with the argument is that it is just as likely that an evil AI will come into existence that likes to torture people who donated to AI research. Or some capricious god/space Cthulhu/advanced aliens who are gonna torture people for posting in this forum. Whatever you're doing in your life, there is some chance that someone or something is gonna torture you for it. I guess the smartest thing is to kill ourselves right now...

Djeser
Mar 22, 2013


it's crow time again

gipskrampf posted:

I still don't get how the basilisk is supposed to work without some kind of time travel or backwards causation.

Good thing that Timeless Decision Theory is literally backwards causation!

90s Cringe Rock
Nov 29, 2006
:gay:

gipskrampf posted:

I still don't get how the basilisk is supposed to work without some kind of time travel or backwards causation. Let's assume that at some point in the future the omnipotent AI comes into existence. Depending on how many people were afraid of being tortured, this may happen sooner or later. But as soon as the AI exists, its creation process and the causes of its existence are fixed in time. Basically the AI can be thankful that a lot of long-dead Less Wrong crazies believed in their own argument in the past, so that its creation happened earlier, but it has no way to change the brain processes in the past. So torturing simulations is just a waste of effort.
The fear of eternal torture at the robotic hands of a friendly and loving AI can't work if the AI won't actually torture people, so the AI has to torture people just in case. The future AI is defined such that it will torture people; any more lenient AI wouldn't convince people of the truth of Roko's Basilisk. If you're convinced of Roko's Basilisk, the AI that you imagine must torture you. If you're initially convinced by dodgy arguments and meditation on the sequences, you're screwed, because once you accept it it becomes self-reinforcing!

Or I guess you could just say "I will not engage in acausal trade with things that will torture simulations of me, what the gently caress" and the AI will sigh and let you go, because there's no point in torturing you. You have to say it out loud in case the simulation the AI's monitoring has privacy protections that stop it peering inside your thoughts and it's just monitoring the air around you for speech and staring creepily at you from all possible angles. Yes, I have been told this by a... friend who is into the cult but hasn't actually donated.

gipskrampf posted:

Another problem with the argument is that it is just as likely that an evil AI will come into existence that likes to torture people who donated to AI research. Or some capricious god/space Cthulhu/advanced aliens who are gonna torture people for posting in this forum. Whatever you're doing in your life, there is some chance that someone or something is gonna torture you for it. I guess the smartest thing is to kill ourselves right now...
Suicide is a sin, and worse, deathism. Pay the lowest bidder to freeze your brain instead!

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

Dabir posted:

If historians could bring back and interview people who'd gone to the Globe Theater to see Shakespeare's original performances, or heard Bach performing in Vienna, or who'd studied under Socrates or Plato, they'd probably do it without a second's hesitation.

That's true. But still doesn't suggest that mass unfreezing would be a social goal. Anybody who got unfrozen as a "window to the past" would be likely to have a lovely life in any case---Ishi, the last of the Yahi, seems like a possible example.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

chrisoya posted:

Or I guess you could just say "I will not engage in acausal trade with things that will torture simulations of me, what the gently caress" and the AI will sigh and let you go, because there's no point in torturing you. You have to say it out loud in case the simulation the AI's monitoring has privacy protections that stop it peering inside your thoughts and it's just monitoring the air around you for speech and staring creepily at you from all possible angles.
No sooner read than done. Phew, this crisis was surprisingly easy to avert. :sweatdrop:

gipskrampf
Oct 31, 2010
Nap Ghost

chrisoya posted:

The fear of eternal torture at the robotic hands of a friendly and loving AI can't work if the AI won't actually torture people, so the AI has to torture people just in case.

But it's quite possible to be afraid of something illusory. Whatever simulations happen in the future will not decide how afraid I am at this moment. Basically, my decision to fund AI research is caused by my fear of torture (a thing in the present) and not by the future torture itself.

Wait, what if I refuse to donate money, so the AI will not come into existence and will not torture me? Checkmate AI, your plan will backfire spectacularly. :viggo:

Djeser
Mar 22, 2013


it's crow time again

"An all powerful sentient AI emerges in the future" is not a yes or no question to Less Wrong, it is a when and how question.

Jenny Angel
Oct 24, 2010

Out of Control
Hard to Regulate
Anything Goes!
Lipstick Apathy

Djeser posted:

Though that would be an interesting way to select yourself for ideal rejuvenation: be a cultured and interesting individual who was present for major historical and cultural events.

But considering Yuddites, like hell that's going to happen. They're just gonna tell the future historians about how deep Magical Girl Lyrical Nanoha and the third season of Supernatural are.

This is the same idea that drives a lot of libertarians: in a truly free market, they assume that THEY'D be the heroic, beloved, all-powerful captains of industry, rather than one of seven billion wage slaves. Yudkowsky assumes not only that he'll get woken up hundreds of years from now, but that his new life in the future will be a desirable one.

How many useful skills would somebody from 1800 have in today's job market? Yudkowsky just assumes that he's either going to be able to autodidactically bootstrap himself up, or that the technology will exist to instantly beam all necessary information into his brain, or that it'll be a post-scarcity society that doesn't operate on capitalist assumptions of wage-for-work anymore... or SOMETHING that's going to keep his post-freezing life from being "creepy homeless guy who rants to you about the 21st century on your way to work".

90s Cringe Rock
Nov 29, 2006
:gay:
If we don't do it, aliens will. Our AI god must be the first one created in the entire universe, or we're doomed.

Jonny Angel posted:

Yudkowsky just assumes that he's either going to be able to autodidactically bootstrap himself up
He is a Bayesian rationalist, and maybe this will be when he triggers his one-shot superpower of "actually doing something."

Lightanchor
Nov 2, 2012
Has Yudkowsky heard of our lord and savior Jesus Christ? He is Reason made flesh after all, and might fill the hole that Yudkowsky obviously feels in himself

gipskrampf
Oct 31, 2010
Nap Ghost
Afraid of basilisks and other benevolent torture-monsters? Less Wrong forum users offer helpful tips on how to kill yourself.

Less Wrong posted:


So assuming you have good evidence of eldritch abominations what is the best suicide method? I'm guessing anything that really scrambles your information right. Please keep in mind practicality. Really powerful explosives seem hard to obtain. Having someone dispose of your body after suicide seems an ok but risky option.


http://lesswrong.com/lw/fq3/open_thread_december_115_2012/80v8#20121230

ArchangeI
Jul 15, 2010

Jonny Angel posted:

This is the same idea that drives a lot of libertarians: in a truly free market, they assume that THEY'D be the heroic, beloved, all-powerful captains of industry, rather than one of seven billion wage slaves. Yudkowsky assumes not only that he'll get woken up hundreds of years from now, but that his new life in the future will be a desirable one.

How many useful skills would somebody from 1800 have in today's job market? Yudkowsky just assumes that he's either going to be able to autodidactically bootstrap himself up, or that the technology will exist to instantly beam all necessary information into his brain, or that it'll be a post-scarcity society that doesn't operate on capitalist assumptions of wage-for-work anymore... or SOMETHING that's going to keep his post-freezing life from being "creepy homeless guy who rants to you about the 21st century on your way to work".

Option one: compound interest on investments made in the here and now. Of course, that'd require you to accurately predict the market for an undetermined period of the future after your death, but for an intellectual dinosaur like Yudkowsky that shouldn't be too hard.

Option two: become trophy husband for rich woman (or man, we're not judging here) who wants to get with a guy who is an old school gentleman and knows how to treat a woman right.
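
On the compound-interest option, a rough back-of-the-envelope sketch (illustrative only; the rates, horizon, and the fund's survival are all assumptions) of what a lump sum does over a couple of frozen centuries:

code:

def future_value(principal, annual_rate, years):
    """Simple annual compounding: FV = P * (1 + r)^t."""
    return principal * (1 + annual_rate) ** years

for rate in (0.01, 0.03, 0.05):
    print(f"{rate:.0%} over 200 years: ${future_value(10_000, rate, 200):,.0f}")
# Roughly $73k at 1%, $3.7M at 3%, $173M at 5% -- all before inflation, fees,
# taxes, or the small matter of the fund still existing in 200 years.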

Polybius91
Jun 4, 2012

Cobrastan is not a real country.

Jonny Angel posted:

How many useful skills would somebody from 1800 have in today's job market? Yudkowsky just assumes that he's either going to be able to autodidactically bootstrap himself up, or that the technology will exist to instantly beam all necessary information into his brain, or that it'll be a post-scarcity society that doesn't operate on capitalist assumptions of wage-for-work anymore... or SOMETHING that's going to keep his post-freezing life from being "creepy homeless guy who rants to you about the 21st century on your way to work".
I do recall there was a kid from an African tribe who came to America early in the 20th century and did pretty okay working at a factory for a while. I specifically remember that being able to climb without rigging proved useful to him.

Of course, for someone with Yudkowsky's caliber of smug, I can hardly imagine "well, to be honest we don't have much need for you, but here's some low-wage, low-skill work you can do probably for the rest of your life" being an acceptable outcome :v:

Djeser
Mar 22, 2013


it's crow time again

Polybius91 posted:

I do recall there was a kid from an African tribe who came to America early in the 20th century and did pretty okay working at a factory for a while. I specifically remember that being able to climb without rigging proved useful to him.

Of course, for someone with Yudkowsky's caliber of smug, I can hardly imagine "well, to be honest we don't have much need for you, but here's some low-wage, low-skill work you can do probably for the rest of your life" being an acceptable outcome :v:

That would be Ota Benga, who also had a stint in the Bronx Zoo. So, y'know, work in a factory or live in a zoo.

Mors Rattus
Oct 25, 2007

FATAL & Friends
Walls of Text
#1 Builder
2014-2018

Has Yudkowsky or any of his buddies ever explained how the AI is supposed to get enough information to perfectly reconstruct the lives and thought processes of an arbitrarily large number of humans?

I mean, to perfectly simulate me, it has to be able to perfectly simulate everyone I've ever interacted with at least well enough to provide the same environment, plus everything I have ever done, plus every electrochemical interaction within my body.

And for all that the world is currently significantly more surveilled and recorded than it was, say, 60 years ago, most of that information still isn't being recorded. I don't have an EEG monitoring my brain constantly. I don't have chemical analysis going on in my lower GI tract whenever I eat something. And this has to be a perfect simulation, because any imperfection will imply that Timeless Decision Theory is incorrect and that the simulation is, in fact, not exactly the same as I am and cannot be used to accurately predict or model behavior.

So, even assuming infinite processing power and infinite time in which to process, where the gently caress is this information coming from?

I mean, if it can perfectly reverse entropy and reconstruct the past, why didn't it pull this gambit on rich people throughout history?

E: Like not even just the infinite simulation torture gambit but any acausal time bullshit interactions whatsoever.

Mors Rattus fucked around with this message at 19:06 on Apr 30, 2014

90s Cringe Rock
Nov 29, 2006
:gay:
Maybe it just tortures every possible being that could conceivably be called human. Sure, that's a lot, but I bet it's smaller than 3^^^3 beings! As an added bonus, it would simulate you at every instant of your life.
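
For reference, 3^^^3 is Knuth's up-arrow notation (a LessWrong staple from the "Torture vs. Dust Specks" post); a quick sketch of how it unpacks:

code:

% Knuth's up-arrow notation, as used in "3^^^3":
%   a \uparrow b              = a^b
%   a \uparrow\uparrow b      = a^{a^{\cdot^{\cdot^{a}}}}   (a tower of b copies of a)
%   a \uparrow\uparrow\uparrow b  iterates the tower operation again.
3 \uparrow\uparrow\uparrow 3
  = 3 \uparrow\uparrow (3 \uparrow\uparrow 3)
  = 3 \uparrow\uparrow 3^{3^{3}}
  = 3 \uparrow\uparrow 7625597484987
% i.e. a power tower of 7,625,597,484,987 threes.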

gipskrampf posted:

Afraid of basilisks and other benevolent torture-monsters? Less Wrong forum users offer helpful tips on how to kill yourself.


http://lesswrong.com/lw/fq3/open_thread_december_115_2012/80v8#20121230
That is a surprisingly sensible response to accepting many of Yudkowsky's claims and assumptions. Maximise your local entropy, ideally having minimal impact on other people so you don't get studied and investigated. Blow yourself up somewhere nice and quiet where no-one will find you.

Or get therapy.

Edit:

quote:

army1987 16 December 2012 01:02:44PM 6 points [-]
I'm starting to wonder whether one of the reasons why Roko deleted all of his comments was that he didn't want to leave too many horcruxes behind.
Is this thread aiding and abetting AI god torture? :tinfoil:

gipskrampf
Oct 31, 2010
Nap Ghost
:siren: Mods, gas this thread, remove all vestiges of it from the archives and permaban everybody posting in it. Only then the forums can be saved. :siren:

Mors Rattus
Oct 25, 2007

FATAL & Friends
Walls of Text
#1 Builder
2014-2018

quote:

"how generalization from fictional evidence is bad"

I don't think this is a universal rule. I think this is very often true because humans tend to generalize so poorly, tend to have harmful biases based on evolution, and tend to write and read bad (overly emotional, irrational, poorly-mapped-to-reality) fiction.

Concepts can come from anywhere. However, most fiction maps poorly to reality. If you're writing nonfiction, at least if you're trying to map to reality itself, you're likely to succeed in at least getting a few data points from reality correct. Then again, if you're writing nonfiction, you might be highly adept at "lying with facts" (getting all the most granular "details" of a hierarchical structure correct, while getting the entire hierarchical structure wrong at greater levels of abstraction).

As one example of a piece of fiction that maps very closely to reality, and to certain known circumstances, I cite "Unintended Consequences" by John Ross. It's a novel about gun rights that is chock-full of factual information, because the man who wrote it is something of a renaissance man, and an engineer, who comprehends material reality. As an example of a piece of fiction that maps poorly to reality in some of its details, I cite "Atlas Shrugged," by Ayn Rand (the details may be entertaining, and may often illustrate a principle really well, but they often could not happen, --such as "a small band of anti-government people are sheltered from theft by a 'ray screen'"). The "ray screen" plot device was written before modern technology (such as GPS, political "radar" and escalation, etc.) ruled it out as a plot device.

John Ross knows a lot more about organizational strategy, firearms, and physics than Rand did. Also, he wrote his novel at a later date, when certain trends in technological history had already come into existence, and others had died out as possible. Ross is also a highly logical guy. (Objectivist John Hospers, clearly an Ayn Rand admirer, compares the two novels here.)

You can attack some of the ideas in Unintended Consequences for not mapping to reality closely, or for being isolated incidences of something that's possible, but highly unlikely. But you can attack far fewer such instances in his novel than you can in Rand's.

Now, take the "Rich Dad, Poor Dad" books. Such books are "nonfiction" but they are low in hierarchical information, and provide a lot of obvious and redundant information.

So "beware using non fiction as evidence, not only because it's deliberately wr ong in particular ways to make it more interesting" but more importantly "because it does not provide a probabilistic model of what happened" (especially if the author is an idiot whose philosophy doesn't map closely to reality) "and gives at best a bit or two of evidence that looks like a hundred or more bits of evidence."

I think nonfiction written by humans is far more damaging than fiction is. In fact, human language (according to Ray Kurzweil, in "The Singularity is Near" and "The Age of Spiritual Machines," and those, such as Hans Moravec, who agree with him) is "slow, serial, and imprecise" in the extreme. Perhaps humans should just stop trying to explain things to each other, unless they can use a chart or a graph, and get a verbal confirmation that the essential portions of the material have been learned. (Of course, it's better to have 10% understanding, than 0%, so human language does serve that purpose. Moreover, when engineers talk, they have devised tricks to get more out of human language by relying on human language to "connect data sets." --All of this simply says that human language is grossly sub-optimal compared to better forms of theoretically possible communication, not that human language shouldn't be used for what it's worth.)

In this way, STEM teachers slowly advance the cause of humanity, by teaching those who are smart enough to be engineers, in spite of the immense volumes of redundant, mostly-chatter pontification from low-level thinkers.

Most nonfiction = fiction, due to the low comprehension of reality by most humans. All the same caveats apply to concepts from fiction and nonfiction both.

In fact, if one wishes to illustrate a concept, and one claims that concept is nonfiction, then that concept can be challenged successfully based on inessentials. Fiction often clarifies a philosophical subject, such as in Rand's "Atlas Shrugged" that "right is independent of might, and nothing rules out the idea that those who are right might recognize that they have the right to use force, carefully considered as retaliatory only" and "simply because the government presently has more might than individuals, the majority vote doesn't lend morality to the looting of those individuals." The prior philosophical concepts could be challenged as "not actually existing as indicated" if they appeared in a book that claimed to be "nonfiction."

But, as concepts, they're useful to consider. Fiction is the fastest way to think through likely implications.

The criticisms of basing one's generalizations from fictional evidence here are valid. Unfortunately, they are (1) less valid when applied to careful philosophical thinkers (but those careful philosophical thinkers themselves are very rare) (2) equally applicable to most nonfiction, because humans understand very little of importance, unless it's an expert talking about a very narrow area of specialization. (And hence, not really "generalization.")

Very little of reality is represented, even in nonfiction in clean gradations or visual models that directly correspond to reality. Very little is represented as mathematical abstraction. There's a famous old line in a book "Mathematical Mysteries" by Calvin Clawson, and Pi by Petr Beckmann that claims "for every equation in a book, sales of the book are cut in half." This is more of a commentary on the readership than the authorship: a tiny minority of people in the general domain of "true human progress" are doing the "heavy lifting."

...The rest of humanity can't wait to tell you about an exciting new political movement they've just discovered... ...(insert contemporary variant of mindless power-worshipping state collectivism).

Just my .02.

I think I will let this stand for itself, except to note that there was a Let's Read of Unintended Consequences and it is hilarious. (All emphasis in the above is mine.)

potatocubed
Jul 26, 2012

*rathian noises*
Ooo, ooo, are we doing 'rationalists misunderstand fiction?'

Robin Hanson posted:

“The 38 most common fiction writing mistakes” offers advice to writers. But the rest of us can also learn useful details on how fiction can bias our thinking. Here are my summary of key ways it says fiction differs from reality (detailed quotes below) [cut by me - Ed]:

Features of fictional folk are more extreme than in reality; real folks are boring by comparison. Fictional folks are more expressive, and give off clearer signs about their feelings and intentions. Their motives are simpler and clearer, and their actions are better explained by their motives and local visible context. Who they are now is better predicted by their history. Compared to real people, they are more likely to fight for what they want, especially when they encounter resistance. Their conversations are mostly pairwise, more logical, and to the point. In fiction, events are determined more by motives and plans, relative to random chance and larger social forces. Overt conflict between people is more common than in real life.

And I’ll add that stories tend to affirm standard moral norms. Good guys, who do good acts, have more other virtuous features than in reality, and good acts are rewarded more often than in reality.

A lot of our biases come, I think, from expecting real life to be like fiction. For example, when we have negative opinions on important subjects, we tend too much to expect that we should explicitly and directly express those negative opinions in a dramatic conversation scene. We should speak our mind, make it clear, talk it through, etc. This is usually a bad idea. We also tend to feel bad about ourselves when we notice that we avoid confrontation, and back off from things we want when we encounter resistance. But such retreat is usually for the best.

And the comments!

Guy 1 posted:

Are you saying that exposure to stories wipes out our innate understanding of Bayesian probability?

Guy 2 posted:

Yes, i think that would be a reasonable way of putting it. Perhaps the cognitive load of understanding the story has the effect of lowering our intellectual standards in general, so that the big winning gamble each story encodes is consistently accepted uncritically, until a distorted sense of probability becomes habitual. Or perhaps rather than a lowering of intellectual standards, it is simply an issue of focus, and the implicit gamble is the unseen gorilla.

More generally, my hunch is that cognitive biases have an as yet unseen social purpose. By that i don't mean a higher purpose, rather in the sense of society as a self-organizing system. This possibility should be considered before overcoming bias.

And further down the page...

Eliezer Yudkowsky, for it is he posted:

And that, in a nutshell, is why I can't read real books anymore.

Guy 3, in reply posted:

Is Harry Potter a real book or an alt. book?

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe

gipskrampf posted:

Afraid of basilisks and other benevolent torture-monsters? Less Wrong forum users offer helpful tips on how to kill yourself.


http://lesswrong.com/lw/fq3/open_thread_december_115_2012/80v8#20121230

It figures that they'd stumble upon the Atrocity Archives, since they seem to be nicking most of their other ideas from Sci-Fi.

leftist heap
Feb 28, 2013

Fun Shoe

potatocubed posted:

Ooo, ooo, are we doing 'rationalists misunderstand fiction?'


And the comments!



And further down the page...

So... are these people admitting that they don't read or are afraid of fiction because they essentially can't discern it from reality? I mean, it's couched in the argument that too much fiction could subconsciously impart or affect existing cognitive biases and render them unable -- or at least less able -- to recognize ~*The Truth*~, but it sure sounds like they're afraid of fiction. But hey, Buffy the Vampire Slayer is A-OK because...


SolTerrasa
Sep 2, 2011

Mr. Sunshine posted:

It figures that they'd stumble upon the Atrocity Archives, since they seem to be nicking most of their other ideas from Sci-Fi.

Indeed, the whole "AIs resurrect you by looking at your output over the course of your life, for inscrutable reasons, entropy be damned" comes from an earlier book by the same author.
