  • Locked thread
Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
You could probably make a very involved argument about whether a complex system is really all that chaotic when it will end up with, more or less, the same statistical spread every time due to the scales involved, but nevermind. Either way I'm not terribly convinced that those random atom-scale events really do much to influence the cognitive process as a whole.


The Vosgian Beast
Aug 13, 2011

Business is slow

Lottery of Babylon posted:

This is what Yudkowsky thinks science looks like.

Yudkowsky's attempts to be playful are really wince-worthy

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

Look, the reason he thinks you can completely predict what a human will do if you're smart enough is because that's how it worked in Dune and the guy thinks he's a Mentat. Yud, I mean.

SubG
Aug 19, 2004

It's a hard world for little things.

Cardiovorax posted:

The point is that it isn't actually about anyone's behaviour: the mere fact that the contents of the predicting entity's mind change requires a repeat of the prediction process with the new data set to remain perfectly accurate. I know this doesn't make a lot of intuitive sense, but since this is a formal thought experiment, it doesn't have to. It makes sense in terms of computational theory.
It isn't that it doesn't make intuitive sense or that it's a thought experiment; it's that human minds don't work that way. It's a spherical cow argument.

Cardiovorax posted:

That is precisely how Yudkowsky proposes his AI god will make its predictions. It's also what would be necessary to make genuinely 100% accurate predictions of behaviour, because you need to be able to simulate with perfect every-single-loving-quark fidelity for that, but that doesn't really matter for most practical purposes, which you are of course right about.
It isn't clear that that's true, and there are lots of reasons to believe it's false. Again, if you're talking about a general simulation of arbitrary behaviour, maybe. But I don't know how you get from actual observation to the ideological claim that literally 100% of the resources of the human mind are devoted to making every single decision.

Spoilers Below posted:

It could be as simple as that, but it could also be infinitely more complex
False. Unless you're using `infinitely' metaphorically or something. Or at least I assume we're not ascribing magical powers to the human brain (to allow it to solve infinitely complex problems to make decisions).

Spoilers Below posted:

The ordinary personality test, "works with 99% accuracy", type stuff I agree you could probably refine to. It's the 100% accurate "retroactively affects the past" that I quibble with.
I have no stake in arguing Yud's crazy-rear end retroactive causality. My point is that you and Cardiovorax are assuming that human decision-making in a specific problem either a) is something that might entail infinite recursion or otherwise require infinite resources, or b) is a completely irreducible process which therefore requires reproducing with perfect fidelity an entire human mind in order to predict the outcome. Neither of these are contentions which appear to be supported by the evidence. Indeed, both of them seem to radically diverge from our current understanding of how human minds in fact work.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
I'm really not sure what your point is. I have not been talking about a human mind at any point. This is purely a problem of computability theory.

SubG
Aug 19, 2004

It's a hard world for little things.

Cardiovorax posted:

I'm really not sure what your point is. I have not been talking about a human mind at any point. This is purely a problem of computability theory.
Well, we're talking about a problem involving a human and an AI, and you started out saying `[p]redicting yourself accurately would require you to have a perfect mental model of your own mind'. I guess the `yourself' here is some algorithm trying to model its `mind' that just happened to wander into the thread?

But if you want to walk away that's cool. It's worth pointing out that precisely the same line of argument still applies regardless of what the decision agent is---predicting the outcome requires modelling the entire agent to arbitrary fidelity only if the size of the decision schema is the same as the size of the agent. However you want to phrase that.

Like we can imagine a genetic/evolutionary programming experiment where you slap together arbitrary instructions in arbitrary order, with a supervisor that passes it three numbers and gives a pass or fail based on whether or not the output is the sum of the first two numbers multiplied by the third. Eventually you'd get one or more algorithms for doing the computation in question. But because of the way the algorithms were built, they almost certainly include all kinds of useless poo poo---it will shift a register left, then back right, include meaningless delay loops, whatever, sky's the limit. So it turns out you can predict the output of one of these algorithms a lot more efficiently than just running through the algorithm. Indeed, the only time when this will not be true is the case in which your evolutionary process just lucked into a provably optimal solution for the problem.

This is a different problem than actually trying to perfectly simulate every instruction in the evolved code, which is the problem you keep trying to solve. The point being that if you just want to predict the output you don't have to simulate the algorithm unless the algorithm is already the optimal way of producing the output.
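To make the evolved-junk-code point concrete, here's a toy Python sketch (the specific junk instructions and the `predict` shortcut are invented purely for illustration, not taken from any real experiment):

```python
import random

# Toy stand-in for an "evolved" program: it computes (a + b) * c, but the
# genome is padded with junk -- a shift left undone by a shift right, and
# a meaningless delay loop -- the kind of cruft evolution leaves behind.
def evolved(a, b, c):
    acc = a
    acc <<= 3              # junk: shift left...
    acc >>= 3              # ...then shift right (net no-op)
    for _ in range(10_000):
        pass               # junk: meaningless delay loop
    return (acc + b) * c

# Predicting the output doesn't require simulating every junk
# instruction; a smaller model of the behaviour is enough:
def predict(a, b, c):
    return (a + b) * c

# The shortcut agrees with the full program on every input tried:
for _ in range(100):
    a, b, c = (random.randint(0, 99) for _ in range(3))
    assert evolved(a, b, c) == predict(a, b, c)
```

The point carries over: the predictor is much cheaper than the program it predicts, and that stops being possible only when the program is already an optimal implementation of its own behaviour.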

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
I don't think we're even talking about the same thing anymore.

potatocubed
Jul 26, 2012

*rathian noises*
That's the joy of Timeless Decision Theory - it's so wrong that people can get lost arguing about precisely how wrong it is, and how.

1337JiveTurkey
Feb 17, 2005

Dumb question: If the computer is dedicated to maximizing human happiness and welfare, why not just offer sex and drugs so awesome that they violate the laws of physics because simulation instead of threatening torture?

SubG
Aug 19, 2004

It's a hard world for little things.

1337JiveTurkey posted:

Dumb question: If the computer is dedicated to maximizing human happiness and welfare, why not just offer sex and drugs so awesome that they violate the laws of physics because simulation instead of threatening torture?
Because in the post-Singularity future all of the right-thinking people will be showered in glory anyway. That's the point of the Singularity in the first place. And since naturally everything is so wonderful in the best-of-all-possible technological futures we have to go out of our way to make wrong-thinking people suffer. Because they should be made to suffer. For thinking wrongly.

I'm pretty sure that's why.

Pf. Hikikomoriarty
Feb 15, 2003

RO YNSHO


Slippery Tilde
Actually the whole 'Quantum Physics Series' is worth reading; it has a lot of Yudkowsky's weird ideas about science:

http://lesswrong.com/lw/qc/when_science_cant_help/

quote:

Evolutionary psychology is another example of a case where rationality has to take over from science. While theories of evolutionary psychology form a connected whole, only some of those theories are readily testable experimentally. But you still need the other parts of the theory, because they form a connected web that helps you to form the hypotheses that are actually testable—and then the helper hypotheses are supported in a Bayesian sense, but not supported experimentally. Science would render a verdict of "not proven" on individual parts of a connected theoretical mesh that is experimentally productive as a whole. We'd need a new kind of verdict for that, something like "indirectly supported".

Hmm yes I am totally prepared to accept the truth of a bunch of just-so stories that happen to support current cultural prejudices in the absence of empirical evidence :biotruths:

Edit: His next example of a subject where Bayesian reasoning should trump (lack of) scientific evidence is cryonics, of course.

Pf. Hikikomoriarty fucked around with this message at 02:39 on Oct 30, 2014

bewilderment
Nov 22, 2007
man what



Question about all these silly AI-torture scenarios - in each of these scenarios, isn't the AI forced to simulate itself? Like that "do you free the boxed AI that threatens to torture simulated you", question. Well, if it's simulating the situation, isn't it also simulating itself simulating the situation, etc. etc. recursively?

Or does Yud handwave the simulations as just being 'good enough'?
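The regress bewilderment describes can be made concrete with a toy sketch: a simulation that is required to contain a perfectly faithful copy of the simulator itself never bottoms out. (Hypothetical Python, invented for illustration:)

```python
import sys
sys.setrecursionlimit(200)  # keep the inevitable blow-up small

# A "perfectly faithful" simulation of the scenario must include the
# simulator itself, whose simulation must include *its* simulator, etc.
def simulate_scenario(depth=0):
    # ...simulate the human, the boxes, the room...
    return simulate_scenario(depth + 1)  # ...and the simulator itself

try:
    simulate_scenario()
except RecursionError:
    print("perfect self-simulation never terminates")
```

Any terminating version has to cut the recursion off somewhere, i.e. settle for a simulation that is merely 'good enough'.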

SubG
Aug 19, 2004

It's a hard world for little things.

Mort posted:

Edit: His next example of a subject where Bayesian reasoning should trump (lack of) scientific evidence is cryonics, of course.
It takes a special kind of delusion to a) think up something like the idea `that consciousness was caused by closed timelike curves hiding in quantum gravity', b) pretend that nobody in science will tell you that this is nonsense (but instead will apparently encourage you to dump years of research into the idea), and therefore conclude that c) science is broken as a result.

bewilderment posted:

Question about all these silly AI-torture scenarios - in each of these scenarios, isn't the AI forced to simulate itself? Like that "do you free the boxed AI that threatens to torture simulated you", question. Well, if it's simulating the situation, isn't it also simulating itself simulating the situation, etc. etc. recursively?

Or does Yud handwave the simulations as just being 'good enough'?
I've actually discussed this at some length in the past page or two.

1337JiveTurkey
Feb 17, 2005

SubG posted:

Because in the post-Singularity future all of the right-thinking people will be showered in glory anyway. That's the point of the Singularity in the first place. And since naturally everything is so wonderful in the best-of-all-possible technological futures we have to go out of our way to make wrong-thinking people suffer. Because they should be made to suffer. For thinking wrongly.

I'm pretty sure that's why.

I'd hope that whatever omnipotent and omnibenevolent AI overlord we get would be better than me at coming up with ideas, but maybe getting to try out all the new mirth-inducing drugs before everyone else would sweeten the pot enough to make the earlier apotheosis cancel out everyone else getting the Million Man Orgy or whatever floats the person's boat a bit later than usual.


Mort posted:

Actually the whole 'Quantum Physics Series' is worth reading; it has a lot of Yudkowsky's weird ideas about science:

http://lesswrong.com/lw/qc/when_science_cant_help/


Hmm yes I am totally prepared to accept the truth of a bunch of just-so stories that happen to support current cultural prejudices in the absence of empirical evidence :biotruths:

Edit: His next example of a subject where Bayesian reasoning should trump (lack of) scientific evidence is cryonics, of course.

Speaking as someone with years of experience being a confident idiot, using pure reason to go where science hasn't yet dared is perilous, to say the least.

Tunicate
May 15, 2012

1337JiveTurkey posted:

Speaking as someone with years of experience being a confident idiot, using pure reason to go where science hasn't yet dared is perilous, to say the least.

It really irritates me when tests like this have 'correct' answers that aren't technically right.

quote:

“Evolution cannot cause an organism’s traits to change during its lifetime": True or False

While they say the answer is 'true', it's pretty easy to justify a 'false' answer being the correct one.

False: A butterfly's traits change dramatically during its lifetime; implying evolution cannot create that level of complexity is dumb.

False: Cancer cells mutate to be able to outcompete other cells within an organism's body, resulting in a reproductive advantage. They cause dramatic changes to the organism, especially if you end up with a big blob of HeLa.

False: Epigenomic traits (which can be induced by body weight, for example) can cause significant changes to body function, and can be passed down to descendants.


It's kind of ironic, actually, since they're making unfounded assumptions in order to try to find unfounded assumptions.

SubG
Aug 19, 2004

It's a hard world for little things.

Tunicate posted:

While they say the answer is 'true', it's pretty easy to justify a 'false' answer being the correct one.

False: A butterfly's traits change dramatically during its lifetime; implying evolution cannot create that level of complexity is dumb.

False: Cancer cells mutate to be able to outcompete other cells within an organism's body, resulting in a reproductive advantage. They cause dramatic changes to the organism, especially if you end up with a big blob of HeLa.

False: Epigenomic traits (which can be induced by body weight, for example) can cause significant changes to body function, and can be passed down to descendants.
That would be a better pedantic argument if any of those things were examples of evolution instead of things produced by evolution.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Evolution happens to species, not individuals. There is really no context in which false is the correct answer to that question, unless you willfully misunderstand it.

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

1337JiveTurkey posted:

Dumb question: If the computer is dedicated to maximizing human happiness and welfare, why not just offer sex and drugs so awesome that they violate the laws of physics because simulation instead of threatening torture?

Simulations aren't people. It doesn't care about making the simulations happy.

Tunicate
May 15, 2012

Cardiovorax posted:

Evolution happens to species, not individuals. There is really no context in which false is the correct answer to that question, unless you willfully misunderstand it.

Only if you choose to define what constitutes evolution based on 'species' instead of 'population'. Since 'species' is a loaded term once you start looking at microbiology, a lot of microbiologists prefer the latter definition.

Using that definition, you end up with scientists like Leigh Van Valen arguing that HeLa cells can be classified as an entirely different species. Contagious cancers (like the one killing off the Tasmanian devils) definitely are a different organism than their host (just as a zooid like the Portuguese man-of-war is composed of multiple genetically distinct organisms).

While I wouldn't go that far, it seems strange to argue against cancer being an example of evolution through natural selection among cells. An individual cell gains a mutation that grants a reproductive advantage; it passes this mutation on to its daughter cells, which also experience more reproductive success, resulting in a dramatic change in the overall population genetics.

Sucks for the organism, but nobody said evolution was a long-sighted process.

And hey, if cancer researchers and Berkeley are willing to say it's a useful perspective, and it gets published in Nature Reviews, I think there's quite a lot of context where you can say evolution happens within an individual organism.

Tunicate fucked around with this message at 08:07 on Oct 30, 2014

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
This is the part where "willfully misunderstanding the question" applies. Say population instead of species if you prefer that, the point is that evolution occurs over multiple generations. Yes, natural selection applies to everything that is alive and reproduces in some fashion, but a clump of cancer cells isn't qualitatively different from a bacterial colony, even if it resides within a host body. Individual organisms do not spontaneously undergo evolution, which is clearly what the question was asking for. That is simply how the word is defined.

Political Whores
Feb 13, 2012

Cardiovorax posted:

This is the part where "willfully misunderstanding the question" applies. Say population instead of species if you prefer that, the point is that evolution occurs over multiple generations. Yes, natural selection applies to everything that is alive and reproduces in some fashion, but a clump of cancer cells isn't qualitatively different from a bacterial colony, even if it resides within a host body. Individual organisms do not spontaneously undergo evolution, which is clearly what the question was asking for. That is simply how the word is defined.

Well, technically the word evolution doesn't mean anything more than a process of growth or development; it was in use centuries before Darwin. But yeah, the definition of evolution in the biology sense has it happening over successive generations. Mutation can definitely happen in an individual, and that mutation can confer an advantage, but that's not evolution. Evolution is when something like that occurs and leads to a change in the composition of the population as a whole over successive generations.
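The population-level definition is easy to see in a toy selection simulation (the genotype labels, advantage factor, and generation count here are all invented for illustration; this is the textbook dynamic, not a model of cancer specifically):

```python
import random

random.seed(42)

# Start with a population that is 10% "mutant", where the mutant
# genotype has a modest reproductive advantage.
population = ["mutant"] * 10 + ["normal"] * 90

def next_generation(pop, advantage=1.5):
    # Each slot in the next generation is filled by sampling parents
    # weighted by reproductive success.
    weights = [advantage if g == "mutant" else 1.0 for g in pop]
    return random.choices(pop, weights=weights, k=len(pop))

for _ in range(50):
    population = next_generation(population)

# No individual organism changed; what changed is the *composition of
# the population* over successive generations.
print("mutant fraction:", population.count("mutant") / len(population))
```

After enough generations the advantaged genotype typically dominates, which is exactly the "change in the composition of the population" sense of the word.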

SubG
Aug 19, 2004

It's a hard world for little things.

Tunicate posted:

While I wouldn't go that far, it seems strange to argue against cancer being an example of evolution through natural selection among cells. An individual cell gains a mutation that grants a reproductive advantage; it passes this mutation on to its daughter cells, which also experience more reproductive success, resulting in a dramatic change in the overall population genetics.
Modeled this way, it is the population of cells which is evolving, not the individual with cancer. So you're still not seeing `evolution within an individual' unless you're trying to commit some rhetorical sleight-of-hand in which you change context halfway through the statement and expect us to come along with you.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Political Whores posted:

Well, technically the word evolution doesn't mean anything more than a process of growth or development; it was in use centuries before Darwin.

Cardiovorax posted:

This is the part where "willfully misunderstanding the question" applies.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

Cardiovorax posted:

Individual organisms do not spontaneously undergo evolution, which is clearly what the question was asking for.

Then how did I get my Charizard? Q.E.D.

And don't give me any of that 'metamorphosis' poo poo.

Toph Bei Fong
Feb 29, 2008



SubG posted:

False. Unless you're using `infinitely' metaphorically or something.

Obviously.

quote:

I have no stake in arguing Yud's crazy-rear end retroactive causality. My point is that you and Cardiovorax are assuming that human decision-making in a specific problem either a) is something that might entail infinite recursion or otherwise require infinite resources, or b) is a completely irreducible process which therefore requires reproducing with perfect fidelity an entire human mind in order to predict the outcome. Neither of these are contentions which appear to be supported by the evidence. Indeed, both of them seem to radically diverge from our current understanding of how human minds in fact work.

I'm not talking about human decision making or the "miracle" of the human brain. I'm talking about the predictive capabilities of a "perfect" computer. We're definitely talking past one another at this point.

I'm saying:

1) If the computer is 100% accurate, it will never be incorrect about which box to put the money in.

2) If the computer is wrong, even once, it is not 100% accurate.

3) If the computer does not manage to account for something that fucks with the box choice, something which might not be accounted for in the initial brain scan or whatever, the computer cannot be called 100% accurate.

4) There are a poo poo ton of things that the computer cannot account for based purely on a brain scan or a personality test which might cause a person to act out of character, leading to less than 100% accuracy. (i.e. In between the quiz and the box choice, the person gets a phone call stating that their significant other has been kidnapped, and if they don't pay $1,001,000 to the kidnappers, their SO will be killed. Fantastical, but not beyond the realm of possibility. Exactly the kind of prank a dick rival researcher would pull)

This is what I'm talking about. The problem is with the computer and with the claim of 100% accuracy, not with the conceptual claim that certain predictive markers exist which strongly correlate with other behaviors (i.e. those Buzzfeed "I can guess when you lost your virginity based on your favorite Disney movie and which picture of a mountain you like" quizzes). Obviously sometimes there is correlation without causation, but the basic premise is sound, otherwise OK Cupid and the like wouldn't be doing so well.

vvvv edit: typo, thanks vvvv

Toph Bei Fong fucked around with this message at 19:04 on Oct 30, 2014

Sham bam bamina!
Nov 6, 2012

ƨtupid cat
Those actually are correlations, hence the site's name. It's causation that's absent.

su3su2u1
Apr 23, 2014

Spoilers Below posted:

I'm saying:

1) If the computer is 100% accurate, it will never be incorrect about which box to put the money in.

2) If the computer is wrong, even once, it is not 100% accurate.

3) If the computer does not manage to account for something that fucks with the box choice, something which might not be accounted for in the initial brain scan or whatever, the computer cannot be called 100% accurate.

4) There are a poo poo ton of things that the computer cannot account for based purely on a brain scan or a personality test which might cause a person to act out of character, leading to less than 100% accuracy. (i.e. In between the quiz and the box choice, the person gets a phone call stating that their significant other has been kidnapped, and if they don't pay $1,001,000 to the kidnappers, their SO will be killed. Fantastical, but not beyond the realm of possibility. Exactly the kind of prank a dick rival researcher would pull)

There are several easy ways to show that a 100% predictor can't exist:
1. A human can decide what to do based on a coin flip or some other random bit (I do this all the time when deciding what to do for lunch, for instance).
2. If you have a 100% accurate super-intelligence, you can build another one that attempts to do the opposite of whatever the first 100% accurate intelligence decides it will do. This seems like a fake "gotcha" thing, but I deal with predictions that change the outcomes of what's being predicted all the time at work (predict an insurance claim will be expensive, and the incentive to settle it early goes up. Which means it ends up not being expensive...)
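The second point is a plain diagonalization argument, and it fits in a few lines (a hypothetical sketch; the two-option "A"/"B" decision and the function names are invented for the example):

```python
# No predictor can be 100% accurate against an agent built to consult
# it and do the opposite -- the same diagonal move as in the halting
# problem.
def make_contrarian(predictor):
    def agent():
        # Ask the predictor what we will do, then do the other thing.
        return "B" if predictor(agent) == "A" else "A"
    return agent

def predictor(agent):
    # Any prediction rule at all can go here; a constant one is
    # enough to show the problem.
    return "A"

agent = make_contrarian(predictor)
print("predicted:", predictor(agent), "actual:", agent())  # always disagree
```

Whatever the predictor outputs, the agent's actual choice differs, so "100% accurate" is off the table for any predictor whose output the agent can observe.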

SubG
Aug 19, 2004

It's a hard world for little things.

Spoilers Below posted:

4) There are a poo poo ton of things that the computer cannot account for based purely on a brain scan or a personality test which might cause a person to act out of character, leading to less than 100% accuracy. (i.e. In between the quiz and the box choice, the person gets a phone call stating that their significant other has been kidnapped, and if they don't pay $1,001,000 to the kidnappers, their SO will be killed. Fantastical, but not beyond the realm of possibility. Exactly the kind of prank a dick rival researcher would pull)

su3su2u1 posted:

There are several easy ways to show that a 100% predictor can't exist:
1. A human can decide what to do based on a coin flip or some other random bit (I do this all the time when deciding what to do for lunch, for instance).
Both of these strike me, as I said, as not particularly interesting quibbles. Or at least if what we're actually talking about is the scope of the problem of predicting the outcome of a specific A/B decision.

I mean okay, maybe a phone call saying grandma died would gently caress up the results. So quarantine the subject between the scan and the test, or do the scan immediately prior to the test. Or whatever. And of course it is trivially true that the outcome of a truly random coin toss can't be predicted (except in aggregate). So prohibit truly random coins in the test facility.

I guess if you honestly think that human behaviour is determined by random coin flips or whatever, then barring them from the experiment is cheating. But if you don't think they're determinative, then it's just an implementation wart in the design of the experiment.

The Vosgian Beast
Aug 13, 2011

Business is slow
Unrelated to this discussion, here is something amazing http://lesswrong.com/lw/298/more_art_less_stink_taking_the_pu_out_of_pua/

SubG
Aug 19, 2004

It's a hard world for little things.
Literally someone who looked at the PUA community and decided that the only thing they'd change is all that icky stuff with girls.

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.


I guess it makes sense for a cult leader to be pretty interested in programming, though.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Even more so considering his belief that everybody has just the exact right thing you need to say to them to get them to do whatever you want. It's half the justification for how his AI-takes-over-the-world delusions even work, after all.

Telarra
Oct 9, 2012

The linked blog post was not written, nor commented on, by Yud.

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

I wonder how much of that mindset is honestly caused by videogames. They often feature picking the 'right' dialogue option to unlock the person's willingness to do what you want, etc.

The Vosgian Beast
Aug 13, 2011

Business is slow

Moddington posted:

The linked blog post was not written, nor commented on, by Yud.

Yeah, in the future, remember that not everyone on Less Wrong is Big Yud! There's a whole site full of Dunning Krugerites! :science:

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Still applies, though. He does actually believe that.

SubG
Aug 19, 2004

It's a hard world for little things.

Night10194 posted:

I wonder how much of that mindset is honestly caused by videogames. They often feature picking the 'right' dialogue option to unlock the person's willingness to do what you want, etc.
The Singularity crowd usually talk about technology as if it works like a Civ tech tree as well.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

The Vosgian Beast posted:

Unrelated to this discussion
Thanks.

Lightanchor
Nov 2, 2012

clippy posted:

I want to join this so I can learn how to better convince humans to help me.

wedrifid posted:

So are we to expect anecdotes of Clippy negging HBs and getting "clip closes"? :P

clippy posted:

Paperclips shouldn't "close" in the sense of the metal wire forming a closed curve; they should be open curves.


SolTerrasa
Sep 2, 2011


That's a gimmick account, pretending to be a paperclip maximizer, as discussed earlier in the thread. It looks like a huge sperg because it's a regular sperg trying to pretend to be a sperg about paperclips.

Way too long story short: paperclip maximizers are AI entities which desire some goal which seems incoherent to humanity. They are one of Yud's terror scenarios for unFriendly AI. The nominal one seeks singlemindedly to increase the number of paperclips in the universe; the quote goes "the AI does not love you, nor does it hate you, but you are made of atoms it could use for something else." Yes, Yud is actually scared of this and spends a million bucks or more a year trying to stop it from happening.
