|
You could probably make a very involved argument about whether a complex system is really all that chaotic when it will end up with, more or less, the same statistical spread every time due to the scales involved, but never mind. Either way, I'm not terribly convinced that those random atom-scale events really do much to influence the cognitive process as a whole.
|
# ? Oct 29, 2014 22:34 |
|
Lottery of Babylon posted:This is what Yudkowsky thinks science looks like. Yudkowsky's attempts to be playful are really wince-worthy
|
# ? Oct 29, 2014 23:12 |
|
Look, the reason he thinks you can completely predict what a human will do if you're smart enough is because that's how it worked in Dune and the guy thinks he's a Mentat. Yud, I mean.
|
# ? Oct 29, 2014 23:12 |
|
Cardiovorax posted:The point is that it isn't actually about anyone's behaviour; the mere fact that the contents of the predicting entity's mind change requires a repeat of the prediction process with the new data set to remain perfectly accurate. I know this doesn't make a lot of intuitive sense, but since this is a formal thought experiment, it doesn't have to. It makes sense in terms of computational theory. Cardiovorax posted:That is precisely how Yudkowsky proposes his AI god will make its predictions. It's also what would be necessary to make genuinely 100% accurate predictions of behaviour, because you need to be able to simulate with perfect every-single-loving-quark fidelity for that, but that doesn't really matter for most practical purposes, which you are of course right about. Spoilers Below posted:It could be as simple as that, but it could also be infinitely more complex Spoilers Below posted:The ordinary personality test, "works with 99% accuracy", type stuff I agree you could probably refine to. It's the 100% accurate "retroactively affects the past" that I quibble with.
|
# ? Oct 29, 2014 23:13 |
|
I'm really not sure what your point is. I have not been talking about a human mind at any point. This is purely a problem of computability theory.
|
# ? Oct 29, 2014 23:23 |
|
Cardiovorax posted:I'm really not sure what your point is. I have not been talking about a human mind at any point. This is purely a problem of computability theory. But if you want to walk away that's cool. It's worth pointing out that precisely the same line of argument still applies regardless of what the decision agent is---predicting the outcome only requires modelling the entire agent to arbitrary fidelity if the size of the decision schema is the same as the size of the agent. However you want to phrase that. Like we can imagine a genetic/evolutionary programming experiment where you slap together arbitrary instructions in arbitrary order, with a supervisor that passes it three numbers and gives a pass or fail based on whether or not the output is the sum of the first two numbers multiplied by the third. Eventually you'd get one or more algorithms for doing the computation in question. But because of the way the algorithms were built, they almost certainly include all kinds of useless poo poo---it will shift a register left, then back right, include meaningless delay loops, whatever, sky's the limit. So it turns out you can predict the output of one of these algorithms a lot more efficiently than just running through the algorithm. Indeed, the only time when this will not be true is the case in which your evolutionary process just lucked into a provably optimal solution for the problem. This is a different problem than actually trying to perfectly simulate every instruction in the evolved code, which is the problem you keep trying to solve. The point being that if you just want to predict the output you don't have to simulate the algorithm unless the algorithm is already the optimal way of producing the output.
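The evolved-algorithm point above can be made concrete with a toy sketch. Everything here is invented for illustration (the register machine, its instruction set, the particular junk instructions); the point is only that a bloated evolved program and the direct formula agree, so predicting the output does not require simulating every instruction:

```python
# A toy register machine. The "evolved" program below computes
# (a + b) * c but, as described in the post, is padded with junk:
# a pointless shift-left/shift-right pair and a delay loop.
# (All names and the instruction set are invented for illustration.)

def run(program, a, b, c):
    """Interpret a list of (op, arg) pairs on a single accumulator."""
    acc = 0
    regs = {"a": a, "b": b, "c": c}
    for op, arg in program:
        if op == "load":
            acc = regs[arg]
        elif op == "add":
            acc += regs[arg]
        elif op == "mul":
            acc *= regs[arg]
        elif op == "shl":
            acc <<= arg           # junk: undone by the matching shr
        elif op == "shr":
            acc >>= arg
        elif op == "delay":
            for _ in range(arg):  # meaningless busy loop
                pass
    return acc

# One plausible survivor of the evolutionary search: correct, but bloated.
evolved = [
    ("load", "a"),
    ("shl", 3), ("shr", 3),       # shift left, then back right: a no-op
    ("add", "b"),
    ("delay", 1000),              # contributes nothing to the result
    ("mul", "c"),
]

# The efficient predictor skips the simulation entirely.
def predict(a, b, c):
    return (a + b) * c

assert run(evolved, 2, 3, 4) == predict(2, 3, 4) == 20
```

The assertion is the whole argument in miniature: both paths produce the same answer, but `predict` costs three arithmetic operations while `run` grinds through every junk instruction.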
|
# ? Oct 29, 2014 23:42 |
|
I don't think we're even talking about the same thing anymore.
|
# ? Oct 29, 2014 23:56 |
|
That's the joy of Timeless Decision Theory - it's so wrong that people can get lost arguing about precisely how wrong it is, and how.
|
# ? Oct 29, 2014 23:58 |
|
Dumb question: If the computer is dedicated to maximizing human happiness and welfare, why not just offer sex and drugs so awesome that they violate the laws of physics because simulation instead of threatening torture?
|
# ? Oct 30, 2014 01:40 |
|
1337JiveTurkey posted:Dumb question: If the computer is dedicated to maximizing human happiness and welfare, why not just offer sex and drugs so awesome that they violate the laws of physics because simulation instead of threatening torture? I'm pretty sure that's why.
|
# ? Oct 30, 2014 02:12 |
|
Actually the whole 'Quantum Physics Series' is worth reading, it has a lot of Yudkowsky's weird ideas about science: http://lesswrong.com/lw/qc/when_science_cant_help/ quote:Evolutionary psychology is another example of a case where rationality has to take over from science. While theories of evolutionary psychology form a connected whole, only some of those theories are readily testable experimentally. But you still need the other parts of the theory, because they form a connected web that helps you to form the hypotheses that are actually testable—and then the helper hypotheses are supported in a Bayesian sense, but not supported experimentally. Science would render a verdict of "not proven" on individual parts of a connected theoretical mesh that is experimentally productive as a whole. We'd need a new kind of verdict for that, something like "indirectly supported". Hmm yes I am totally prepared to accept the truth of a bunch of just-so stories that happen to support current cultural prejudices in the absence of empirical evidence Edit: His next example of a subject where Bayesian reasoning should trump (lack of) scientific evidence is cryonics, of course. Pf. Hikikomoriarty fucked around with this message at 02:39 on Oct 30, 2014 |
# ? Oct 30, 2014 02:35 |
|
Question about all these silly AI-torture scenarios - in each of these scenarios, isn't the AI forced to simulate itself? Like that "do you free the boxed AI that threatens to torture simulated you", question. Well, if it's simulating the situation, isn't it also simulating itself simulating the situation, etc. etc. recursively? Or does Yud handwave the simulations as just being 'good enough'?
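The regress in that question can be sketched in a few lines. This is purely a toy (the function, its arguments, and the cutoff are all invented): a simulator whose subject contains the simulator itself either stops at some arbitrary "good enough" depth or never bottoms out at all:

```python
import sys

def simulate(depth=0, max_depth=None):
    """A toy AI 'simulating the situation'. Because the situation
    contains the simulator itself, a faithful simulation must recurse;
    only a 'good enough' cutoff (max_depth) lets it terminate."""
    if max_depth is not None and depth >= max_depth:
        return depth              # handwave: stop at 'good enough' fidelity
    return simulate(depth + 1, max_depth)

# With a cutoff, the regress bottoms out at the chosen fidelity...
assert simulate(max_depth=5) == 5

# ...without one, it cannot terminate at all.
sys.setrecursionlimit(100)
try:
    simulate()
    bottomed_out = True
except RecursionError:
    bottomed_out = False          # the faithful self-simulation never finishes

assert not bottomed_out
```

The `RecursionError` here stands in for the real problem: a machine simulating itself at full fidelity needs strictly more resources than it has, so any actual answer has to come from a truncated, "good enough" model.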
|
# ? Oct 30, 2014 03:03 |
|
Mort posted:Edit: His next example of a subject where Bayesian reasoning should trump (lack of) scientific evidence is cryonics, of course. bewilderment posted:Question about all these silly AI-torture scenarios - in each of these scenarios, isn't the AI forced to simulate itself? Like that "do you free the boxed AI that threatens to torture simulated you", question. Well, if it's simulating the situation, isn't it also simulating itself simulating the situation, etc. etc. recursively?
|
# ? Oct 30, 2014 03:24 |
|
SubG posted:Because in the post-Singularity future all of the right-thinking people will be showered in glory anyway. That's the point of the Singularity in the first place. And since naturally everything is so wonderful in the best-of-all-possible technological futures we have to go out of our way to make wrong-thinking people suffer. Because they should be made to suffer. For thinking wrongly. I'd hope that whatever omnipotent and omnibenevolent AI overlord would be better than me at coming up with ideas, but maybe getting to try out all the new mirth-inducing drugs before everyone else would sweeten the pot enough to make the earlier apotheosis cancel out everyone else getting the Million Man Orgy or whatever floats the person's boat a bit later than usual. Mort posted:Actually the whole 'Quantum Physics Series' is worth reading, it has a lot of Yudkowsky's weird ideas about science: Speaking as someone with years of experience being a confident idiot, using pure reason to go where science hasn't yet dared is perilous, to say the least.
|
# ? Oct 30, 2014 05:24 |
|
1337JiveTurkey posted:Speaking as someone with years of experience being a confident idiot using pure reason to go where science hasn't yet dared is perilous to say the least. It really irritates me when tests like this have 'correct' answers that aren't technically right. quote:“Evolution cannot cause an organism’s traits to change during its lifetime": True or False While they say the answer is 'true', it's pretty easy to justify a 'false' answer being the correct one. False: A butterfly's traits change dramatically during its lifetime, implying evolution cannot create that level of complexity is dumb. False: Cancer cells mutate to be able to outcompete other cells within an organism's body, resulting in a reproductive advantage. They cause dramatic changes to the organism, especially if you end up with a big blob of HeLa. False: Epigenomic traits (which can be induced by body weight, for example) can cause significant changes to body function, and can be passed down to descendants. It's kind of ironic, actually, since they're making unfounded assumptions in order to try to find unfounded assumptions.
|
# ? Oct 30, 2014 06:09 |
|
Tunicate posted:While they say the answer is 'true', it's pretty easy to justify a 'false' answer being the correct one.
|
# ? Oct 30, 2014 06:52 |
|
Evolution happens to species, not individuals. There is really no context in which false is the correct answer to that question, unless you willfully misunderstand it.
|
# ? Oct 30, 2014 07:00 |
|
1337JiveTurkey posted:Dumb question: If the computer is dedicated to maximizing human happiness and welfare, why not just offer sex and drugs so awesome that they violate the laws of physics because simulation instead of threatening torture? Simulations aren't people. It doesn't care about making the simulations happy.
|
# ? Oct 30, 2014 08:00 |
|
Cardiovorax posted:Evolution happens to species, not individuals. There is really no context in which false is the correct answer to that question, unless you willfully misunderstand it. Only if you choose to define what constitutes evolution based on a 'species' instead of 'population'. Since 'species' is a loaded term once you start looking at microbiology, a lot of microbiologists prefer the latter definition. Using that definition, you end up with scientists like Leigh Van Valen arguing that HeLa cells can be classified as an entirely different species. Contagious cancers (like the one killing off the Tasmanian devils) definitely are a different organism than their host (just as a zooid like the Portuguese man-of-war is composed of multiple genetically distinct organisms). While I wouldn't go that far, it seems strange to argue against cancer being an example of evolution through natural selection among cells. An individual cell gaining a mutation that grants a reproductive advantage. It passes this mutation on to its daughter cells, which also experience more reproductive success, resulting in a dramatic change in the overall population genetics. Sucks for the organism, but nobody said evolution was a far-sighted process. And hey, if cancer researchers and Berkeley are willing to say it's a useful perspective, and it gets published in Nature Reviews, I think there's quite a lot of context where you can say evolution happens within an individual organism. Tunicate fucked around with this message at 08:07 on Oct 30, 2014 |
# ? Oct 30, 2014 08:05 |
|
This is the part where "willfully misunderstanding the question" applies. Say population instead of species if you prefer that, the point is that evolution occurs over multiple generations. Yes, natural selection applies to everything that is alive and reproduces in some fashion, but a clump of cancer cells isn't qualitatively different from a bacterial colony, even if it resides within a host body. Individual organisms do not spontaneously undergo evolution, which is clearly what the question was asking for. That is simply how the word is defined.
|
# ? Oct 30, 2014 08:21 |
|
Cardiovorax posted:This is the part where "willfully misunderstanding the question" applies. Say population instead of species if you prefer that, the point is that evolution occurs over multiple generations. Yes, natural selection applies to everything that is alive and reproduces in some fashion, but a clump of cancer cells isn't qualitatively different from a bacterial colony, even if it resides within a host body. Individual organisms do not spontaneously undergo evolution, which is clearly what the question was asking for. That is simply how the word is defined. Well, technically the word evolution doesn't mean anything more than a process of growth or development; it was in use centuries before Darwin. But yeah the definition of evolution in the biology sense has it over successive generations. Mutation can definitely happen in an individual, and that mutation can confer an advantage, but that's not evolution. Evolution is when something like that occurs and leads to a change in the composition of the population as a whole over successive generations.
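That population-level definition can be illustrated with a toy simulation. Everything here is an invented parameterization (two alleles, a flat fitness advantage, twenty generations): no individual's genotype ever changes, yet the composition of the population shifts across generations, which is exactly the distinction being drawn:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def next_generation(pop, advantage=2.0):
    """Parents reproduce in proportion to fitness; 'A' carriers get a boost.
    (The advantage value is an arbitrary illustrative choice.)"""
    weights = [advantage if g == "A" else 1.0 for g in pop]
    return random.choices(pop, weights=weights, k=len(pop))

pop = ["A"] * 10 + ["a"] * 90        # start: the advantageous allele is rare
start = pop.count("A")
for _ in range(20):                  # run twenty generations
    pop = next_generation(pop)

# The population's composition changed, though no individual ever did.
assert pop.count("A") > start
```

Each generation is a fresh sample of offspring; selection acts only on who reproduces, and the "evolution" is visible only when you compare generation to generation.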
|
# ? Oct 30, 2014 08:37 |
|
Tunicate posted:While I wouldn't go that far, it seems strange to argue against cancer being an example of evolution through natural selection among cells. An individual cell gaining a mutation that grants a reproductive advantage. It passes this mutation on to its daughter cells, which also experience more reproductive success, resulting in a dramatic change in the overall population genetics.
|
# ? Oct 30, 2014 11:45 |
|
Political Whores posted:Well, technically the world evolution doesn't meaning anything more than a process of growth or development; it was in use centuries before Darwin. Cardiovorax posted:This is the part where "willfully misunderstanding the question" applies.
|
# ? Oct 30, 2014 15:15 |
|
Cardiovorax posted:Individual organisms do not spontaneously undergo evolution, which is clearly what the question was asking for. Then how did I get my Charizard? Q.E.D. And don't give me any of that 'metamorphosis' poo poo.
|
# ? Oct 30, 2014 15:55 |
|
SubG posted:False. Unless you're using `infinitely' metaphorically or something. Obviously. quote:I have no stake in arguing Yud's crazy-rear end retroactive causality. My point is that you and Cardiovorax are assuming that human decision-making in a specific problem either a) is something that might entail infinite recursion or otherwise require infinite resources, or b) is a completely irreducible process which therefore requires reproducing with perfect fidelity an entire human mind in order to predict the outcome. Neither of these are contentions which appear to be supported by the evidence. Indeed, both of them seem to radically diverge from our current understanding of how human minds in fact work. I'm not talking about human decision making or the "miracle" of the human brain. I'm talking about the predictive capabilities of a "perfect" computer. We're definitely talking past one another at this point. I'm saying:

1) If the computer is 100% accurate, it will never be incorrect about which box to put the money in.
2) If the computer is wrong, even once, it is not 100% accurate.
3) If the computer does not manage to account for something that fucks with the box choice, something which might not be accounted for in the initial brain scan or whatever, the computer cannot be called 100% accurate.
4) There are a poo poo ton of things that the computer cannot account for based purely on a brain scan or a personality test which might cause a person to act out of character, leading to less than 100% accuracy. (i.e. In between the quiz and the box choice, the person gets a phone call stating that their significant other has been kidnapped, and if they don't pay $1,001,000 to the kidnappers, their SO will be killed. Fantastical, but not beyond the realm of possibility. Exactly the kind of prank a dick rival researcher would pull)

This is what I'm talking about. The problem is with the computer and with the claim of 100% accuracy, not with the conceptual claim that certain predictive markers exist which strongly correlate with other behaviors (i.e. those Buzzfeed "I can guess when you lost your virginity based on your favorite Disney movie and which picture of a mountain you like" quizzes). Obviously sometimes there is no vvvv edit: typo, thanks vvvv Toph Bei Fong fucked around with this message at 19:04 on Oct 30, 2014 |
# ? Oct 30, 2014 17:48 |
|
Those actually are correlations, hence the site's name. It's causation that's absent.
|
# ? Oct 30, 2014 18:23 |
|
Spoilers Below posted:I'm saying: There are several easy ways to show that a 100% predictor can't exist: 1. A human can decide what to do based on a coin flip or some other random bit (I do this all the time when deciding what to do for lunch, for instance). 2. If you have a 100% accurate super-intelligence, you can build another one that attempts to do the opposite of whatever the first 100% accurate intelligence decides it will do. This seems like a fake "gotcha" thing but I deal with predictions that change the outcomes of what's being predicted all the time at work (predict an insurance claim will be expensive, and the incentive to settle it early goes up. Which means it ends up not being expensive...)
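The second argument is essentially a diagonalization, and it fits in a few lines of code. The names here (`contrarian`, `optimistic_oracle`) are invented for illustration; the structure is what matters: any concrete predictor you hand the wrapper is guaranteed to be wrong about the resulting agent:

```python
# A toy version of the "build an agent that does the opposite" argument:
# wrap any would-be perfect predictor in an agent that consults it,
# then does the other thing. (All names are invented for illustration.)

def contrarian(predictor):
    """Return an agent that asks the predictor about itself, then defects."""
    def agent():
        forecast = predictor(agent)   # what does the oracle say I'll do?
        return not forecast           # do the opposite
    return agent

# Plug in any concrete predictor; it is wrong about this agent by construction.
def optimistic_oracle(agent):
    return True                        # "the agent will choose True"

agent = contrarian(optimistic_oracle)
assert agent() != optimistic_oracle(agent)   # prediction never matches behaviour
```

Since the construction works for every predictor you substitute, no predictor can be 100% accurate on all agents, which is the same structure as the halting-problem proof.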
|
# ? Oct 30, 2014 22:39 |
|
Spoilers Below posted:4) There are a poo poo ton of things that the computer cannot account for based purely on a brain scan or a personality test which might cause a person to act out of character, leading to less than 100% accuracy. (i.e. In between the quiz and the box choice, the person gets a phone call stating that their significant other has been kidnapped, and if they don't pay $1,001,000 to the kidnappers, their SO will be killed. Fantastical, but not beyond the realm of possibility. Exactly the kind of prank a dick rival researcher would pull) su3su2u1 posted:There are several easy ways to show that a 100% predictor can't exist: I mean okay, maybe a phone call saying grandma died would gently caress up the results. So quarantine the subject between the scan and the test, or do the scan immediately prior to the test. Or whatever. And of course it is trivially true that the outcome of a truly random coin toss can't be predicted (except in aggregate). So prohibit truly random coins in the test facility. I guess if you honestly think that human behaviour is determined by random coin flips or whatever, then barring them from the experiment is cheating. But if you don't think they're determinative then it's just an implementation wart for the design of the experiment.
|
# ? Oct 31, 2014 00:00 |
|
Unrelated to this discussion, here is something amazing http://lesswrong.com/lw/298/more_art_less_stink_taking_the_pu_out_of_pua/
|
# ? Oct 31, 2014 00:13 |
|
The Vosgian Beast posted:Unrelated to this discussion, here is something amazing http://lesswrong.com/lw/298/more_art_less_stink_taking_the_pu_out_of_pua/
|
# ? Oct 31, 2014 00:17 |
|
The Vosgian Beast posted:Unrelated to this discussion, here is something amazing http://lesswrong.com/lw/298/more_art_less_stink_taking_the_pu_out_of_pua/ I guess it makes sense for a cult leader to be pretty interested in programming, though.
|
# ? Oct 31, 2014 00:20 |
|
Even more so considering his belief that for everybody there is just the exact right thing you need to say to them to get them to do whatever you want. It's half the justification for how his AI-takes-over-the-world delusions even work, after all.
|
# ? Oct 31, 2014 00:27 |
|
The linked blog post was not written, nor commented on, by Yud.
|
# ? Oct 31, 2014 00:29 |
|
I wonder how much of that mindset is honestly caused by videogames. They often feature picking the 'right' dialogue option to unlock the person's willingness to do what you want, etc.
|
# ? Oct 31, 2014 00:30 |
|
Moddington posted:The linked blog post was not written, nor commented on, by Yud. Yeah, in the future, remember that not everyone on Less Wrong is Big Yud! There's a whole site full of Dunning-Krugerites!
|
# ? Oct 31, 2014 00:33 |
|
Still applies, though. He does actually believe that.
|
# ? Oct 31, 2014 00:34 |
|
Night10194 posted:I wonder how much of that mindset is honestly caused by videogames. They often feature picking the 'right' dialogue option to unlock the person's willingness to do what you want, etc.
|
# ? Oct 31, 2014 00:35 |
|
The Vosgian Beast posted:Unrelated to this discussion
|
# ? Oct 31, 2014 00:35 |
|
The Vosgian Beast posted:Unrelated to this discussion, here is something amazing http://lesswrong.com/lw/298/more_art_less_stink_taking_the_pu_out_of_pua/ clippy posted:I want to join this so I can learn how to better convince humans to help me. wedrifid posted:So are we to expect anecdotes of Clippy negging HBs and getting "clip closes"? :P clippy posted:Paperclips shouldn't "close" in the sense of the metal wire forming a closed curve; they should be open curves.
|
# ? Oct 31, 2014 00:53 |
|
That's a gimmick account, pretending to be a paperclip maximizer, as discussed earlier in the thread. It looks like a huge sperg because it's a regular sperg trying to pretend to be a sperg about paperclips. Way too long story short: paperclip maximizers are AI entities which desire some goal which seems incoherent to humanity. They are one of Yud's terror scenarios for unFriendly AI. The nominal one seeks singlemindedly to increase the number of paperclips in the universe; the quote goes "the AI does not love you, nor does it hate you, but you are made of atoms it could use for something else." Yes, Yud is actually scared of this and spends a million bucks or more a year trying to stop it from happening.
|
# ? Oct 31, 2014 01:59 |