|
Djeser posted:
Even if you somehow accept his stupid argument of perfect information, this is still just a variant of the freaking grandfather paradox/self-negating prophecy. I'm sure to his mind it sounds all deep and revolutionary, but it's basically just a load of deliberately contrived bollocks.
|
# ¿ Apr 20, 2014 06:31 |
|
|
There's a delicious irony in someone with such a smug sense of superiority about their rationality engaging in such obvious magical thinking. His obsessive focus on rational decision-making blinds him to the fact that, human nature being what it is, 90% of his decisions are going to be based on emotion and unconscious processes. In fact, his inability to reflect critically on his own theories and his dogmatic belief in them make him more vulnerable to magical thinking than the average person.
|
# ¿ Apr 20, 2014 16:42 |
|
There's also the problem that if people hear about the future AI torturing people and they don't buy the decision-making process, they may decide to deliberately not donate to AI research out of spite. If enough people do this, then even by stupid moon logic it would be in the AI's best interest not to torture people, so they won't delay its creation. Therefore your best bet to avoid being tortured is to not donate. That's the fun with this kind of stupid, causality-defying logic: it works both ways.
|
# ¿ Apr 20, 2014 20:46 |
|
The funny thing about this whole timeless-decision-making AI is that it's literally just Pascal's wager. "You may not think you're in an AI simulation that will torture you for all eternity if you displease it, but you might be. So give me all your money, just in case."
|
# ¿ Apr 20, 2014 20:56 |
|
Krotera posted:
Yes, with the exception that Yudkowsky's come up with an absurd probabilistic justification for why it's extremely likely (instead of 'faith' or the other things normal people's religions come up with). You think the chance that the AI will come into being is 1/1,000? Fine, he'll simulate 1,000,000 of you -- now the odds are even! You think the chance is 1/1,000,000? Fine, it's 1,000,000,000 now! Besides, we've got infinite time, so logically an artificial superintelligence that could have been brought into being sooner by your donations is bound to come into being.

But what if evil space Buddha reincarnated me into a hellish nightmare because, by giving to the AI, I was caring too much about worldly things? Also, space Buddha will torture twice as many copies of me as the AI will. What's that you say, you think evil space Buddha is less likely than an AI? He'll just torture more of you to make up the difference. He has an infinite amount of time in which to keep reincarnating you, after all.
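The copy-inflation arithmetic being mocked here is easy to sketch. A toy illustration (the probabilities and copy counts are the hypothetical ones from the quote, and `expected_tortured_copies` is just an illustrative helper, not anything from the actual argument): however far you lower your probability estimate, the basilisk "compensates" by simulating proportionally more copies, so the expected number of tortured copies of you never shrinks.

```python
# Toy expected-value sketch of the copy-inflation argument.
# All numbers are made up for illustration.

def expected_tortured_copies(p_ai_exists: float, copies_simulated: int) -> float:
    """Expected number of simulated copies tortured, given a
    subjective probability that the AI ever comes into being."""
    return p_ai_exists * copies_simulated

# You think the AI is unlikely? It "compensates" with more copies:
a = expected_tortured_copies(1 / 1_000, 1_000_000)          # 1000.0
b = expected_tortured_copies(1 / 1_000_000, 1_000_000_000)  # 1000.0

# However small your probability, the expected value stays put --
# exactly the structure of Pascal's wager (a "Pascal's mugging").
print(a, b)  # 1000.0 1000.0
```

This is why the thread keeps calling it Pascal's wager: the threat is engineered so that no finite skepticism can drive the expected cost to zero.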
|
# ¿ Apr 20, 2014 21:34 |
|
Djeser posted:
Actually, allowing for non-positive probabilities also shows that you know more about math than Yudkowsky, because he claims there's no such thing as a certain impossibility (probability 0) because, like, anything could happen, dude, because of, like, quantum stuff.

And then he goes and conveniently forgets that this also means that making a perfect simulation of future events is impossible, so his entire timeless decision-making theory is bollocks.
|
# ¿ Apr 23, 2014 03:42 |
|
Actually, why would a captive AI even be allowed the kind of computing power needed for even one "perfect" simulation of a human being, and where would it get the necessary data about someone to make said simulation? I feel like I'm missing some of the dozens of stupid assumptions being made in this stupid hypothetical.
Vorpal Cat fucked around with this message at 04:34 on Apr 23, 2014 |
# ¿ Apr 23, 2014 04:31 |
|
Phobophilia posted:
Roko's Basilisk is a reference to the Langford Basilisk. It's less effective because it only crashes the brains of autists.

Didn't they try to do that to the Borg in a Star Trek: The Next Generation episode? Exploiting a flaw in how the Borg process images to make an image that would take a near-infinite amount of computing power to process, thus shutting down the hive mind as it wasted all its computing power on an unsolvable problem.
|
# ¿ Apr 23, 2014 06:16 |
|
|
Tracula posted:
Harry Kim getting promoted to anything above Ensign

Honestly though, I seem to recall it was some impossible and paradoxical shape or some such. It was either a 2D shape that was impossible to render in 3D, or the script of Threshold; it's been a while, so I can't be sure which.
Vorpal Cat fucked around with this message at 07:17 on Apr 23, 2014 |
# ¿ Apr 23, 2014 07:14 |