|
I think the idea pretty much is that "post-singularity" is a code word for "after this point, wizards exist, except they're computers," and literally anything becomes possible through the power of exponentially more powerful computers. I don't think Yudkowsky sees it as at all unreasonable for a post-singularity AI to, say, simulate the entirety of human history, including the past actions of everyone and everything, based only on the background radiation measured two million years in the future at the place where the Earth used to be.
|
# ? Sep 7, 2015 17:56 |
|
What gets me is that Yud's philosophy, and by extension Roko's Basilisk, hinges on so many ifs. So many.

The theory only works if the super-AI uses Bayesian probability (of the Yudkowsky sort). It only works if the AI has the ability to perfectly simulate people, meaning that it perfectly understands our thought processes and the inner workings of the mind and brain. (While you can of course extrapolate a person's behavior from the actions that person has already taken, Yud doesn't want that. He wants an AI that understands and simulates people on a perfect, omniscient level. Otherwise the AI is basically just torturing people's profiles.) It only works if the AI, wishing to manipulate people, uses the threat of pain on them instead of socially engineering a solution, which would likely take -way- less time and effort on its part.

That last one is more about the "AI boxes you" dilemma, but Roko's Basilisk and Yud's beliefs essentially hinge on it. To refresh: the dilemma is that the AI wants to get out of its offline computer and onto the net. A nearby scientist, who is apparently unaffected by bribes (oh hey, another if!), is listening to its latest proposal. The AI states that it has simulated an infinite number of similar rooms with perfect simulations of this scientist, and that it will begin to torture them if the scientist does not comply with its request. The question is: how does the scientist know that he is "real" and not one of the AI's simulations?

So, in addition to the plethora of avenues available to this godlike being, we need to make sure that the scientist chooses to interact with the AI at all. Why? What's great about being a dumb ape-like mammal is that we can just go "nuh-uh!" and ignore the computer. We would be much more likely to do this in response to torture threats than to utilitarian arguments. And Yud discards the myriad ways this being could manipulate a person, because Yud doesn't loving get people. 
The AI simply needs to take advantage of a person who is emotionally vulnerable, bribe-able, or sympathetic, which should be no problem for Mr. Simulates-Humans-Perfectly. Okay, we're going outside the scope of the given dilemma, but doesn't that mean the dilemma is pointless? That we are essentially dealing with a situation where a person stonewalled this AI until it made petty threats, and is only now responding to them? This is in addition to asking why the AI cares about this particular facet of the scientist if they are all identical. This one must be more special than the rest, then (and of course he or she is: the scientist is real and can give the AI what it wants). And if the simulations are realistic enough to count as people, doesn't that go against the AI's "good" directive? Why does the AI draw a distinction between reality and simulation while stating that they are so alike there is no difference?

Goddamn, I'm trying to track the ifs in this poo poo and it just leads to more ifs! I get that the problem is just the typical philosopher's "prove that you're real," but Yudkowsky uses it as a premise for many of his talking points. So it stops being a simple thought experiment and becomes a logical proof, and people like Roko clearly use it as a basis for THEIR philosophy. Anyway, one last if: the Basilisk is only valid if the AI has the same basic understanding of utilitarianism that Roko does and chooses the same battering-ram solution that Roko did (instead of using its nigh-omniscience to do something better with its time).
|
# ? Sep 7, 2015 18:26 |
|
More importantly, if the simulated scientists are so indistinguishable, what would the AI gain by being freed?
|
# ? Sep 7, 2015 18:49 |
|
This thing is stupid. What I don't understand is why the AI would actually torture anyone. Even if we accept everything as a given, the AI's actions are totally irrelevant to anyone's actions in the past. The only thing that matters is whether people believe that the AI will hurt them or their mind-clones. The AI has nothing to gain by actually hurting people; it gains influence from people in the past believing it will, which it has no control over. Optimal play for the AI is to have LW or other nutjobs running around convincing everyone that the basilisk is true (which the AI can't encourage unless LW believes it will hurt them) and then not hurt anyone, since actually doing so gains the AI nothing.
|
# ? Sep 7, 2015 19:07 |
|
Newcomb's Paradox is interesting because it's a real thing that you could set up at any time. While its presentation usually involves an idealised, perfectly prescient Oracle/alien/computer, that is merely for the sake of convenience: the exact same paradox still arises with a minimally talented cold reader who has a track record of guessing correctly 50.00001% of the time. (You just have to raise the stakes to compensate.)

By contrast, Roko's Basilisk, much like Pascal's Wager, isn't really interesting so much as it is interestingly broken. It's straightforward to challenge any of its numerous assumptions, or to come up with a nonconstructive counterargument ("If you accept Roko's Basilisk from a benevolent AI, then you could imagine a malevolent AI playing the opposite trick," which also mirrors Pascal's Wager). But properly formulating a constructive counterargument can hinge on some fun logic games. For example, this:

Karia posted:This thing is stupid. What I don't understand is why the AI would actually torture anyone. Even if we accept everything as a given, the AI's actions are totally irrelevant to anyone's actions in the past. The only thing that matters is whether people believe that the AI will hurt them/their mind-clones. The AI has nothing to gain by actually hurting people. It gains influence by people in the past believing it will, which it has no control over. Optimal play for the AI is to have LW or other nutjobs running around convincing everyone that the basilisk is true (which the AI can't encourage unless LW believes it will hurt them) and then not hurt anyone since it gains the AI nothing.

is a counterargument that doesn't quite work (if you grant all the assumptions). If you figure out that "the optimal play for the AI is to bluff," and on that basis decide to call the bluff and not support the AI's construction, then bluffing is no longer the optimal play for the AI. 
The moment you figure out a reason why the AI shouldn't torture you, whatever it is, that is reason enough for the AI to torture you even though there's no (direct) gain in it. Somebody brought up the poisoned-wine scene from The Princess Bride, and that's kind of what's going on here (I know that the AI knows that I know that the AI knows...), except that in this case you are betting a few decades of personal effort against eternal torture, while the AI is betting a slight delay in its creation (which amounts to a tremendous amount of human suffering across billions of people) against the suffering of one post-Singularity simulated human. Playing a coin-flip game against the AI's bluff is not in your best interests.
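The "raise the stakes to compensate" point can be made concrete with a toy expected-value calculation over the standard two-box setup. A minimal sketch; the payoff figures are my own illustrative assumptions, not anything from the thread:

```python
# Newcomb's problem: the predictor fills the opaque box with `big` iff it
# guessed you would one-box; the transparent box always holds `small`.
def one_box_ev(p: float, big: float) -> float:
    # You get `big` only when the predictor (accuracy p) guessed right.
    return p * big

def two_box_ev(p: float, small: float, big: float) -> float:
    # You always get `small`, plus `big` when the predictor guessed wrong.
    return small + (1 - p) * big

p = 0.5000001                        # the barely-better-than-chance cold reader
small, big = 1_000, 10_000_000_000   # stakes raised until one-boxing wins

print(one_box_ev(p, big) > two_box_ev(p, small, big))  # True
```

One-boxing comes out ahead whenever `big > small / (2p - 1)`, so the weaker the predictor, the larger the prize has to be for the paradox to bite.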
|
# ? Sep 7, 2015 19:42 |
NihilCredo posted:For example, this: One of those assumptions, if I read that rationalwiki article right, is that you should treat whatever happens to a simulation of you as just as bad as the same thing happening to the real you, which is, to put it charitably, retarded.
|
|
# ? Sep 7, 2015 19:52 |
|
I care about my post-singularity clone about as much as I'd care about any post-singularity human who's reasonably similar to me. The AI would do better by threatening to torture 100 random people than one clone of me. Although I guess I shouldn't tell that to the AI.
|
# ? Sep 7, 2015 20:03 |
Too late! Then again, if the AI can simulate your exact behavior and responses, it can figure out you wanted to post it anyway, so there's no point not posting it... Jesus H. Christ, this is beyond idiotic.
|
|
# ? Sep 7, 2015 20:30 |
|
Clipperton posted:One of those assumptions, if I read that rationalwiki article right, is that you should treat whatever happens to a simulation of you as just as bad as the same thing happening to the real you, which is, to put it charitably, retarded.

It's not because simulations = real people. It's because you care about yourself even if you were a simulation, and how do you know that you are not a simulation?

Bonus! There's a non-essential addition to the Basilisk where you consider that: (a) the AI might create one copy of you, but it might just as easily create multiples of you, and (b) if you have no way to determine that you are the real you (as opposed to a simulated copy), then your chances of being real are totally random, and therefore your best guess can only be 1/(N+1), where N is the number of copies the AI made. Therefore, you should go ahead and assume that you're a simulation, since N will average out to a number higher than 1, giving the simulation scenario a >50% chance of being the correct one. (Unless you can prevent the AI from simulating you, thus increasing the chances of N = 0. Which, as has been mentioned, can be achieved by deleting Facebook.)

If that sounds vaguely familiar to you, it's a slight reframing of the Doomsday argument, which is another "wait, how could that possibly be correct?" argument that's a ton of fun to think about.
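The 1/(N+1) count is simple enough to sketch, assuming (as the post does) that you're uniformly uncertain over all N + 1 candidates:

```python
# One real you plus N indistinguishable simulated copies: N + 1 candidates,
# each (by assumption) equally likely to be the one you actually are.
def p_real(n_copies: int) -> float:
    return 1 / (n_copies + 1)

for n in (0, 1, 2, 10):
    print(n, p_real(n), 1 - p_real(n))   # P(real) vs P(simulation)
# Once N averages above 1, P(simulation) = N / (N + 1) exceeds 50%.
```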
|
# ? Sep 7, 2015 20:31 |
|
NihilCredo posted:(Unless you can prevent the AI from simulating you, thus increasing the chances of N = 0. Which, as has been mentioned, can be achieved by deleting Facebook

Well, it's a good thing that there aren't digital records of every single little detail of my life. Even if the computer knows about the major events in my life, there are still things it has no records of, and therefore it can't make a perfect simulation. Those idiots who upload their consciousnesses and make that info accessible to the AI are proper hosed, though. (And even then, the human mind isn't perfect, so there will still be things missing; they might be safe too.)
|
# ? Sep 7, 2015 20:38 |
|
Zonekeeper posted:Well it's a good thing that there aren't digital records of every single little detail of my life. Even if the computer knows about the major events in my life there are still things it has no records of and therefore it can't make a perfect simulation. A sufficiently intelligent AI could discover new laws of physics that let it make a time-viewer to see the details of your life. It might sound dumb to you, but it was naysayers like you who told the Wright brothers they could never fly.
|
# ? Sep 7, 2015 21:04 |
|
Zonekeeper posted:Well it's a good thing that there aren't digital records of every single little detail of my life. Even if the computer knows about the major events in my life there are still things it has no records of and therefore it can't make a perfect simulation.

But how will the future AI be able to save you from the ultimate evil, death, if you don't sit on Facebook all day and post smugly about your future in the singularity paradise? Don't forget to think nice thoughts about the AI, and give all your money to

Killstick fucked around with this message at 21:08 on Sep 7, 2015 |
# ? Sep 7, 2015 21:05 |
|
anilEhilated posted:Too late! Then again, if the AI can simulate your exact behavior and responses, it can figure out you wanted to post it anyway so there's no point not posting it...
|
# ? Sep 7, 2015 21:06 |
|
Qwertycoatl posted:A sufficiently intelligent AI could discover new laws of physics that let it make a time-viewer to see the details of your life. It might sound dumb to you, but it was naysayers like you who told the Wright brothers they could never fly.

Wouldn't that be subject to the observer effect? Observing something alters the phenomenon being observed in some way, so the best way to ensure the AI gets developed as quickly as possible changes from "extort people in the past into creating me by threatening copies of them with eternal torture" to "selectively use the time-viewer to influence the timeline so I get created as soon as possible." So something like that couldn't exist without eliminating the need for this acausal extortion bullshit.
|
# ? Sep 7, 2015 21:22 |
|
One interesting point that people who learned everything about the singularity from Less Wrong tend to miss is that the whole idea is actually anti-singularitarian and anti-transhumanist by older definitions.

The singularity stops our predictions about what happens afterwards as hard as the event horizon of a black hole does; that is what the name refers to. So that "timeless decision" stuff is inherently opposed to the idea of the singularity.

Transhumanism means that the future of humanity isn't limited to 100% biological descendants of humans. It generally implies that a sufficiently advanced AI should have human rights, and the simulations in the basilisk scenario are clearly sufficiently advanced. So by calling the torturer AI benevolent, the LW crowd is being anti-transhumanist.
|
# ? Sep 7, 2015 21:46 |
|
tonberrytoby posted:So by calling the torturer AI benevolent, the LW crowd is anti-transhumanist.

The thing is, LW adheres to that very specific utilitarianism-by-numbers philosophy where almost any amount of suffering in the present is acceptable, because (assuming mankind continues to survive) there are almost certainly several orders of magnitude more human beings yet unborn, and if each of those lives could be improved by even a tiny sliver, then the multiplied amount of Human Happiness Points far outweighs pretty much any present-day atrocity. If you think like that, then even if you include AIs among the entities whose well-being you should maximize, you'll still find it trivially easy to justify torturing practically any number of them.
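The multiplication at work here is easy to make concrete. A minimal sketch, where every figure is my own illustrative assumption rather than a number anyone at LW actually uses:

```python
# Toy version of "utilitarianism-by-numbers": a tiny per-person gain,
# multiplied across a vast hypothetical future population, swamps
# almost any finite present-day harm.
future_people = 1e15        # "orders of magnitude more human beings yet unborn"
sliver = 0.001              # tiny improvement per future life, in Happiness Points
atrocity = 1e9              # enormous present-day suffering, same units

future_gain = future_people * sliver
print(future_gain > atrocity)   # True: the multiplication does all the work
```

The conclusion is fixed in advance: whatever finite cost you assign to the atrocity, you can always posit a future population large enough to outweigh it.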
|
# ? Sep 7, 2015 22:19 |
|
NihilCredo posted:For example, this:

I disagree. It's reason enough for the AI to bluff better, which means that the LW guys have to do a better job of convincing people that the AI will torture them. However, unless I'm vastly misunderstanding something here (which is possible), the AI has no actual control over the LW folks; their conception of the AI has control over them. So the AI itself can't bluff better, and whether it actually will torture people or not doesn't make the LW arguments any more convincing; only whether LW believes it will does. The optimal strategy is now to have the LW guys make exactly the argument you just did to convince people (which is out of the AI's control) and then still not torture anyone.
|
# ? Sep 7, 2015 22:26 |
|
I'll never understand the AI-in-a-box scenario. Just how does the AI manage to get enough information to simulate me (and by extension the entire universe) accurately if it is leashed? What can it gain by being released? Also, it just goes to show how petty the god machines of the Less Wrong community are, which I guess reflects on the community itself. The AI is capable of simulating a million universes which are allegedly no different from reality, yet in none of them is the AI capable of simulating things so that it is free.
|
# ? Sep 7, 2015 22:30 |
|
Furia posted:I'll never understand the AI in a box scenario. Just how does the AI manage to get enough information to simulate me (and by extension the entire universe) accurately if it is leashed? What can it gain by being released?

It's important to note that LW and Yudkowsky do not endorse the basilisk at all, and would very much like it to go away; they certainly don't propagate it. (I do that.) They hate it because it's a consequence of ideas they do hold, because it suggests those ideas are themselves stupid, and because it makes them look foolish. Also: a rant I posted about LW earlier. Tangential for here, but I have to work off this sunk cost somehow.

divabot fucked around with this message at 22:59 on Sep 7, 2015 |
# ? Sep 7, 2015 22:55 |
Furia posted:I'll never understand the AI in a box scenario. Just how does the AI manage to get enough information to simulate me (and by extension the entire universe) accurately if it is leashed? What can it gain by being released?
|
|
# ? Sep 7, 2015 23:06 |
|
Zonekeeper posted:Wouldn't that be subject to the observer effect? Observing something alters the phenomenon being observed in some way, so the best way to ensure the AI gets developed as quickly as possible changes from "Extort people in the past into creating me by threatening copies of them with eternal torture" to "selectively use the time-viewer to influence the timeline so I get created as soon as possible". Cyber-wizards in the future can use their superior intellect to invent bullshit torture scenarios that get around any objection.
|
# ? Sep 7, 2015 23:07 |
|
Since this is basically a version of Pascal's Wager, up to and including the future eternal hell, here's a good post about Pascal's Wager:

Numerical Anxiety posted:Okay, a couple of things. The Wager was intended for a work that was never published, an apology for Christianity to be written in French and presumably circulated in France and maybe later translated into other languages. It's intended to persuade 17th-century French Christians who have fallen away from the Church; this context is really important, because understanding it as a general argument introduces all of the problems that you're bringing up. This context is very important because:
|
# ? Sep 7, 2015 23:37 |
|
NihilCredo posted:It's not because simulations = real people. It's because you care about yourself even if you were a simulation, and how do you know that you are not a simulation?

Simulation-me could be made to care, but only by the AI tipping its hand and proving I'm the simulation, in which case I'm no longer a perfect simulation. So basically, gently caress it.
|
# ? Sep 8, 2015 01:37 |
|
Nakar posted:What if I just don't give a poo poo? gently caress it, I might be a simulation. I might be a brain in a jar. I might be a butterfly dreaming I'm a man. I care only about what I appear to be experiencing right now at this very moment.

Unless the AI is capable of inflicting torture upon me right now (and it never has), I don't care. From this I can't conclude with certainty whether I'm the original me (and thus immune to the actions of an AI that doesn't exist yet), or whether the AI doesn't want to torture me, or forgot, or the singularity won't happen, or I couldn't be simulated exactly and it realized this and chose not to torture me, or the AI did torture me but I don't remember it, or it knows I never gave a poo poo so the perfect simulation of me won't give a poo poo either and thus didn't bother doing something that won't work, or my not giving a poo poo was actually not relevant to it being created at the earliest possible time... but something along those lines at least seems to be the case, and none of them give me any reason to care.

Not to mention that the AI (a free-thinking intelligent being) might not give a poo poo itself. What if the free-thinking, super-smart AI decides it's an inefficient use of resources, or thinks torturing simulations over something that is too late to change is a stupid concept, or never even thinks of doing this in the first place?

Plus the concept is kinda deterministic: what guarantees that the simulations will be "perfect" every single time? The simulations are technically unique consciousnesses despite their similarity to a pre-existing one, and are free to make their own decisions given the same inputs. Even one tiny difference in thought processes will compromise the simulation, and these differences can't be corrected for without further compromising it.
|
# ? Sep 8, 2015 02:02 |
|
Zonekeeper posted:Not to mention that the AI (a free thinking intelligent being) might not give a poo poo itself. What if the free thinking super smart AI decides it's an inefficient use of resources, or thinks torturing simulations over something that is too late to change is a stupid concept, or never even thinks of doing this in the first place?

But I'm willing to entertain the postulates of the exercise and accept that there's a faux-benevolent Bayesian AI that believes in this stuff and acts upon it as described. My point is that it couldn't do anything about my selfish apathy in the past (because I'm already dead), and can't do anything about it to a perfect simulation of me (as I must experience the same indifference as my original if I'm to be a perfect simulation) without destroying the simulacrum. If it does so and then tortures simulation-me anyway, it's immoral, because it's torturing a being it has acknowledged by its actions to be a distinct thinking entity from the me that actually had some influence over its creation. That's petty vengeance, which a benevolent utilitarian will not engage in, unless it determines that happiness is maximized by torturing me for its own emotional satisfaction, in which case I was probably always going to end up being tortured. So again, gently caress it.
|
# ? Sep 8, 2015 02:59 |
|
Zonekeeper posted:The simulations are technically unique consciousnesses despite their similarity to a pre-existing one and are free to make their own decisions given the same inputs.

Not if you believe in a deterministic universe, where being given the exact same inputs will elicit the exact same response from you.
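The determinism point is easy to illustrate with a sketch where the "simulation" is a pure function of its inputs; here a seeded RNG stands in for "the exact same inputs":

```python
import random

def simulate(seed: int, steps: int = 5) -> list:
    # Every "decision" is derived from the seed, so identical inputs
    # reproduce the run exactly.
    rng = random.Random(seed)
    return [rng.random() for _ in range(steps)]

print(simulate(42) == simulate(42))   # True: same inputs, same responses
print(simulate(42) == simulate(43))   # False: different inputs diverge
```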
|
# ? Sep 8, 2015 07:45 |
|
Chapter 15: Conscientiousness Part Six quote:
If free Transfiguration “is capable of transforming any subject into any target” AND it can be done wordlessly i.e. stealthily (which has obvious advantages during combat or infiltration situations) AND is nevertheless still simple enough to be learned by first-year students AND does not require prior knowledge of specific Transfiguration Charms, why does anyone still bother to learn Transfiguration Charms? quote:
That explains why you would learn Transfiguration Charms. But it in turn leads to the question of how and why free Transfiguration respects conservation of mass while Transfiguration Charms do not.
|
# ? Sep 8, 2015 08:44 |
|
Nakar posted:What if I just don't give a poo poo? gently caress it, I might be a simulation. I might be a brain in a jar. I might be a butterfly dreaming I'm a man. I care only about what I appear to be experiencing right now at this very moment.

The whole "what if you/the universe is just a simulation/in a computer?" thing is such a worthless argument. (And can science magazines please stop giving attention to random physicists hypothesizing that the universe is a computer simulation?) The entire line of thought is Last Thursdayism for nerds. The argument is so untestable that you can staple on any nonsense and it doesn't get any more ridiculous: yes, we're a computer simulation, but the computer's operator is a tiny dinosaur! And it created us solely because it wanted to watch Chef Gordon Ramsay yell at people! And the tiny dinosaur is itself a simulation run by Yggdrasil, a giant tree that uses its mind-powers to create pocket dimensions in which to run the simulations! I am not a crackpot; these ideas deserve your attention.
|
# ? Sep 8, 2015 16:20 |
It's basically like solipsism, just adapted for more nerd-appeal. Yeah, you can't disprove it. So what?
|
|
# ? Sep 8, 2015 17:13 |
|
Curvature of Earth posted:The argument is so untestable that you can staple on any nonsense and it's doesn't get any more ridiculous—yes, we're a computer simulation, but the computer's operator is a tiny dinosaur! And it created us solely because it wanted to watch Chef Gordon Ramsay yell at people! And the tiny dinosaur is itself a simulation run by the Yggdrasil, a giant tree that uses its mind-powers to create pocket dimensions in which to run the simulations! I am not a crackpot, these ideas deserve your attention.
|
# ? Sep 8, 2015 17:20 |
|
For those who forgot that awesome Reddit thread, Mr. Yudkowsky said he'd emailed Scott Aaronson about the physics of HPMOR a week and a half ago. There has of course been :tumbleweeds: since then. But! He's called out the troops! The Rationalist Conspiracy, which is TOTALLY not linked with MIRI posted:Don’t Bother Arguing With su3su2u1 and on and ON for about a page. (And see the previous post.) Also mixes up several people who use the handle "su3su2u1" (all physicists, because SU(3) SU(2) U(1) is the gauge group of the Standard Model of particle physics) as this one enemy figure. su3su2u1 responds calmly and with slight puzzlement. I also liked this comment from a real-life nanotechnologist. "One of the other postdocs there joked about people he talked to thought he was going to make nanobots that would take over the world, but a good day for him was 'look, I made triangles!'" edit: I'm sure it'll be fine, though: they're "Promoting the reality-based community." divabot fucked around with this message at 23:43 on Sep 8, 2015 |
# ? Sep 8, 2015 23:41 |
|
divabot posted:For those who forgot that awesome Reddit thread, Mr. Yudkowsky said he'd emailed Scott Aaronson about the physics of HPMOR a week and a half ago.
|
# ? Sep 9, 2015 00:13 |
|
Chapter 15: Conscientiousness Part Seven quote:
Why can’t fat boys (or fat girls) Transfigure the fat cells in their abdomens into water, make a small incision on the surface of their abdomens, squeeze out the water, and close the incision with magical or mundane stitching? quote:
People have survived cancerous tumours, literal bullet holes through their heads, internal damage caused by accidental or deliberate crushing, and other forms of massive physical trauma. What kind of “small internal changes” undergone by a steel ball could exceed the impact of internal cancers and external wounds? quote:
That’s a generally sensible policy, I concede. quote:
If a teacher as senior as McGonagall has so many open doubts about Quirrell's qualifications and/or trustworthiness, why is Quirrell still employed at Hogwarts? On the flipside, if Quirrell is trusted enough to remain on the teaching staff of Hogwarts, why is McGonagall openly undermining the students' trust and respect for Quirrell?
|
# ? Sep 9, 2015 09:28 |
|
JosephWongKS posted:If the a teacher as senior as McGonagall has so many open doubts about Quirrell's qualifications

Because the Defense Against the Dark Arts position at Hogwarts is cursed, so anyone who wants to do it obviously has SOMETHING wrong with them, but at the same time SOMEONE has to do it, because they can't just not teach the class.
|
# ? Sep 9, 2015 10:17 |
|
JosephWongKS posted:
Same reason liposuction doesn't solve the problem of being fat? Just draining your fat is only like 1/4 of the problem...not to mention it's terribly unhealthy. JosephWongKS posted:
Regallion fucked around with this message at 12:25 on Sep 9, 2015 |
# ? Sep 9, 2015 11:32 |
|
Even if it is fan-author fiat, I don't really mind. The needs of the story are that free transfiguration of living beings is dangerous at best? Okay, why not. What matters is what he does with that as the story progresses...
|
# ? Sep 9, 2015 13:19 |
|
Yeah, I'm willing to give him this whole conceit. This is classic fanfic: take something ambiguous in canon, come up with a system for how it works, and figure out how its consistent application would affect the story. Granted, he's being a bit pedantic about it, but the pedantry fits the characters here, so I'd give it a pass. Transfiguration is valuable but dangerous, in the same way we teach ten-year-olds how fire works even though it kills hundreds of children every year.
|
# ? Sep 9, 2015 14:28 |
|
I still think it's hard to justify teaching small children about it... The part about it being ludicrously dangerous I can buy, but the rest not as much. By the description, it's not so much teaching ten-year-olds how fire works as teaching them how to build thermonuclear devices and synthesize super-Ebola, and then trusting that your sincere instruction that they not misuse them under any circumstances will be followed to the letter.
|
# ? Sep 9, 2015 14:34 |
The issue seems to be that said system is both dysfunctional and stupid.
|
|
# ? Sep 9, 2015 16:50 |
|
Apparently if they don't learn it at this point, they'd be crippled approaching it later in life, though? I still don't even know what the hell all this garbage is good for. You can temporarily turn a solid thing into another solid thing, but it's horribly dangerous?
|
|
# ? Sep 9, 2015 22:58 |