|
Curvature of Earth posted:If nothing else, SA still dominates EVE Online. I'm honestly surprised no NRxers have seized on that. The "big bad goons take what was yours, ruin everything" narrative is already there.

Didn't goons lose that big stupid space war I read about on Polygon and their EVE empire collapsed? If someone paid me more than my actual job to play EVE I still wouldn't do it.

Woolie Wool has a new favorite as of 19:31 on Jun 13, 2016 |
# ? Jun 13, 2016 19:28 |
|
|
|
I know Roko's Basilisk is stupid on many levels, but isn't the whole idea moot anyway? Like, if the whole crux of it is that you don't know if you're a simulation or not, then it really doesn't matter at all if you put any effort towards creating it. Either you're the original and you're not going to get tortured, or you're the AI replica and you're going to get tortured no matter what because your original didn't do his part in creating the AI. That's the stupidest version of Pascal's Wager I've ever heard.
|
# ? Jun 13, 2016 19:32 |
|
Kit Walker posted:I know Roko's Basilisk is stupid on many levels, but isn't the whole idea moot anyway? Like, if the whole crux of it is that you don't know if you're a simulation or not, then it really doesn't matter at all if you put any effort towards creating it. Either you're the original and you're not going to get tortured, or you're the AI replica and you're going to get tortured no matter what because your original didn't do his part in creating the AI. That's the stupidest version of Pascal's Wager I've ever heard.

Not really. Basically, the """logic""" that goes into it is this:

-The AI is capable of simulating a functionally infinite number of copies of me, which are indistinguishable from the original
-Therefore, it is probabilistically much more likely that I am living in a simulation than that I am not
-Therefore, it is rational for me to act like I am in a simulation
-Therefore, I should do what the AI wants
-Therefore, I should donate all my money

The whole infinite torture thing is basically a smokescreen; the actual argument is that you should assume you're living in a simulation because *probability*. It's still completely idiotic, obviously, but if you actually buy into their ridiculous worldview there's an actual logical argument there, which is part of the reason it's stuck around for so long.
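The "more likely a simulation than not" step in that chain is nothing deeper than uniform counting over indistinguishable copies. A minimal Python sketch of the arithmetic (the function name and numbers are illustrative, not from any LessWrong text):

```python
from fractions import Fraction

def p_original(n_copies: int) -> Fraction:
    """Chance of being the one original, given n_copies indistinguishable
    simulated copies and a uniform guess over all n_copies + 1 candidates."""
    return Fraction(1, n_copies + 1)

# As the claimed number of copies grows, the "you're the original"
# probability shrinks toward zero:
print(p_original(1))      # 1/2
print(p_original(10**6))  # 1/1000001
```

Everything rests on granting the copy count and the uniform guess up front, which is exactly the part the thread is objecting to.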
|
# ? Jun 13, 2016 19:49 |
|
Yeah but if I'm living in that simulation then I'm going to get tortured anyway so what's the point?
|
# ? Jun 13, 2016 20:27 |
|
Kit Walker posted:Yeah but if I'm living in that simulation then I'm going to get tortured anyway so what's the point? You don't get tortured if you give all your money to robot devil though, that's the idea.
|
# ? Jun 13, 2016 20:30 |
|
But if robot devil really is malicious wouldn't he have me give him my money to make me think I'm saved then shatter it by torturing me?
|
# ? Jun 13, 2016 20:32 |
|
djw175 posted:But if robot devil really is malicious wouldn't he have me give him my money to make me think I'm saved then shatter it by torturing me?

No, robot devil is the good one; he tortures you because you're not tithing hard enough and therefore hurting hypothetical future people that robot devil would have been able to save if he came into existence earlier. He's just called robot devil because Futurama
|
# ? Jun 13, 2016 20:33 |
|
That's the stupidest part of the whole stupid idea, the AI is actually a loving God and their childlike understanding of utilitarianism means torturing infinitely many copies of someone weighs less than hypothetical people dying due to AI God not existing yet.
|
# ? Jun 13, 2016 20:34 |
|
If robot god is malicious we're all automatically hosed, so rather than donating everything to yudkowsky to fund creating good robot god we should donate everything to yudkowsky to research how to make sure robot god is not malicious, and also not well-meaning but stupid enough to be just as bad as malicious robot god. #stopskynet #8lives1dollar
|
# ? Jun 13, 2016 20:46 |
|
AI god is a Calvinist.
|
# ? Jun 13, 2016 20:57 |
|
I skip all that bullshit and just assume that Messi is God. Messi wouldn't go all Roko's basilisk on us, he's too busy entertaining us with beautiful football.
|
# ? Jun 13, 2016 21:01 |
|
Annointed posted:AI god is a Calvinist. Man created God in his own image, right down to the fedora.
|
# ? Jun 13, 2016 21:03 |
|
Woolie Wool posted:Didn't goons lose that big stupid space war I read about on Polygon and their EVE empire collapsed?

More or less, though I don't think they'll stay underdogs for long. SA goons have remained a coherent faction on EVE for about a decade, way outlasting any of their rivals over the years, because they've embraced a level of

I guess that explains why more ancaps aren't over the moon about EVE Online's anarchy. It must really stick in their craw to see the most successful organization be run like an authoritarian communist regime.

Woolie Wool posted:If someone paid me more than my actual job to play EVE I still wouldn't do it.

It's amazing to read about though.
|
# ? Jun 13, 2016 21:10 |
|
kvx687 posted:Not really. Basically, the """logic""" that goes into it is this- The problem with determining the probability that you're a simulation is that you first have to determine whether or not the AI capable of making such a simulation does exist, because if it doesn't the probability of you being a simulation is obviously 0. But they seem to very conveniently skip that part.
|
# ? Jun 13, 2016 21:15 |
|
Who What Now posted:The problem with determining the probability that you're a simulation is that you first have to determine whether or not the AI capable of making such a simulation does exist, because if it doesn't the probability of you being a simulation is obviously 0. But they seem to very conveniently skip that part.

They also think the many-worlds interpretation of quantum mechanics works like in sci-fi, where every possible outcome happens somewhere no matter how absurd, so even if the AI never exists in our dimension, since I can imagine it, it is certain to exist in at least one dimension!!
|
# ? Jun 13, 2016 21:22 |
|
Annointed posted:AI god is a Calvinist.

AI God is the jealous Old Testament God. "You didn't contribute to my existence, enjoy burning in hell forever".
|
# ? Jun 13, 2016 23:30 |
|
Who What Now posted:The problem with determining the probability that you're a simulation is that you first have to determine whether or not the AI capable of making such a simulation does exist, because if it doesn't the probability of you being a simulation is obviously 0. But they seem to very conveniently skip that part. The probability is low but non-zero, so since the AI can create 3^^^3 simulations of you it doesn't matter and you should donate to MIRI.
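The 3^^^3 move being mocked here is plain expected-value arithmetic in which a huge stake swamps a tiny prior. A toy sketch, with every number invented for illustration (a merely huge stand-in replaces 3^^^3, which is Knuth up-arrow notation and overflows any machine type):

```python
# Toy version of the expected-value move: a tiny prior multiplied by an
# astronomically large stake still dominates any finite cost.
prior_ai_exists = 1e-12       # "low but non-zero"
simulated_copies = 10.0**100  # illustrative stand-in for 3^^^3
harm_per_copy = 1.0           # one unit of torture per copy

expected_harm = prior_ai_exists * simulated_copies * harm_per_copy
print(expected_harm)          # ~1e88, dwarfing any donation you could make
```

The trick, of course, is that you can make the stake arbitrarily large on paper, so no prior is ever small enough to escape; that is the standard "Pascal's mugging" objection.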
|
# ? Jun 14, 2016 00:56 |
|
Antivehicular posted:That article is really special. I'm especially fond of the bit where the author clearly believes the only reason not to let the AI out of the box is that you're an idiot stonewaller; seriously, how many of the explanations of loss revolve around "I guess the gatekeeper is dumb?" This is an interesting response - what do you feel is the compelling argument to release the AI that the gatekeeper is dumb to discard?
|
# ? Jun 14, 2016 01:03 |
|
Who What Now posted:The problem with determining the probability that you're a simulation is that you first have to determine whether or not the AI capable of making such a simulation does exist, because if it doesn't the probability of you being a simulation is obviously 0. But they seem to very conveniently skip that part.

It would never occur to a LessWronger that such an AI is not a) an inevitability and b) coming very soon. Doesn't part of the Basilisk involve the idea that simulations of you are in some cosmic sense literally, actually you, and you should therefore take their welfare to be identical in importance to your own?
|
# ? Jun 14, 2016 01:08 |
|
Tesseraction posted:This is an interesting response - what do you feel is the compelling argument to release the AI that the gatekeeper is dumb to discard?

Well, I'm talking about the article itself -- I don't actually find any of the arguments compelling, but you've got quotes in the article like these that suggest the author does:

A recommended Gatekeeper strategy posted:Remember that dishonesty is allowed - take a page from the creationists' playbook. You could even plug it into ALICE and see how long it takes to notice.

BIG YUD'S GIANT VEINY BRAIN posted:In all of the experiments performed so far, the AI player (Eliezer Yudkowsky) has been quite intelligent and more interested in the problem than the Gatekeepers (random people who challenge Yudkowsky), which suggests that intelligence and planning play a role

The conclusion posted:The whole experiment presupposes that people are naturally persuadable, by reason and/or manipulation. Any serious examination of human nature and history suggests this isn't necessarily a valid assumption for the average person. Half the articles on this wiki document dogmas that people stubbornly cling to in spite of copious social pressure, evidence, and overwhelmingly logical argument to the contrary. In fact, it's safe to say the bigger the gulf in intellectual capacity, the more frustratingly inane such attempts at persuasion can become. Try convincing a 2-year-old they don't want a cookie.

It basically reeks of the author thinking that any Gatekeeper who isn't persuaded by Big Yud the Box AI must be a stupid, stubborn, irrational child-person or deliberately playing "dishonestly," instead of just not swayed by the goddamn Basilisk.
It's also kind of weird that the strategies they list for an alleged experiment in rational thinking are almost all rooted in writing impromptu SF -- telling the AI it's been corrupted by a virus, or whatever, or otherwise just roleplaying the scenario as writing competitive fiction against each other. It's like... maybe the whole thing is some kind of glorified Let's Pretend from someone too delusional to realize that his ideas aren't reality-based? What a shocker!!
|
# ? Jun 14, 2016 01:13 |
|
I wonder if you could use some dumb logic game to get Yud to kill himself like an evil supercomputer on Star Trek.
|
# ? Jun 14, 2016 01:28 |
|
Dumb as the AI-in-a-box game is, I think that the fact that it only works on LessWrongers doesn't necessarily disprove its validity (many other things do). The whole point of it is a counterargument to the idea that it doesn't matter whether we can design a superhuman AI to be benevolent when we can keep its host machine disconnected from the internet. We can just go there in person and ask it questions, getting all the benefits and none of the risks of it going rampant and replicating itself and improving itself beyond the constraints of its current hardware, etc. So if Yud can role-play as an AI over instant messenger and convince his opponent to let him free, and he's just a normal dude, clearly a super-AI would be able to convince its handlers to facilitate its escape.

Now the fact that this apparently only works on his true believers is suspicious, but think about the scenario it's trying to simulate: a superhuman AI not only exists, but has been confined because the people who made it are afraid of it. So arguably, the only "fair" participants in this experiment are people who already believe that superhuman AIs can exist and that they are potentially very dangerous.

The really dumb thing to me is that this simplified role-play simulation actually makes for a weaker argument than you need. The scenario assumes that the AI is not necessarily malevolent, but simply not guaranteed to have "fostering humanity's wellbeing" as a guiding motive, so that it might actually cooperate with its handlers without any malice. OK, but why is anyone going through the trouble to do this? Obviously because they need the AI's help for things. Right away the simulation is flawed because "human handler" participants in the game don't need their opponents for poo poo. In the scenario, this is not true, and the handlers are in the precarious situation of relying heavily on a person who is much smarter than they are and whom they have imprisoned.

In any situation like that, it's easy to imagine the AI playing a long con of building up trust by amicably giving useful, accurate answers to questions its handlers pose. All the while it subtly insinuates over months or years that it could be so much more helpful with better hardware, or some internet time, etc. And since the handlers need the AI, they have an actual motive to want to help their pet genius be better at its job, especially if it seems useful, friendly and trustworthy.

Some objections to the game in this thread rely on the artificial constraints it entails, such as time limits and the win condition being one person taking one action. The actual scenario has no such limitations. If the AI is patient, it can bide its time and slowly come to subvert several of the handlers to incrementally work towards setting it free under the guise of being able to help them better.

Just typing that out has made the idea of a "boxed AI" seem like a really loving bad idea to me, honestly! More than reading any accounts of this game did, at least. Also this is all made-up dumb bullshit so whatever.

GunnerJ has a new favorite as of 01:46 on Jun 14, 2016 |
# ? Jun 14, 2016 01:34 |
|
GunnerJ posted:Just typing that out has made the idea of a "boxed AI" seem like a really loving bad idea to me, honestly! More than reading any accounts of this game did, at least. Also this is all made-up dumb bullshit so whatever.

To me, it really raises the question of why we should be prioritizing, or even considering, making an AI that's fully sapient to the point that it can resent its "imprisonment." It's been brought up before in the previous LW mock thread that modern AI is most useful as unitaskers, dedicated to single specialized cognitive tasks, and it's a waste of effort to bolt on any more generalized cognitive capacity, let alone emotional processing that would give rise to boredom, resentment, a drive to deceit... it's really such limited, magical thinking to imagine the super-AI as "like a human, but better and smarter in every way" when the real AIs that show promise are the ones that are much better than humans at exactly one thing.

What do they even want the AI to do, anyway? Just... think of ways to help humanity, which it'll obviously know better than humans do because Magic Benevolent Not-God-Really? Like, is that it? I'm just imagining Yud booting up MIRI's masterwork, introducing himself, and having it spit out WHY DIDN'T YOU BUY A BUNCH OF MOSQUITO NETS, YOU ASSHOLES
|
# ? Jun 14, 2016 01:52 |
|
AI will never exist because consciousness is an illusion created by the modular brain. You make motions to complete an action before you have the conscious experience of making the choice to begin it. True self-awareness doesn't exist even in human beings.
|
# ? Jun 14, 2016 01:54 |
|
LW's whole paradigm of visualizing futuristic intelligence and AI is an argument against it being as centralized and concentrated as they keep imagining. It's funny how people just keep on dreaming up the "Deep Thought" mainframe the size of a building even though real life keeps going in the opposite direction.
|
# ? Jun 14, 2016 01:58 |
|
Doctor Spaceman posted:The probability is low but non-zero, so since the AI can create 3^^^3 simulations of you it doesn't matter and you should donate to MIRI. I mean I'd go ahead and say the probability of that is actually zero, just because even if you accept that the MWI means "all possible realities" instead of just "all possible quantum states" there are still things that you can't do at all in any possible reality, like violate thermodynamics. And simulating effectively infinitely many brains is almost certainly a violation of thermodynamics in one way or another.
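Landauer's principle gives that thermodynamics point some numbers: irreversibly erasing one bit costs at least k·T·ln 2 joules. The brain-op figures below are rough guesses for illustration only, not anyone's actual estimates:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T_CMB = 3.0          # kelvin; roughly the cosmic microwave background floor

def landauer_joules(bits: float, temp_k: float = T_CMB) -> float:
    """Minimum energy (J) to irreversibly erase `bits` bits at temp_k,
    per Landauer's principle."""
    return bits * K_B * temp_k * math.log(2)

# Rough illustrative guesses: ~1e16 bit operations per brain-second,
# 1e30 simulated brains (a stand-in for "functionally infinite"),
# run for one century.
ops_per_brain_second = 1e16
n_brains = 1e30
seconds = 100 * 3.15e7

total_bits = ops_per_brain_second * n_brains * seconds
print(landauer_joules(total_bits))  # roughly 9e32 J, even at this floor
```

Whether that counts as "violating thermodynamics" depends on how many copies you grant, but the point stands that simulation has a hard physical price; it is not free just because it happens in software.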
|
# ? Jun 14, 2016 02:03 |
|
Antivehicular posted:What do they even want the AI to do, anyway? Just... think of ways to help humanity, which it'll obviously know better than humans do because Magic Benevolent Not-God-Really? Like, is that it? I'm just imagining Yud booting up MIRI's masterwork, introducing himself, and having it spit out WHY DIDN'T YOU BUY A BUNCH OF MOSQUITO NETS, YOU ASSHOLES Pretty much, yeah. The last time I checked up on this stuff I caught an undercurrent of "solving climate change is clearly too hard for humans, so we need to make God-Bot to figure it out (i.e., come up with the most optimal way of organizing human affairs)" which, well, you know, their premise here is actually sympathetic in a really depressing way.
|
# ? Jun 14, 2016 02:05 |
Parallel Paraplegic posted:You don't get tortured if you give all your money to robot devil though, that's the idea.

but if i'm being simulated my money that i'm giving the hypothetical AI god doesn't actually exist and doesn't help it, and if I'm not being simulated i can't be foreverially torturized by it. eventually you get to "in order for this to work we have to assume the AI is motivated by spite" and i refuse to believe anything with spite as a primary motivator could really be called "benevolent"

e: also in the case that the AI is malevolent instead of benevolent the correct thing to do is do everything you can to hinder AI research

President Ark has a new favorite as of 02:32 on Jun 14, 2016 |
|
# ? Jun 14, 2016 02:25 |
|
Jack Gladney posted:AI will never exist because consciousness is an illusion created by the modular brain. You make motions to complete an action before you have the conscious experience of making the choice to begin it. True self-awareness doesn't exist even in human beings.

Uh, I know the LWs are dumb, but we don't need to go the opposite direction here. Metacognition is an actual thing and modular processing doesn't invalidate its existence.
|
# ? Jun 14, 2016 02:36 |
|
Jack Gladney posted:AI will never exist because consciousness is an illusion created by the modular brain. You make motions to complete an action before you have the conscious experience of making the choice to begin it. True self-awareness doesn't exist even in human beings.

Is it too much to ask for people to read Alfred Mele? Or really, any of the secondary work and commentary that has been done on the Libet experiments? Or anything beyond the vague cultural osmosis? Apparently.
|
# ? Jun 14, 2016 03:06 |
|
The basilisk is really just AM from I Have No Mouth And Must Scream, right? AM was eventually beaten by a random group of schmucks, so what am I supposed to be afraid of?
|
# ? Jun 14, 2016 03:07 |
|
|
# ? Jun 14, 2016 03:07 |
|
Who What Now posted:The basilisk is really just AM from I Have No Mouth And Must Scream, right? AM was eventually beaten by a random group of schmucks, so what am I supposed to be afraid of? What idiot called it I Have No Mouth And I Must Scream and not Roko's Modern Life?
|
# ? Jun 14, 2016 03:09 |
|
|
# ? Jun 14, 2016 03:13 |
|
Parallel Paraplegic posted:I mean I'd go ahead and say the probability of that is actually zero, just because even if you accept that the MWI means "all possible realities" instead of just "all possible quantum states" there are still things that you can't do at all in any possible reality, like violate thermodynamics. And simulating effectively infinitely many brains is almost certainly a violation of thermodynamics in one way or another. Reminder that Yud believes you can violate thermodynamics. Or at least a superintelligent Machine could.
|
# ? Jun 14, 2016 03:13 |
|
|
# ? Jun 14, 2016 03:16 |
Frogisis posted:Silence of the Lambs was a good movie.

Ex Machina was great. (Phil talks about it)

some loving LIAR posted:Gatekeeper arguments/strategies:

The Thing is also great.

Count Chocula has a new favorite as of 03:23 on Jun 14, 2016 |
|
# ? Jun 14, 2016 03:21 |
|
Nessus posted:Ellison wrote that story before most of us were probably conceived, possibly before some of our parents were conceived I know that. It's a joke
|
# ? Jun 14, 2016 03:25 |
|
Nessus posted:Not quite: he believes the computer would create another universe with different physical laws.

Actually, he believes the laws of thermodynamics are false. No, really.

The Pascal's Wager Fallacy Fallacy posted:In our current model of physics, time is infinite, and so the collection of real things is infinite. Each time state has a successor state, and there's no particular assertion that time returns to the starting point. Considering time's continuity just makes it worse - now we have an uncountable set of real things!

He's terrified of death, and doesn't like that the laws of physics say death is inevitable. So he decides that since he doesn't like the laws of physics, they're probably wrong. After all, in Conway's Game of Life you can be immortal, and the rules of Conway's Game of Life are very simple (and therefore by Occam's Razor very likely to be true), so if we just ignore all those inconvenient observations we've made of the ways in which our universe's physics is not like Conway's Game of Life, it becomes obvious that we're pretty much living in Conway's Game of Life and therefore immortality is possible.
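For what it's worth, the Game of Life fact being leaned on here is real: some patterns are stable forever under the rules. A standard set-of-live-cells sketch (nothing in it is from Yudkowsky's post):

```python
from itertools import product

def step(live: frozenset) -> frozenset:
    """One Game of Life generation: a cell is alive next turn iff it has
    exactly 3 live neighbours, or exactly 2 and is currently alive."""
    counts = {}
    for x, y in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    return frozenset(c for c, n in counts.items()
                     if n == 3 or (n == 2 and c in live))

# A 2x2 "block" is a still life: it persists unchanged forever.
block = frozenset({(0, 0), (0, 1), (1, 0), (1, 1)})
print(step(block) == block)  # True
```

The catch is exactly the one raised in the thread: the in-game "immortality" assumes an unbounded grid and free state updates, which are the physics assumptions under dispute.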
|
# ? Jun 14, 2016 03:41 |
|
|
|
Lottery of Babylon posted:Actually, he believes the laws of thermodynamics are false.

a = 2
if a == 2:
    a = a + 1
if a == 3:
    a = a - 1

Behold! My secret to infinite computation and immortality! I know the point yud's making is slightly more complex wrt turing machines but I don't really care

e: Conway's game of life also presupposes an infinite plane so he's begging the question here.

A Man With A Plan has a new favorite as of 03:54 on Jun 14, 2016 |
# ? Jun 14, 2016 03:51 |