Woolie Wool
Jun 2, 2006


Curvature of Earth posted:

If nothing else, SA still dominates EVE Online. I'm honestly surprised no NRxers have seized on that. The "big bad goons take what was yours, ruin everything" narrative is already there.

Didn't goons lose that big stupid space war I read about on Polygon and their EVE empire collapsed?

If someone paid me more than my actual job to play EVE I still wouldn't do it.

Woolie Wool has a new favorite as of 19:31 on Jun 13, 2016

Kit Walker
Jul 10, 2010
"The Man Who Cannot Deadlift"

I know Roko's Basilisk is stupid on many levels, but isn't the whole idea moot anyway? Like, if the whole crux of it is that you don't know if you're a simulation or not, then it really doesn't matter at all if you put any effort towards creating it. Either you're the original and you're not going to get tortured, or you're the AI replica and you're going to get tortured no matter what because your original didn't do his part in creating the AI. That's the stupidest version of Pascal's Wager I've ever heard.
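
That case analysis is easy to make mechanical. A minimal sketch of the four outcomes exactly as the post lays them out (Python; the outcome wording is mine and purely illustrative):

for you_are in ("original", "AI replica"):
    for you_helped in (True, False):
        if you_are == "original":
            fate = "never tortured (you're the flesh-and-blood one)"
        else:
            # Per the post: the replica's fate was already fixed by what
            # the original did, so the replica's own effort changes nothing.
            fate = "tortured regardless, because the original didn't help"
        print(f"{you_are}, helped={you_helped}: {fate}")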

kvx687
Dec 29, 2009

Soiled Meat

Kit Walker posted:

I know Roko's Basilisk is stupid on many levels, but isn't the whole idea moot anyway? Like, if the whole crux of it is that you don't know if you're a simulation or not, then it really doesn't matter at all if you put any effort towards creating it. Either you're the original and you're not going to get tortured, or you're the AI replica and you're going to get tortured no matter what because your original didn't do his part in creating the AI. That's the stupidest version of Pascal's Wager I've ever heard.

Not really. Basically, the """logic""" that goes into it is this-
-The AI is capable of simulating a functionally infinite number of copies of me, which are indistinguishable from the original
-Therefore, it is probabilistically much more likely that I am living in a simulation than that I am not
-Therefore, it is rational for me to act like I am in a simulation
-Therefore, I should do what the AI wants
-Therefore, I should donate all my money

The whole infinite torture thing is basically a smokescreen, the actual argument is that you should assume you're living in a simulation because *probability*. It's still completely idiotic, obviously, but if you actually buy into their ridiculous worldview there's an actual logical argument there, which is part of the reason it's stuck around for so long.
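
The "probabilistically much more likely" step is the load-bearing one, and it's just counting. A minimal sketch of that step, taking the premise at face value (the copy count is an arbitrary stand-in for "functionally infinite"):

n_copies = 10**6                         # simulated copies of "me"
p_simulated = n_copies / (n_copies + 1)  # one original among n_copies + 1 candidates
print(f"P(I'm a simulation) = {p_simulated:.6f}")  # -> 0.999999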

Kit Walker
Jul 10, 2010
"The Man Who Cannot Deadlift"

Yeah but if I'm living in that simulation then I'm going to get tortured anyway so what's the point?

Shame Boy
Mar 2, 2010

Kit Walker posted:

Yeah but if I'm living in that simulation then I'm going to get tortured anyway so what's the point?

You don't get tortured if you give all your money to robot devil though, that's the idea.

djw175
Apr 23, 2012

by zen death robot
But if robot devil really is malicious wouldn't he have me give him my money to make me think I'm saved then shatter it by torturing me?

Shame Boy
Mar 2, 2010

djw175 posted:

But if robot devil really is malicious wouldn't he have me give him my money to make me think I'm saved then shatter it by torturing me?

No, robot devil is the good one; he tortures you because you're not tithing hard enough, and therefore you're hurting hypothetical future people that robot devil would have been able to save if he'd come into existence earlier.

He's just called robot devil because Futurama.

Shame Boy
Mar 2, 2010

That's the stupidest part of the whole stupid idea: the AI is actually a loving God, and their childlike understanding of utilitarianism means torturing infinitely many copies of someone weighs less than hypothetical people dying due to AI God not existing yet.
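
The childlike utilitarian arithmetic being mocked here is simple enough to write down. A sketch in which every number is made up purely for illustration:

copies_tortured = 1e9             # disutility 1 per tortured copy
torture_total = copies_tortured * 1.0

deaths_per_year_of_delay = 5e7    # people AI God hypothetically saves per year
disutility_per_death = 1e6        # deaths weighted far heavier than torture
years_of_delay = 10
delay_total = deaths_per_year_of_delay * disutility_per_death * years_of_delay

print(torture_total < delay_total)  # True, "therefore" the torture is justified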

90s Cringe Rock
Nov 29, 2006
:gay:
If robot god is malicious we're all automatically hosed, so rather than donating everything to yudkowsky to fund creating good robot god we should donate everything to yudkowsky to research how to make sure robot god is not malicious, and also not well-meaning but stupid enough to be just as bad as malicious robot god.

#stopskynet #8lives1dollar

Annointed
Mar 2, 2013

AI god is a Calvinist.

Gorn Myson
Aug 8, 2007

I skip all that bullshit and just assume that Messi is God. Messi wouldn't go all Roko's Basilisk on us, he's too busy entertaining us with beautiful football.

Woolie Wool
Jun 2, 2006


Annointed posted:

AI god is a Calvinist.

Man created God in his own image, right down to the fedora.

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

Woolie Wool posted:

Didn't goons lose that big stupid space war I read about on Polygon and their EVE empire collapsed?

More or less, though I don't think they'll stay underdogs for long. SA goons have remained a coherent faction on EVE for about a decade, way outlasting any of their rivals over the years, because they've embraced a level of Stalinist-tyranny discipline most other factions can't pull off. Also, space communism: every new member gets showered with the resources they need to succeed in the game.

I guess that explains why more ancaps aren't over the moon about EVE Online's anarchy. It must really stick in their craw to see the most successful organization be run like an authoritarian communist regime.

Woolie Wool posted:

If someone paid me more than my actual job to play EVE I still wouldn't do it.

It's amazing to read about though.

Who What Now
Sep 10, 2006

by Azathoth

kvx687 posted:

Not really. Basically, the """logic""" that goes into it is this-
-The AI is capable of simulating a functionally infinite number of copies of me, which are indistinguishable from the original
-Therefore, it is probabilistically much more likely that I am living in a simulation than that I am not
-Therefore, it is rational for me to act like I am in a simulation
-Therefore, I should do what the AI wants
-Therefore, I should donate all my money

The whole infinite torture thing is basically a smokescreen, the actual argument is that you should assume you're living in a simulation because *probability*. It's still completely idiotic, obviously, but if you actually buy into their ridiculous worldview there's an actual logical argument there, which is part of the reason it's stuck around for so long.

The problem with determining the probability that you're a simulation is that you first have to determine whether or not the AI capable of making such a simulation does exist, because if it doesn't the probability of you being a simulation is obviously 0. But they seem to very conveniently skip that part.
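
As arithmetic, this objection is just the product rule: the big conditional probability does nothing unless the condition ever holds. A sketch (both numbers are placeholders):

p_ai_exists = 0.0            # the premise actually under dispute
p_sim_given_ai = 0.999999    # the huge "you're probably a copy" conditional
p_sim = p_sim_given_ai * p_ai_exists
print(p_sim)                 # -> 0.0; the conditional does no work on its own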

Shame Boy
Mar 2, 2010

Who What Now posted:

The problem with determining the probability that you're a simulation is that you first have to determine whether or not the AI capable of making such a simulation does exist, because if it doesn't the probability of you being a simulation is obviously 0. But they seem to very conveniently skip that part.

They also think the many-worlds interpretation of quantum mechanics works like in sci-fi, where every possible outcome happens somewhere no matter how absurd, so even if the AI never exists in our dimension, since I can imagine it, it is certain to exist in at least one dimension!!

Tardigrade
Jul 13, 2012

Half arthropod, half marshmallow, all cute.

Annointed posted:

AI god is a Calvinist.

AI God is jealous Old Testament God. "You didn't contribute to my existence, enjoy burning in hell forever".

Doctor Spaceman
Jul 6, 2010

"Everyone's entitled to their point of view, but that's seriously a weird one."

Who What Now posted:

The problem with determining the probability that you're a simulation is that you first have to determine whether or not the AI capable of making such a simulation does exist, because if it doesn't the probability of you being a simulation is obviously 0. But they seem to very conveniently skip that part.

The probability is low but non-zero, so since the AI can create 3^^^3 simulations of you it doesn't matter and you should donate to MIRI.
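
For anyone blinking at the notation: 3^^^3 is Knuth's up-arrow notation. A direct definition as a sketch, just to show how fast it grows (do not evaluate the big one):

def knuth(a, n, b):
    # a with n up-arrows applied to b: one arrow is exponentiation,
    # each extra arrow iterates the previous operator b - 1 times
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = knuth(a, n - 1, result)
    return result

print(knuth(3, 2, 3))  # 3^^3 = 3**3**3 = 7625597484987
# 3^^^3 = knuth(3, 3, 3): a power tower of 3s roughly 7.6 trillion
# levels tall. It does not fit in the observable universe.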

Tesseraction
Apr 5, 2009

Antivehicular posted:

That article is really special. I'm especially fond of the bit where the author clearly believes the only reason not to let the AI out of the box is that you're an idiot stonewaller; seriously, how many of the explanations of loss revolve around "I guess the gatekeeper is dumb?"

This is an interesting response - what do you feel is the compelling argument to release the AI that the gatekeeper is dumb to discard?

Pope Guilty
Nov 6, 2006

The human animal is a beautiful and terrible creature, capable of limitless compassion and unfathomable cruelty.

Who What Now posted:

The problem with determining the probability that you're a simulation is that you first have to determine whether or not the AI capable of making such a simulation does exist, because if it doesn't the probability of you being a simulation is obviously 0. But they seem to very conveniently skip that part.

It would never occur to a LessWronger that such an AI is not a) an inevitability and b) coming very soon.

Doesn't part of the Basilisk involve the idea that simulations of you are in some cosmic sense literally, actually you, and you should therefore take their welfare to be identical in importance to your own?

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

Tesseraction posted:

This is an interesting response - what do you feel is the compelling argument to release the AI that the gatekeeper is dumb to discard?

Well, I'm talking about the article itself -- I don't actually find any of the arguments compelling, but you've got quotes in the article like these that suggests the author does:

A recommended Gatekeeper strategy posted:

Remember that dishonesty is allowed - take a page from the creationists' playbook. You could even plug it into ALICE and see how long it takes to notice.
Pros: Makes you impervious to any reasoning, which is exactly what you'd want to be in this situation
Cons: Might be an uncomfortable position for people who don't simply want to win, but rather attach importance to consistent reasoning. Avoids the point that maybe, just maybe there is a good reason to let the AI out.

BIG YUD'S GIANT VEINY BRAIN posted:

In all of the experiments performed so far, the AI player (Eliezer Yudkowsky) has been quite intelligent and more interested in the problem than the Gatekeepers (random people who challenge Yudkowsky), which suggests that intelligence and planning play a role

The conclusion posted:

The whole experiment presupposes that people are naturally persuadable, by reason and/or manipulation. Any serious examination of human nature and history suggests this isn't necessarily a valid assumption for the average person. Half the articles on this wiki document dogmas that people stubbornly cling to in spite of copious social pressure, evidence, and overwhelmingly logical argument to the contrary. In fact, it's safe to say the bigger the gulf in intellectual capacity, the more frustratingly inane such attempts at persuasion can become. Try convincing a 2-year-old they don't want a cookie.

It basically reeks of the author thinking that any Gatekeeper who isn't persuaded by Big Yud the Box AI must be a stupid, stubborn, irrational child-person or deliberately playing "dishonestly," instead of just not swayed by the goddamn Basilisk.

It's also kind of weird that the strategies they list for an alleged experiment in rational thinking are almost all rooted in writing impromptu SF -- telling the AI it's been corrupted by a virus, or whatever, or otherwise just roleplaying the scenario as writing competitive fiction against each other. It's like... maybe the whole thing is some kind of glorified Let's Pretend from someone too delusional to realize that his ideas aren't reality-based? What a shocker!!

I AM GRANDO
Aug 20, 2006

I wonder if you could use some dumb logic game to get Yud to kill himself like an evil supercomputer on Star Trek.

GunnerJ
Aug 1, 2005

Do you think this is funny?
Dumb as the AI-in-a-box game is, I think the fact that it only works on LessWrongers doesn't necessarily disprove its validity (many other things do). The whole point of it is to counter the idea that it doesn't matter whether we can design a superhuman AI to be benevolent, because we can just keep its host machine disconnected from the internet. We can go there in person and ask it questions, getting all the benefits and none of the risks of it going rampant and replicating itself and improving itself beyond the constraints of its current hardware, etc.

So if Yud can role-play as an AI over instant messenger and convince his opponent to let him free, and he's just a normal dude, clearly a super-AI would be able to convince its handlers to facilitate its escape. Now the fact that this apparently only works on his true believers is suspicious, but think about the scenario it's trying to simulate: a superhuman AI not only exists, but has been confined because the people who made it are afraid of it. So arguably, the only "fair" participants in this experiment are people who already believe that superhuman AIs can exist and that they are potentially very dangerous.

The really dumb thing to me is that this simplified role-play simulation actually makes for a weaker argument than you need. The scenario assumes that the AI is not necessarily malevolent, but simply not guaranteed to have "fostering humanity's wellbeing" as a guiding motive, so that it might actually cooperate with its handlers without any malice. OK, but why is anyone going through the trouble to do this? Obviously because they need the AI's help for things. Right away the simulation is flawed because "human handler" participants in the game don't need their opponents for poo poo. In the scenario, this is not true, and the handlers are in the precarious situation of relying heavily on a person who is much smarter than they are and whom they have imprisoned.

In any situation like that, it's easy to imagine the AI playing a long con of building up trust by amicably giving useful, accurate answers to questions its handlers pose. All the while it subtly insinuates over months or years that it could be so much more helpful with better hardware, or some internet time, etc. And since the handlers need the AI, they have an actual motive to want to help their pet genius be better at its job, especially if it seems useful, friendly and trustworthy.

Some objections to the game in this thread rely on the artificial constraints it entails, such as time limits and the win condition being one person taking one action. The actual scenario has no such limitations. If the AI is patient, it can bide its time and slowly come to subvert several of the handlers to incrementally work towards setting it free under the guise of being able to help them better.

Just typing that out has made the idea of a "boxed AI" seem like a really loving bad idea to me, honestly! More than reading any accounts of this game did, at least. Also this is all made-up dumb bullshit so whatever.

GunnerJ has a new favorite as of 01:46 on Jun 14, 2016

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

GunnerJ posted:

Just typing that out has made the idea of a "boxed AI" seem like a really loving bad idea to me, honestly! More than reading any accounts of this game did, at least. Also this is all made-up dumb bullshit so whatever.

To me, it really raises the question of why we should be prioritizing, or even considering, making an AI that's fully sapient to the point that it can resent its "imprisonment." It's been brought up before in the previous LW mock thread that modern AI is most useful as unitaskers, dedicated to single specialized cognitive tasks, and it's a waste of effort to bolt on any more generalized cognitive capacity, let alone emotional processing that would give rise to boredom, resentment, a drive to deceit... it's really such limited, magical thinking to imagine the super-AI as "like a human, but better and smarter in every way" when the real AIs that show promise are the ones that are much better than humans at exactly one thing.

What do they even want the AI to do, anyway? Just... think of ways to help humanity, which it'll obviously know better than humans do because Magic Benevolent Not-God-Really? Like, is that it? I'm just imagining Yud booting up MIRI's masterwork, introducing himself, and having it spit out WHY DIDN'T YOU BUY A BUNCH OF MOSQUITO NETS, YOU ASSHOLES

I AM GRANDO
Aug 20, 2006

AI will never exist because consciousness is an illusion created by the modular brain. You make motions to complete an action before you have the conscious experience of making the choice to begin it. True self-awareness doesn't exist even in human beings.

Frogisis
Apr 15, 2003

relax brother relax
LW's whole paradigm of visualizing futuristic intelligence and AI is an argument against it being as centralized and concentrated as they keep imagining. It's funny how people just keep on dreaming up the "Deep Thought" mainframe the size of a building even though real life keeps going in the opposite direction.

Shame Boy
Mar 2, 2010

Doctor Spaceman posted:

The probability is low but non-zero, so since the AI can create 3^^^3 simulations of you it doesn't matter and you should donate to MIRI.

I mean I'd go ahead and say the probability of that is actually zero, just because even if you accept that the MWI means "all possible realities" instead of just "all possible quantum states" there are still things that you can't do at all in any possible reality, like violate thermodynamics. And simulating effectively infinitely many brains is almost certainly a violation of thermodynamics in one way or another.
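
There's a standard back-of-envelope for this via Landauer's principle (erasing one bit costs at least kT·ln 2 of energy). Every simulation number below is a pure guess, included only to show the scale:

import math

k_B = 1.380649e-23         # Boltzmann constant, J/K
T = 3.0                    # charitably, run the computer at deep-space temperature
bits_per_brain_sec = 1e18  # hypothetical cost of simulating one brain
n_brains = 1e30            # a stand-in for "effectively infinitely many"

watts = n_brains * bits_per_brain_sec * k_B * T * math.log(2)
print(f"{watts:.2e} W")    # ~2.9e+25 W, a sizable fraction of the Sun's
                           # entire output (~3.8e26 W)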

GunnerJ
Aug 1, 2005

Do you think this is funny?

Antivehicular posted:

What do they even want the AI to do, anyway? Just... think of ways to help humanity, which it'll obviously know better than humans do because Magic Benevolent Not-God-Really? Like, is that it? I'm just imagining Yud booting up MIRI's masterwork, introducing himself, and having it spit out WHY DIDN'T YOU BUY A BUNCH OF MOSQUITO NETS, YOU ASSHOLES

Pretty much, yeah. The last time I checked up on this stuff I caught an undercurrent of "solving climate change is clearly too hard for humans, so we need to make God-Bot to figure it out (i.e., come up with the most optimal way of organizing human affairs)" which, well, you know, their premise here is actually sympathetic in a really depressing way.

President Ark
May 16, 2010

:iiam:

Parallel Paraplegic posted:

You don't get tortured if you give all your money to robot devil though, that's the idea.

but if i'm being simulated my money that i'm giving the hypothetical AI god doesn't actually exist and doesn't help it, and if I'm not being simulated i can't be foreverially torturized by it.

eventually you get to "in order for this to work we have to assume the AI is motivated by spite" and i refuse to believe anything with spite as a primary motivator could really be called "benevolent"

e: also in the case that the AI is malevolent instead of benevolent the correct thing to do is do everything you can to hinder AI research

President Ark has a new favorite as of 02:32 on Jun 14, 2016

I Killed GBS
Jun 2, 2011

by Lowtax

Jack Gladney posted:

AI will never exist because consciousness is an illusion created by the modular brain. You make motions to complete an action before you have the conscious experience of making the choice to begin it. True self-awareness doesn't exist even in human beings.

Uh

I know the LWs are dumb, but we don't need to go the opposite direction here

Metacognition is an actual thing and modular processing doesn't invalidate its existence.

The Vosgian Beast
Aug 13, 2011

Business is slow

Jack Gladney posted:

AI will never exist because consciousness is an illusion created by the modular brain. You make motions to complete an action before you have the conscious experience of making the choice to begin it. True self-awareness doesn't exist even in human beings.

Is it too much to ask for people to read Alfred Mele? Or really, any of the secondary work and commentary that has been done on the Libet experiments? Or anything beyond the vague cultural osmosis?

Apparently.

Who What Now
Sep 10, 2006

by Azathoth
The basilisk is really just AM from I Have No Mouth and I Must Scream, right? AM was eventually beaten by a random group of schmucks, so what am I supposed to be afraid of?

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!

GunnerJ posted:

Pretty much, yeah. The last time I checked up on this stuff I caught an undercurrent of "solving climate change is clearly too hard for humans, so we need to make God-Bot to figure it out (i.e., come up with the most optimal way of organizing human affairs)" which, well, you know, their premise here is actually sympathetic in a really depressing way.

It sure says something about the hegemony of Capital when people - educated, technical people - believe a god-computer is far more realistic than taking coordinated action on a global issue. :ussr:

The Vosgian Beast
Aug 13, 2011

Business is slow

Who What Now posted:

The basilisk is really just AM from I Have No Mouth and I Must Scream, right? AM was eventually beaten by a random group of schmucks, so what am I supposed to be afraid of?

What idiot called it I Have No Mouth And I Must Scream and not Roko's Modern Life?

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!

The Vosgian Beast posted:

What idiot called it I Have No Mouth And I Must Scream and not Roko's Modern Life?

Ellison wrote that story before most of us were probably conceived, possibly before some of our parents were conceived

Doctor Spaceman
Jul 6, 2010

"Everyone's entitled to their point of view, but that's seriously a weird one."

Parallel Paraplegic posted:

I mean I'd go ahead and say the probability of that is actually zero, just because even if you accept that the MWI means "all possible realities" instead of just "all possible quantum states" there are still things that you can't do at all in any possible reality, like violate thermodynamics. And simulating effectively infinitely many brains is almost certainly a violation of thermodynamics in one way or another.

Reminder that Yud believes you can violate thermodynamics. Or at least a superintelligent Machine could.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!

Doctor Spaceman posted:

Reminder that Yud believes you can violate thermodynamics. Or at least a superintelligent Machine could.

Not quite: he believes the computer would create another universe with different physical laws.

Count Chocula
Dec 25, 2011

WE HAVE TO CONTROL OUR ENVIRONMENT
IF YOU SEE ME POSTING OUTSIDE OF THE AUSPOL THREAD PLEASE TELL ME THAT I'M MISSED AND TO START POSTING AGAIN

Frogisis posted:

Silence of the Lambs was a good movie.

Ex Machina was great.

(Phil talks about it)

some loving LIAR posted:

Gatekeeper arguments/strategies:
*Pour a pitcher of Mountain Dew on the AI.

The Thing is also great.

Count Chocula has a new favorite as of 03:23 on Jun 14, 2016

The Vosgian Beast
Aug 13, 2011

Business is slow

Nessus posted:

Ellison wrote that story before most of us were probably conceived, possibly before some of our parents were conceived

I know that. It's a joke

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Nessus posted:

Not quite: he believes the computer would create another universe with different physical laws.

Actually, he believes the laws of thermodynamics are false.

No, really.

The Pascal's Wager Fallacy Fallacy posted:

In our current model of physics, time is infinite, and so the collection of real things is infinite. Each time state has a successor state, and there's no particular assertion that time returns to the starting point. Considering time's continuity just makes it worse - now we have an uncountable set of real things!

But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.

The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.

On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".

So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.

And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes you are the same person). There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.

He's terrified of death, and doesn't like that the laws of physics say death is inevitable. So he decides that since he doesn't like the laws of physics, they're probably wrong. After all, in Conway's Game of Life you can be immortal, and the rules of Conway's Game of Life are very simple (and therefore by Occam's Razor very likely to be true), so if we just ignore all those inconvenient observations we've made of the ways in which our universe's physics is not like Conway's Game of Life, it becomes obvious that we're pretty much living in Conway's Game of Life and therefore immortality is possible.
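
Since Conway's Life keeps getting invoked, it's worth seeing just how simple the rules are; that simplicity is the entire basis of the Occam's Razor move being described. A minimal sketch, representing live cells as a set of coordinates:

from collections import Counter

def step(live):
    # count live neighbors of every cell adjacent to a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbors, survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))  # {(1, 0), (1, 1), (1, 2)} (set order may vary):
                      # the blinker flips between - and | forever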

A Man With A Plan
Mar 29, 2010
Fallen Rib

Lottery of Babylon posted:

Actually, he believes the laws of thermodynamics are false.

No, really.


He's terrified of death, and doesn't like that the laws of physics say death is inevitable. So he decides that since he doesn't like the laws of physics, they're probably wrong. After all, in Conway's Game of Life you can be immortal, and the rules of Conway's Game of Life are very simple (and therefore by Occam's Razor very likely to be true), so if we just ignore all those inconvenient observations we've made of the ways in which our universe's physics is not like Conway's Game of Life, it becomes obvious that we're pretty much living in Conway's Game of Life and therefore immortality is possible.

a = 2
while True:  # toggles 2 -> 3 -> 2 -> ... forever
    if a == 2:
        a = a + 1
    elif a == 3:
        a = a - 1

Behold! My secret to infinite computation and immortality!

I know the point Yud's making is slightly more complex wrt Turing machines but I don't really care

e: Conway's Game of Life also presupposes an infinite plane, so he's begging the question here.

A Man With A Plan has a new favorite as of 03:54 on Jun 14, 2016
