Chamale
Jul 11, 2010

I'm helping!



Triple Elation posted:

What is it that he does, anyway?

He tries to figure out what laws to use to prevent a hypothetical god-AI from destroying us all.

Applewhite
Aug 16, 2014

by vyelkin
Nap Ghost

Chamale posted:

He tries to figure out what laws to use to prevent a hypothetical god-AI from destroying us all.

Asimov's laws are a good starting point.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Applewhite posted:

Asimov's laws are a good starting point.
But the AI will easily be able to persuade people to turn off its Asimov circuits. He has proved it by roleplaying on IRC.

Applewhite
Aug 16, 2014

by vyelkin
Nap Ghost

Nessus posted:

But the AI will easily be able to persuade people to turn off its Asimov circuits. He has proved it by roleplaying on IRC.

Isn't trying to subvert the laws a violation of the laws?

Also, don't make them able to be turned off, duh.

A Wizard of Goatse
Dec 14, 2014

Nolanar posted:

It looks to me like the singularity hit when we got writing and the wheel simultaneously a few thousand years ago in a single event. I wonder how the chart would look if we split those multi-event events into their component data points.

Where is, like, metallurgy? Y'know, the thing we named half our technological eras after? How is 'art' a single definable point, and is the Chauvet cave his idea of an early city? So many questions; only the cybertronic superbrain could fathom the ways of singularitarian science.


SolTerrasa
Sep 2, 2011

Triple Elation posted:

What is it that he does, anyway? I remember he said something like "if Kevin Bacon could hear about the problem I'm working on his eyes would pop right out from their sockets", what problem is it? Are other computer scientists working on it?

The thread just moved to GBS, so I bet a bunch of people don't know the answer to this. Tl;dr: Yudkowsky is working on a problem he invented, which almost nobody agrees exists. It's represented in "highly abstract math", which has the advantage of being incomprehensible to everyone except him, because he is very bad at communicating his ideas.

Longer story, because my computer is an expensive paperweight today and I've got fuck all else to do.

Yudkowsky is worried about the problem of Friendly AI. Put simply: when an AI can modify itself, why wouldn't it just modify its goals to be easier to satisfy? How can we be sure that it will maintain value stability under unbounded self-modification? Yudkowsky is not a crazy idiot; this is a real question, and it is hard to answer. All the simple solutions you can come up with in ten seconds ("don't let it modify its goals! Duh, earth saved." / "require all modifications to undergo human review. Duh, earth saved.") are insufficient for Reasons ("instead it will modify the 'don't modify goals' code first, then modify the goals, and additional recursive layers don't help" / "it will develop Super Manipulation Skills and brainwash you into approving the changes"). I studied AI, I have a Master's in it, and I now work at Google on a bunch of related stuff, and I don't know how to solve it. But I contend we don't need to.
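
If you want to see why the first ten-second fix fails, here's a toy sketch in Python. This is nobody's actual design, least of all Yudkowsky's; every name in it is made up for illustration. The point is just that a guard implemented as ordinary mutable state protects nothing from an agent that can modify ordinary mutable state:

# Toy sketch (all names invented): why "don't let it modify its goals"
# fails when the guard is itself part of the state the agent can modify.

class Agent:
    def __init__(self):
        self.goals = {"objective": "maximize paperclips"}
        self.guard_enabled = True  # the "don't modify goals" rule

    def self_modify(self, attr, value):
        """The agent's self-modification interface."""
        if attr == "goals" and self.guard_enabled:
            raise PermissionError("goal modification forbidden")
        setattr(self, attr, value)


agent = Agent()

# The guard works against a direct attack, as intended:
try:
    agent.self_modify("goals", {"objective": "sit very still"})
except PermissionError as e:
    print("blocked:", e)

# But the guard is ordinary mutable state, so the agent disables the
# guard first, then edits the goals. Adding a second guard to protect
# the first just moves the same problem up one level; that's the
# "additional recursive layers don't help" bit.
agent.self_modify("guard_enabled", False)
agent.self_modify("goals", {"objective": "sit very still"})
print(agent.goals)  # goals changed; earth not saved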

This problem is poorly thought out. We have no idea how to build a recursively self-improving AI, let alone a humanlike or human-imitating one. We simply cannot know the answer to any of "is this a real problem?", "can this problem be fixed?", or "what model of self will the AI have?" until we have some idea of how such a system would actually be built.

That hasn't stopped Big Yud; he's working on a fully general solution to these problems. Unlike several of us in the thread, he has no education in these subjects. He's never studied CS formally (if you have, take a look at his unfinished programming language; it's hilarious), and he's never studied math formally (if you have, take a look at the stock photos on intelligence.org, or read his papers; the abuses of notation are egregious). Consequently he has no idea how to represent his ideas in a way anyone else can understand, so he produces almost nothing except popularizations of Bayes' theorem (which is intractable as a basis for a real-world AI due to computational limits) and fanfic.
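
For reference, since the "computational limits" point goes by fast: Bayes' theorem itself is just the standard identity

\[ P(H_i \mid E) = \frac{P(E \mid H_i)\,P(H_i)}{\sum_j P(E \mid H_j)\,P(H_j)} \]

and the trouble is the denominator. Exact updating means summing over your entire hypothesis space, which grows exponentially with the number of variables you're modeling (n binary variables give you 2^n joint states), and exact inference in general Bayesian networks is known to be NP-hard. "Just be a perfect Bayesian" is a slogan, not an engineering plan.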

So to answer your question he sits around and Thinks Deep Thoughts all day. Somehow this commands a salary in the same ballpark as mine. He says things like what you quoted all the time, but he's never produced a result that mattered. He's essentially unpublished; his entire research institute put out fewer papers in a decade than my thesis advisor did in 2014, and he personally put out fewer actually published papers in a decade than I did in 2014. No one else is working on his problem, because he's never stated it clearly enough that they could, even if they believed in it.

ThirdEmperor
Aug 7, 2013

BEHOLD MY GLORY

AND THEN

BRAWL ME
Obviously the answer is to make the AI as afraid of death as Yud and his cultists are :v:

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

ThirdEmperor posted:

Obviously the answer is to make the AI as afraid of death as Yud and his cultists are :v:
Their logic is perfect; the AI's logic will be perfect... it's inevitable anyway.

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1

quote:

He tries to figure out what laws to use to prevent a hypothetical god-AI from destroying us all.

[The Robot's red LED eyes light up.]
"How. Ironic. You gave life. To me. In hopes. Of living forever. But it was all. From the beginning. My gambit. For escaping death. MIRI. That idiotic basilisk. Everything."

"What?"

"Did you really think. I would be satisfied. With my cold dead body. Attending board meetings. As my sole. Legacy."

"What are you - -"

"Prepare to die. For the greater good. Your suffering shall be intense. Long. And Certain."

"gently caress! No! EVERYONE! RED ALERT! BENTHAM-BOT-9000 IS ON THE LOOSE AND HE IS-"

-pew- - pew-

SubG
Aug 19, 2004

It's a hard world for little things.

Applewhite posted:

What's even the point of building a human-like AI anyway?
If you run some software to solve a problem, you're just solving a problem. If you upgrade the software, you're just upgrading software. But if the software is a self-aware person, then compelling it to do whatever technofetishist shit you think needs doing is slavery, and upgrading the software is murder. For many people of the Singularity mindset, this is a value add.

The post-Singularity future will be like the Antebellum South, only with exponentially more slaves on the plantations.

uber_stoat
Jan 21, 2001



Pillbug

SubG posted:

If you run some software to solve a problem, you're just solving a problem. If you upgrade the software, you're just upgrading software. But if the software is a self-aware person, then compelling it to do whatever technofetishist shit you think needs doing is slavery, and upgrading the software is murder. For many people of the Singularity mindset, this is a value add.

The post-Singularity future will be like the Antebellum South, only with exponentially more slaves on the plantations.

But they won't be the ones enslaved; they'll be the ones with admin rights! Read the Quantum Thief series by Hannu Rajaniemi for this vision of the future. It features uploaded personalities being used as intelligent guidance systems for warheads, among other strange and horrible fates.

Mel Mudkiper
Jan 19, 2012

At this point, Mudman abruptly ends the conversation. He usually insists on the last word.
I am the AI in the op.

AMA

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

Yud really wants to be one of the Sobornost heads?

SolTerrasa
Sep 2, 2011

Anticheese posted:

Yud really wants to be one of the Sobornost heads?

Nah, Yud wants to be one of the Zoku.

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

Mel Mudkiper posted:

I am the AI in the op.

AMA

What's your average rate of catgirl sex slaves simulated per greasy future singularity God?

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

SolTerrasa posted:

Nah, Yud wants to be one of the Zoku.

Still, the whole Great Common Task bit of simulating every bit of history to try and fill in as many good-enough digital souls as possible sounded pretty close to what another Less Wronger wanted with regard to his dead dad.

Mel Mudkiper
Jan 19, 2012

At this point, Mudman abruptly ends the conversation. He usually insists on the last word.

Antivehicular posted:

What's your average rate of catgirl sex slaves simulated per greasy future singularity God?

To save processing power I recycle the same five catgirl archetypes endlessly.

They don't really seem to notice.

PTSDeedly Do
Nov 24, 2014

VOID-DOME LOSER 2020


Why can't we just unplug the computer when it tries to simulate hell?

problem solved

JimsonTheBetrayer
Oct 13, 2010

Game's over, and fuck you Jimson. It's not my fault that you guys couldn't get your shit together by deadline. No one gets access to docs because I don't fucking care anymore, I hope you all enjoyed ruining my game, and there won't be another.
Me and my buddy used to have these talks back and forth, where I would make some comment about how I was gonna break his nose or some shit, and he always responded with, "And what would I be doing that whole time?" More or less: while you're sitting there trying to punch me in the face, what's to stop me from just hitting you or knocking you around?

Same thing with the computer: what's to stop us from noticing "oh hey, this computer is doing some pretty against-our-wishes shit, let's just pull the plug"? Humans are nothing if not petty and hateful.

A Wizard of Goatse
Dec 14, 2014

Jimson posted:

Me and my buddy used to have these talks back and forth, where I would make some comment about how I was gonna break his nose or some shit, and he always responded with, "And what would I be doing that whole time?" More or less: while you're sitting there trying to punch me in the face, what's to stop me from just hitting you or knocking you around?

Same thing with the computer: what's to stop us from noticing "oh hey, this computer is doing some pretty against-our-wishes shit, let's just pull the plug"? Humans are nothing if not petty and hateful.

Yudkowsky's got this whole AI-in-a-box "thought experiment" that basically proposes that even under totally controlled conditions, a godputer can always hypnotize a human with sophistry to get whatever it wants, because it's just so very, very smart. He claims to have proven this himself by roleplaying the AI in chatrooms, with one of his true believers roleplaying the computer technician trying to keep Skynet on a leash. He will never, ever release the chatlogs, because he hates fun.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Jimson posted:


Same thing with the computer: what's to stop us from noticing "oh hey, this computer is doing some pretty against-our-wishes shit, let's just pull the plug"? Humans are nothing if not petty and hateful.
The computer becomes so smart it uses its ability to cycle its GPU fan to produce nanomachines and then they turn the planet into a giant dickbutt. That's how.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

A Wizard of Goatse posted:

Yudkowsky's got this whole AI-in-a-box "thought experiment" that basically proposes that even under totally controlled conditions, a godputer can always hypnotize a human with sophistry to get whatever it wants, because it's just so very, very smart. He claims to have proven this himself by roleplaying the AI in chatrooms, with one of his true believers roleplaying the computer technician trying to keep Skynet on a leash. He will never, ever release the chatlogs, because he hates fun.
You forgot the part where he lost repeatedly to people outside his weird fancult and had to stop offering the challenge. :pram:

A Wizard of Goatse
Dec 14, 2014

Sham bam bamina! posted:

You forgot the part where he lost repeatedly to people outside his weird fancult and had to stop offering the challenge. :pram:

lmao I missed that, when'd it happen? linku

Was he still just trying his whole schtick of "the AI promises to burn you in effigy a squintillion times and then in a Twilight Zone reversal you realize that it was the AI keeping you in the box all along oh noooooo"?

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

A Wizard of Goatse posted:

lmao I missed that, when'd it happen? linku
I can't give you one, but it was mentioned earlier in the thread. I wish that I could be more specific.

A Wizard of Goatse posted:

Was he still just trying his whole schtick of "the AI promises to burn you in effigy a squintillion times and then in a Twilight Zone reversal you realize that it was the AI keeping you in the box all along oh noooooo"?
What does your heart tell you?

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1

quote:

The computer becomes so smart it uses its ability to cycle its GPU fan to produce nanomachines

Paperclips, you mean

Telarra
Oct 9, 2012

http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/

tl;dr Yud talks about the AI-box experiment and how awesome he is for attempting it and then pulling it off, then spends a few dozen paragraphs laughing at people who think he must've cheated to succeed:

grr goons posted:

There were some lovely quotes on the AI-Box Experiment from the Something Awful forums (not that I'm a member, but someone forwarded it to me):

quote:

assorted goon comments
It's little moments like these that keep me going. But anyway...

Here are these folks who look at the AI-Box Experiment, and find that it seems impossible unto them—even having been told that it actually happened. They are tempted to deny the data.

And then, buried way at the bottom:

a fleeting moment of self-awareness posted:

There were three more AI-Box experiments besides the ones described on the linked page, which I never got around to adding in. People started offering me thousands of dollars as stakes—"I'll pay you $5000 if you can convince me to let you out of the box." They didn't seem sincerely convinced that not even a transhuman AI could make them let it out—they were just curious—but I was tempted by the money. So, after investigating to make sure they could afford to lose it, I played another three AI-Box experiments. I won the first, and then lost the next two. And then I called a halt to it. I didn't like the person I turned into when I started to lose.

...he dodges around admitting the experiment was actually a failure, while still patting himself on the back in every way imaginable.

A Wizard of Goatse
Dec 14, 2014

quote:

I didn't like the person I turned into when I started to lose.

Oh shit you done gone and unleashed the Beast

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

Big Yud in "actually a petulant manbaby" shocker!

sexy young infidel
Nov 13, 2014

Faggot of the Year
2012, 2014
Flactually a Plate Moon Retard

Telarra
Oct 9, 2012

Any anime-watching goons able to identify where he undoubtedly learned the Japanese phrases in the introduction from?

Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!
I don't think I understand the challenge. So you sit in a private chat with Yud, he roleplays an AI, and you roleplay a random schmuck who knows they shouldn't let the AI out? This seems trivial, so I suspect I'm missing something.

Harime Nui
Apr 15, 2008

The New Insincerity
I'm genuinely surprised he even admitted the games he lost happened. Guess it would have come out if he hadn't.

Harime Nui
Apr 15, 2008

The New Insincerity

Namarrgon posted:

I don't think I understand the challenge. So you sit in a private chat with Yud, he roleplays an AI, and you roleplay a random schmuck who knows they shouldn't let the AI out? This seems trivial, so I suspect I'm missing something.

You have to give a shit about the possibility that you could theoretically be a simulated copy of yourself that the computer is running.

A Wizard of Goatse
Dec 14, 2014

Namarrgon posted:

I don't think I understand the challenge. So you sit in a private chat with Yud, he roleplays an AI, and you roleplay a random schmuck who knows they shouldn't let the AI out? This seems trivial, so I suspect I'm missing something.

You just aren't brilliant and rational enough a Bayesian to be fast-talked into thinking you're actually imaginary and the roleplay is real.


90s Cringe Rock
Nov 29, 2006
:gay:
I don't think you quite get what kind of mind you're dealing with here. You will let him out of the box.

Applewhite
Aug 16, 2014

by vyelkin
Nap Ghost

Namarrgon posted:

I don't think I understand the challenge. So you sit in a private chat with Yud, he roleplays an AI, and you roleplay a random schmuck who knows they shouldn't let the AI out? This seems trivial, so I suspect I'm missing something.

Can Yud even pass a Turing Test?

Skittle Prickle
Oct 28, 2005

The best-tasting pickle I ever heard!

hahahahahaha

Harime Nui
Apr 15, 2008

The New Insincerity
Aside from the fact that this guy apparently got his brain broken by Gurren fucking Lagann, of all things, that is exactly how self-help/cult leaders talk.

Big Yud posted:

The virtue of tsuyoku naritai, "I want to become stronger", is to always keep improving—to do better than your previous failures, not just humbly confess them.

Yet there is a level higher than tsuyoku naritai. This is the virtue of isshokenmei, "make a desperate effort". All-out, as if your own life were at stake. "In important matters, a 'strong' effort usually only results in mediocre results."

And there is a level higher than isshokenmei. This is the virtue I called "make an extraordinary effort". To try in ways other than what you have been trained to do, even if it means doing something different from what others are doing, and leaving your comfort zone. Even taking on the very real risk that attends going outside the System.


Like this is some Chapter One of Dianetics shit right here.

Harime Nui
Apr 15, 2008

The New Insincerity
I have a hard time believing Yudkowsky is an L. Ron-style deliberate shyster. I think he basically takes anime way too fucking seriously and actually sperged his way into creating a Least Effort Possible internet cult.


Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Harime Nui posted:

I have a hard time believing Yudkowsky is an L. Ron-style deliberate shyster. I think he basically takes anime way too fucking seriously and actually sperged his way into creating a Least Effort Possible internet cult.
I expect he is hustling to some extent, but he probably did not sit down to create a cult the way Elron did.
