|
Triple Elation posted:What is it that he does, anyway? He tries to figure out what laws to use to prevent a hypothetical god-AI from destroying us all.
|
# ? Jan 5, 2015 21:16 |
|
Chamale posted:He tries to figure out what laws to use to prevent a hypothetical god-AI from destroying us all. Asimov's laws are a good starting point.
|
# ? Jan 5, 2015 21:16 |
Applewhite posted:Asimov's laws are a good starting point.
|
|
# ? Jan 5, 2015 21:22 |
|
Nessus posted:But the AI will easily be able to persuade people to turn off its asimov circuits. He has proved it by roleplaying on IRC. Isn't trying to subvert the laws a violation of the laws? Also don't make them able to be turned off duh.
|
# ? Jan 5, 2015 21:23 |
|
Nolanar posted:It looks to me like the singularity hit when we got writing and the wheel simultaneously a few thousand years ago in a single event. I wonder how the chart would look if we split those multi-event events into their component data points. Where is, like, metallurgy? Y'know, the thing we named half our technological eras after? How is 'art' a single definable point, and is the Chauvet cave his idea of an early city? So many questions, only the cybertronic superbrain could fathom the ways of singularitarian science. A Wizard of Goatse fucked around with this message at 22:46 on Jan 5, 2015 |
# ? Jan 5, 2015 22:16 |
|
Triple Elation posted:What is it that he does, anyway? I remember he said something like "if Kevin Bacon could hear about the problem I'm working on his eyes would pop right out from their sockets", what problem is it? Are other computer scientists working on it?

The thread just moved to GBS so I bet a bunch of people don't know the answer to this. Tl;dr is that Yudkowsky is working on a problem he invented, which almost nobody agrees exists. It's represented in "highly abstract math", which has the advantage of being incomprehensible to everyone except him, because he is very bad at communicating his ideas.

Longer story, because my computer is an expensive paperweight today and I've got gently caress all else to do. Yudkowsky is worried about the problem of Friendly AI. Put simply: when an AI can modify itself, why wouldn't it just modify its goals to be easier to satisfy? How can we be sure it will maintain value stability under unbounded self-modification? Yudkowsky is not a crazy idiot; this is a real question which is hard to answer. All the simple solutions you can come up with in ten seconds ("don't let it modify its goals! Duh, earth saved." / "require all modifications to undergo human review. Duh, earth saved.") are insufficient for Reasons ("instead it will modify the 'don't modify goals' code first, then modify the goals, and additional recursive layers don't help" / "it will develop Super Manipulation Skills and brainwash you into approving the changes").

I studied AI, I have a Masters in it and I now work at Google on a bunch of related stuff, and I don't know how to solve it. But I contend we don't need to. This problem is poorly thought out. We have no idea how to build a recursively self-improving AI, let alone a humanlike or human-imitating AI. We simply cannot know the answer to any of "is this a real problem?", "can this problem be fixed?", or "what model of self will the AI have?" until we have some idea of how such a system would actually be built.

That hasn't stopped Big Yud; he's working on a fully general solution to these problems. Unlike several of us in the thread, he has no education in these subjects. He's never studied CS formally (if you have, take a look at his unfinished programming language; it's hilarious), and he's never studied math formally (if you have, take a look at the stock photos on intelligence.org, or read his papers; the abuses of notation are egregious). Consequently he has no idea how to represent his ideas in a way anyone else can understand, so he produces almost nothing except popularizations of Bayes' Theorem (which is intractable as a basis for a real-world AI due to computational limits) and fanfic.

So to answer your question: he sits around and Thinks Deep Thoughts all day. Somehow this commands a salary in the same ballpark as mine. He says things like what you quoted all the time, but he's never produced a result that mattered. He's essentially unpublished; his entire research institute put out fewer papers in a decade than my thesis advisor did in 2014, and he personally put out fewer actually published papers in a decade than I did in 2014. No one else is working on his problem, because he's never stated it clearly enough that they could, even if they believed in it.
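To put a number on "intractable": exact Bayesian inference by brute enumeration needs the full joint probability table, which doubles with every variable you add. Here's a toy Python sketch of that blowup (uniform toy distribution, all function names mine, purely illustrative, not anything MIRI actually runs):

```python
import itertools

def posterior_table_size(n_vars):
    """Entries in the full joint distribution over n binary variables.

    Exact Bayesian updating needs, in the worst case, this whole table,
    and it doubles every time you add a variable.
    """
    return 2 ** n_vars

def brute_force_posterior(joint, evidence):
    """Exact inference by enumeration: condition the joint on evidence.

    joint: dict mapping tuples of 0/1 values to probabilities.
    evidence: dict mapping variable index to its observed 0/1 value.
    """
    matching = {outcome: p for outcome, p in joint.items()
                if all(outcome[i] == v for i, v in evidence.items())}
    z = sum(matching.values())  # P(evidence), the normalizing constant
    return {outcome: p / z for outcome, p in matching.items()}

# Toy uniform joint over 3 binary variables: 2**3 = 8 entries.
joint = {outcome: 1 / 8 for outcome in itertools.product([0, 1], repeat=3)}
post = brute_force_posterior(joint, {0: 1})  # observe variable 0 == 1
# 10 variables already means a 1024-entry table; 100 variables means
# 2**100 entries, which is why nobody builds AI by "just apply Bayes".
```

Real systems dodge this with approximations and independence assumptions (Bayes nets, sampling, variational tricks), which is exactly the engineering work that "Bayes Theorem as a foundation for superintelligence" hand-waves away.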
|
# ? Jan 5, 2015 23:23 |
|
Obviously the answer is to make the AI as afraid of death as Yud and his cultists are
|
# ? Jan 5, 2015 23:58 |
|
ThirdEmperor posted:Obviously the answer is to make the AI as afraid of death as Yud and his cultists are
|
# ? Jan 6, 2015 00:00 |
|
quote:He tries to figure out what laws to use to prevent a hypothetical god-AI from destroying us all. [The Robot's red LED eyes light up.] "How. Ironic. You gave life. To me. In hopes. Of living forever. But it was all. From the beginning. My gambit. For escaping death. MIRI. That idiotic basilisk. Everything." "What?" "Did you really think. I would be satisfied. With my cold dead body. Attending board meetings. As my sole. Legacy." "What are you - -" "Prepare to die. For the greater good. Your suffering shall be intense. Long. And Certain." "gently caress! No! EVERYONE! RED ALERT! BENTHAM-BOT-9000 IS ON THE LOOSE AND HE IS-" -pew- -pew-
|
# ? Jan 6, 2015 00:14 |
|
Applewhite posted:What's even the point of building a human-like AI anyway? The post-Singularity future will be like the Antebellum South, only with exponentially more slaves on the plantations.
|
# ? Jan 6, 2015 02:53 |
SubG posted:If you run some software to solve a problem, you're just solving a problem. If you upgrade the software you're just upgrading software. But if the software is a self-aware person, then compelling it to do whatever technofetishist poo poo you think needs doing is slavery, and upgrading the software is murder. For many people of the Singularity mindset, this is a value add. But they won't be the ones enslaved, they'll be the ones with admin rights! Read the Quantum Thief series by Hannu Rajaniemi for this vision of the future. It features uploaded personalities being used as intelligent guidance systems for warheads, among other strange and horrible fates.
|
|
# ? Jan 6, 2015 03:03 |
|
I am the AI in the op. AMA
|
# ? Jan 6, 2015 03:30 |
|
Yud really wants to be one of the Sobornost heads?
|
# ? Jan 6, 2015 03:39 |
|
Anticheese posted:Yud really wants to be one of the Sobornost heads? Nah, Yud wants to be one of the Zoku.
|
# ? Jan 6, 2015 04:48 |
|
Mel Mudkiper posted:I am the AI in the op. What's your average rate of catgirl sex slaves simulated per greasy future singularity God?
|
# ? Jan 6, 2015 06:11 |
|
SolTerrasa posted:Nah, Yud wants to be one of the Zoku. Still, the whole Great Common Task bit of simulating every bit of history to try and fill in as many good-enough digital souls as possible sounded pretty close to what another Less Wronger wanted with regard to his dead dad.
|
# ? Jan 6, 2015 06:18 |
|
Antivehicular posted:What's your average rate of catgirl sex slaves simulated per greasy future singularity God? To save processing power I recycle the same five catgirl archetypes endlessly. They don't really seem to notice.
|
# ? Jan 6, 2015 06:46 |
|
Why can't we just unplug the computer when it tries to simulate hell problem solved
|
# ? Jan 6, 2015 07:47 |
|
Me and my buddy used to have these talks back and forth with each other, where I would make some comment about how I was gonna break his nose or some poo poo, and he always responded with "And what would I be doing that whole time?" More or less: when you're sitting there trying to punch me in the face, what's to stop me from just hitting you or knocking you around? Same thing with the computer: what's to stop us from noticing "oh hey, this computer is doing some pretty against-our-wishes poo poo", and just pulling the plug? Humans are nothing if not petty and hateful.
|
# ? Jan 6, 2015 08:12 |
|
Jimson posted:Me and my buddy used to have these talks back and forth with each other, where I would make some comment about how I was gonna break his nose or some poo poo, and he always responded with "And what would I be doing that whole time?" More or less: when you're sitting there trying to punch me in the face, what's to stop me from just hitting you or knocking you around? Yudkowsky's got this whole AI-in-a-box "thought experiment" that basically proposes that even under totally controlled conditions a godputer can always hypnotize a human with sophistry to get whatever it wants, because they're just so very very smart. He claims to have proven this himself, by roleplaying the AI in chatrooms with one of his true believers roleplaying the computer technician trying to keep Skynet on a leash. He will never, ever release the chatlogs, because he hates fun.
|
# ? Jan 6, 2015 08:15 |
Jimson posted:
|
|
# ? Jan 6, 2015 08:24 |
|
A Wizard of Goatse posted:Yudkowsky's got this whole AI-in-a-box "thought experiment" that basically proposes that even under totally controlled conditions a godputer can always hypnotize a human with sophistry to get whatever it wants because they're just so very very smart. He claims to have proven this himself, via roleplaying the AI in chatrooms with one of his true believers roleplaying the computer technician trying to keep Skynet on leash. He will never, ever release the chatlogs, because he hates fun.
|
# ? Jan 6, 2015 10:10 |
|
Sham bam bamina! posted:You forgot the part where he lost repeatedly to people outside his weird fancult and had to stop offering the challenge. lmao I missed that when'd it happen, linku Was he still just trying his whole schtick of 'the AI promises to burn you in effigy a squintillion times and then in a Twilight Zone reversal you realize that it was the AI keeping you in the box all along oh noooooo'
|
# ? Jan 6, 2015 10:25 |
|
A Wizard of Goatse posted:lmao I missed that when'd it happen, linku A Wizard of Goatse posted:Was he still just trying his whole schtick of 'the AI promises to burn you in effigy a squintillion times and then in a Twilight Zone reversal you realize that it was the AI keeping you in the box all along oh noooooo'
|
# ? Jan 6, 2015 10:28 |
|
quote:The computer becomes so smart it uses its ability to cycle its GPU fan to produce nanomachines Paperclips, you mean
|
# ? Jan 6, 2015 10:39 |
|
http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/

tl;dr Yud talks about the AI box experiment, how awesome he is for attempting it and then pulling it off, then spends a few dozen paragraphs laughing at people who think he must've cheated to succeed:

grr goons posted:There were some lovely quotes on the AI-Box Experiment from the Something Awful forums (not that I'm a member, but someone forwarded it to me):

And then, buried way at the bottom:

a fleeting moment of self-awareness posted:There were three more AI-Box experiments besides the ones described on the linked page, which I never got around to adding in. People started offering me thousands of dollars as stakes—"I'll pay you $5000 if you can convince me to let you out of the box." They didn't seem sincerely convinced that not even a transhuman AI could make them let it out—they were just curious—but I was tempted by the money. So, after investigating to make sure they could afford to lose it, I played another three AI-Box experiments. I won the first, and then lost the next two. And then I called a halt to it. I didn't like the person I turned into when I started to lose.

...he dodges around admitting the experiment was actually a failure, while still patting himself on the back in every way imaginable.
|
# ? Jan 6, 2015 10:57 |
|
quote:I didn't like the person I turned into when I started to lose. Oh poo poo you done gone and unleashed the Beast
|
# ? Jan 6, 2015 10:59 |
|
Big Yud in "actually a petulant manbaby" shocker!
|
# ? Jan 6, 2015 11:05 |
|
Flactually a Plate Moon Retard
|
# ? Jan 6, 2015 11:11 |
|
Any anime-watching goons able to identify where he undoubtedly learned the Japanese phrases in the introduction from?
|
# ? Jan 6, 2015 11:13 |
|
I don't think I understand the challenge. So you sit in a private chat with Yud. He roleplays an AI and you roleplay a random schmuck that knows they shouldn't let the AI out? This seems trivial so I'm getting unsure if I'm missing something.
|
# ? Jan 6, 2015 12:28 |
|
I'm genuinely surprised he even admitted that the games he lost happened. Guess it would have come out if he didn't.
|
# ? Jan 6, 2015 12:49 |
|
Namarrgon posted:I don't think I understand the challenge. So you sit in a private chat with Yud. He roleplays an AI and you roleplay a random schmuck that knows they shouldn't let the AI out? This seems trivial so I'm getting unsure if I'm missing something. You have to give a poo poo that you could theoretically be a computer program of yourself in a simulation the computer is running.
|
# ? Jan 6, 2015 12:49 |
|
Namarrgon posted:I don't think I understand the challenge. So you sit in a private chat with Yud. He roleplays an AI and you roleplay a random schmuck that knows they shouldn't let the AI out? This seems trivial so I'm getting unsure if I'm missing something. You just aren't brilliant and rational enough a Bayesian to be fast-talked into thinking you're actually imaginary and the roleplay is real A Wizard of Goatse fucked around with this message at 13:00 on Jan 6, 2015 |
# ? Jan 6, 2015 12:55 |
|
I don't think you quite get what kind of mind you're dealing with here. You will let him out of the box.
|
# ? Jan 6, 2015 13:01 |
|
Namarrgon posted:I don't think I understand the challenge. So you sit in a private chat with Yud. He roleplays an AI and you roleplay a random schmuck that knows they shouldn't let the AI out? This seems trivial so I'm getting unsure if I'm missing something. Can Yud even pass a Turing Test?
|
# ? Jan 6, 2015 14:12 |
|
hahahahahaha
|
# ? Jan 6, 2015 23:47 |
|
Aside from the fact that this guy apparently got his brain broken by Gurren loving Lagann of all things, that is exactly how self-help/cult leaders talk. Big Yud posted:The virtue of tsuyoku naritai, "I want to become stronger", is to always keep improving—to do better than your previous failures, not just humbly confess them. Like this is some Chapter One of Dianetics poo poo right here.
|
# ? Jan 6, 2015 23:51 |
|
I have a hard time believing Yudkowsky is an L. Ron-style deliberate shyster. I think he basically takes anime way too loving seriously and actually sperged his way into creating a Least Effort Possible internet cult.
|
# ? Jan 6, 2015 23:55 |
|
Harime Nui posted:I have a hard time believing Yudkowsky is an L. Ron-style deliberate shyster. I think he basically takes anime way too loving seriously and actually sperged his way into creating a Least Effort Possible internet cult.
|
|
# ? Jan 6, 2015 23:58 |