SolTerrasa
Sep 2, 2011

su3su2u1 posted:

The definition of friendliness they create will just be an abstraction that shares some properties of what we might think of as "friendly."

...

What they really want to create is a sort of "best practices guide to AI development such that it doesn't kill everyone" - that isn't a math problem.

I've always thought so; this seems similar to what I've been saying: "it's an engineering problem".

It would really help in understanding what the gently caress they plan to do if they'd release any information about their formalization of the problem.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I've always thought so; this seems similar to what I've been saying: "it's an engineering problem".

It would really help in understanding what the gently caress they plan to do if they'd release any information about their formalization of the problem.

And yea, I was basically saying "I agree it's an engineering problem, which is why formal math is basically useless."

Judging by their publications, they are barely getting to the point where they can formalize "cooperate in the prisoner's dilemma" with category theory. An accomplishment they seem abnormally proud of (it's the only publication on the arxiv). It also uses 'timeless decision theory' in that the agents read each other's source code.

The capstone accomplishment of a decade of SIAI is a piece of code that makes it easy to make formal prisoner's dilemma agents play against each other.

su3su2u1 fucked around with this message at 08:59 on Mar 25, 2015

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

su3su2u1 posted:

And yea, I was basically saying "I agree it's an engineering problem, which is why formal math is basically useless."

Judging by their publications, they are barely getting to the point where they can formalize "cooperate in the prisoner's dilemma" with category theory. An accomplishment they seem abnormally proud of (it's the only publication on the arxiv). It also uses 'timeless decision theory' in that the agents read each other's source code.

Do they get into the halting problem issues there? That seems like the only super interesting part of that particular problem, if they have anything new to say about it.

For those playing at home, here's what that means in this context (the true halting problem is much more general). AI A simulates AI B and does whatever beats B; B simulates A and does whatever beats A. But at this rate they'll compute forever: to simulate B, A has to simulate itself and B's reaction, but to simulate itself for that purpose, it has to simulate itself and B's reaction again, and so on. B has the same problem. Either could stop thinking at a certain recursion depth, but then the other would win by thinking one recursion level deeper (as it would know, by simulating it, when the first one would stop). So deciding to terminate is always wrong. Either of these AIs is strong against opponents that can't attempt to simulate *it*, but how do you guarantee that?
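Here's a toy Python version of that regress, in case anyone wants to watch it fall over. Everything in it is invented for the sketch, and it uses matching pennies rather than a prisoner's dilemma so that the best response genuinely depends on the opponent's move:

```python
# Matching pennies: A wins by matching B's move, B wins by mismatching.
BEATS = {"heads": "tails", "tails": "heads"}

def move_a():
    # A predicts B by simulating it, then copies the predicted move.
    return move_b()

def move_b():
    # B predicts A by simulating it, then plays the opposite move.
    return BEATS[move_a()]

try:
    move_a()
except RecursionError:
    # To know B's move, A must simulate B, which simulates A, which
    # simulates B... the regress never bottoms out.
    print("neither bot ever halts")

# Capping the depth doesn't help: as the post above says, the bot that
# simulates one level deeper just wins.
```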

Krotera fucked around with this message at 09:04 on Mar 25, 2015

su3su2u1
Apr 23, 2014

Krotera posted:

Do they get into the halting problem issues there? That seems like the only super interesting part of that particular problem, if they have anything new to say about it.

They show that if the agent is defined via their model, it will cooperate with itself.

If you choose not to represent the agent via their models (i.e., most real programs), I think all bets are off.

http://arxiv.org/pdf/1401.5577.pdf
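For anyone not up for the paper: the actual construction there lives in provability logic (the self-cooperation result goes through Löb's theorem), not in simulation. The sketch below is my own fuel-limited stand-in, not MIRI's code, but it shows the flavor of "cooperate iff the opponent, handed your source, cooperates":

```python
# "FairBot", approximated: passing the function itself stands in for
# sharing source code, and bounded simulation stands in for proof search.

def defect_bot(opponent, fuel):
    return "D"  # defects no matter what

def fair_bot(opponent, fuel):
    if fuel == 0:
        return "C"  # optimistic default when the simulation budget runs out
    return "C" if opponent(fair_bot, fuel - 1) == "C" else "D"

print(fair_bot(fair_bot, fuel=10))    # C: it cooperates with itself
print(fair_bot(defect_bot, fuel=10))  # D: a plain defector can't exploit it

# The optimistic default is doing the work Löb's theorem does in the
# paper; flip it to "D" and the self-cooperation unravels.
```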

Tiggum
Oct 24, 2007

Your life and your quest end here.


Krotera posted:

For those playing at home, here's what that means in this context. (the true halting problem is much more general) AI A simulates AI B and does whatever beats B -- B simulates A and does whatever beats A. But they'll compute forever at this rate -- to simulate B, A has to simulate itself and B's reaction -- but to simulate itself for that purpose, it has to simulate itself and B's reaction again, and so on. B has the same problem. Either could stop thinking at a certain recursion depth, but then the other would win by thinking one recursion level deeper. (as it would know by simulating it when the first one would stop) So deciding to terminate is always wrong. Either of these AIs is strong against opponents that can't attempt to simulate *it*, but how do you guarantee that?

Well, the correct solution is to spend years building up an immunity to iocaine powder and poison both drinks.

IronClaymore
Jun 30, 2010

by Athanatos

Darth Walrus posted:

Expectation had nothing to do with it. Like most of this thread's posters, I stuck my hand in that bear trap voluntarily.

I wasn't really coerced into it, but that's what I'm going with. Yep, that's what it's going to read on my obituary, that I only read this stuff because some guy I've known since I was 2 years old told me to. But I'm lying to you and myself. I've swum these seas of insanity voluntarily, I even read those loving profound "sequences". And it was partially Roko's Basilisk, that I only actually read about here, that brought me out of it. Because that poo poo was loving stupid. Did people actually fall for that? Really? I genuinely hope at least someone did because that would really be the funniest thing that ever happened.

Everyone who knows me knows that I'm a colossal nerd, an aspie with just a hint of sociopath thrown in, enough to be able to lie through an evaluation and count as normal. And if the concept of "game recognises game" has any validity, or if the "takes one to know one" cliché is more to your liking, take it from me: Yudkowsky is loving nuts. But entertaining. If this story was a bear trap, it was still an enjoyable one. I'd see it published ahead of most other books out right now, but that's a really low bar to reach.

Also, if I ever meet him, I plan on telling him that at least my brother is still alive, just for kicks. The inevitable broken nose will be worth it.

Fried Chicken
Jan 9, 2011

Don't fry me, I'm no chicken!

su3su2u1 posted:

And yea, I was basically saying "I agree it's an engineering problem, which is why formal math is basically useless."

Judging by their publications, they are barely getting to the point where they can formalize "cooperate in the prisoner's dilemma" with category theory. An accomplishment they seem abnormally proud of (it's the only publication on the arxiv). It also uses 'timeless decision theory' in that the agents read each other's source code.

The capstone accomplishment of a decade of SIAI is a piece of code that makes it easy to make formal prisoner's dilemma agents play against each other.

When you say "their publications" here, do you mean AI researchers taking a Bayesian approach to it, or Yud and his cult?

su3su2u1
Apr 23, 2014

Fried Chicken posted:

When you say "their publications" here, do you mean AI researchers taking a Bayesian approach to it, or Yud and his cult?

Yud-cult

Telarra
Oct 9, 2012

Specifically, Yud's research team, the Machine Intelligence Research Institute.

Pvt.Scott
Feb 16, 2007

What God wants, God gets, God help us all
I'm not sure I see the problem in the catgirl scenario. Like, if you get tired of your furry fuckdolls, just leave the volcano lair and hang out with some of the other people in paradise.

JosephWongKS
Apr 4, 2009

by Nyc_Tattoo
Chapter 8: Positive Bias
Part Five


quote:


"No, I haven't done Step 2, 'Do an experiment to test your hypothesis.'"

The boy closed his mouth again, and began to smile.

Hermione looked at the drinks can, which she'd automatically put into the cupholder at the window. She took it up and peered inside, and found that it was around one-third full.

"Well," said Hermione, "the experiment I want to do is to pour it on my robes and see what happens, and my prediction is that the stain will disappear. Only if it doesn't work, my robes will be stained, and I don't want that."

"Do it to mine," said the boy, "that way you don't have to worry about your robes getting stained."

"But -" Hermione said. There was something wrong with that thinking but she didn't know how to say it exactly.

"I have spare robes in my trunk," said the boy.

"But there's nowhere for you to change," Hermione objected. Then she thought better of it. "Though I suppose I could leave and close the door -"

"I have somewhere to change in my trunk, too."

Hermione looked at his trunk, which, she was beginning to suspect, was rather more special than her own.

"All right," Hermione said, "since you say so," and she rather gingerly poured a bit of green pop onto a corner of the boy's robes. Then she stared at it, trying to remember how long the original fluid had taken to disappear...

And the green stain vanished!

Hermione let out a sigh of relief, not least because this meant she wasn't dealing with all of the Dark Lord's magical power.

Well, Step 3 was measuring the results, but in this case that was just seeing that the stain had vanished. And she supposed she could probably skip Step 4, about the cardboard poster. "My answer is that the robes are Charmed to keep themselves clean."

"Not quite," said the boy.

Hermione felt a stab of disappointment. She really wished she wouldn't have felt that way, the boy wasn't a teacher, but it was still a test and she'd gotten a question wrong and that always felt like a little punch in the stomach.

(It said almost everything you needed to know about Hermione Granger that she had never let that stop her, or even let it interfere with her love of being tested.)

"The sad thing is," said the boy, "you probably did everything the book told you to do. You made a prediction that would distinguish between the robe being charmed and not charmed, and you tested it, and rejected the null hypothesis that the robe was not charmed. But unless you read the very, very best sort of books, they won't quite teach you how to do science properly. Well enough to really get the right answer, I mean, and not just churn out another publication like Dad always complains about. So let me try to explain - without giving away the answer - what you did wrong this time, and I'll give you another chance."

She was starting to resent the boy's oh-so-superior tone when he was just another eleven-year-old like her, but that was secondary to finding out what she'd done wrong. "All right."


At least Eliezer is self-aware that Eliezarry is condescending and obnoxious. So that’s a start, I guess?

No wait a minute, there’s also the bit about “that was secondary to finding out what she’d done wrong”. That implies that Eliezer thinks that if someone is being condescending to you, the onus is on you to put up with the condescension in order to find out “what you’d done wrong”. He’s merely justifying Eliezarry’s being so unlikable and annoying.

Pvt.Scott
Feb 16, 2007

What God wants, God gets, God help us all

JosephWongKS posted:

Chapter 8: Positive Bias
Part Five



At least Eliezer is self-aware that Eliezarry is condescending and obnoxious. So that’s a start, I guess?

No wait a minute, there’s also the bit about “that was secondary to finding out what she’d done wrong”. That implies that Eliezer thinks that if someone is being condescending to you, the onus is on you to put up with the condescension in order to find out “what you’d done wrong”. He’s merely justifying Eliezarry’s being so unlikable and annoying.


That can't be right. You'd have to be some sort of obnoxious, condescending prick to think like that.

Added Space
Jul 13, 2012

Free Markets
Free People

Curse you Hayard-Gunnes!

Pvt.Scott posted:

I'm not sure I see the problem in the catgirl scenario. Like, if you get tired of your furry fuckdolls, just leave the volcano lair and hang out with some of the other people in paradise.

:eng101: The problem is in this statement:

quote:

He said, "Well, then I'd just modify my brain not to get bored—"

If you're stuck in a lotus-eating Singularity scenario where you can edit your brain, motivation becomes a problem. You can delete your pain receptors, delete your boredom, delete any distracting influences. Given long enough, you'd eventually delete the thought process that could tell you why this all was a bad idea, so you'd keep deleting your mental impulses until you were down to a mindless puppet getting pumped full of pleasure signals. This is referred to as a "wire-heading" scenario.

Big Yud doesn't see this as a potential problem of transhumanism because a Friendly AI wouldn't let you do this because shut up.

Legacyspy
Oct 25, 2008

SSNeoman posted:

Right my mistake. It's actually 3^^7625597484987 which is some godawful number you get if you take 3 and then raise it to the 3rd power 7,625,597,484,987 times which is still the same thing as "a meaninglessly big number" which means my point still stands.

Of course the actual number is irrelevant. I was just demonstrating that you had at least second-hand knowledge of what he had written, because if you had read the original post you would know 3^^^3 != 3^3^3^3.
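For anyone who wants to poke at the notation itself, Knuth's up-arrow rules fit in a few lines of Python; this is just my own illustration, and only toy inputs ever finish, which is rather the point:

```python
# Knuth up-arrows: one arrow is plain exponentiation, and each extra
# arrow iterates the operator below it b times.
def arrow(a, n, b):
    """Compute a (n arrows) b."""
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = arrow(a, n - 1, result)
    return result

print(arrow(3, 1, 3))  # 3^3 = 27
print(arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = arrow(3, 3, 3) = 3^^7625597484987, a power tower of about
# 7.6 trillion threes; it will never finish, and 3^3^3^3 is nowhere close.
```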

SSNeoman posted:

But you know what fine, whatever. Lemme explain why Yud's conclusion is nuts.
So there is a branch of philosophy called utilitarianism. The basic premise is that it wants to achieve the maximum amount of happiness for the maximum amount of people (in broad strokes).
In Yud's PHILO101 he blasts the problem with a blunt solution. Dude is tortured for 50 years, or someone is tortured (ie: removes the dust speck) for like 1 second? Well obviously we'd pick option 2. But when you SHUT UP AND MULTIPLY our second by that "meaninglessly big number", suddenly having one dude tortured for 50 years doesn't sound that bad, right? BEEP BOOP PROBLEM SOLVED GET hosed CENTURIES OF PHILOSOPHY

Well of course not. There are other factors involved when you realize that we are dealing with a being who has life, willpower, the ability to feel pain and all that other good poo poo. We are taking 50 years out of a person's life and replacing it with pain and misery, instead of inconveniencing a lot of people for an insignificant amount of time. This logic, by the way, is still equating torture with the pain caused by dust specks. I am still playing by Yud's rules despite the fact the two are obviously not equal. You are crushing a person's dreams and ambitions just for the sake of not bothering a whole lot of people for something they won't remember. What about the life this poor person is missing out by going through this torture? All those people won't even remember the speck by the end of the day, but that person will carry his PTSD for the rest of his (no doubt shortened) life, if he's not loving catatonic by the end of the first year.
I am explaining this in-depth because it's a problem that requires you to do so. It is something LW themselves avoid doing, and I hope you are not falling into the same trap.

Sure. You can absolutely argue that dust specks are the preferable choice. The point of the question, though, is consistency. If one says that there is no sufficiently large number at which torture becomes preferable to dust specks, then to be logically consistent you must make sacrifices elsewhere. But most people don't behave in a way that is logically consistent with preferring specks over torture. For example, they drive cars distances they could walk, accepting a 1/(some number) chance of causing other people severe harm to save themselves the trouble of walking to the grocery store (a minor inconvenience). Of course, most people agree that "some number" is so big that avoiding such a tiny chance of harm isn't worth the trouble of walking.
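To see what that consistency argument is doing, here's a back-of-envelope sketch of the driving example. Every number in it is invented for illustration, and it deliberately scores both sides on one crude scale, which is exactly the assumption later replies reject:

```python
# Score harms and inconveniences in crude "seconds of suffering" units
# so they sit on a single scale (an assumption, not a fact).
p_serious_harm = 1e-8                # assumed chance a short drive badly hurts someone
harm_seconds = 50 * 365 * 24 * 3600  # a ruined life, roughly 1.6e9 seconds
walking_seconds = 20 * 60            # inconvenience avoided by driving: twenty minutes

print(p_serious_harm * harm_seconds)  # ~15.8 expected "seconds" of harm to others
print(walking_seconds)                # 1200 seconds of inconvenience to you
# On this speck-style accounting, driving looks like the better deal,
# which is the consistency point the post above is making.
```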

su3su2u1 posted:

I mock Yudkowsky from a position of strength rather than of ignorance - I have a PhD in physics, and Yud included (wrong) physics references in his fanfic. He incorrectly references psychology experiments in his fanfic. He uses the incorrect names for biases in his fanfic. He gets computational complexity stuff wrong in his fanfic. He does this despite insisting on page 1 that "All science mentioned is real science." Let me ask you: if an AI researcher can't get computational complexity correct, why should I trust anything else he writes? If someone who has founded a community around avoiding mental biases can't get the references right in a fanfic, why should I trust his other writing?

His technical work is no better. His "timeless decision theory" paper is 100 pages of rambling, with no formal definition of the theory anywhere (and it would be super easy to formalize the theory as described). His research institute is a joke - they've been operating for more than a decade with only 1 paper on arxiv and basically no citations to any of their self-published garbage.

I am in no position to argue whether or not anything he has done w.r.t. A.I. is legitimate. And I didn't mean you guys were ignorant of science or anything. I just meant a lot of people were mocking stuff they hadn't even personally read.


There is nothing ironic about it. I said Artemis was a character like Harry in reply to a comment by Nessus, who first said that Artemis was a character like Harry, and to my limited knowledge that seemed correct. If you want to argue whether or not Artemis is like Harry, argue with Nessus, not me. I don't know the books well enough.


akulanization posted:

I have no evidence this conversation actually occurred, but if it did that still doesn't change my point. Harriezer is held up as an example, he's powerful and has agency because he is a rationalist. He is obviously meant to inspire people to be rationalists like himself, a perception that isn't helped by the author saying poo poo like this:

Well, I'm too lazy to screenshot the Reddit PM, because you'd probably just say that wasn't sufficient evidence (Photoshop and all). You can just PM him yourself on Reddit if you want. And it doesn't change your point? Why not just admit you were wrong instead of backtracking? I definitely don't disagree that he was meant to inspire people or lead them to the sequences. He was clearly intended to. However, he was clearly not intended to be some sort of super rationalist, as you and others argued.

akulanization posted:

Also the actual number is immaterial if you want to approach this problem from the perspective of mathematics, it's an arbitrarily large real number. If you grant the premise that both events are measured with the same scale and that minimizing that number is a priority, then the number doesn't matter; you can always select a number big enough that the "math" works out the way you want. The problem with the torture proponent's stance is that, as SSNEOMAN pointed out, they don't actually consider the problem. They wipe out a life because that's better than a trifling inconvenience to a huge number of people.

I agree the actual number is immaterial. I was just using the actual number to demonstrate that he clearly hadn't read the original thing Eliezer had written. And the problem with the "dust speck people" is that if they are consistent in their preferences, then they must make sacrifices elsewhere. Only most people are not consistent and would not make these sacrifices if asked.

petrol blue posted:

Yeah, the 'position of ignorance' line probably wasn't a good move, Legacyspy, goons like physics and ai almost as much as Mt. Dew.



Sorry. I didn't mean to imply that goons were ignorant of physics or math or computer science. I really, really appreciate the knowledgeable people here and have personally benefited from the posters in Caverns & Science/Academics. I meant that some (not all) were ignorant of what Eliezer had written.

Nessus posted:

Here you go, buddy! http://forums.somethingawful.com/showthread.php?threadid=3627012

The idea is that the AI is going to inevitably become God, so it is the most important thing ever to make sure that when the computer inevitably becomes Literally God, we make sure it's a nice friendly New Testament God who cares for us, rather than an Old Testament God who will send us all to Robot Hell.

This is what I mean. If Nessus is just being sarcastic or w.e. that is fine. But if he honestly thinks that friendly A.I. is a worry over A.I. torturing us... then he doesn't understand what, right or wrong, Eliezer is talking about. It's a worry that the A.I., in pursuit of the goals we gave it, may have unintended consequences that could be bad for us. This can be as simple as a lovely A.I. that, when asked "How do we get rid of insects eating our sugar cane crop?", says "introduce the cane toad", not understanding that the consequences of cane toad infestation will be far more annoying than the insects eating our crops. Or an A.I. that, for some reason we couldn't have foreseen, decides to "kill all humans" in pursuit of its goals. I think this is unlikely, but I do think the question "How do we get an A.I. to recommend courses of action that take into account the complex values we have (like not liking a cane toad infestation)?" is a useful one. Whether Eliezer is actually doing anything useful on this front, I can't tell. But afaik he's also one of the few people even talking about it.

Night10194 posted:

I mock the writing in the story because it's bad and we're on somethingawful.com,

I have no qualms with this.

Night10194 posted:

but Yud's cult is legitimately fascinating to me as a religious studies guy who is interested in getting into studying emerging religions, and so his proselytization-fiction is actually really interesting from an academic standpoint.

Yud's cult? Lesswrong is more or less dead. The only active following Yudkowsky has right now is /r/hpmor, and that will slowly die off since nothing is being written. That is not to say there isn't a culture of people who share similar ideas to Yud, the "rationalist" culture or w.e. But I don't think they "belong" to Yud. I'm curious: how would you identify someone in his cult? Am I in his cult?

Legacyspy fucked around with this message at 07:45 on Mar 26, 2015

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

I'd define it primarily as people who are actual contributors to his institute (financial ones), and who are members of the Less Wrong community. I have no idea if you're a member. You're pretty obviously a fan of the work, but that has no real bearing on whether you self-identify as a member of the community or of Yudkowsky's or similar 'rationalist' orbits. My primary academic interest is in the fact that this fiction, and the Sequences, and many of his theories, have a cast very similar to a lot of Christian religious and apocalyptic dogma, despite their avowed atheism. I'm currently beginning to gather data and do reading on his work because of the fascinating parallels between the Cryonics stuff and the Christian resurrection of the Dead, the similarities between AI Go Foom and classic apocalypse, etc. I have approval and support from my old advisers from my master's program that there might be a productive bit of work to be done on singularity and science fetish cults, and on the sort of cross pollination between commonplace religious ideas in the larger culture and the texture of what they end up believing.

I'm at the very beginning of working on this, mind, and have a hell of a lot of reading to do still. Just some of the ideas and the general shape of things piqued my interest in their similarity and merit looking into from an academic standpoint.

Tiggum
Oct 24, 2007

Your life and your quest end here.


Legacyspy posted:

This is what I mean. If Nessus is just being sarcastic or w.e. that is fine. But if he honestly thinks that friendly A.I. is a worry over A.I. torturing us... then he doesn't understand what, right or wrong, Eliezer is talking about. It's a worry that the A.I., in pursuit of the goals we gave it, may have unintended consequences that could be bad for us. This can be as simple as a lovely A.I. that, when asked "How do we get rid of insects eating our sugar cane crop?", says "introduce the cane toad", not understanding that the consequences of cane toad infestation will be far more annoying than the insects eating our crops. Or an A.I. that, for some reason we couldn't have foreseen, decides to "kill all humans" in pursuit of its goals. I think this is unlikely, but I do think the question "How do we get an A.I. to recommend courses of action that take into account the complex values we have (like not liking a cane toad infestation)?" is a useful one. Whether Eliezer is actually doing anything useful on this front, I can't tell. But afaik he's also one of the few people even talking about it.

Saying that no one else is even talking about this problem is like saying no one's talking about the problem of interplanetary diplomacy. Since there are no other inhabited planets to communicate with right now, it's not exactly a pressing question. If we end up colonising other planets or meeting another intelligent species then it'll be relevant, but until then we don't even know the parameters we'd be working within, so it's fairly useless to come up with any "solutions" just yet. How to make sure an AGI is "friendly" is a potential problem, but it's not one we can actually take any steps to solve until we know what form an AGI might actually take.

Legacyspy
Oct 25, 2008

Night10194 posted:

I'd define it primarily as people who are actual contributors to his institute (financial ones), and who are members of the Less Wrong community. I have no idea if you're a member. You're pretty obviously a fan of the work, but that has no real bearing on whether you self-identify as a member of the community or of Yudkowsky's or similar 'rationalist' orbits. My primary academic interest is in the fact that this fiction, and the Sequences, and many of his theories, have a cast very similar to a lot of Christian religious and apocalyptic dogma, despite their avowed atheism. I'm currently beginning to gather data and do reading on his work because of the fascinating parallels between the Cryonics stuff and the Christian resurrection of the Dead, the similarities between AI Go Foom and classic apocalypse, etc. I have approval and support from my old advisers from my master's program that there might be a productive bit of work to be done on singularity and science fetish cults, and on the sort of cross pollination between commonplace religious ideas in the larger culture and the texture of what they end up believing.

I'm at the very beginning of working on this, mind, and have a hell of a lot of reading to do still. Just some of the ideas and the general shape of things piqued my interest in their similarity and merit looking into from an academic standpoint.

That sounds interesting, and a lot more fair than what I expected from most of the other times I've heard lesswrong described as a cult. As a note, I've never given him money. I don't have an account on lesswrong, though I've read some of it. I've lived in Berkeley for several years, but I've never been to a lesswrong meetup or their offices, despite them being down the street. I've only met Eliezer once, which consisted of me saying hi and telling him that I enjoyed hpmor. This was at the hpmor wrap party in Berkeley, which I went to because I figured it would be fun, and it was literally minutes away, so it would be lame if I didn't go. It was fun. I played a bunch of board/card games, ate pizza. Eliezer answered questions about hpmor. Some guy made a bunch of intentionally annoying Magic decks, including a deck titled "existentialist risk" that was pretty much nothing but mana-accel and board wipes.

The one thing I do give money to, as an indirect side effect of lesswrong and all that, is the Against Malaria Foundation.

su3su2u1
Apr 23, 2014

Legacyspy posted:

Sure. You can absolutely argue that dust specks are the preferable choice. The point of the question, though, is consistency. If one says that there is no sufficiently large number at which torture becomes preferable to dust specks, then to be logically consistent you must make sacrifices elsewhere. But most people don't behave in a way that is logically consistent with preferring specks over torture. For example, they drive cars distances they could walk, accepting a 1/(some number) chance of causing other people severe harm to save themselves the trouble of walking to the grocery store (a minor inconvenience). Of course, most people agree that "some number" is so big that avoiding such a tiny chance of harm isn't worth the trouble of walking.

Your comparison is bad- choosing torture over dust specks is giving one person a certainty of ruining a life vs lots of minor inconveniences. Your grocery store example is choosing a small chance of causing harm vs a SINGLE minor inconvenience. You could go one step further and say everyone could choose not to drive to save some lives, but choosing not to drive would also cost lives. There is no inconsistency in choosing driving to the store over walking but also choosing dust specks over torture.

quote:

I am in no position to argue whether or not anything he has done w.r.t. A.I. is legitimate. And I didn't mean you guys were ignorant of science or anything. I just meant a lot of people were mocking stuff they hadn't even personally read.

So take my word that the majority of science references in HPMOR are wrong.

quote:

Yud's cult? Lesswrong is more or less dead. The only active following Yudkowsky has right now is /r/hpmor, and that will slowly die off since nothing is being written. That is not to say there isn't a culture of people who share similar ideas to Yud, the "rationalist" culture or w.e. But I don't think they "belong" to Yud. I'm curious: how would you identify someone in his cult? Am I in his cult?

Yudkowsky lives entirely off the donations of people who give him money to save the world.

Legacyspy
Oct 25, 2008

Tiggum posted:

Saying that no one else is even talking about this problem is like saying no one's talking about the problem of interplanetary diplomacy. Since there are no other inhabited planets to communicate with right now, it's not exactly a pressing question. If we end up colonising other planets or meeting another intelligent species then it'll be relevant, but until then we don't even know the parameters we'd be working within, so it's fairly useless to come up with any "solutions" just yet.

But this sounds like a totally interesting thing to explore. Have you ever seen http://www.princeton.edu/~pkrugman/interstellar.pdf? (Obviously Eliezer is no Krugman.) I think interplanetary diplomacy would be a fascinating area of discussion, with information being bounded by the speed of light and all that. Interstellar would be even better. How would such diplomacy work when the first mover has such a significant advantage? Our rivals at Alpha Centauri could be settling Epsilon Eridani behind our backs, right as we negotiate settlement rights with Centauri command.

Tiggum
Oct 24, 2007

Your life and your quest end here.


Legacyspy posted:

But this sounds like a totally interesting thing to explore.

Sure, it's a great premise for science fiction. And there's tons of sci-fi about interplanetary diplomacy (and AGIs). There just aren't many people talking about it as a practical issue, because it isn't one.

platedlizard
Aug 31, 2012

I like plates and lizards.

Tiggum posted:

Sure, it's a great premise for science fiction. And there's tons of sci-fi about interplanetary diplomacy (and AGIs). There just aren't many people talking about it as a practical issue, because it isn't one.

I'm mostly just piggybacking here to point out that Paul Krugman wrote a paper about interstellar trade, because why the gently caress not.

JosephWongKS
Apr 4, 2009

by Nyc_Tattoo
Chapter 8: Positive Bias
Part Six


quote:


The boy's expression grew more intense. "This is a game based on a famous experiment called the 2-4-6 task, and this is how it works. I have a rule - known to me, but not to you - which fits some triplets of three numbers, but not others. 2-4-6 is one example of a triplet which fits the rule. In fact... let me write down the rule, just so you know it's a fixed rule, and fold it up and give it to you. Please don't look, since I infer from earlier that you can read upside-down."

The boy said "paper" and "mechanical pencil" to his pouch, and she shut her eyes tightly while he wrote.
"There," said the boy, and he was holding a tightly folded piece of paper. "Put this in your pocket," and she did.

"Now the way this game works," said the boy, "is that you give me a triplet of three numbers, and I'll tell you 'Yes' if the three numbers are an instance of the rule, and 'No' if they're not. I am Nature, the rule is one of my laws, and you are investigating me. You already know that 2-4-6 gets a 'Yes'. When you've performed all the further experimental tests you want - asked me as many triplets as you feel necessary - you stop and guess the rule, and then you can unfold the sheet of paper and see how you did. Do you understand the game?"

"Of course I do," said Hermione.

"Go."

"4-6-8" said Hermione.

"Yes," said the boy.

"10-12-14", said Hermione.

"Yes," said the boy.

Hermione tried to cast her mind a little further afield, since it seemed like she'd already done all the testing she needed, and yet it couldn't be that easy, could it?

"1-3-5."

"Yes."

"Minus 3, minus 1, plus 1."

"Yes."

Hermione couldn't think of anything else to do. "The rule is that the numbers have to increase by two each time."

"Now suppose I tell you," said the boy, "that this test is harder than it looks, and that only 20% of grownups get it right."

Hermione frowned. What had she missed? Then, suddenly, she thought of a test she still needed to do.

"2-5-8!" she said triumphantly.

"Yes."

"10-20-30!"

"Yes."

"The real answer is that the numbers have to go up by the same amount each time. It doesn't have to be 2."

"Very well," said the boy, "take the paper out and see how you did."

Hermione took the paper out of her pocket and unfolded it.

Three real numbers in increasing order, lowest to highest.

Hermione's jaw dropped. She had the distinct feeling of something terribly unfair having been done to her, that the boy was a dirty rotten cheating liar, but when she cast her mind back she couldn't think of any wrong responses that he'd given.

"What you've just discovered is called 'positive bias'," said the boy. "You had a rule in your mind, and you kept on thinking of triplets that should make the rule say 'Yes'. But you didn't try to test any triplets that should make the rule say 'No'. In fact you didn't get a single 'No', so 'any three numbers' could have just as easily been the rule. It's sort of like how people imagine experiments that could confirm their hypotheses instead of trying to imagine experiments that could falsify them - that's not quite exactly the same mistake but it's close. You have to learn to look on the negative side of things, stare into the darkness. When this experiment is performed, only 20% of grownups get the answer right. And many of the others invent fantastically complicated hypotheses and put great confidence in their wrong answers since they've done so many experiments and everything came out like they expected."


That does make sense, and it is an interesting experiment. I’ll try it out with my friends one of these days. Eliezarry’s also relatively undouchey in explaining the test, so points to him.
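If anyone else wants to try it out, the game from the excerpt fits in a short script. Here's a quick sketch (the hidden rule matches the story's):

```python
# The 2-4-6 game from the excerpt. The hidden rule is the story's:
# any three real numbers in increasing order.

def fits_rule(triplet):
    a, b, c = triplet
    return a < b < c

print("Name triplets like '2 4 6'. Blank line when you want to guess the rule.")
print("2 4 6 -> Yes")
while True:
    line = input("> ").strip()
    if not line:
        break
    parts = line.split()
    if len(parts) != 3:
        print("Three numbers, please.")
        continue
    try:
        triplet = [float(x) for x in parts]
    except ValueError:
        print("Three numbers, please.")
        continue
    print("Yes" if fits_rule(triplet) else "No")
print("The rule was: any three numbers in increasing order.")
print("No 'No' answers? That's positive bias at work.")
```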

akulanization
Dec 21, 2013

Legacyspy posted:

There is nothing ironic about it. I said Artemis was a character like Harry in reply to a comment by Nessus first saying that Artemis was a character like Harry, and to my limited knowledge that seemed correct. If you want to argue whether or not Artemis is like Harry, argue with Nessus, not me. I don't know the books well enough.

I am desperately trying to remain civil in this discussion, and you are making it very hard. You posted in support of a position, and when you were presented with an argument or evidence that the position was incorrect you

Legacyspy posted:

Why not just admit you were wrong instead of backtracking?
claimed to have never held the position in the first place. Which is colossally dishonest and, in light of your response to me, hypocritical. I mean sure, you could just be wrong, but instead words don't mean what they are normally considered to mean, and I have to treat everything you say as a superposition of what you wrote and its very opposite, with absolutely no way to distinguish them.

Legacyspy posted:

Well, I'm too lazy to screenshot the Reddit PM, because you'd probably just say that wasn't sufficient evidence (Photoshop and all). You can just PM him yourself on Reddit if you want. And it doesn't change your point? Why not just admit you were wrong instead of backtracking? I definitely don't disagree that he was meant to inspire people or lead them to the sequences. He was clearly intended to. However, he was clearly not intended to be some sort of super rationalist, as you and others argued.

You don't appear to grasp that Harriezer may not be the infini-rational from the perspective of Big Yud, but in the story he is the only "rationalist", and he uses that to go on, and on, and on about how dumb people are and how they need to think more like him. He is definitely the High King of Rational Mountain because there are no challengers to the throne. When Yud goes "well, he isn't as rational as I am :smuggo:", he comes off as a hack who is trying to post hoc rationalize the mammoth failings of his character. I mean, he has had years to come up with excuses for his terrible pacing and characterization.

Legacyspy posted:

I agree the actual number is immaterial. I was just using the actual number to demonstrate that he clearly hadn't read the original thing Eliezer had written. And the problem with the "dust speck people" is that if they are consistent in their preferences, then they must make sacrifices elsewhere. Only most people are not consistent and would not make these sacrifices if asked.
You are doing the thing where you contradict yourself, again. If the number is immaterial to the argument or the counterargument, then obviously it does not matter if it was quoted correctly. Since both the argument for torture and the argument against it do not rely on a precise value of N, there is no use distinguishing between ~10^38 and an even bigger number. So if you are trying to argue that they didn't read Yud, it's trivial to show they did, since they loving quoted him. And if you are arguing they didn't understand him, then you need to actually show how a component they missed addresses or renders moot some part of their argument; the precise value of a stupidly huge number isn't doing that.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Legacyspy posted:

This is what I mean. If Nessus is just being sarcastic or w.e. that is fine. But if he honestly thinks that friendly A.I. is a worry over A.I. torturing us... then he doesn't understand what, right or wrong, Eliezer is talking about. It's a worry that the A.I., in pursuit of the goals we gave it, may have unintended consequences that could be bad for us. This can be as simple as a lovely A.I. that, when asked "How do we get rid of insects eating our sugar cane crop?", says "introduce the cane toad", not understanding that the consequences of cane toad infestation will be far more annoying than the insects eating our crops. Or an A.I. that, for some reason we couldn't have foreseen, decides to "kill all humans" in pursuit of its goals. I think this is unlikely, but I do think the question "How do we get an A.I. to recommend courses of action that take into account the complex values we have (like not liking a cane toad infestation)?" is a useful one. Whether Eliezer is actually doing anything useful on this front, I can't tell. But afaik he's also one of the few people even talking about it.
The Nessus believes that AI of the near-term future will largely be a limited assistant and will probably only cause us harm when it crashes the stock market, as has already nearly happened. The Nessus further believes that for the scenario you outline, you essentially assume the AI is going to be as much of a schmuck as humans, which seems like a likely outcome. Humans are pretty stupid, and genetically engineering ourselves to do math problems better will not stop us from being dumbasses.

The Nessus further believes this entire friendly AI whatever-the-hell involves numerous presuppositions which beg the question. (Example: Why is there only one AI? Presumably there would be prototypes. Example 2: What if the AIs get into fights? Example 3: What if the AIs discover that fantasy magic nanotechnology is actually impossible?)

The Nessus expects if the AI kills us all, it will be because some rich dumbass put it in charge of something important in order to lay off some trained humans and juice up his (or her, but let's be real: his) bonus.

The Nessus also thinks these people have managed to reinvent apocalyptic Christianity with hilarious precision, and that's pretty funny. Death is certain (though it is good to progress science to make it come later, with less pain and disability on the way). The computer will not save you.

Legacyspy
Oct 25, 2008

su3su2u1 posted:

You could go one step further and say everyone could choose not to drive to save some lives, but choosing not to drive would also cost lives.

Which is why I didn't say it.



su3su2u1 posted:

There is no inconsistency in choosing driving to the store over walking but also choosing dust specks over torture.

Of course you can say that. I didn't formalize what walking to the grocery store instead of driving amounts to, so if you formalize it differently then it won't be inconsistent.

However, I can show that preferring dust specks to torture does mean that, to be consistent, you have to bite the bullet elsewhere.

Those who prefer dust specks to torture are saying that there is no sufficiently large number such that a non-negligible inconvenience (it is an assumption of the original discussion that dust specks are not negligible; see the original discussion) experienced by that number of people is less preferable than a far larger magnitude of harm to one person.


su3su2u1 posted:

Your comparison is bad- choosing torture over dust specks is giving one person a certainty of ruining a life vs lots of minor inconveniences.

I'm fairly certain that certainty is a red herring here. Is the certainty of it really that relevant? What if it is a (big number - 1)/(big number) chance? Especially since the people who prefer dust specks to torture are arguing there is no sufficiently large number, I can just pick whatever number of trivially inconvenienced people results in a near-unity probability of at least one person getting whatever outcome I want to happen to them (any outcome with non-zero probability for a single trial). Does that really change things?


Is there a number large enough that you wouldn't pay a dollar to prevent a 1/(large number) chance of something terrible happening? Or do you really live your life accepting any minor inconvenience that prevents a very, very small chance, no matter how negligible, of something bad happening to you?

Not everyone prefers a minor inconvenience that would prevent a 1/(big enough number) chance of something happening to them over just taking that chance instead. Do you eat bananas? Would not eating bananas (or other foods similarly radioactive) constitute a minor inconvenience? (If not, just substitute something else you consider a minor inconvenience and pretend it too has a similarly low chance of a really bad consequence.) Bananas contain potassium that could decay and in the process emit a particle that strikes the DNA of one of your cells, making it cancerous, which by chance doesn't get caught in time by your body, and now you have cancer.

The point is, all you need to do to get a world where someone is tortured (a very bad consequence) to avoid a minor inconvenience across a sufficiently large number of people is to have a sufficiently large number of people, such that the probability of at least one person getting a very bad consequence is near unity. And that is why people who prefer dust specks are inconsistent.
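The "near unity" step is just the standard independent-trials arithmetic, P(at least one) = 1 - (1 - p)^N. A quick sketch, with p picked arbitrarily for illustration:

```python
# Tiny independent risks, repeated across enough people, make at least
# one disaster almost certain: P(at least one) = 1 - (1 - p)**N.
p = 1e-12  # assumed per-person chance of the very bad consequence (illustrative only)
for n in (10**9, 10**12, 10**15):
    print(f"N = {n:.0e}: P(at least one) = {1 - (1 - p)**n:.6f}")
# N = 1e+09: P(at least one) = 0.001000
# N = 1e+12: P(at least one) = 0.632121
# N = 1e+15: P(at least one) = 1.000000 (indistinguishable from certainty)
```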

Legacyspy fucked around with this message at 09:59 on Mar 26, 2015

Legacyspy
Oct 25, 2008

akulanization posted:

I am desperately trying to remain civil in this discussion, and you are making it very hard. You posted in support of a position, and when you were presented with an argument or evidence that the position was incorrect you claimed to have never held the position in the first place.

No. I never claimed to have never held that position. I absolutely said Harry was like Artemis. However, when you tried to debate with me that Harry was not like Artemis, I tried to explain that I had only said so because Nessus raised the idea that Harry was like Artemis, and I went with it because, to my limited knowledge, that seemed correct, and that if you wanted to argue that Harry was not like Artemis you were better off arguing with Nessus. There is nothing inconsistent or ironic about this series of statements. I'd challenge you to show me where I "claimed to have never held the position in the first place." What I said was:

quote:

So, first of all it was raised by Nessus, not me, that said Harry is like Artemis. I could remember some similarity so I rolled with.

This is not denying that I said Harry was like Artemis. This is me saying that I had agreed with Nessus that Harry was like Artemis (then afterwards qualifying it with the statement that my knowledge about Artemis was dated). Denying it would be saying "Nessus said it, not me", which I certainly didn't say. This may be an issue of the colloquialism "rolled with it" and my failure to complete the sentence. However, I am done having an argument about an argument. If you want to debate whether or not Harry is like Artemis, talk to Nessus, not me. He is the one who first made the connection.


akulanization posted:

You don't appear to grasp that Harriezer may not be the infini-rational from the perspective of Big Yud, but in the story he is the only "rationalist", and he uses that to go on, and on, and on about how dumb people are and how they need to think more like him. He is definitely the High King of Rational Mountain because there are no challengers to the throne. When Yud goes "well, he isn't as rational as I am :smuggo:", he comes off as a hack who is trying to post hoc rationalize the mammoth failings of his character. I mean, he has had years to come up with excuses for his terrible pacing and characterization.

Ok. Sorry. I thought you were arguing something else. I don't disagree that within the story Harry is the king of rationality mountain. No one else is as good as him. However, the initial conversation I was having, which you joined, was not about whether Harry was intended to be the best rationalist in the story, but about whether he was meant to be some sort of uber-rationalist supermind in general. Not the best rationalist in the story, but an example of near-perfect rationality in general. It is this claim I disagree with. I no longer think you are making this claim (are you?), so I don't disagree with you. If you attempt to make an argument about an argument regarding this, I will not be joining it.




akulanization posted:

You are doing the thing where you contradict yourself, again. If the number is immaterial to the argument or the counterargument, then obviously it does not matter if it was quoted correctly. Since both the argument for torture and the argument against it do not rely on a precise value of N, there is no use distinguishing between ~10^38 and an even bigger number. So if you are trying to argue that they didn't read Yud, it's trivial to show they did, since they loving quoted him. And if you are arguing they didn't understand him, then you need to actually show how a component they missed addresses or renders moot some part of their argument; the precise value of a stupidly huge number isn't doing that.

I'm not contradicting myself. You are correct that it doesn't matter with respect to the validity of the argument. But I wasn't using the failure to get the number right as a strike against his argument. It was an example of how some people only learn what they know about Eliezer's writing from other people. Here is the original discussion:

http://lesswrong.com/lw/kn/torture_vs_dust_specks/

Now, a guy on the first page said that 3^^^3 is equal to 3^3^3. You would have to be really, really stupid to read that post and come to the conclusion that 3^3^3 = 3^^^3. I didn't think the poster was that stupid, so I was left with the conclusion that the poster had not read that post. I was using the number to demonstrate that the poster was mocking torture vs dust specks without having read the post on torture vs dust specks.

I think this is a perfectly legitimate use of the number, regardless of the number's relevance to the actual argument.

Similarly I will not be continuing any sort of argument about an argument here.


Edit:

Actually, this is taking too much of my time and making me too stressed, so I'm not coming back until I forget why I left. For those of you interested in torture vs dust specks: it's not as simple as "torture! bad!" If you prefer dust specks to torture, then you're forced to bite the bullet elsewhere. Most people don't, which is inconsistent. You can do some googling to see examples of consequences of preferring dust specks to torture.

Legacyspy fucked around with this message at 10:33 on Mar 26, 2015

Pvt.Scott
Feb 16, 2007

What God wants, God gets, God help us all

"Legacyspy" posted:

examples of consequences of preferring dust specks to torture.

Being a decent human being with basic empathy.

E: people are anything but consistent in their views and practices.

petrol blue
Feb 9, 2013

sugar and spice
and
ethanol slammers
I sat down and thought about the best choice for ages, invalidating the benefit of choosing optimally. :ohdear:

Pvt.Scott
Feb 16, 2007

What God wants, God gets, God help us all

petrol blue posted:

I sat down and thought about the best choice for ages, invalidating the benefit of choosing optimally. :ohdear:

You fool! Now neither event will take place!

Tiggum
Oct 24, 2007

Your life and your quest end here.


Legacyspy posted:

The point is, all you need to do to get a world where someone is tortured (a very bad consequence) to avoid a minor inconvenience across a sufficiently large number of people is to have a sufficiently large number of people, such that the probability of at least one person getting a very bad consequence is near unity. And that is why people who prefer dust specks are inconsistent.

What even is your point? I've read your post several times and it's gibberish to me.

anilEhilated
Feb 17, 2014

But I say fuck the rain.

Grimey Drawer
Remember, your instincts are always right.
Mind you, if your instincts are telling you torture is preferable to a dust speck in the eye given just about any number of examples, you're pretty hosed up in addition to that.

Arcturas
Mar 30, 2011

There are a bajillion problems with what you wrote, Legacyspy, but the one that stands out to me is the torture v flyspeck argument. At the core, the problem is that torture is categorically different from flyspecks and the two are not comparable. There is a difference in kind, not simply a difference in magnitude. That's why you can't just say "I can change the scenario and you are still wrong!" The way you set up the scenario matters. CO2 emissions causing a chance of asthma and therefore death is categorically different from torture. They are different kinds of moral harm.

EDIT: For instance, things like agency matter. It's very different if someone volunteers to be tortured to save everyone from inconvenience than if we decide to torture them. The certainty of harm matters. Those are both key distinctions between "eating a food with a possible radioactive thing that could maybe cause me to get cancer which will kill me" and torture.

The number of other people the harm is laundered through matters. This affects things like how culpable I am for the use of child labor because of my purchase of an iPhone or whatever tech gadget: I am culpable, and probably shouldn't buy those items, but it's far less morally indefensible than torture. That's why it's more morally okay to buy those items and then still work to end the abuses of the companies abroad by applying political or moral pressure, or through other means.

In short, Yud's basically ineffectually flailing at the philosophical debate over utilitarianism without engaging with the centuries of moral thought that's gone into the issue. That's fine and all, but claiming it's some sort of moral breakthrough because of "big numbers" is super dumb. As people in the thread pointed out and you ignored.

Also, moral inconsistency is part of our lives. We are not required, as humans, to be morally perfect and consistent.

EDIT the second: You should read "The Ones Who Walk Away From Omelas" by Ursula Le Guin.

Arcturas fucked around with this message at 15:33 on Mar 26, 2015

James Garfield
May 5, 2012
Am I a manipulative abuser in real life, or do I just roleplay one on the Internet for fun? You decide!

Legacyspy posted:


The point is, all you need to do to get a world where someone is tortured (a very bad consequence) to avoid a minor inconvenience across a sufficiently large number of people is to have a sufficiently large number of people, such that the probability of at least one person getting a very bad consequence is near unity. And that is why people who prefer dust specks are inconsistent.

Being inconsistent in that way is a good thing though?

edit: is it also :smug:inconsistent:smug: to prefer a one in a billion chance of 3^^^^^^^3 people getting dust specks in their eyes over a guarantee of one person being tortured for 50 years?

James Garfield fucked around with this message at 15:27 on Mar 26, 2015

Added Space
Jul 13, 2012

Free Markets
Free People

Curse you Hayard-Gunnes!
I think he's saying that, given enough dust specks, people are going to lose eyes or get distracted and die. If you alter the problem in such a way, the torture does start to look more preferable. :shrug:

platedlizard
Aug 31, 2012

I like plates and lizards.

Added Space posted:

I think he's saying that, given enough dust specks, people are going to lose eyes or get distracted and die. If you alter the problem in such a way, the torture does start to look more preferable. :shrug:

Nah, because torture is a deliberate act of evil and accidents/eye injuries from dust specks aren't. Accidentally hitting a child on the road because something interfered with your vision sucks for you and the child, but it isn't the same level of awful as taking that kid and torturing them for 50 years. A person has to be a real rear end in a top hat to think they are at all comparable imo

Telarra
Oct 9, 2012

It's also unbelievably stupid because it's the "well what if I multiply it by INFINITY? :smug:" trick they love so much. Hell, that's the entire sum of Yud's arguments about cryonics right there.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



platedlizard posted:

Nah, because torture is a deliberate act of evil and accidents/eye injuries from dust specks aren't. Accidentally hitting a child on the road because something interfered with your vision sucks for you and the child, but it isn't the same level of awful as taking that kid and torturing them for 50 years. A person has to be a real rear end in a top hat to think they are at all comparable imo
What I also notice is that this 3^^^^3 figure or whatever is so ridiculously huge that it becomes absurd, and I think it's because Yud realizes that if you did the numbers for any semi-sane number, such as "every human who's ever lived" (which would probably be a hundred billion tops?) or "all the inhabitants of a thousand Earths" (which we could probably top out at ten trillion), the effects would be meaningless.

What I also don't understand is what the hell this is supposed to prove, exactly. Like what's the theological point of the dust-speck thing? The Nessus seeks to understand this foolishness.
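Running Nessus's "semi-sane numbers" version takes about four lines, with every weight invented for illustration:

```python
# One blink-length speck per person versus 50 years of torture, scored
# on the same crude seconds-of-annoyance scale (assumed weights only).
speck_seconds = 1.0                     # call one speck one second of mild annoyance
people = 100e9                          # every human who's ever lived, roughly
torture_seconds = 50 * 365 * 24 * 3600  # about 1.6e9 seconds

print(people * speck_seconds)  # 1e11 speck-seconds
print(torture_seconds)         # ~1.6e9 torture-seconds
# The specks outweigh the torture only if a second of torture counts for
# less than ~63 seconds of speck-level annoyance; with any serious
# weighting of torture, sane populations lose, which is why the argument
# needs numbers like 3^^^3 to get going.
```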

Darth Walrus
Feb 13, 2012

Nessus posted:

What I also notice is that this 3^^^^3 figure or whatever is so ridiculously huge that it becomes absurd, and I think it's because Yud realizes that if you did the numbers for any semi-sane number, such as "every human who's ever lived" (which would probably be a hundred billion tops?) or "all the inhabitants of a thousand Earths" (which we could probably top out at ten trillion), the effects would be meaningless.

What I also don't understand is what the hell this is supposed to prove, exactly. Like what's the theological point of the dust-speck thing? The Nessus seeks to understand this foolishness.

That human scale insensitivity means that we are irrationally sceptical about the infinite virtues of the singularitarian rapture, and should really get over it and donate more money to MIRI.

platedlizard
Aug 31, 2012

I like plates and lizards.

Nessus posted:

What I also notice is that this 3^^^^3 figure or whatever is so ridiculously huge that it becomes absurd, and I think it's because Yud realizes that if you did the numbers for any semi-sane number, such as "every human who's ever lived" (which would probably be a hundred billion tops?) or "all the inhabitants of a thousand Earths" (which we could probably top out at ten trillion), the effects would be meaningless.

What I also don't understand is what the hell this is supposed to prove, exactly. Like what's the theological point of the dust-speck thing? The Nessus seeks to understand this foolishness.

He's calculating human suffering in terms of pain-points, which is stupid because it doesn't account for pain that's just part of life, or suffering that has nothing to do with physical pain.

Cycloneman
Feb 1, 2009
ASK ME ABOUT
SISTER FUCKING

Legacyspy posted:

Is there a number large enough that you wouldn't pay a dollar to prevent a 1/(large number) chance of something terrible happening? Or do you really live your life accepting any minor inconvenience that prevents a very, very small chance, no matter how negligible, of something bad happening to you?

Not everyone prefers a minor inconvenience that would prevent a 1/(big enough number) chance of something happening to them over just taking that chance instead. Do you eat bananas? Would not eating bananas (or other foods similarly radioactive) constitute a minor inconvenience? (If not, just substitute something else you consider a minor inconvenience and pretend it too has a similarly low chance of a really bad consequence.) Bananas contain potassium that could decay and in the process emit a particle that strikes the DNA of one of your cells, making it cancerous, which by chance doesn't get caught in time by your body, and now you have cancer.

The point is, all you need to do to get a world where someone is tortured (a very bad consequence) to avoid a minor inconvenience across a sufficiently large number of people is to have a sufficiently large number of people, such that the probability of at least one person getting a very bad consequence is near unity. And that is why people who prefer dust specks are inconsistent.
The solution to this "problem" - that choosing dust specks over torture is "inconsistent" with choosing low possibility of death over minor inconvenience - is that the situations are not analogous at all. By avoiding torture I am achieving a terminal value (fairness) that I am not achieving by avoiding minor risks.

If you ignore all morality except mindless beep-boop happy unit utilons, then, yes, the dust speck/torture problem is solved by picking torture. But there's plenty of terminal values humans have besides maximizing the number of beep-boop happy unit utilons, such as an equitable distribution of goods and evils. The torture solution to the torture/dust speck problem is very much not an equitable distribution of goods and evils; the dust speck solution is.
