|
su3su2u1 posted:The definition of friendliness they create will just be an abstraction that shares some properties of what we might think of as "friendly." I've always thought so; this seems similar to what I've been saying: "it's an engineering problem". It would be really helpful to understanding what the gently caress they plan to do if they'd release any information about their formalization of the problem.
|
# ? Mar 25, 2015 06:43 |
|
SolTerrasa posted:I've always thought so; this seems similar to what I've been saying: "it's an engineering problem". And yea, I was basically saying "I agree it's an engineering problem, which is why formal math is basically useless." Judging by their publications, they are barely getting to the point where they can formalize "cooperate in the prisoner's dilemma" with category theory. An accomplishment they seem abnormally proud of (it's the only publication on the arxiv). It also uses 'timeless decision theory' in that they read the source code of the other bot. The capstone accomplishment of a decade of SIAI is a piece of code that makes it easy to make formal prisoner's dilemma agents play against each other. su3su2u1 fucked around with this message at 08:59 on Mar 25, 2015 |
# ? Mar 25, 2015 08:56 |
|
su3su2u1 posted:And yea, I was basically saying "I agree it's an engineering problem, which is why formal math is basically useless." Do they get into the halting problem issues there? That seems like the only super interesting part of that particular problem, if they have anything new to say about it. For those playing at home, here's what that means in this context. (the true halting problem is much more general) AI A simulates AI B and does whatever beats B -- B simulates A and does whatever beats A. But they'll compute forever at this rate -- to simulate B, A has to simulate itself and B's reaction -- but to simulate itself for that purpose, it has to simulate itself and B's reaction again, and so on. B has the same problem. Either could stop thinking at a certain recursion depth, but then the other would win by thinking one recursion level deeper. (as it would know by simulating it when the first one would stop) So deciding to terminate is always wrong. Either of these AIs is strong against opponents that can't attempt to simulate *it*, but how do you guarantee that? Krotera fucked around with this message at 09:04 on Mar 25, 2015 |
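The regress Krotera describes can be sketched in a few lines (a toy of my own, with invented agent names, not anyone's actual proposal): each agent's move is defined as a simulation of the other's, so neither ever halts, and any agent that adopts a fixed cutoff is fully predictable to an opponent that simulates one level deeper.

```python
# Two agents that each decide by simulating the other: neither ever
# finishes deciding, per the mutual-simulation regress described above.

def counter(move):
    """The move that beats `move` in a simple matching game."""
    return "heads" if move == "tails" else "tails"

def agent_a():
    return counter(agent_b())  # A simulates B to beat it...

def agent_b():
    return counter(agent_a())  # ...but B simulates A right back, forever.

try:
    agent_a()
except RecursionError:
    print("neither agent ever finishes deciding")

# A bounded agent stops at some depth -- but then its behavior is fully
# determined, and an opponent simulating one level deeper can read it off.
def bounded_a(depth):
    if depth == 0:
        return "heads"  # give up simulating and play a default move
    return counter(bounded_b(depth - 1))

def bounded_b(depth):
    if depth == 0:
        return "heads"
    return counter(bounded_a(depth - 1))

print(bounded_a(5))
```

The unbounded pair blows the stack immediately; the bounded pair terminates, but only because one side's cutoff makes it exploitable, which is exactly the dilemma in the post above.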
# ? Mar 25, 2015 08:58 |
|
Krotera posted:Do they get into the halting problem issues there? That seems like the only super interesting part of that particular problem, if they have anything new to say about it. They show that if the agent is defined via their model, it will cooperate with itself. If you choose not to represent the agent via their models (i.e. most real programs), I think all bets are off. http://arxiv.org/pdf/1401.5577.pdf
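For flavor: the paper's agents are defined with provability logic, but the "cooperate by reading source code" idea can be sketched with a much dumber toy (my simplification, not MIRI's Löb-based construction): an agent that cooperates exactly when the opponent's program is a copy of its own. Self-cooperation then holds by construction, and, as noted above, everything outside the model gets defected against.

```python
# Toy "CliqueBot": cooperate iff the opponent's source text is an exact
# copy of mine. The string stands in for real source inspection.

def make_cliquebot():
    src = "cliquebot-v1"  # stands in for the agent's own source code

    def play(opponent_src):
        return "C" if opponent_src == src else "D"

    return src, play

my_src, me = make_cliquebot()
their_src, them = make_cliquebot()

print(me(their_src), them(my_src))  # two copies cooperate with each other
print(me("defectbot"))              # anything unfamiliar gets defection
```

This dodges the halting issue entirely (string comparison always terminates) at the cost of being uselessly brittle: rename a variable and cooperation collapses, which is roughly why the real paper needs proof search instead.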
|
# ? Mar 25, 2015 09:20 |
|
Krotera posted:For those playing at home, here's what that means in this context. (the true halting problem is much more general) AI A simulates AI B and does whatever beats B -- B simulates A and does whatever beats A. But they'll compute forever at this rate -- to simulate B, A has to simulate itself and B's reaction -- but to simulate itself for that purpose, it has to simulate itself and B's reaction again, and so on. B has the same problem. Either could stop thinking at a certain recursion depth, but then the other would win by thinking one recursion level deeper. (as it would know by simulating it when the first one would stop) So deciding to terminate is always wrong. Either of these AIs is strong against opponents that can't attempt to simulate *it*, but how do you guarantee that? Well, the correct solution is to spend years building up an immunity to iocaine powder and poison both drinks.
|
# ? Mar 25, 2015 10:06 |
|
Darth Walrus posted:Expectation had nothing to do with it. Like most of this thread's posters, I stuck my hand in that bear trap voluntarily. I wasn't really coerced into it, but that's what I'm going with. Yep, that's what it's going to read on my obituary, that I only read this stuff because some guy I've known since I was 2 years old told me to. But I'm lying to you and myself. I've swum these seas of insanity voluntarily, I even read those loving profound "sequences". And it was partially Robo's Basilisk, that I only actually read about here, that brought me out of it. Because that poo poo was loving stupid. Did people actually fall for that? Really? I genuinely hope at least someone did because that would really be the funniest thing that ever happened. Everyone who knows me knows that I'm a colossal nerd, an aspie with just a hint of sociopath thrown in, enough to be able to lie through an evaluation and count as normal. And if the concept of "game recognises game" has any validity, or the "takes one to know one" cliche is more to your liking, take it from me, Yudkowsky is loving nuts. But entertaining. If this story was a bear trap it was still an enjoyable one. I'd see it published ahead of most other books out right now, but that's a really low bar to reach. Also, if I ever meet him, I plan on telling him that at least my brother is still alive, just for kicks. The inevitable broken nose will be worth it.
|
# ? Mar 25, 2015 16:25 |
|
su3su2u1 posted:And yea, I was basically saying "I agree it's an engineering problem, which is why formal math is basically useless." When you say "their publications" here, do you mean AI researchers taking a bayes approach to it, or Yud and his cult?
|
# ? Mar 25, 2015 17:00 |
|
Fried Chicken posted:When you say "their publications" here, do you mean AI researchers taking a bayes approach to it, or Yud and his cult? Yud-cult
|
# ? Mar 25, 2015 18:31 |
|
Specifically, Yud's research team, the Machine Intelligence Research Institute.
|
# ? Mar 25, 2015 23:42 |
|
I'm not sure I see the problem in the catgirl scenario. Like, if you get tired of your furry fuckdolls, just leave the volcano lair and hang out with some of the other people in paradise.
|
# ? Mar 26, 2015 03:12 |
|
Chapter 8: Positive Bias Part Five quote:
At least Eliezer is self-aware that Eliezarry is condescending and obnoxious. So that’s a start, I guess? No wait a minute, there’s also the bit about “that was secondary to finding out what she’d done wrong”. That implies that Eliezer thinks that if someone is being condescending to you, the onus is on you to be condescended to in order to find out “what you’d done wrong”. He’s merely justifying Eliezarry’s being so unlikable and annoying.
|
# ? Mar 26, 2015 03:56 |
|
JosephWongKS posted:Chapter 8: Positive Bias That can't be right. You'd have to be some sort of obnoxious, condescending prick to think like that.
|
# ? Mar 26, 2015 04:02 |
|
Pvt.Scott posted:I'm not sure I see the problem in the catgirl scenario. Like, if you get tired of your furry fuckdolls, just leave the volcano lair and hang out with some of the other people in paradise. The problem is in this statement: quote:He said, "Well, then I'd just modify my brain not to get bored—" If you're stuck in a lotus-eating Singularity scenario where you can edit your brain, motivation becomes a problem. You can delete your pain receptors, delete your boredom, delete any distracting influences. Given long enough, you'd eventually delete the thought process that could tell you why this all was a bad idea, so you'd keep deleting your mental impulses until you were down to a mindless puppet getting pumped full of pleasure signals. This is referred to as a "wire-heading" scenario. Big Yud doesn't see this as a potential problem of transhumanism because a Friendly AI wouldn't let you do this because shut up.
|
# ? Mar 26, 2015 04:24 |
|
SSNeoman posted:Right my mistake. It's actually 3^^7625597484987 which is some godawful number you get if you take 3 and then raise it to the 3rd power 7,625,597,484,987 times which is still the same thing as "a meaninglessly big number" which means my point still stands. Of course the actual number is irrelevant. I was just demonstrating that you had at least second-hand knowledge of what he had written, because if you had read the original post you would know 3^^^3 != 3^3^3^3. SSNeoman posted:But you know what fine, whatever. Lemme explain why Yud's conclusion is nuts. Sure. You can absolutely argue that dust-specks is the preferable choice. The point of the question though is one of consistency. If one says that there is no sufficiently large number where torture is preferable to dust specks, then to be logically consistent you must make sacrifices elsewhere. But most people don't behave in a way that is logically consistent with a preference for dust specks over torture. For example, they drive cars distances they could walk, accepting a 1/(some number) chance of causing other people severe harm, to save themselves the trouble of walking to the grocery store (a minor inconvenience). Of course most people agree that "some number" is so big that the chance of causing harm is too small to be worth giving up the convenience of driving to the grocery store. su3su2u1 posted:I mock Yudkowsky from a position of strength rather than of ignorance- I have a phd in physics, and Yud included (wrong) physics references in his fanfic. He incorrectly references psychology experiments in his fanfic. He uses the incorrect names for biases in his fanfic. He gets computational complexity stuff wrong in his fanfic. He does this despite insisting on page 1 that "All science mentioned is real science." Let me ask you- if an AI researcher can't get computational complexity correct, why should I trust anything else he writes?
If someone who has founded a community around avoiding mental biases can't get the references right in a fanfic, why should I trust his other writing? I am in no position to argue whether or not anything he has done w.r.t. A.I. is legitimate. And I didn't mean you guys were ignorant of science or anything. I just meant a lot of people were mocking stuff that they hadn't even personally read. There is nothing ironic about it. I said Artemis was a character like Harry in reply to a comment by Nessus first saying that Artemis was a character like Harry, and to my limited knowledge that seemed correct. If you want to argue whether or not Artemis is like Harry, argue with Nessus, not me. I don't know the books well enough. akulanization posted:I have no evidence this conversation actually occurred, but if it did that still doesn't change my point. Harriezer is held up as an example, he's powerful and has agency because he is a rationalist. He is obviously meant to inspire people to be rationalists like himself, a perception that isn't helped by the author saying poo poo like this: Well I'm too lazy to screenshot the Reddit P.M. because you'd probably just say that wasn't sufficient evidence (Photoshop and all). You can just P.M. him yourself on Reddit if you want. And it doesn't change your point? Why not just admit you were wrong instead of backtracking? I definitely don't disagree that he was meant to inspire people or lead them to the sequences. He was clearly intended to. However he was clearly not intended to be some sort of super rationalist, as you and others argued. akulanization posted:Also the actual number is immaterial if you want to approach this problem from the perspective of mathematics, it's an arbitrarily large real number.
If you grant the premise that both events are measured with the same scale and that minimizing that number is a priority, then the number doesn't matter; you can always select a number big enough that the "math" works out the way you want. The problem with the torture proponent's stance is that, as SSNeoman pointed out, they don't actually consider the problem. They wipe out a life because that's better than a trifling inconvenience to a huge number of people. I agree the actual number is immaterial. I was just using the actual number to demonstrate that he clearly hadn't read the original thing Eliezer had written. And the problem with the "dust speck people" is that if they are consistent in their preferences, then they must make sacrifices elsewhere. Only most people are not consistent and would not make these sacrifices if asked. petrol blue posted:Yeah, the 'position of ignorance' line probably wasn't a good move, Legacyspy, goons like physics and ai almost as much as Mt. Dew. Sorry. I didn't mean to imply that goons were ignorant of physics or math or computer science. I really, really appreciate the knowledgeable people here and have personally benefited from the posters in Caverns & Science/Academics. I meant that some (not all) were ignorant of what Eliezer had written. Nessus posted:Here you go, buddy! http://forums.somethingawful.com/showthread.php?threadid=3627012 This is what I mean. If Nessus is just being sarcastic or w.e that is fine. But if he honestly thinks that friendly A.I. is a worry over A.I. torturing us... then he doesn't understand what, right or wrong, Eliezer is talking about. It's a worry that the A.I., in pursuit of the goals we gave it, may have unintended consequences that could be bad for us.
This can be as simple as being a lovely A.I. that, when asked "How do we get rid of insects eating our sugar cane crop", says "introduce the cane toad", not understanding that the consequences of cane toad infestation will be far more annoying than the insects eating our crops. Or an A.I. that, for some reason we couldn't have foreseen, decides to "kill all humans" in pursuit of its goals. I think this is unlikely, but I do think the idea of "How do we get an A.I. to recommend courses of action that take into account the complex values we have (like not liking a cane toad infestation)?" is a useful one. Whether Eliezer is actually doing anything useful on this front, I can't tell. But afaik he's also one of the few people even talking about it. Night10194 posted:I mock the writing in the story because it's bad and we're on somethingawful.com, I have no qualms with this. Night10194 posted:but Yud's cult is legitimately fascinating to me as a religious studies guy who is interested in getting into studying emerging religions, and so his proselytization-fiction is actually really interesting from an academic standpoint. Yud's cult? Lesswrong is more or less dead. The only active following Yudkowsky has right now is /r/hpmor and that will slowly die off since nothing is being written. That is not to say there isn't a culture of people who share similar ideas to Yud, the "rationalist" culture or w.e. But I don't think they "belong" to Yud. I'm curious. How would you identify someone in his cult? Am I in his cult? Legacyspy fucked around with this message at 07:45 on Mar 26, 2015 |
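The up-arrow arithmetic disputed a few posts up is easy to check directly. In Knuth's notation, a^^b is a tower of b copies of a, and each extra arrow iterates the previous operator, so 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987, and 3^^^3 = 3^^7,625,597,484,987 is a power tower that many levels tall:

```python
# Knuth up-arrow notation for small inputs: n=1 is plain exponentiation;
# each additional arrow applies the previous operator b-1 times.

def up_arrow(a, n, b):
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7,625,597,484,987
# 3^^^3 = 3^^(3^^3): a tower of 3s some 7.6 trillion levels tall.
# Actually evaluating it is hopeless, which is rather the point.
```

So both quoted figures check out: 3^^^3 is neither 3^3^3 nor 3^3^3^3, and the only fact the argument needs is that it is unimaginably large.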
# ? Mar 26, 2015 07:30 |
|
I'd define it primarily as people who are actual contributors to his institute (financial ones), and who are members of the Less Wrong community. I have no idea if you're a member. You're pretty obviously a fan of the work, but that has no real bearing on if you self identify as a member of the community or of Yudkowsky or similar 'rationalist' orbits. My primary academic interest is in the fact that this fiction, and the Sequences, and many of his theories, have a cast very similar to a lot of Christian religious and apocalyptic dogma, despite their avowed atheism. I'm currently beginning to gather data and do reading on his work because of the fascinating parallels between the Cryonics stuff and the Christian resurrection of the Dead, the similarities between AI Go Foom and classic apocalypse, etc, because I have approval and support from my old advisers from my master's program that there might be a productive bit of work to be done on singularity and science fetish cults, and on the sort of cross pollination between commonplace religious ideas in the larger culture and the texture of what they end up believing. I'm at the very beginning of working on this, mind, and have a hell of a lot of reading to do still. Just some of the ideas and the general shape of things piqued my interest in their similarity and merit looking into from an academic standpoint.
|
# ? Mar 26, 2015 07:41 |
|
Legacyspy posted:This is what I mean. If Nessus is just being sarcastic or w.e that is fine. But if he honestly thinks that friendly A.I is a worry over A.I torturing us... then he doesn't understand what, right or wrong, Eliezer is talking about. Its a worry that the A.I, in pursuit of the goals we gave it, may have unintended consequences that could be bad for us. This can be as simple as being a lovely A.I that when asked "How do we get rid of insects eating our sugar cane crop" says "introduce the cane toad" not understanding that the consequences of cane toad infestation will be far more annoying than the insects eating our crops. Or an A.I that does for some reason we can't couldn't have foreseen decide to "kill all humans" in pursuit of its goals. I think this is unlikely but I don't think the idea of "How do we get an A.I to recommend courses of actions that take into account the complex values we have (like not liking a cane toad infestation) is a useful one. Whether Eliezer is actually doing anything useful on this front, I can't tell. But afaik hes also one of the few people even talking about it. Saying that no one else is even talking about this problem is like saying no one's talking about the problem of interplanetary diplomacy. Since there are no other inhabited planets to communicate with right now, it's not exactly a pressing question. If we end up colonising other planets or meeting another intelligent species then it'll be relevant, but until then we don't even know the parameters we'd be working within, so it's fairly useless to come up with any "solutions" just yet. How to make sure an AGI is "friendly" is a potential problem, but it's not one we can actually take any steps to solve until we know what form an AGI might actually take.
|
# ? Mar 26, 2015 07:48 |
|
Night10194 posted:I'd define it primarily as people who are actual contributors to his institute (financial ones), and who are members of the Less Wrong community. I have no idea if you're a member. You're pretty obviously a fan of the work, but that has no real bearing on if you self identify as a member of the community or of Yudkowsky or similar 'rationalist' orbits. My primary academic interest is in the fact that this fiction, and the Sequences, and many of his theories, have a cast very similar to a lot of Christian religious and apocalyptic dogma, despite their avowed atheism. I'm currently beginning to gather data and do reading on his work because of the fascinating parallels between the Cryonics stuff and the Christian resurrection of the Dead, the similarities between AI Go Foom and classic apocalypse, etc, because I have approval and support from my old advisers from my master's program that there might be a productive bit of work to be done on singularity and science fetish cults, and on the sort of cross pollination between commonplace religious ideas in the larger culture and the texture of what they end up believing. That sounds interesting, and a lot more fair than what I expected from most of the other times I've heard lesswrong described as a cult. As a note, I've never given him money. I don't have an account on lesswrong, though I've read some of it. I've lived in Berkeley for several years but I've never been to a lesswrong meetup or their offices despite them being down the street. I've only met Eliezer once, which consisted of me saying hi and telling him that I enjoyed hpmor. This was at the hpmor wrap party in Berkeley, which I went to because I figured it would be fun, and it was literally minutes away, so it would be lame if I didn't go. It was fun. I played a bunch of board/card games, ate pizza. Eliezer answered questions about hpmor. Some guy made a bunch of intentionally annoying magic decks.
A deck titled "existentialist risk" that was pretty much nothing but mana-accel and board wipes. The one thing I do give money to, as an indirect side effect of lesswrong and all that, is the Against Malaria Foundation.
|
# ? Mar 26, 2015 08:00 |
|
Legacyspy posted:Sure. You can absolutely argue that dust-specks is the preferable choice. The point of the question though is of consistency. If one says that there is no sufficiently large number where torture is preferable to dust specks, then to be logically consistent you must make sacrifices elsewhere. But most people don't behave in a way that is logically consistent with a preference to dust over specks. For example, they drive cars distances they could walk, a 1 / (some number) chance of causing other people serve harm, to save them trouble of walking to the grocery store ( a minor inconvenience). Of course most people agree that "some number" is so big that the chance of causing enough harm to worry about is not worth the trouble of driving to the grocery store. Your comparison is bad- choosing torture over dust specks is giving one person a certainty of ruining a life vs lots of minor inconveniences. Your grocery store example is choosing a small chance of causing harm vs a SINGLE minor inconvenience. You could go one step further and say everyone could choose not to drive to save some lives, but choosing not to drive would also cost lives. There is no inconsistency in choosing driving to the store over walking but also choosing dust specks over torture. quote:I am in no position to argue whether or not anything he has done w.r.t to A.I is legitimate. And I didn't mean you guys were ignorant of science or anything. I just meant a lot of people were mocking stuff that hadn't even personally read. So take my word that the majority of science references in HPMOR are wrong. quote:Yud's cult? Lesswrong is more or less dead. The only active following Yudoswky has right now is /r/hpmor and that will slowly die off since nothing is being written. That is not say there isn't a culture of people who share similar ideas to Yud, the "rationalist" culture or w.e. But I don't think they "belong" to Yud. I'm curious. How would you identify someone in his cult? 
Am I in his cult? Yudkowsky lives entirely off the donations of people who give him money to save the world.
|
# ? Mar 26, 2015 08:08 |
|
Tiggum posted:Saying that no one else is even talking about this problem is like saying no one's talking about the problem of interplanetary diplomacy. Since there are no other inhabited planets to communicate with right now, it's not exactly a pressing question. If we end up colonising other planets or meeting another intelligent species then it'll be relevant, but until then we don't even know the parameters we'd be working within, so it's fairly useless to come up with any "solutions" just yet. But this sounds like a totally interesting thing to explore. Have you ever seen http://www.princeton.edu/~pkrugman/interstellar.pdf ? (Obviously Eliezer is no Krugman.) I think interplanetary diplomacy would be a fascinating area of discussion, with information being bounded by the speed of light and all that. Interstellar would be even better. How would such diplomacy work when the first mover has such a significant advantage? Our rivals at Alpha Centauri could be settling Epsilon Eridani behind our backs, right as we negotiate settlement rights with Centauri command.
|
# ? Mar 26, 2015 08:13 |
|
Legacyspy posted:But this sounds like a totally interesting thing to explore. Sure, it's a great premise for science fiction. And there's tons of sci-fi about interplanetary diplomacy (and AGIs). There just aren't many people talking about it as a practical issue, because it isn't one.
|
# ? Mar 26, 2015 08:17 |
|
Tiggum posted:Sure, it's a great premise for science fiction. And there's tons of sci-fi about interplanetary diplomacy (and AGIs). There just aren't many people talking about it as a practical issue, because it isn't one. I'm mostly just piggy backing here to point out that Paul Krugman wrote a paper about interstellar trade because why the gently caress not
|
# ? Mar 26, 2015 08:25 |
|
Chapter 8: Positive Bias Part Six quote:
That does make sense, and it is an interesting experiment. I’ll try it out with my friends one of these days. Eliezarry’s also relatively undouchey in explaining the test, so points to him.
|
# ? Mar 26, 2015 08:38 |
|
Legacyspy posted:There is nothing ironic about it. I said Artemis was a character like Harry in reply to a comment by Nessus first saying that Artemis was a character like Harry, and to my limited knowledge that seemed correct. If you want to argue whether or not Artemis is like Harry, argue with Nessus, not me. I don't know the books well enough. I am desperately trying to remain civil in this discussion, and you are making it very hard. You posted in support of a position, and when you were presented with an argument or evidence that the position was incorrect you claimed to have never held the position in the first place. Legacyspy posted:Why not just admit you were wrong instead of backtracking? Legacyspy posted:Well I'm too lazy to screenshot the Reddit P.M. because you'd probably just say that wasn't sufficient evidence (Photoshop and all). You can just P.M. him yourself on Reddit if you want. And it doesn't change your point? Why not just admit you were wrong instead of backtracking? I definitely don't disagree that he was meant to inspire people or lead them to the sequences. He was clearly intended to. However he was clearly not intended to be some sort of super rationalist, as you and others argued. You don't appear to grasp that Harriezer may not be the infini-rational from the perspective of Big Yud, but in the story he is the only "rationalist" and he uses that to go on, and on, and on about how dumb people are and how they need to think more like him. He is definitely the High King of Rational Mountain because there are no challengers to the throne. When Yud goes, "well he isn't as rational as I am", he comes off as a hack who is trying to post hoc rationalize the mammoth failings of his character. I mean he has had years to come up with excuses for his terrible pacing and characterization. Legacyspy posted:I agree the actual number is immaterial. I was just using the actual number to demonstrate that he clearly hadn't read the original thing Eliezer had written.
And the problem with the "dust speck people" is that if they are consistent in their preferences, then they must make sacrifices elsewhere. Only most people are not consistent and would not make these sacrifices if asked.
|
# ? Mar 26, 2015 09:07 |
Legacyspy posted:This is what I mean. If Nessus is just being sarcastic or w.e that is fine. But if he honestly thinks that friendly A.I is a worry over A.I torturing us... then he doesn't understand what, right or wrong, Eliezer is talking about. Its a worry that the A.I, in pursuit of the goals we gave it, may have unintended consequences that could be bad for us. This can be as simple as being a lovely A.I that when asked "How do we get rid of insects eating our sugar cane crop" says "introduce the cane toad" not understanding that the consequences of cane toad infestation will be far more annoying than the insects eating our crops. Or an A.I that does for some reason we can't couldn't have foreseen decide to "kill all humans" in pursuit of its goals. I think this is unlikely but I don't think the idea of "How do we get an A.I to recommend courses of actions that take into account the complex values we have (like not liking a cane toad infestation) is a useful one. Whether Eliezer is actually doing anything useful on this front, I can't tell. But afaik hes also one of the few people even talking about it. The Nessus further believes this entire friendly AI whatever-the-hell involves numerous presuppositions which beg the question. (Example: Why is there only one AI? Presumably there would be prototypes. Example 2: What if the AIs get into fights? Example 3: What if the AIs discover that fantasy magic nanotechnology is actually impossible?) The Nessus expects if the AI kills us all, it will be because some rich dumbass put it in charge of something important in order to lay off some trained humans and juice up his (or her, but let's be real: his) bonus. The Nessus also thinks these people have managed to reinvent apocalyptic Christianity with hilarious precision, and that's pretty funny. Death is certain (though it is good to progress science to make it come later, with less pain and disability on the way). The computer will not save you.
|
|
# ? Mar 26, 2015 09:20 |
|
su3su2u1 posted:You could go one step further and say everyone could choose not to drive to save some lives, but choosing not to drive would also cost lives. Which is why I didn't say it. su3su2u1 posted:There is no inconsistency in choosing driving to the store over walking but also choosing dust specks over torture. Of course you can say that. I didn't formalize what walking to the grocery store instead of driving is, so if you do it differently then it won't be inconsistent. However, I can show that preferring dust-specks to torture does mean that, to be consistent, you have to bite the bullet elsewhere. Those that prefer dust-specks to torture are saying that there is no sufficiently large number such that a non-negligible (it is an assumption of the original discussion that dust specks are not negligible, see the original discussion) inconvenience experienced by that number of people is less preferable than a far larger magnitude of harm to one person. su3su2u1 posted:Your comparison is bad- choosing torture over dust specks is giving one person a certainty of ruining a life vs lots of minor inconveniences. I'm fairly certain that certainty is a red herring here. Is the certainty of it really that relevant? What if it is a (Big number - 1)/(Big Number) chance? Does that really change things? Especially since the people who prefer dust-specks to torture are arguing there is no sufficiently large number, I can just pick whatever number of people being trivially inconvenienced results in as close to unity a probability of at least one person getting whatever I want to happen to them (as long as it has non-zero probability for a single trial). Does that really change things? Is there a number large enough such that you wouldn't pay a dollar to prevent a 1/(large number) chance of something terrible happening?
Or do you really live your life accepting every minor inconvenience that prevents a very, very small chance, no matter how negligible, of something bad happening to you? Not everyone prefers a minor inconvenience which would prevent a 1/(big enough number) chance of something happening to them over just taking that chance instead. Do you eat bananas? Would not eating bananas (or other foods similarly radioactive) constitute a minor inconvenience (if not, just substitute something else you consider a minor inconvenience and pretend it too has a similarly low chance of a really bad consequence)? Bananas contain potassium that could decay and in the process emit a particle that strikes the DNA of one of your cells, making it cancerous, which by chance doesn't get caught in time by your body, and now you have cancer. The point is, all you need to do to get a world where someone is tortured (a very bad consequence) to avoid a minor inconvenience across a sufficiently large number of people is to have a number of people large enough that the probability of at least one person getting a very bad consequence is near unity. And that is why people who prefer dust specks are inconsistent. Legacyspy fucked around with this message at 09:59 on Mar 26, 2015 |
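The "near unity" step in the post above is just the standard at-least-one-event formula, 1 - (1 - p)^N. A quick sketch with purely illustrative numbers (the per-trial probability here is made up, not a real traffic statistic):

```python
# If each of N independent trials carries a tiny probability p of a
# catastrophe, P(at least one catastrophe) = 1 - (1 - p)^N, which climbs
# toward certainty once N is large compared to 1/p.

import math

def p_at_least_one(p, n):
    # computed via log1p/exp to stay accurate for huge n and tiny p
    return 1.0 - math.exp(n * math.log1p(-p))

p = 1e-9  # assumed per-trip chance of causing severe harm (illustrative)
for n in (10**6, 10**9, 10**12):
    print(f"N = {n:.0e}: P(at least one) = {p_at_least_one(p, n):.6f}")
```

With these assumed numbers, a million trips leaves the risk around 0.1%, a billion pushes it to roughly 63%, and a trillion makes disaster all but certain, which is the whole lever the argument turns on.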
# ? Mar 26, 2015 09:33 |
|
akulanization posted:I am desperately trying to remain civil in this discussion, you are making it very hard. You posted in support of a position, when you were presented with an argument or evidence that the position was incorrect you claimed to have never held the position in the first place. No. I never claimed to have never held that position. I absolutely said Harry was like Artemis. However, when you tried to debate with me that Harry was not like Artemis, I tried to explain that I had only said so because Nessus raised that Harry was like Artemis, and I went with it because to my limited knowledge that seemed correct, and that if you wanted to argue that Harry was not like Artemis you were better off arguing with Nessus. There is nothing inconsistent or ironic about this series of statements. I'd challenge you to show me where I "claimed to have never held the position in the first place." What I said was: quote:So, first of all it was raised by Nessus, not me, that said Harry is like Artemis. I could remember some similarity so I rolled with. This is not denying that I said Harry was like Artemis. This is me saying that I had agreed with Nessus that Harry was like Artemis (then afterwards qualifying it with the statement that my knowledge about Artemis was dated). Denying it would be saying, "Nessus said it, not me". Which I certainly didn't say. This may be an issue of the colloquialism "rolled with it" and my failure to complete the sentence. However, I am done having an argument about an argument. If you want to debate whether or not Harry is like Artemis, talk to Nessus, not me. He is the one who first made the connection. akulanization posted:You don't appear to grasp that Harriezer may not be the infini-rational from the perspective of Big Yud, but in the story he is the only "rationalist" and he uses that to go on, and on, and on about how dumb people are and how they need to think more like him.
He is definitely the High King of Rational Mountain because there are no challengers to the throne. When Yud goes, "well he isn't as rational as I am," he comes off as a hack who is trying to post hoc rationalize the mammoth failings of his character. I mean he has had years to come up with excuses for his terrible pacing and characterization. Ok. Sorry. I thought you were arguing something else. I don't disagree that within the story Harry is the king of rationality mountain. No one else is as good as him. However the initial conversation I was having, which you joined, was not whether Harry was intended to be the best rationalist in the story, but whether he was meant to be some sort of uber-rationalist supermind in general. Not the best rationalist in the story, but an example of near-perfect rationality in general. It is this claim I disagree with. I no longer think you are making this claim (are you?) so I don't disagree with you. If you attempt to make an argument about an argument regarding this, I will not be joining that. akulanization posted:You are doing the thing where you contradict yourself, again. If the number is immaterial to the argument or the counter argument then obviously it does not matter if it was quoted correctly. Since both the argument for torture and the argument against it do not rely on a precise value of N, then there is no use distinguishing between ~10^38 and an even bigger number. So if you are trying to argue that they didn't read Yud, it's trivial to show they did since they loving quoted him. And if you are arguing they didn't understand him, then you need to actually show how a component they missed addresses or renders moot some part of their argument; the precise value of a stupidly huge number isn't doing that. I'm not contradicting myself. You are correct that it doesn't matter w.r.t. the validity of the argument. But I wasn't using the failure to get the number right as a strike against his argument. 
It was an example of how some people only learn what they know about Eliezer's writing from other people. Here is the original discussion: http://lesswrong.com/lw/kn/torture_vs_dust_specks/ Now, a guy on the first page said that 3^^^3 is equal to 3^3^3. You would have to be really, really stupid to read that post and come to the conclusion that 3^3^3 = 3^^^3. I didn't think the poster was that stupid. So I was only left with the conclusion that the poster did not read that post. I was using the number to demonstrate that the poster was mocking torture vs dust specks without having read the post on torture vs dust specks. I think this is a perfectly legitimate use of the number, regardless of the number's relevance to the actual argument. Similarly I will not be continuing any sort of argument about an argument here. Edit: Actually this is taking too much of my time and making me too stressed. So I'm not coming back until I forget why I left. For those of you interested in torture vs dust specks, it's not as simple as "torture! bad!" If you prefer dust specks to torture, then you're forced to bite the bullet elsewhere. Most people don't, which is inconsistent. You can do some googling to see examples of consequences of preferring dust specks to torture. Legacyspy fucked around with this message at 10:33 on Mar 26, 2015 |
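For anyone lost in the notation being argued over: 3^^^3 is Knuth up-arrow notation, and the ordinary right-associative power 3^3^3 is actually equal to 3^^3, not 3^^^3. A quick sketch of the definition (3↑↑↑3 itself is a power tower of roughly 7.6 trillion threes, far too large to compute, so the code stops at 3↑↑3):

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's a ↑^n b: one arrow is plain exponentiation, and each
    extra arrow iterates the previous operation b-1 times."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
print(3 ** 3 ** 3)        # right-associative 3^3^3 = 7625597484987 = 3^^3
# 3^^^3 = 3^^(3^^3): a tower of 7,625,597,484,987 threes -- not computable here.
```

So conflating 3^^^3 with 3^3^3 understates the number by an incomprehensible margin, which is the mix-up the post above is pointing at.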
# ? Mar 26, 2015 09:56 |
|
"Legacyspy" posted:examples of consequences of preferring dust specks to torture. Being a decent human being with basic empathy. E: people are anything but consistent in their views and practices.
|
# ? Mar 26, 2015 12:50 |
|
I sat down and thought about the best choice for ages, invalidating the benefit of choosing optimally.
|
# ? Mar 26, 2015 13:14 |
|
petrol blue posted:I sat down and thought about the best choice for ages, invalidating the benefit of choosing optimally. You fool! Now neither event will take place!
|
# ? Mar 26, 2015 13:25 |
|
Legacyspy posted:The point is, all you need to do to get a world where someone is tortured (very bad consequence) to avoid a minor inconvenience across a sufficiently large number of people, is have a sufficiently large number of people such that the probability of at least one person getting a very bad consequence is near unity. And that is why people who prefer dust speckers are inconsistent. What even is your point? I've read your post several times and it's gibberish to me.
|
# ? Mar 26, 2015 13:46 |
Remember, your instincts are always right. Mind you, if your instincts are telling you torture is preferable to a dust speck in the eye given just about any number of examples, you're pretty hosed up in addition to that.
|
|
# ? Mar 26, 2015 14:30 |
|
There are a bajillion problems with what you wrote, Legacyspy, but the one that stands out to me is the torture v flyspeck argument. At the core, the problem with that is that torture is categorically different than flyspecks and the two are not comparable. There is a difference in kind, not simply a difference in magnitude. That's why you can't just say "I can change the scenario and you are still wrong!" Because the way you set up the scenario matters. CO2 emissions causing a chance of asthma and therefore death is categorically different than torture. They are different kinds of moral harm. EDIT: For instance, things like agency matter. It's very different if someone volunteers to be tortured to save everyone from inconvenience than if we decide to torture them. The certainty of harm matters. Those are both key distinctions between "eating a food with a possible radioactive thing that could maybe cause me to get cancer which will kill me" and torture. The number of other people the harm is laundered through matters. This affects things like how culpable I am for the use of child labor because of my purchase of an iPhone or whatever tech gadget: I am culpable, and probably shouldn't buy those items, but it's far less morally indefensible than torture. That's why it's more morally okay to buy those items and then still work to end the abuses of the companies abroad by applying political or moral pressure, or through other means. In short, Yud's basically ineffectually flailing at the philosophical debate over utilitarianism without engaging with the centuries of moral thought that's gone into the issue. That's fine and all, but claiming it's some sort of moral breakthrough because of "big numbers" is super dumb. As people in the thread pointed out and you ignored. Also, moral inconsistency is part of our lives. We are not required, as humans, to be morally perfect and consistent. 
EDIT the second: You should read "The Ones Who Walk Away From Omelas" by Ursula Le Guin. Arcturas fucked around with this message at 15:33 on Mar 26, 2015 |
# ? Mar 26, 2015 15:05 |
|
Legacyspy posted:
Being inconsistent in that way is a good thing though? edit: is it also inconsistent to prefer a one in a billion chance of 3^^^^^^^3 people getting dust specks in their eyes over a guarantee of one person being tortured for 50 years? James Garfield fucked around with this message at 15:27 on Mar 26, 2015 |
# ? Mar 26, 2015 15:18 |
|
I think he's saying that, given enough dust specks, people are going to lose eyes or get distracted and die. If you alter the problem in such a way, the torture does start to look more preferable.
|
# ? Mar 26, 2015 15:30 |
|
Added Space posted:I think he's saying that, given enough dust specks, people are going to lose eyes or get distracted and die. If you alter the problem in such a way, the torture does start to look more preferable. Nah, because torture is a deliberate act of evil and accidents/eye injuries from dust specks aren't. Accidentally hitting a child on the road because something interfered with your vision sucks for you and the child, but it isn't the same level of awful as taking that kid and torturing them for 50 years. A person has to be a real rear end in a top hat to think they are at all comparable imo
|
# ? Mar 26, 2015 16:04 |
|
It's also unbelievably stupid because it's the "well what if I multiply it by INFINITY? " trick they love so much. Hell, that's the entire sum of Yud's arguments about cryonics right there.
|
# ? Mar 26, 2015 16:10 |
platedlizard posted:Nah, because torture is a deliberate act of evil and accidents/eye injuries from dust specks aren't. Accidentally hitting a child on the road because something interfered with your vision sucks for you and the child, but it's isn't the same level of awful as taking that kid and torturing them for 50 years. A person has to be a real rear end in a top hat to think they are at all comparable imo What I also don't understand is what the hell this is supposed to prove, exactly. Like what's the theological point of the dust-speck thing? The Nessus seeks to understand this foolishness.
|
|
# ? Mar 26, 2015 18:40 |
|
Nessus posted:What I also notice is that this 3^^^^3 figure or whatever is so ridiculously huge that it becomes absurd, and I think it's because Yud realizes that if you did the numbers for any semi-sane number, such as "every human who's ever lived" (which would probably be a hundred billion tops?) or "all the inhabitants of a thousand Earths" (which we could probably top out at ten trillion), the effects would be meaningless. That human scale insensitivity means that we are irrationally sceptical about the infinite virtues of the singularitarian rapture, and should really get over it and donate more money to MIRI.
|
# ? Mar 26, 2015 18:51 |
|
Nessus posted:What I also notice is that this 3^^^^3 figure or whatever is so ridiculously huge that it becomes absurd, and I think it's because Yud realizes that if you did the numbers for any semi-sane number, such as "every human who's ever lived" (which would probably be a hundred billion tops?) or "all the inhabitants of a thousand Earths" (which we could probably top out at ten trillion), the effects would be meaningless. He's calculating human suffering in terms of pain-points, which is stupid because it doesn't account for pain that's just part of life, or for suffering that has nothing to do with physical pain.
|
# ? Mar 26, 2015 18:56 |
|
Legacyspy posted:Is there a number large enough such that you wouldn't pay a dollar to prevent a 1/(large number) chance of preventing something terrible from happening? Or do you really live your life taking any minor inconvenience that prevents no matter how negligible, a very, very small chance of something bad happening to you? If you ignore all morality except mindless beep-boop happy unit utilons, then, yes, the dust speck/torture problem is solved by picking torture. But there's plenty of terminal values humans have besides maximizing the number of beep-boop happy unit utilons, such as an equitable distribution of goods and evils. The torture solution to the torture/dust speck problem is very much not an equitable distribution of goods and evils; the dust speck solution is.
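The aggregation point in the post above can be made concrete. A toy sketch (every harm number below is made up for illustration): a straight utilitarian sum prefers torture once the speck population is huge enough, while a distribution-sensitive rule such as "minimize the worst harm any single individual suffers" always prefers the specks:

```python
TORTURE_HARM = 1e9   # hypothetical disutility of 50 years of torture, one person
SPECK_HARM = 1e-6    # hypothetical disutility of one dust speck
N_SPECKS = 10 ** 40  # stand-in for the 3^^^3 speck recipients

def total_harm(choice: str) -> float:
    # Linear utilitarian aggregation: just sum everyone's disutility.
    return TORTURE_HARM if choice == "torture" else SPECK_HARM * N_SPECKS

def worst_individual_harm(choice: str) -> float:
    # Distribution-sensitive aggregation: look only at the worst-off person.
    return TORTURE_HARM if choice == "torture" else SPECK_HARM

# Under the linear sum, the specks total vastly more harm, so torture "wins".
print(total_harm("specks") > total_harm("torture"))
# Under the worst-off rule, no individual is ever worse off under specks.
print(worst_individual_harm("specks") < worst_individual_harm("torture"))
```

The two rules disagree no matter how big N_SPECKS gets, which is exactly the claim that the choice of aggregation rule, not the size of the number, is doing the work.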
|
# ? Mar 26, 2015 20:13 |