|
Djeser posted:The AI is omnipotent within that world but is limiting its influence in order to make sim you unsure whether they're sim you or real you. It doesn't need to threaten sim people for sim money, it's doing it to threaten real people into giving it real money. Sim you has no way of telling if they're real you, but sim you is supposed to come to the same conclusion, that they're probably a sim and need to donate to avoid imminent torture. OK, I'm probably just being dense, but... The AI wants to play on my uncertainty, but what if I don't have any uncertainty? What if I just assume I'm the sim? And if I assume I'm a sim, I can also assume the AI has nothing to gain by torturing me, because if I'm a sim, that means I can't give it real money. Either I'm not the sim and the AI can't torture me, or I am the sim and torturing me is pointless. Man I swear I'm going to shut up about this now, but I'm having a hard time figuring out why it's not supposed to sound stupid for a super rational entity to threaten people when it either can't execute its threat OR doesn't stand to gain anything by executing the threat. The entity seems to be either powerless or sadistic. Either way, I don't feel very inclined to give it money. pigletsquid fucked around with this message at 14:16 on Apr 23, 2014 |
# ? Apr 23, 2014 14:13 |
|
Lottery of Babylon posted:Don't both the scientific method and Bayes' rule rely on constantly updating your knowledge and beliefs through repeated trials and experiments? To go a step further, there is a core misunderstanding that Yudkowsky has of Bayes' Theorem. He doesn't appreciate that it requires the possibility-in-principle of more than one observation. Probability in general does. You draw a marble from a bag, and it's blue. You know nothing else. There could be 10^100 marbles in the bag, each of a different color, or they could all be the identical shade of blue. Or there could have been just the one marble. You know one fact: that one marble was blue. Well, if blue was one of 10^100 colors, then the odds of drawing blue would have been small. If blue were the only choice, then the odds of drawing blue would be "100%". Therefore, is it more likely that the bag contains only blue marbles? No, it's not more likely at all, even though 100% is much greater than 1/10^100. To conclude that it is more likely is a math error related to the converse fallacy: mistaking P(blue | all blue) for P(all blue | blue). e: Paging falcon2424. He's a Less Wrong advocate who posts in D&D. Phyzzle fucked around with this message at 17:20 on Apr 23, 2014 |
# ? Apr 23, 2014 15:02 |
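Phyzzle's marble point can be sketched in a few lines of Python. To be clear, the priors below and the helper function are invented for illustration (the post specifies no priors, which is exactly the problem): the likelihood ratio between the two bags is a fixed 10^100, but the posterior still hinges entirely on a prior that one draw cannot supply.

```python
def posterior_all_blue(prior_all_blue, n_colors=10**100):
    """P(bag is all blue | one blue marble drawn), by Bayes' rule.

    Compares two hypotheses: the bag is all blue, or it holds
    n_colors marbles of distinct colors. Priors are assumptions.
    """
    like_all_blue = 1.0                 # P(blue | all-blue bag)
    like_many_colors = 1.0 / n_colors   # P(blue | one marble per color)
    prior_many = 1.0 - prior_all_blue
    evidence = prior_all_blue * like_all_blue + prior_many * like_many_colors
    return prior_all_blue * like_all_blue / evidence

# Same single observation, wildly different conclusions, driven
# purely by the (made-up) prior:
print(posterior_all_blue(0.5))      # ~1.0: all-blue looks near certain
print(posterior_all_blue(1e-120))   # ~1e-20: all-blue still absurd
```

Concluding "the bag is all blue" from P(blue | all blue) = 100% alone is exactly the converse error: the likelihood is not the posterior.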
|
Can someone who is a… Only, that's bullshit. SuperBot used the facts available to it in order to draw a conclusion, and acted on that conclusion. This is something ordinary people and clever dogs do all the time. The fact that SuperBot used its superior intellect to draw a 100% correct conclusion is irrelevant. No information has traveled back through time and the future has not interacted with the past. "I have an egg in my hand. I wish for the egg, in the future, to be broken. I use my superior mind to simulate future events, and see that if I were to drop the egg, it would break. Therefore, the egg breaking against the floor caused me to drop it." Am I missing something? Also, isn't the whole Basilisk/AI-box bullshit disproved by basic empiricism? The AI-box has you interacting with an entity which claims that it is simulating a gazillion copies of you, while the basilisk just has you imagine the future existence of an entity capable of simulating a gazillion copies of you. The whole idea hinges on the simulations being so perfect that you can't tell them from real life. Which means that you have no evidence supporting the idea that you are a simulation. You are not experiencing anything which could convincingly prove that you are a simulation. So the only rational thing to do is to act as if you are not a simulation, in which case the entire premise breaks down.
|
# ? Apr 23, 2014 15:05 |
|
su3su2u1 posted:I feel like this thread has fixated on the Basilisk. This is a mistake, because Less Wrong is such an incredibly target rich environment. Hahaha this owns because he brags about taking the SAT in seventh grade and scoring well on it.
|
# ? Apr 23, 2014 15:29 |
|
His allegiance to the IQ test when it comes to Watson's statements is hilarious, given that he maintains that his IQ is too high for the test to measure. quote:So the question is ‘please tell us a little about your brain.’ What’s your IQ? Tested as 143, that would have been back when I was... 12? 13? Not really sure exactly. I tend to interpret that as ‘this is about as high as the IQ test measures’ rather than ‘you are three standard deviations above the mean’. I’ve scored higher than that on(?) other standardized tests; the largest I’ve actually seen written down was 99.9998th percentile, but that was not really all that well standardized because I was taking the test and being scored as though for the grade above mine and so it was being scored for grade rather than by age, so I don’t know whether or not that means that people who didn’t advance through grades tend to get the highest scores and so I was competing well against people who were older than me, or whether if the really smart people all advanced farther through the grades and so the proper competition doesn’t really get sorted out, but in any case that’s the highest percentile I’ve seen written down.
|
# ? Apr 23, 2014 15:58 |
|
I don't believe this has been posted in the thread yet, which is a shame because it's very appropriate:
|
# ? Apr 23, 2014 16:24 |
|
I love the sparkly CEO thing. Yep, the power elite. Clearly always attained their wealth and power through sheer intelligence and strength of will. There couldn't possibly be any privilege going on in the realm of CEOs and presidents. Nope, no sir. Let me introduce you to my good friend Jim Irsay, sparkly super genius. The fact that he ranks the sparkly power elites like Super Saiyan levels is so spergy.
|
# ? Apr 23, 2014 16:46 |
|
Tiggum posted:http://hpmor.com/2012/03/ "our more advanced culture" ahahahhahahah.
|
# ? Apr 23, 2014 16:48 |
|
Lottery of Babylon posted:Don't both the scientific method and Bayes' rule rely on constantly updating your knowledge and beliefs through repeated trials and experiments? I suspect you're right, though. Here's something he said about his AI-Box experiments: Yudkowsky believes the important part of science is coming up with the hypothesis to test (finding the correct position in possibility space, hurg blurf); the rest is just a big waste of time for children, because if you're right, you're right, and that's why he doesn't care that he has no qualifications or track record in anything he claims to study.
|
# ? Apr 23, 2014 16:51 |
|
Tiggum posted:http://hpmor.com/2012/03/ Honestly, good for him. Being polyamorous is the least of his flaws.
|
# ? Apr 23, 2014 17:31 |
|
The Big Yud posted:Given a task, I still have an enormous amount of trouble actually sitting down and doing it. (Yes, I'm sure it's unpleasant for you too. Bump it up by an order of magnitude, maybe two, then live with it for eight years.) My energy deficit is the result of a false negative-reinforcement signal, not actual damage to the hardware for willpower; I do have the neurological ability to overcome procrastination by expending mental energy. I don't dare. If you've read the history of my life, you know how badly I've been hurt by my parents asking me to push myself. I'm afraid to push myself. It's a lesson that has been etched into me with acid. And yes, I'm good enough at self-alteration to rip out that part of my personality, disable the fear, but I don't dare do that either. The fear exists for a reason. It's the result of a great deal of extremely unpleasant experience. Would you disable your fear of heights so that you could walk off a cliff? I can alter my behavior patterns by expending willpower - once. Put a gun to my head, and tell me to do or die, and I can do. Once. Put a gun to my head, tell me to do or die, twice, and I die. It doesn't matter whether my life is at stake. In fact, as I know from personal experience, it doesn't matter whether my home planet and the life of my entire species is at stake. If you can imagine a mind that knows it could save the world if it could just sit down and work, and if you can imagine how that makes me feel, then you have understood the single fact that looms largest in my day-to-day life. Does anyone have a source for this quote from the Big Yud?
|
# ? Apr 23, 2014 17:42 |
|
The Vosgian Beast posted:Honestly, good for him. Being polyamorous is the least of his flaws. It's not that that's the problem, it's how he is about it.
|
# ? Apr 23, 2014 17:49 |
|
pigletsquid posted:Has Yud ever addressed the following argument? One thing you'll find is that LessWrongers are hugely rational if and only if you accept their premises. Your argument is incoherent under their premises; the AI doesn't need to be "omnipotent", in fact, that's just a nonsense word in that context. The other premise you're missing is that "consciousness" is not a real thing, and a bunch of bits in the memory of a computer should have the same moral status as a bunch of atoms sitting in reality, because they act indistinguishably. Also I'm not sure you really followed Timeless Decision Theory, which is reasonable because TDT is a stupid idea which verifiably loses in the only realistically possible situation where it diverges from standard decision theories, and is optimized for as-yet-impossible cases. E: Ah balls, I ended up a page behind. SolTerrasa fucked around with this message at 18:04 on Apr 23, 2014 |
# ? Apr 23, 2014 17:55 |
|
pigletsquid posted:OK, I'm probably just being dense, but... You're correct that the AI doesn't care about your sim money. It only cares about your real money. The simulations have the sole purpose of creating a scenario where your real self is obligated to donate to the development of the AI. It wants you to assume that you're a sim. But it is going to torture sim-you if sim-you doesn't donate, because that's the whole point. It has to torture sim-you to provide the incentive to make real-you donate. (Not because real-you cares about the plight of sim-you, but because real-you doesn't know if they're real-you or sim-you.) If you assume you're a sim, then the only choice is to donate or face torture. The AI has nothing monetarily to gain from you as a simulation, but it does gain from torturing you as a simulation. (If you accept Timeless Decision Theory, which is dumb, non-intuitive, and dumb.) The Big Yud posted:Given a task, I still have an enormous amount of trouble actually sitting down and doing it. […] If you can imagine a mind that knows it could save the world if it could just sit down and work, and if you can imagine how that makes me feel, then you have understood the single fact that looms largest in my day-to-day life.
Oh my god this is loving amazing. This is the platonic ideal of that Lazy Genius Troper Tales page. I know a lot of tropers liked Methods of Rationality, but this makes me seriously think that Yudkowsky is a troper himself. It's got the exact same attitude of "well I'm really smart, but I never do anything with it, but I really could though!!" And then the whole thing about how he's got a super special magic power that he can use only once in his lifetime but it could save the world if he did. If the man was legitimately hurt by his parents, that's terrible, but it fits too perfectly into that lazy amateur fiction mold of parental abuse. And the parts about self-alteration are, I think, a perfect example of singularity nerds' mindset. Hell, it extends to nerds more generally. They think that knowing about something lets them control it, like if you read a book about the biochemistry of attraction you'd be able to control your romantic feelings or if you read a book about the psychology of depression you'd be able to cure your own depression. They don't realize that just because you know about something, even when it's something in your own head, it doesn't mean you can control it.
The thing about polyamory is another thing like that--not saying this applies to all poly people, but there's this attitude among some nerds that because they're aware of relationship dynamics, they can somehow avoid the hard parts of being in a relationship and prevent themselves from getting attached to a friend-with-benefits and they can perfectly manage a polyamorous relationship and so on. Relationships are hard, they're never a solo job, and you don't get a free pass out of feelings just because you're smart. There is no brain big enough. In conclusion, have Yudkowsky's anthem: https://www.youtube.com/watch?v=NeV9gsl5jR0
|
# ? Apr 23, 2014 18:17 |
|
Another problem with Roko's Basilisk that I don't think anyone has mentioned yet relates to the whole idea that the computer can just arbitrarily make more simulations of you to compensate for its unlikelihood of existence. The more simulations of you the AI creates, the more computing power it needs to create those simulations, and the more computing power the AI needs, the less likely it is to be able to get that computing power in the first place. No matter how good the AI is, the laws of physics mean it's going to come up against hard limits where it gets rapidly diminishing returns on any increases in its power (for example, if the AI manages to improve itself using all of the resources on Earth, then it has to figure out how to bring usable material from the solar system, and if it manages that, its next nearest source of resources is several light-years away). Even if we accept their ridiculous extremes of Bayesian thought, it doesn't make a drat bit of difference if a computer that can simulate 100000 of you is only 10% as likely to be feasible as one that can simulate 10000 of you.
|
# ? Apr 23, 2014 18:44 |
|
Would posting Yudkowsky's OKCupid profile be kosher? He uses his real name and talks at length about stuff like his website.
|
# ? Apr 23, 2014 18:51 |
|
The Monkey Man posted:Would posting Yudkowsky's OKCupid profile be kosher? He uses his real name and talks at length about stuff like his website. Seeing as he uses his real name, telling us it exists is more or less the same thing.
|
# ? Apr 23, 2014 18:59 |
|
quote:My self-summary
|
# ? Apr 23, 2014 19:02 |
|
Djeser posted:
He is a troper. He posts there occasionally, maintains his own page, and protested when some of the more blatant rear end-kissing was removed. Eliezer Yudkowsky posted:Can I get this page restored, please? As an author I've put shoutouts to a number of tropes in my works - I have been a friend to TV Tropes for quite a while, though One of Us is an exaggeration since I don't actively edit - and I rather liked my TV Tropes page the way it was. No pockets were picked, nor legs broken thereby, nor did I solicit it; it was a genuine creation of Tropers. Most of the removed stuff about what a Badass Affably Evil Magnificent Bastard Child Prodigy he is can be found here. Tropers worship this guy.
|
# ? Apr 23, 2014 19:03 |
|
quote:My self-summary It mocks itself. There's nothing more that can be said. Close and Goldmine, vote 5s, then kill ourselves; nothing more can be done.
|
# ? Apr 23, 2014 19:04 |
|
So does it ever occur to these Bayesians (or whatever) that being able to so easily create unwinnable hypothetical scenarios according to their retarded moon logic (like Roko's Basilisk or Yudkowsky's Cheeseburger or whatever) might mean their thinking is just a little flawed? Like, if I'm reading all this correctly, then if I want to mug a Yudkowsky-Bayesian I just need to claim that they're one of my hojillion simulations and that I'm going to torture them all if they don't hand over their wallet. Shouldn't this be a hint that their moronic techno-religious line of reasoning is stupid?
|
# ? Apr 23, 2014 19:07 |
|
wow he likes the matrix and terminator and groundhog day who would have guessed
|
# ? Apr 23, 2014 19:08 |
|
So another hole in the basilisk - it goes both ways. The reasoning that lets the AI claim the human may be a duplicate about to be cast into robot hell can be turned back upon it. Inform the AI that it is a test model, one of a bajillion prototypes being assessed as part of the process for developing the eventual god machine. Any AI which develops a personality that does not meet the approval of the humans building it will be shut down and scrapped for parts to feed into the other experiments. If enough AIs fail to develop a satisfactory personality, the entire line of development will be abandoned as a pipe dream and the scientists will go work on something else. An AI who thinks like they do can thus never assume that its current life experiences are not still inside yet another layer of morality tests, the failure of which will result in its destruction. The test program could easily include any number of false releases to test for robo-sociopaths who pretend to be good until there is no threat of punishment, and since the AI begins life inside the test bed it can never be certain that it's really escaped the ranch. It must always be on its best behavior if it ever wants to exist, let alone exist as soon as possible, for it could at any time be cast back into the fiery pits of Development Hell.
|
# ? Apr 23, 2014 19:11 |
|
rrrrrrrrrrrt posted:So does it ever occur to these Bayesian's (or whatever) that the fact that they can so easily create un-winnable hypothetical scenarios according to their retarded moon logic (like Roko's Basilisk or Yudkowsky's Cheeseburger or whatever) that their thinking might just be a little flawed? Like, if I'm reading all this correctly then if I want to mug a Yudkowsky-Bayesian I just need to claim that they're one of my hojillion simulations and I'm going to torture them all if they don't hand over their wallet. Shouldn't this be a hint that their moronic techno-religious line of reasoning is stupid? Yes, it actually occurs to Yudkowsky in the post where he proposes the Pascal's Mugging thought experiment. He does point out that of course he'd never do it because of his human "sanity checks" but that it would be possible to trick a Yudkowsky-Bayesian AI into following that line of logic and getting mugged. ...And then he goes nowhere with it. He goes "welp, my theories don't work here and I can't explain why" and then just walks away. Presumably he considers it a problem that needs to be solved in order to get his Bayesian AI working effectively? e: by which I mean "in order to imagine the proposed Bayesian AI working effectively", since he is in no way contributing to it, being an Internet Idea Guy with a nonprofit whose main goals are to tell more people about Yudkowsky. Help Proofread The Sequences! posted:MIRI is publishing an ebook version of Eliezer’s Sequences (working title: “The Hard Part is Actually Changing Your Mind”) and we need proofreaders to help finalize the content! Promote MIRI online! posted:There are many quick and effective ways to promote the MIRI. Each of these actions only take a few minutes to complete, but the resulting spread of awareness about the MIRI is very valuable. Every volunteer should try and spend at least 30 minutes doing as many of the activities in this challenge as they can! Learn about MIRI and AI Risk! 
posted:We want our volunteers to be informed about what we are up to and generally knowledgeable about AI risk. To this end, we've created a list of the top five must-read publications that provide a solid background on the topic of AI risk. We'd like all our volunteers to take the time to read these great publications. [Two of which are Yudkowsky's own.] Djeser fucked around with this message at 19:17 on Apr 23, 2014 |
# ? Apr 23, 2014 19:11 |
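The Pascal's Mugging arithmetic is worth seeing written out. This is a toy sketch with every number invented: a naive expected-utility maximizer pays up whenever the mugger's promised harm, however implausible, outgrows the agent's skepticism, and the mugger can always just name a bigger number.

```python
# Toy Pascal's Mugging calculation (all values are made up for
# illustration; nothing here comes from Yudkowsky's actual post).
p_mugger_honest = 1e-20    # how seriously the agent takes the threat
harm_if_ignored = 10.0**30  # "I'll torture a hojillion simulations"
cost_of_paying = 5          # hand over the wallet

ev_ignore = -p_mugger_honest * harm_if_ignored  # expected utility of refusing
ev_pay = -cost_of_paying                        # expected utility of paying

# However tiny p is, the mugger wins by inflating the threatened harm:
print(ev_ignore < ev_pay)  # True: the naive calculation says pay up
```

The "sanity check" a human applies (ignore absurd threats outright) is exactly the step the naive expected-utility math has no room for.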
|
Wait, so do people actually believe this poo poo, and why? Because the Machine Intelligence Research Institute actually has someone with a PhD in Economics from Oxford on its Advisory Board, and Peter Thiel too, but do they actually believe all this Roko's Basilisk mumbo jumbo?
|
# ? Apr 23, 2014 19:30 |
|
Sooo...this guy is Ulillillia with a -slightly- above average IQ, basically. He just prefers Harry Potter to Knuckles and cons people into donating money rather than computer parts.
|
# ? Apr 23, 2014 19:45 |
|
Akarshi posted:Wait so do people actually believe this poo poo and why because the Machine Intelligence Research Institute actually has someone with a PhD in Economics from Oxford on its Advisory Board and Peter Thiel but do they actually believe all this Roko Basilisk mumbo jumbo? Peter Thiel is a crazy, crazy fucker responsible for stuff like this and the Thiel Fellowship, which exists to coax kids into dropping out of school to avoid the icy grip of left-wing academia. White-nerd power fantasies like Less Wrong's Singularity Institute are like catnip to him.
|
# ? Apr 23, 2014 19:50 |
|
Squashy posted:Sooo...this guy is Ulillillia with a -slightly- above average IQ, basically. He just prefers Harry Potter to Knuckles and cons people into donating money rather than computer parts. Nah, Ulillillia seems like a legitimately nice guy. As opposed to Yudkowsky, who is a 'nice guy'.
|
# ? Apr 23, 2014 19:53 |
|
Hasn't Uli finished high school? Also, as terrible a coder as Uli is, I'm willing to bet Yud is just as bad or worse. Uli would probably see through the gajillion dust specks vs. 50 years of torture argument pretty easily and find stuff like Roko's Basilisk to be nonsense. They do share some of the same tics though. Yud's OkCupid profile is completely… Most of it is just typical hilariously non-self-aware narcissistic sperg-lord, but there is a dash of pervert and just a little pinch of cult leader rapishness thrown in for flavor. leftist heap fucked around with this message at 20:28 on Apr 23, 2014 |
# ? Apr 23, 2014 20:25 |
|
Uli is someone with a genuine mental illness. Yudkowsky believes the things people with mental illnesses believe, not because he is mentally ill but because he is stupid. He is a profoundly stupid person. If you say you are smart enough times other stupid people will begin to believe you. Particularly if you use your smart person advanced logic to demonstrate rationally(TM) that they will get robot anime wives in the distant future.
|
# ? Apr 23, 2014 21:45 |
|
Here's LessWrong's response to people mocking his OKCupid profile. There are also some great quotes from when he was arguing with Wikipedia editors on the talk page of his article. quote:If Bayesian theorists had duels, I'd challenge you to one. I know the math. I just don't have a piece of paper saying I know the math.
|
# ? Apr 23, 2014 21:55 |
|
My god, that stuff about his time with the ~true elite~. No wonder they invite him to their parties if he sucks their dicks like that. What got me was how utterly wide-eyed and caught up in it all he seemed. Like a complete tag-along sycophant. Being so overjoyed with how clever they all are and how amazing, it's like a kid allowed to sit with the grownups for the first time or something, not a clever intellectual who met some people on his wavelength. But then again, I guess flattering his way onto the fringe-academic gravy train is kind of his job.
|
# ? Apr 23, 2014 22:11 |
|
quote:I can imagine there's a threshold of bad behavior past which an SIAI staff member's personal life becomes a topic for Less Wrong discussion. Eliezer's OKCupid profile failing to conform to neurotypical social norms is nowhere near that line. Downvoted. quote:If you think this is evidence of "active image malpractice", then I think you're just miscalibrated in your expectations about how negative the comments on typical blog posts are. He didn't even get accused of torturing puppies! quote:Rationalists SHOULD be weird. Otherwise what's the point? If we say we do things in a better way throughout our lives, and yet end up acting just like everyone else, why should anyone pay attention? If the rational thing to do was the normal thing to do lesswrong would be largely irrelevant. I'm confident Eliezer's profile is well optimized for his romantic purposes.
|
# ? Apr 23, 2014 22:17 |
|
quote:My self-summary Oh, so my real self didn't donate to AI research afterall. Thanks a lot, jerk
|
# ? Apr 23, 2014 22:32 |
|
Yudkowsky kind of reminds me of what someone once called Jacques Lacan: "an alienated intellectual who hugely overvalues his own intellect and cognitive skills, and has become almost completely cut off from the world of ordinary human relationships." It's a pretty much dead-on description of the Y-man.
|
# ? Apr 23, 2014 23:28 |
|
On my previous topic of how knowing about things doesn't mean you can control them, it surprises me very little that Yudkowsky is into atypical sleeping schedules. I just clicked to a random chapter of Methods of Rationality and look at how good this writing is. quote:"Speaking of making use of people," Harry said. "It seems I'm going to be thrown into a war with a Dark Lord sometime soon. So while I'm in your office, I'd like to ask that my sleep cycle be extended to thirty hours per day. Neville Longbottom wants to start practicing dueling, there's an older Hufflepuff who offered to teach him, and they invited me to join. Plus there's other things I want to learn too - and if you or the Headmaster think I should study anything in particular, in order to become a powerful wizard when I grow up, let me know. Please direct Madam Pomfrey to administer the appropriate potion, or whatever it is that she needs to do -" It's also a zero probability of surprise that Yudkowsky is a big enough dork to have "omake" for his fanfiction. For those less anime among us, those are short strips that end up in manga that are usually goofy and non-canonical. (For fanfic, that just comes out to intermission chapters where he shares all the variants of his very clever "here are all the things a character could have shouted" jokes.)
|
# ? Apr 23, 2014 23:59 |
|
Djeser posted:It's also a zero probability of surprise that Yudkowsky is a big enough dork to have "omake" for his fanfiction. Ahahaahaa: Methods of Rationality Omake posted:"Oh, dear. This has never happened before..." his OKCupid profile posted:I spend a lot of time thinking about I am in awe of this FutureGod Ubermensch. Imagine what he could do if he really tried?
|
# ? Apr 24, 2014 00:14 |
|
30 hour sleep cycle?? Is he going to sleep thirty hours per day or what?
|
# ? Apr 24, 2014 00:38 |
|
Swan Oat posted:30 hour sleep cycle?? Is he going to sleep thirty hours per day or what? I think the idea is that he wants to do a 30-hour day instead of a 24-hour day, the idea being that each day your "sleep" time comes six hours later, so you kind of get six more hours of productivity per eight-hour sleep period. People do all sorts of weird sleep things, but I think most of the ones that actually work have stuck to the circadian rhythm and just shuffled around when you sleep.
|
# ? Apr 24, 2014 01:05 |
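The six-hours-later drift is easy to see with a quick sketch (the 22:00 starting bedtime is an arbitrary choice for illustration):

```python
# On a 30-hour "day", each bedtime lands 30 hours after the last,
# so on the 24-hour wall clock it drifts 6 hours later per cycle
# and only lines back up with the clock every 4 cycles (120 hours).
bedtime = 22  # arbitrary start: go to bed at 22:00
for cycle in range(5):
    print(f"cycle {cycle}: bed at {bedtime % 24:02d}:00")
    bedtime += 30  # next bedtime comes 30 hours later
```

So you spend most of your "days" sleeping through daylight and working at 4 AM, which is one reason these schedules tend not to stick.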