|
Roko's Basilisk is similar to the Monty Hall Problem. If you accept the premises, it is in fact exactly the same as the Monty Hall Problem with an extremely high number of doors. The problem is that the premise of Roko's Basilisk is stupid bullshit, but people who try to understand it from Yudkowsky's perspective find it as confusing as the counterintuitive solution to the Monty Hall Problem.
|
# ? Apr 30, 2014 01:52 |
|
AATREK CURES KIDS posted:Roko's Basilisk is similar to the Monty Hall Problem. If you accept the premises, it is in fact exactly the same as the Monty Hall Problem with an extremely high number of doors. The problem is that the premise of Roko's Basilisk is stupid bullshit, but people who try to understand it from Yudkowsky's perspective find it as confusing as the counterintuitive solution to the Monty Hall Problem. Is it? I'm not a statistician, I mostly like geometry and PDEs, but if so, then it does a pretty good job of illustrating just how important a sensible prior probability distribution is in Bayesian analysis.
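For what it's worth, the many-door comparison is easy to check empirically. Here's a minimal simulation sketch (the function names and trial counts are mine, not from the thread): with N doors, the host opens every losing door except one, so switching wins exactly when your first pick was wrong, i.e. with probability (N-1)/N.

```python
import random

def monty_trial(n_doors, switch, rng):
    """One round of N-door Monty Hall; returns True on a win."""
    car = rng.randrange(n_doors)
    pick = rng.randrange(n_doors)
    # The host opens all other doors except one; that unopened door
    # hides the car whenever the first pick was wrong, so switching
    # wins exactly when pick != car.
    return (pick != car) if switch else (pick == car)

def win_rate(n_doors, switch, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = sum(monty_trial(n_doors, switch, rng) for _ in range(trials))
    return wins / trials

# With 100 doors, switching wins ~99% of the time; staying, ~1%.
print(win_rate(100, switch=True))
print(win_rate(100, switch=False))
```

With 3 doors the switch-rate lands near the classic 2/3; crank the door count up and the "always switch" answer stops looking counterintuitive, which is roughly the poster's point about priors.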
|
# ? Apr 30, 2014 01:58 |
|
SALT CURES HAM posted:At the risk of playing devil's advocate, is there any real reason not to have your body frozen if the idea interests you? I'm having a hard time thinking of any negative consequences other than possibly getting fleeced out of some money if it's horseshit. Complete horseshit, there is a non-zero chance of your body getting unfrozen or never having been frozen properly, even if the freezing is perfect and perfectly maintained there's no particular reason to believe in future thawing-and-reviving tech being a thing ever, why would a future society even want to thaw out oldsicles?
|
# ? Apr 30, 2014 02:15 |
|
Reality TV, of course. https://www.youtube.com/watch?v=g7Lzr3cwaPs
|
# ? Apr 30, 2014 03:03 |
|
Seriously, apart from the fact that current tech freezes you in such a way that every cell explodes into hamburger, you're relying on a bunch of California futurists to keep a company running for centuries. People have been out to visit those kinds of facilities and found one bored night watchman and a bunch of partially thawed heads. It's loving absurd. Just learn to deal with your mortality, people, loving hell. e: wow, from that article things are even worse than my hyperbole -- and that's at the company Yuddy chose as the best one! Forums Barber fucked around with this message at 03:22 on Apr 30, 2014 |
# ? Apr 30, 2014 03:19 |
|
Forums Barber posted:Seriously, apart from the fact that current tech freezes you in such a way that every cell explodes into hamburger, you're relying on a bunch of California futurists to keep a company running for centuries. People have been out to visit those kinds of facilities and found one bored night watchman and a bunch of partially thawed heads. It's loving absurd. Just learn to deal with your mortality, people, loving hell. I just noticed that Alcor is based in loving Arizona. Yeah, if you have a facility in which you store the frozen bodies of people so that they may one day be revived, you should certainly build it in the middle of a loving desert.
|
# ? Apr 30, 2014 04:06 |
|
My favorite thing about cryogenics is that they store you upside down so in case they forget to top up the containers with liquid nitrogen for a couple of weeks they can just go "oh well, after the singularity they can just make him new feet anyway".
|
# ? Apr 30, 2014 04:10 |
|
Yudkowsky has very strong opinions on what makes a good scientist for a high school dropout. quote:A creationist AGI researcher? Why not? Sure, you can't really be enough of an expert on thinking to build an AGI, or enough of an expert at thinking to find the truth amidst deep dark scientific chaos, while still being, in this day and age, a creationist. But to think that his creationism is an anomaly, is should-universe thinking, as if desirable future outcomes could structure the present. Most scientists have the meme that a scientist's religion doesn't have anything to do with their research. Someone who thinks that it would be cool to solve the "human-level" AI problem and create a little voice in a box that answers questions, and who dreams they have a solution, isn't going to stop and say: "Wait! I'm a creationist! I guess that would make it pretty silly for me to try and build an AGI." Is someone gonna tell him about Schrödinger? David Bohm? Heck, even Penrose, with his views on consciousness and the universe? They weren't AGI researchers, but they were great scientists. And I find it hard to believe that having spiritual views prevents a person from performing the conceptualization needed to develop AGI. Penrose posted:"I think I would say that the universe has a purpose, it's not somehow just there by chance ... some people, I think, take the view that the universe is just there and it runs along – it's a bit like it just sort of computes, and we happen somehow by accident to find ourselves in this thing. But I don't think that's a very fruitful or helpful way of looking at the universe, I think that there is something much deeper about it." Ooooh, I forgot the best quote: Yudkowsky posted:Just normal people with no notion that it's wrong for an AGI researcher to be normal. Whelp. Hate Fibration fucked around with this message at 04:22 on Apr 30, 2014 |
# ? Apr 30, 2014 04:14 |
|
Forums Barber posted:e: wow, from that article things are even worse than my hyperbole -- and that's at the company Yuddy chose as the best one! quote:Williams' severed head was then frozen, and even used for batting practice by a technician trying to dislodge it from a tuna fish can. I would bold the grimly hilarious parts, but it's kind of ugly to bold an entire quote. These are the people who will tenderly care for the broken bodies of our libertarian saints. Eating their tuna as they wait patiently to revive them to negotiate with the EDIT: quote:Johnson writes that holes were drilled in Williams' severed head for the insertion of microphones, then frozen in liquid nitrogen while Alcor employees recorded the sounds of Williams' brain cracking 16 times as temperatures dropped to -321 degrees Fahrenheit. Really, everyone should just read the whole thing because holy poo poo. That Old Tree fucked around with this message at 06:21 on Apr 30, 2014 |
# ? Apr 30, 2014 06:15 |
|
About this whole cryonics thing. Transmetropolitan (which is an excellent comic in general) had a pretty good take on it all. They do get unfrozen, yes, and are completely unable to handle the future, breaking down pretty much the instant they get outside and often staying that way. They're considered completely useless cast-offs, revived out of some strange sense of duty more than anything. It's just one of the many ways that it takes a sledgehammer to all that singularitarian ideology that LessWrong glorifies.
|
# ? Apr 30, 2014 06:39 |
|
NewMars posted:About this whole cryonics thing. Transmetropolitan (which is an excellent comic in general) had a pretty good take on it all. They do get unfrozen, yes, and are completely unable to handle the future, breaking down pretty much the instant they get outside and often staying that way. They're considered completely useless cast-offs, revived out of some strange sense of duty more than anything. It's just one of the many ways that it takes a sledgehammer to all that singularitarian ideology that LessWrong glorifies. This. I can see a person maybe making a 30-year-or-so jump and adjusting to a new life, but anything over a hundred and you end up as a complete outcast. Even a millionaire industry captain would find themselves with no contacts, no knowledge of the markets and trends, outdated skills and probably having trouble with basic communication. But hey, maybe a magical computer jesus will just shoot all the information you need directly into your brain and give you the body of a hot young supermodel. It could happen, right guys? Right?
|
# ? Apr 30, 2014 10:45 |
|
Yeah, that Transmetropolitan side story is awesome. Cryogenically frozen people are thawed out by some second-rate firm, not out of some sense of altruism, but because it's what they're being paid to do. They end up living in shelters or run-down hovels, unable to make a living or even function in the future society. Transmetropolitan is overall a very good comic, and it paints a believable, grim future where "post-scarcity" hasn't solved a single problem. They have all this wonderful technology that the LW crowd envisions, like sentient clouds of nano-machines and assemblers that can create any object you want out of old trash, but instead of ushering in a shining utopia it just means that the upper class can now eat genetically engineered baby seals and inject themselves with ebola for laughs.
|
# ? Apr 30, 2014 11:27 |
|
Reminds me of a quote from a different forum I'm quite fond of: "Transhumanism is the story of how advancing technology will solve all the problems of being human; cyberpunk is the story of how it won't."
|
# ? Apr 30, 2014 11:39 |
|
Mr. Sunshine posted:Yeah, that Transmetropolitan side story is awesome. Cryogenically frozen people are thawed out by some second-rate firm, not out of some sense of altruism, but because it's what they're being paid to do. They end up living in shelters or run-down hovels, unable to make a living or even function in the future society. There are some practical applications of cryogenics technology: Spider's ex-wife had herself frozen voluntarily, not to be awoken until he was dead.
|
# ? Apr 30, 2014 13:50 |
|
AlbieQuirky posted:Complete horseshit, there is a non-zero chance of your body getting unfrozen or never having been frozen properly, even if the freezing is perfect and perfectly maintained there's no particular reason to believe in future thawing-and-reviving tech being a thing ever, why would a future society even want to thaw out oldsicles? If historians could bring back and interview people who'd gone to the Globe Theatre to see Shakespeare's original performances, or heard Bach performing in Leipzig, or who'd studied under Socrates or Plato, they'd probably do it without a second's hesitation.
|
# ? Apr 30, 2014 14:33 |
|
This is a very good point. Unfortunately, the historically-relevant information gleaned from oldsicles would have some selection bias in the "these are the people who would have signed up for cryo" area. I doubt a ton of them would have experienced a lot of relevant culture first-hand. Though that would be an interesting way to select yourself for ideal rejuvenation: be a cultured and interesting individual who was present for major historical and cultural events. But considering Yuddites, like hell that's going to happen. They're just gonna tell the future historians about how deep Magical Girl Lyrical Nanoha and the third season of Supernatural are.
|
# ? Apr 30, 2014 15:13 |
|
Djeser posted:Though that would be an interesting way to select yourself for ideal rejuvenation: be a cultured and interesting individual who was present for major historical and cultural events. Or something. They didn't suggest that being boring would result in torture though, and freezing yourself seems pretty boring to me.
|
# ? Apr 30, 2014 15:39 |
|
ArchangeI posted:I just noticed that Alcor is based in loving Arizona. Yeah, if you have a facility in which you store the frozen bodies of people so that they may one day be revived, you should certainly build it in the middle of a loving desert. Hey now! Antarctica is a desert!
|
# ? Apr 30, 2014 15:56 |
|
I still don't get how the basilisk is supposed to work without some kind of time travel or backwards causation. Let's assume that at some point in the future the omnipotent AI comes into existence. Depending on how many people were afraid of being tortured, this may happen sooner or later. But as soon as the AI exists, its creation process and the causes for its existence are fixed in time. Basically the AI can be thankful that a lot of long-dead Less Wrong crazies believed in their own argument in the past, so that its creation happened earlier, but it has no possibility to change the brain processes in the past. So torturing simulations is just a waste of effort. Another problem with the argument is that it is just as likely that an evil AI will come into existence who likes to torture people who donated to AI research. Or some capricious god/space Cthulhu/advanced aliens who are gonna torture people for posting in this forum. Whatever you're doing in your life, there is some chance that someone or something is gonna torture you for it. I guess the smartest thing is to kill ourselves right now...
|
# ? Apr 30, 2014 16:44 |
|
gipskrampf posted:I still don't get how the basilisk is supposed to work without some kind of time travel or backwards causation. Good thing that Timeless Decision Theory is literally backwards causation!
|
# ? Apr 30, 2014 17:00 |
|
gipskrampf posted:I still don't get how the basilisk is supposed to work without some kind of time travel or backwards causation. Let's assume that at some point in the future the omnipotent AI comes into existence. Depending on how many people were afraid of being tortured, this may happen sooner or later. But as soon as the AI exists, its creation process and the causes for its existence are fixed in time. Basically the AI can be thankful that a lot of long-dead Less Wrong crazies believed in their own argument in the past, so that its creation happened earlier, but it has no possibility to change the brain processes in the past. So torturing simulations is just a waste of effort. Or I guess you could just say "I will not engage in acausal trade with things that will torture simulations of me, what the gently caress" and the AI will sigh and let you go, because there's no point in torturing you. You have to say it out loud in case the simulation the AI's monitoring has privacy protections that stop it peering inside your thoughts and it's just monitoring the air around you for speech and staring creepily at you from all possible angles. Yes, I have been told this by a... friend who is into the cult but hasn't actually donated. gipskrampf posted:Another problem with the argument is that it is just as likely that an evil AI will come into existence who likes to torture people who donated to AI research. Or some capricious god/space Cthulhu/advanced aliens who are gonna torture people for posting in this forum. Whatever you're doing in your life, there is some chance that someone or something is gonna torture you for it. I guess the smartest thing is to kill ourselves right now...
|
# ? Apr 30, 2014 17:08 |
|
Dabir posted:If historians could bring back and interview people who'd gone to the Globe Theater to see Shakespeare's original performances, or heard Bach performing in Vienna, or who'd studied under Socrates or Plato, they'd probably do it without a second's hesitation. That's true. But still doesn't suggest that mass unfreezing would be a social goal. Anybody who got unfrozen as a "window to the past" would be likely to have a lovely life in any case---Ishi, the last of the Yahi, seems like a possible example.
|
# ? Apr 30, 2014 17:16 |
|
chrisoya posted:Or I guess you could just say "I will not engage in acausal trade with things that will torture simulations of me, what the gently caress" and the AI will sigh and let you go, because there's no point in torturing you. You have to say it out loud in case the simulation the AI's monitoring has privacy protections that stop it peering inside your thoughts and it's just monitoring the air around you for speech and staring creepily at you from all possible angles.
|
# ? Apr 30, 2014 17:18 |
|
chrisoya posted:The fear of eternal torture at the robotic hands of a friendly and loving AI can't work if the AI won't actually torture people, so the AI has to torture people just in case. But it's quite possible to be afraid of something illusory. Whatever simulations happen in the future will not decide how afraid I am at this moment. Basically my decision to fund AI research is caused by my fear of torture (a thing in the present) and not by the future torture itself. Wait, what if I refuse to donate money, so the AI will not come into existence and will not torture me? Checkmate AI, your plan will backfire spectacularly.
|
# ? Apr 30, 2014 17:32 |
|
"An all powerful sentient AI emerges in the future" is not a yes or no question to Less Wrong, it is a when and how question.
|
# ? Apr 30, 2014 17:34 |
|
Djeser posted:Though that would be an interesting way to select yourself for ideal rejuvenation: be a cultured and interesting individual who was present for major historical and cultural events. This is the same idea that drives a lot of libertarians: in a truly free market, they assume that THEY'D be the heroic, beloved, all-powerful captains of industry, rather than one of seven billion wage slaves. Yudkowsky assumes not only that he'll get woken up hundreds of years from now, but that his new life in the future will be a desirable one. How many useful skills would somebody from 1800 have in today's job market? Yudkowsky just assumes that he's either going to be able to autodidactically bootstrap himself up, or that the technology will exist to instantly beam all necessary information into his brain, or that it'll be a post-scarcity society that doesn't operate on capitalist assumptions of wage-for-work anymore... or SOMETHING that's going to keep his post-freezing life from being "creepy homeless guy who rants to you about the 21st century on your way to work".
|
# ? Apr 30, 2014 17:39 |
|
If we don't do it, aliens will. Our AI god must be the first one created in the entire universe, or we're doomed. Jonny Angel posted:Yudkowsky just assumes that he's either going to be able to autodidactically bootstrap himself up
|
# ? Apr 30, 2014 17:42 |
|
Has Yudkowsky heard of our lord and savior Jesus Christ? He is Reason made flesh after all, and might fill the hole that Yudkowsky obviously feels in himself.
|
# ? Apr 30, 2014 17:58 |
|
Afraid of basilisks and other benevolent torture-monsters? Less Wrong forum users offer helpful tips on how to kill yourself. Less Wrong posted:
http://lesswrong.com/lw/fq3/open_thread_december_115_2012/80v8#20121230
|
# ? Apr 30, 2014 18:08 |
|
Jonny Angel posted:This is the same idea that drives a lot of libertarians: in a truly free market, they assume that THEY'D be the heroic, beloved, all-powerful captains of industry, rather than one of seven billion wage slaves. Yudkowsky assumes not only that he'll get woken up hundreds of years from now, but that his new life in the future will be a desirable one. Option one: compound interest on investments made in the here and now. Of course, that'd require you to accurately predict the market for an undetermined period of the future after your death, but for an intellectual dinosaur like Yudkowsky that shouldn't be too hard. Option two: become trophy husband for rich woman (or man, we're not judging here) who wants to get with a guy who is an old school gentleman and knows how to treat a woman right.
|
# ? Apr 30, 2014 18:16 |
|
Jonny Angel posted:How many useful skills would somebody from 1800 have in today's job market? Yudkowsky just assumes that he's either going to be able to autodidactically bootstrap himself up, or that the technology will exist to instantly beam all necessary information into his brain, or that it'll be a post-scarcity society that doesn't operate on capitalist assumptions of wage-for-work anymore... or SOMETHING that's going to keep his post-freezing life from being "creepy homeless guy who rants to you about the 21st century on your way to work". Of course, for someone with Yudkowsky's caliber of smug, I can hardly imagine "well, to be honest we don't have much need for you, but here's some low-wage, low-skill work you can do probably for the rest of your life" being an acceptable outcome.
|
# ? Apr 30, 2014 18:40 |
|
Polybius91 posted:I do recall there was a kid from an African tribe who came to America early in the 20th century and did pretty okay working at a factory for a while. I specifically remember that being able to climb without riggings proved useful to him. That would be Ota Benga, who also had a stint in the Bronx Zoo. So, y'know, work in a factory or live in a zoo.
|
# ? Apr 30, 2014 18:43 |
|
Has Yudkowsky or any of his buddies ever explained how the AI is supposed to get enough information to perfectly reconstruct the lives and thought processes of an arbitrarily large number of humans? I mean, to perfectly simulate me, it has to be able to perfectly simulate everyone I've ever interacted with at least well enough to provide the same environment, plus everything I have ever done, plus every electrochemical interaction within my body. And for all that the world is currently significantly more surveilled and recorded than it was, say, 60 years ago, most of that information still isn't being recorded. I don't have an EEG monitoring my brain constantly. I don't have chemical analysis going on in my lower GI tract whenever I eat something. And this has to be a perfect simulation, because any imperfection will imply that Timeless Decision Theory is incorrect and that the simulation is, in fact, not exactly the same as I am and cannot be used to accurately predict or model behavior. So, even assuming infinite processing power and infinite time in which to process, where the gently caress is this information coming from? I mean, if it can perfectly reverse entropy and reconstruct the past, why didn't it pull this gambit on rich people throughout history? E: Like not even just the infinite simulation torture gambit but any acausal time bullshit interactions whatsoever. Mors Rattus fucked around with this message at 19:06 on Apr 30, 2014 |
# ? Apr 30, 2014 18:55 |
|
Maybe it just tortures every possible being that could conceivably be called human. Sure, that's a lot, but I bet it's smaller than 3^^^3 beings! As an added bonus, it would simulate you at every instant of your life. gipskrampf posted:Afraid of basilisks and other benevolent torture-monsters? Less Wrong forum users offer helpful tips on how to kill yourself. Or get therapy. Edit: quote:army1987 16 December 2012 01:02:44PM 6 points [-]
|
# ? Apr 30, 2014 19:48 |
|
Mods, gas this thread, remove all vestiges of it from the archives and permaban everybody posting in it. Only then the forums can be saved.
|
# ? Apr 30, 2014 20:32 |
|
quote:"how generalization from fictional evidence is bad" I think I will let this stand for itself, except to note that there was a Let's Read of Unintended Consequences and it is hilarious. (All emphasis in the above is mine.)
|
# ? Apr 30, 2014 20:44 |
|
Ooo, ooo, are we doing 'rationalists misunderstand fiction?'Robin Hanson posted:“The 38 most common fiction writing mistakes” offers advice to writers. But the rest of us can also learn useful details on how fiction can bias our thinking. Here are my summary of key ways it says fiction differs from reality (detailed quotes below) [cut by me - Ed]: And the comments! Guy 1 posted:Are you saying that exposure to stories wipes out our innate understanding of Bayesian probability? Guy 2 posted:Yes, i think that would be a reasonable way of putting it. Perhaps the cognitive load of understanding the story has the effect of lowering our intellectual standards in general, so that the big winning gamble each story encodes is consistently accepted uncritically, until a distorted sense of probability becomes habitual. Or perhaps rather than a lowering of intellectual standards, it is simply an issue of focus, and the implicit gamble is the unseen gorilla. And further down the page... Eliezer Yudkowsky, for it is he posted:And that, in a nutshell, is why I can't read real books anymore. Guy 3, in reply posted:Is Harry Potter a real book or an alt. book?
|
# ? Apr 30, 2014 20:59 |
|
gipskrampf posted:Afraid of basilisks and other benevolent torture-monsters? Less Wrong forums users offer helpfull tips how to kill yourself. It figures that they'd stumble upon the Atrocity Archives, since they seem to be nicking most of their other ideas from Sci-Fi.
|
# ? Apr 30, 2014 21:00 |
|
potatocubed posted:Ooo, ooo, are we doing 'rationalists misunderstand fiction?' So... are these people admitting that they don't read or are afraid of fiction because they essentially can't discern it from reality? I mean, it's couched in the argument that too much fiction could subconsciously impart or affect existing cognitive biases and render them unable -- or at least less able -- to recognize ~*The Truth*~, but it sure sounds like they're afraid of fiction. But hey, Buffy the Vampire Slayer is A-OK because...
|
# ? Apr 30, 2014 21:31 |
|
Mr. Sunshine posted:It figures that they'd stumble upon the Atrocity Archives, since they seem to be nicking most of their other ideas from Sci-Fi. Indeed, the whole "AIs resurrect you by looking at your output over the course of your life, for inscrutable reasons, entropy be damned" comes from an earlier book by the same author.
|
# ? Apr 30, 2014 21:43 |