|
Reginald Bathwater posted:I'm pretty sure that's just how you play Morrowind. And if it worked there, who's to say it won't work even better in real life?
|
# ? Jun 21, 2014 11:58 |
|
|
# ? Jun 1, 2024 20:19 |
|
Literally Kermit posted:Launch 'em to space to terraform and colonize star systems. Even reluctant immortal folk will all eventually be like, "you know what, gently caress it, send me up" given enough centuries! That would make a great story. Guy freezes himself to be around for the great post-singularity utopia, finds himself stranded on a distant planet centuries later with only basic tools because the company sold his frozen body to the government as a colonist on an STL seed ship. Depending on your preference, they either band together and build a libertarian utopia or struggle to eke out an existence a medieval peasant would have called pitiful.
|
# ? Jun 21, 2014 12:13 |
|
A bunch of frozen upper-middle-class office workers with minimal outdoors experience are defrosted on an alien planet with few resources, and quickly turn on each other. Lord of the Files.
|
# ? Jun 21, 2014 12:17 |
|
Well, if we are talking already thawed out immortals, interstellar travel would literally be space catapulting a pod stocked with a food replicator, seed bank, and a copy of minecraft, starbound, and space engineers. (Poster bias, sorry) Dwarf fortress alpha updates will be sent via light-pulses, as soon as the head of Toady and his freeze-dried cat. These four games will allow any immortal scrub both the knowledge of their new trade, and a way to pass the time whilst the next ten thousand years tick by, because Brother Over-AI is on a budget and cryo is too loving expensive.
|
# ? Jun 21, 2014 12:42 |
|
ArchangeI posted:That would make a great story. Guy freezes himself to be around for the great post-singularity utopia, finds himself stranded on a distant planet centuries later with only basic tools because the company sold his frozen body to the government as a colonist on an STL seed ship. Depending on your preference, they either band together and build a libertarian utopia or struggle to eke out an existence a medieval peasant would have called pitiful. Larry Niven wrote a nice one about a totalitarian future state defrosting people to use as indentured servants. If you didn't play nice for them they just cored you out and loaded another one. They figured that just for them defrosting you, you owed them a life-debt.
|
# ? Jun 21, 2014 15:04 |
|
I remember a similar idea in a short story. A guy is resurrected, sees all the neat future technology. Then one of the doctors explains it's time for his heart surgery. "But there's nothing wrong with my heart!" "No, but there's something wrong with mine."
|
# ? Jun 21, 2014 16:33 |
|
It is really an RPG way of viewing things. In the real world "Intelligence" is an abstract measurement of several different factors because your brain does many different things, but to people who don't understand that it's a stat you can boost with the right items.
|
# ? Jun 21, 2014 16:39 |
|
AlbieQuirky posted:Which is loving hilarious, considering the track record of cryonics as an industry to date. One of the early cryonics dudes wrote a (mostly unintentionally) hilarious book about his experiences. The best part of the article is that the guy being interviewed, the one who wrote the book, is very proudly a Yudkowsky-level true believer in the potential of cryonics, and he's still saying "cryo tech today does not work, and the cryonics business is a total shitshow".
|
# ? Jun 21, 2014 17:07 |
|
So what if someone does end up inventing suspended animation technology, only it has nothing to do with cryogenics? Like there's the assumption that it's freezing or nothing, but if we allow, for the purposes of argument, that basically putting people in stasis and reviving them is possible, why does it have to be that? What if freezing is a dead end and the real breakthrough involves chemicals or electric signals to the nervous system or something totally left field?
|
# ? Jun 21, 2014 23:43 |
|
Or some kind of relativity whozit that puts you in a space where time is passing much slower.
|
# ? Jun 22, 2014 03:43 |
|
Maxwell Lord posted:So what if someone does end up inventing suspended animation technology, only it has nothing to do with cryogenics? Tough poo poo to corpsicles, I guess. They turn out as alive as they would be if suspended animation were entirely bullshit. Freezing is just the most accessible option from our end. It's easy to say 'we'll develop future technology to retrieve you from this currently irretrievable state'. It's harder to do something with slowed cellular activity or keeping a disembodied brain from brain death or whatever, because we haven't invented that yet. We have invented walk-in freezers, though. Everything else, we can't do step one. For cryonics, we can't do step two.
|
# ? Jun 22, 2014 03:44 |
|
LaughMyselfTo posted:Or some kind of relativity whozit that puts you in a space where time is passing much slower. That would require near-light-speed travel, unless someone is willing to build this device:
|
# ? Jun 22, 2014 08:55 |
My favorite thing about Big Yud is how @dril is basically a carbon copy of him. https://twitter.com/dril/status/479174309638729731 @dril posted:FULLY IMMERSED IN THE TIME LINE-- AH DEAR LORD https://twitter.com/dril/status/478533901816573952 @dril posted:ok. i basicly need one of the girls on this website to marry me by june 30 and i am absolutely under zero obligation to send you pics of me, https://twitter.com/dril/status/480410058602594305 @dril posted:surgury to become japanese.
|
|
# ? Jun 22, 2014 12:00 |
|
Maxwell Lord posted:So what if someone does end up inventing suspended animation technology, only it has nothing to do with cryogenics? It'd probably be faster and cheaper to develop a way to digitally copy the contents of a brain and a subject's DNA and just clone-copy at a future time. At least there's a 'do-over' if there are errors "recompiling", as opposed to repairing cellular frost burns. That's where the evil AIs getcha, tho'.
|
# ? Jun 22, 2014 14:07 |
|
Segmentation Fault posted:My favorite thing about Big Yud is how @dril is basically a carbon copy of him. dril or Yudkowsky is the best game.
|
# ? Jun 22, 2014 14:20 |
|
My main point is that it kind of ruins the Pascal's Wager version of cryo support if it turns out that the end goal (immortality via suspended animation) is achieved in some other form altogether.
|
# ? Jun 22, 2014 14:23 |
|
Maxwell Lord posted:My main point is that it kind of ruins the Pascal's Wager version of cryo support if it turns out that the end goal (immortality via suspended animation) is achieved in some other form altogether. Cryogenics supporters were bitcoin miners before bitcoin was a thing. Lots of resources expended on something riddled with faults that no one has a solution for now, and maybe never will. It's the end goal that should be important, not sunk-cost fallacy on a technology that "might" work in the future.
|
# ? Jun 22, 2014 14:35 |
|
I'm just going to imagine that the benevolent AI overmind in the future is using Timeless Decision Theory to get the Less Wrong crowd hooked on cryonics. Since it almost certainly won't work, they won't be around to crap up the future.
|
# ? Jun 22, 2014 16:21 |
|
Luigi's Discount Porn Bin posted:It does seem like a game strategy, doesn't it? The hoarding nerd impulse. "I can't use this useful thing now, what if I need it later??" *never uses it* Of course that's beside the point, because ultimately it's a flimsy justification for his laziness and general lack of motivation. He could put effort into things, IF ONLY...
|
# ? Jun 22, 2014 17:45 |
|
John Stossel has Zoltan Istvan on Fox News talking about cryonics.
|
# ? Jun 23, 2014 03:32 |
|
Slate just put up an article about Roko's Basilisk. http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html
|
# ? Jul 17, 2014 23:18 |
|
ZerodotJander posted:Slate just put up an article about Roko's Basilisk. Oh huh, I was wondering why some of that crowd were suddenly being pissy about Slate.
|
# ? Jul 18, 2014 00:58 |
The Vosgian Beast posted:Oh huh, I was wondering why some of that crowd were suddenly being pissy about Slate. Are they accusing Slate of endangering a bunch of people by causing them to think about it?
|
|
# ? Jul 18, 2014 08:40 |
|
It's great because if you come into the Slate article completely uninitiated and read Yudkowsky's first rant on the Basilisk, you'd of course assume he's joking and being sarcastic. Then the rug is pulled out from under you: these people genuinely believe this stuff.
|
# ? Jul 18, 2014 09:02 |
|
Wouldn't Yudkowsky's adoption of the Basilisk idea make LessWrong a cult or something close to it? Maybe that explains how eager he is to put it down.
|
# ? Jul 18, 2014 10:40 |
|
I had a thought about the whole "Beep boop, I'll simulate a thousand of you and torture you unless you do it and you don't know which one you are" thing. Now it seems to me that:

1. So far, extremely accurate measurements have confirmed that the curvature of the universe is flat, and therefore that it is boundless and infinite.
2. From observation, the probability of atoms coming together to form me exactly as I am now, along with my environment, is non-zero.
3. An infinite number of chances at a non-zero probability means an infinite number of occurrences.

Therefore, there are an infinite number of real mes who aren't simulations and can't be tortured by the AI. There are an infinite number of HALs too, but since each HAL can only run a finite number of sims, there will only ever be an equally infinite number of sims as non-sims. Therefore there is nothing the computer can do to affect the likelihood that I am a simulation. Furthermore, because there are an infinite number of permutations of us as well as duplicates, HAL can't even threaten me: there are already an infinite number of torture sims from alternate HALs, no matter how I choose. Timeless decision theory is thus even more garbage than was previously shown.
|
# ? Jul 18, 2014 12:32 |
|
chaosbreather posted:2. From observation, the probability of atoms coming together to form me exactly as I am now and my environment are non-zero. Infinite does not necessarily imply a normal distribution. This would only hold true if we could also demonstrate that the universe is normal, which is certainly not necessarily true. Even if it was true, proving it would be really really difficult... we don't even know for certain if pi is normal for crying out loud, even if it really looks like it!
|
# ? Jul 18, 2014 14:48 |
|
Iunnrais posted:Infinite does not necessarily imply a normal distribution. This would only hold true if we could also demonstrate that the universe is normal, which is certainly not necessarily true. Even if it was true, proving it would be really really difficult... we don't even know for certain if pi is normal for crying out loud, even if it really looks like it! That's true, the cosmological principle may be unfounded, but it is still a physical axiom, so we're pretty sure about it. In the case that it's not, there are other physical and cosmological theories that would necessitate infinite copies, including eternal inflation, the Everett interpretation of quantum physics, brane theory, the cyclic model, singularity universes and the mathematical ensemble. All peer-reviewed theories by established, accomplished scientists. Unlike anything that has ever been published on Less Wrong.
|
# ? Jul 18, 2014 15:41 |
|
Walter Kaufmann: "Even if there were exceedingly few things in a finite space in an infinite time, they would not have to repeat in the same configurations. Suppose there were three wheels of equal size, rotating on the same axis, one point marked on the circumference of each wheel, and these three points lined up in one straight line. If the second wheel rotated twice as fast as the first, and if the speed of the third wheel was 1/π of the speed of the first, the initial line-up would never recur."
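Kaufmann's claim about the 1/π wheel can be sanity-checked numerically; a minimal sketch (the function name and the 10,000-revolution cutoff are mine, purely illustrative):

```python
from math import pi

def wheel3_offset(k: int) -> float:
    """Fraction of a revolution by which wheel 3 misses the start line
    after k full revolutions of wheel 1 (wheel 3 runs at 1/pi the speed)."""
    return (k / pi) % 1.0

# Realignment would need k / pi to be an integer, i.e. k = pi * m for
# integers k, m > 0 -- impossible, since pi is irrational. Numerically,
# the offset never vanishes; the closest approach in this range is at
# k = 355, via the convergent 355/113 of pi.
assert all(wheel3_offset(k) > 1e-6 for k in range(1, 10_000))
```

Note that the second wheel, at an integer speed ratio, realigns with the first on every revolution; it's the irrational 1/π ratio that kills the recurrence.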
|
# ? Jul 18, 2014 19:09 |
|
chaosbreather posted:I had a thought about the whole "Beep boop, I'll simulate a thousand of you and torture you unless you do it and you don't know which one you are" thing. Now it seems to me that: I still think the best response to that torture thing is "and what was my sim's reply based on?" and just repeat that for each layer ("and what was that sim's sim's sim's reply based on?") until the AI admits that it was either lying or rolling dice at some layer, or claims to have used literally infinite layers. If it rolled dice, then all of its simulation is based on massive extrapolation from an assumption. If it has infinite layers, the AI already has literally infinite computational power and really couldn't improve on its capabilities by being released, so it has no real need to request release. Also, smack it for lying if it's infinite, because that program can never halt.
|
# ? Jul 19, 2014 00:55 |
|
Honestly I don't see how the possibility of an AI torturing me in the future would make me inclined to support said AI. Really, it seems to me like any decent person should/would oppose it at all costs...
|
# ? Jul 19, 2014 00:57 |
|
The Iron Rose posted:Honestly I don't see how the possibility of an AI torturing me in the future would make me inclined to support said AI. Really, it seems to me like any decent person should/would oppose it at all costs...
|
# ? Jul 19, 2014 01:05 |
|
Roko's Ponzi.
|
# ? Jul 19, 2014 01:32 |
|
I hadn't heard of LessWrong before, read the Slate article about Roko's Basilisk (via Twitter), then searched the forums for "Roko's Basilisk" to find out if anyone was talking about it. I'm not surprised there's a mock thread; this is the weirdest, dumbest thing I've ever heard of.
|
# ? Jul 19, 2014 04:13 |
|
The more I read about Roko's Basilisk, the more it sounds like someone's Matrix fanfiction gone wrong. It makes about as much sense under close observation.
|
# ? Jul 19, 2014 06:55 |
|
Don Gato posted:The more I read about Roko's Basilisk, the more it sounds like someone's Matrix fanfiction gone wrong. It makes about as much sense under
|
# ? Jul 19, 2014 15:33 |
|
Too good to be true argues that the scientific community is deliberately suppressing evidence that vaccines cause autism. Well, okay, the author doesn't actually seem to think vaccines cause autism (although he does end with an odd comment that even 3/43 studies showing vaccines cause autism would be surprisingly low, when in fact it would be surprisingly high since that's over 5%). He just thinks that the evidence that vaccines don't cause autism is too great, and that some evidence that vaccines cause autism should have randomly appeared on its own. As further proof, he notes that studies establishing a link between vaccines and autism are less likely to be published than studies that don't. Which obviously proves that the evidence is being suppressed, because we rationalists all know correlation implies causation.

His proof is based on baseless assumptions, such as asserting without justification that all studies on a major hot-button issue use only a 95% confidence threshold. (A quick search reveals that, surprise, that's not the case.) Even after stacking the deck with false assumptions, what he thinks is a resounding proof really isn't - he ends up with a p-value of .14 (which he misrepresents as .13), which is too high to say anything with even 95% confidence. All in all, pretty standard for lesswrong - bad math, false assumptions, and taking the anti-vaxxer side in the name of just asking questions. But what interested me was this line: quote:Having "95% confidence" tells you nothing about the chance that you're able to detect a link if it exists. It might be 50%. It might be 90%. This is the information black hole that priors disappear into when you use frequentist statistics.

Uh, no, that's information you don't have regardless of how you talk about your statistics. That information depends on exactly what the link is and how strong the link is, and since you don't know the exact strength and nature of the link, you can't find it. This isn't a frequentist-vs-bayesian issue, this is a "you just plain don't have that information" vs "let's just make some poo poo up" issue. What's with lesswrong's weird hate-on for frequentism?
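The expected-false-positives arithmetic behind that reasoning is easy to check (a sketch; the 43 studies and the 95% threshold are the post's numbers, the rest is illustrative):

```python
# Under a true null, each study tested at significance level alpha has an
# alpha chance of a spurious positive, so positives among n independent
# studies average n * alpha.
n, alpha = 43, 0.05
expected_false_positives = n * alpha      # about 2.15 spurious hits
observed_rate = 3 / 43                    # the "3/43" figure from the post

print(round(expected_false_positives, 2))  # 2.15
print(round(observed_rate, 4))             # 0.0698 -- just over 5%
```

So 3 positives out of 43 null studies would sit slightly above the roughly 2.15 expected by chance, which is the "over 5%" point being made.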
|
# ? Jul 19, 2014 20:50 |
|
I think it's a brand loyalty thing. They've invested enough in Bayes that they don't want to think they settled on the wrong interpretation of statistics.
|
# ? Jul 19, 2014 21:19 |
|
But there isn't a wrong interpretation. There's no reason for there to be a wrong interpretation. In probability the different interpretations are just different ways of thinking about what probabilities mean. Sometimes one is more convenient for some situations than others, but neither is right or wrong because they're just different approaches. It's not like, say, quantum mechanics interpretations, where arguably either there are many worlds or there aren't, so many-worlds is either right or wrong (but we can't tell which). Bayesianism and frequentism don't make that sort of objective claim about physical reality, they're just different approaches. Yudkowsky claimed that his (wrong) solution to the "At least one of my children is male" problem was the natural result of Bayesianism and that the counterintuitive (correct) solution was the product of idiotic truth-denying frequentism. It's bizarre seeing them treat frequentism as a weird bogeyman.
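For reference, the counterintuitive answer the post calls correct can be verified by brute-force enumeration, no interpretation of probability required (a minimal sketch of the standard two-child setup):

```python
from itertools import product

# "At least one of my children is male": among two-child families
# (BB, BG, GB, GG, taken as equally likely), condition on at least one
# boy and ask how often both are boys.
families = list(product("BG", repeat=2))
at_least_one_boy = [f for f in families if "B" in f]   # BB, BG, GB
both_boys = [f for f in at_least_one_boy if f == ("B", "B")]

print(f"{len(both_boys)}/{len(at_least_one_boy)}")     # 1/3, not 1/2
```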
|
# ? Jul 19, 2014 21:42 |
|
|
Ah but you see, in that question the frequentist solution is wrong, because it doesn't psycho-analyze and then assign probabilities to why the person in the problem asked the question! Only Bayesianism can make use of all the information at hand!
|
# ? Jul 19, 2014 22:18 |