CROWS EVERYWHERE
Dec 17, 2012

CAW CAW CAW

Dinosaur Gum

Reginald Bathwater posted:

I'm pretty sure that's just how you play Morrowind

And if it worked there, who's to say it won't work even better in real life? :colbert:

ArchangeI
Jul 15, 2010

Literally Kermit posted:

Launch 'em to space to terraform and colonize star systems. Even reluctant immortal folk will all eventually be like, "you know what, gently caress it, send me up" given enough centuries!

That would make a great story. Guy freezes himself to be around for the great post-singularity utopia, finds himself stranded on a distant planet centuries later with only basic tools because the company sold his frozen body to the government as a colonist on an STL seed ship. Depending on your preference, they either band together and build a libertarian utopia or struggle to eke out an existence a medieval peasant would have called pitiful.

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

A bunch of frozen upper-middle-class office workers with minimal outdoors experience are defrosted on an alien planet with few resources, and quickly turn on each other. Lord of the Files.

Literally Kermit
Mar 4, 2012
t
Well, if we are talking already-thawed-out immortals, interstellar travel would literally be space-catapulting a pod stocked with a food replicator, a seed bank, and a copy of Minecraft, Starbound, and Space Engineers. (Poster bias, sorry)
Dwarf Fortress alpha updates will be sent via light-pulses, as soon as they thaw out the head of Toady and his freeze-dried cat.

These four games will give any immortal scrub both the knowledge of their new trade and a way to pass the time whilst the next ten thousand years tick by, because Brother Over-AI is on a budget and cryo is too loving expensive.

ArcMage
Sep 14, 2007

What is this thread?

Ramrod XTreme

ArchangeI posted:

That would make a great story. Guy freezes himself to be around for the great post-singularity utopia, finds himself stranded on a distant planet centuries later with only basic tools because the company sold his frozen body to the government as a colonist on an STL seed ship. Depending on your preference, they either band together and build a libertarian utopia or struggle to eke out an existence a medieval peasant would have called pitiful.

Larry Niven wrote a nice one about a totalitarian future state defrosting people to use as indentured servants. If you didn't play nice for them, they just cored you out and loaded in another one.

They figured that just for them defrosting you, you owed them a life-debt.

Strategic Tea
Sep 1, 2012

I remember a similar idea in a short story. A guy is resurrected, sees all the neat future technology. Then one of the doctors explains it's time for his heart surgery.

"But there's nothing wrong with my heart!"

"No, but there's something wrong with mine."

Maxwell Lord
Dec 12, 2008

I am drowning.
There is no sign of land.
You are coming down with me, hand in unlovable hand.

And I hope you die.

I hope we both die.


:smith:

Grimey Drawer
It is really an RPG way of viewing things. In the real world, "Intelligence" is an abstract measurement of several different factors, because your brain does many different things; but to people who don't understand that, it's a stat you can boost with the right items.

DStecks
Feb 6, 2012

AlbieQuirky posted:

Which is loving hilarious, considering the track record of cryonics as an industry to date. One of the early cryonics dudes wrote a (mostly unintentionally) hilarious book about his experiences.

The best part of the article is that the guy being interviewed, the one who wrote the book, is very proudly a Yudkowsky-level true believer in the potential of cryonics, and he's still saying "cryo tech today does not work, and the cryonics business is a total shitshow".

Maxwell Lord
Dec 12, 2008

I am drowning.
There is no sign of land.
You are coming down with me, hand in unlovable hand.

And I hope you die.

I hope we both die.


:smith:

Grimey Drawer
So what if someone does end up inventing suspended animation technology, only it has nothing to do with cryogenics?

Like there's the assumption that it's freezing or nothing, but if we allow, for the purposes of argument, that basically putting people in stasis and reviving them is possible, why does it have to be that? What if freezing is a dead end and the real breakthrough involves chemicals or electric signals to the nervous system or something totally left field?

LaughMyselfTo
Nov 15, 2012

by XyloJW
Or some kind of relativity whozit that puts you in a space where time is passing much slower.

Djeser
Mar 22, 2013


it's crow time again

Maxwell Lord posted:

So what if someone does end up inventing suspended animation technology, only it has nothing to do with cryogenics?

Like there's the assumption that it's freezing or nothing, but if we allow, for the purposes of argument, that basically putting people in stasis and reviving them is possible, why does it have to be that? What if freezing is a dead end and the real breakthrough involves chemicals or electric signals to the nervous system or something totally left field?

Tough poo poo to corpsicles, I guess. They turn out exactly as alive as they would if suspended animation were entirely bullshit.

Freezing is just the most accessible from our end. It's easy to say 'we'll develop future technology to retrieve you from this currently irretrievable state'. It's harder to do something with slowed cellular activity or keeping a disembodied brain from brain death or whatever, because we haven't invented that yet. We have invented walk-in freezers, though. For everything else, we can't do step one; for cryonics, we can't do step two.

Chamale
Jul 11, 2010

I'm helping!



LaughMyselfTo posted:

Or some kind of relativity whozit that puts you in a space where time is passing much slower.

That would require near-light-speed travel, unless someone is willing to build this device:

Segmentation Fault
Jun 7, 2012
My favorite thing about Big Yud is how @dril is basically a carbon copy of him.

https://twitter.com/dril/status/479174309638729731

@dril posted:

FULLY IMMERSED IN THE TIME LINE-- AH DEAR LORD

https://twitter.com/dril/status/478533901816573952

@dril posted:

ok. i basicly need one of the girls on this website to marry me by june 30 and i am absolutely under zero obligation to send you pics of me,

https://twitter.com/dril/status/480410058602594305

@dril posted:

surgury to become japanese. Surgeruy to become Japanese

Literally Kermit
Mar 4, 2012
t

Maxwell Lord posted:

So what if someone does end up inventing suspended animation technology, only it has nothing to do with cryogenics?

Like there's the assumption that it's freezing or nothing, but if we allow, for the purposes of argument, that basically putting people in stasis and reviving them is possible, why does it have to be that? What if freezing is a dead end and the real breakthrough involves chemicals or electric signals to the nervous system or something totally left field?

It'd probably be faster and cheaper to develop a way to digitally copy the contents of a brain and a subject's DNA and just clone-copy at a future time. At least there's a 'do-over' if there are errors "recompiling", as opposed to repairing cellular frost burns.

That's where the evil AIs getcha, tho'.

The Vosgian Beast
Aug 13, 2011

Business is slow

"dril or Yudkowsky" is the best game.

Maxwell Lord
Dec 12, 2008

I am drowning.
There is no sign of land.
You are coming down with me, hand in unlovable hand.

And I hope you die.

I hope we both die.


:smith:

Grimey Drawer
My main point is that it kind of ruins the Pascal's Wager version of cryo support if it turns out that the end goal (immortality via suspended animation) is achieved in some other form altogether.

Literally Kermit
Mar 4, 2012
t

Maxwell Lord posted:

My main point is that it kind of ruins the Pascal's Wager version of cryo support if it turns out that the end goal (immortality via suspended animation) is achieved in some other form altogether.

Cryogenics supporters were bitcoin miners before bitcoin was a thing. Lots of resources expended on something riddled with faults that no one has a solution for now, and maybe never will. It's the end goal that should matter, not a sunk-cost attachment to a technology that "might" work in the future.

Random Axis
Jul 19, 2005
I'm just going to imagine that the benevolent AI overmind in the future is using Timeless Decision Theory to get the Less Wrong crowd hooked on cryonics. Since it almost certainly won't work, they won't be around to crap up the future.

GWBBQ
Jan 2, 2005


Luigi's Discount Porn Bin posted:

It does seem like a game strategy, doesn't it? The hoarding nerd impulse. "I can't use this useful thing now, what if I need it later??" *never uses it* Of course that's beside the point, because ultimately it's a flimsy justification for his laziness and general lack of motivation. He could put effort into things, IF ONLY...

I wonder what he imagines he's saving this imaginary brain-reprogramming for that's more useful than being able to put effort into things. I'm sure he's worked out all the priors and settled on this for a reason. Maybe he wants to be able to reprogram himself to enjoy being tortured by an AI, so it won't bother because timeless decision theory.
It's not just like a game strategy; expending discrete points of willpower to overcome difficult situations is a game mechanic straight out of the Vampire: The Masquerade rulebook.

GWBBQ
Jan 2, 2005


John Stossel has Zoltan Istvan on Fox News talking about cryonics.

ZerodotJander
Dec 29, 2004

Chinaman, explain!
Slate just put up an article about Roko's Basilisk.

http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html

The Vosgian Beast
Aug 13, 2011

Business is slow

Oh huh, I was wondering why some of that crowd were suddenly being pissy about Slate.

Evrart Claire
Jan 11, 2008

The Vosgian Beast posted:

Oh huh, I was wondering why some of that crowd were suddenly being pissy about Slate.

Are they accusing Slate of endangering a bunch of people by causing them to think about it?

Sulphagnist
Oct 10, 2006

WARNING! INTRUDERS DETECTED

It's great because if you come into the Slate article completely uninitiated and read Yudkowsky's first rant on the Basilisk, you'd of course assume he's joking and being sarcastic. Then the rug is pulled out from under you: these people genuinely believe this stuff.

The Time Dissolver
Nov 7, 2012

Are you a good person?
Wouldn't Yudkowsky's adoption of the Basilisk idea make LessWrong a cult or something close to it? Maybe that explains how eager he is to put it down.

chaosbreather
Dec 9, 2001

Wry and wise,
but also very sexual.

I had a thought about the whole "Beep boop, I'll simulate a thousand of you and torture you unless you do it and you don't know which one you are" thing. Now it seems to me that:

1. So far, extremely accurate measurements have confirmed that the curvature of the universe is flat, therefore boundless and infinite.
2. From observation, the probability of atoms coming together to form me exactly as I am now, and my environment, is non-zero.
3. An infinite number of chances for a non-zero probability mean an infinite number of occurrences.

Therefore, there are an infinite number of real mes who aren't simulations and can't be tortured by the AI. There are an infinite number of HALs too, but since each HAL can only run a finite number of sims, there will only ever be as many sims as non-sims, both equally infinite. Therefore there is nothing the computer can do to affect the likelihood that I am a simulation.
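
To spell out the jump from 2 and 3 in symbols (a rough sketch, assuming each far-apart region of space independently has the same non-zero chance of containing an exact copy of me):

Let $p > 0$ be the chance that any one such region contains a copy. The chance that the first $n$ regions all miss is $(1 - p)^n \to 0$ as $n \to \infty$, and since $\sum_n p = \infty$ over infinitely many regions, the second Borel-Cantelli lemma says that with probability 1, infinitely many regions contain a copy.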

Furthermore, because there are an infinite number of permutations of us as well as duplicates, HAL can't even threaten me. There are already an infinite number of torture sims from alternate HALs, no matter how I choose.

Timeless decision theory is thus even more garbage than was previously shown.

Iunnrais
Jul 25, 2007

It's gaelic.

chaosbreather posted:

2. From observation, the probability of atoms coming together to form me exactly as I am now, and my environment, is non-zero.
3. An infinite number of chances for a non-zero probability mean an infinite number of occurrences.

Infinite does not necessarily imply normal, in the sense of every possible configuration actually occurring with its expected frequency. That would only hold true if we could also demonstrate that the universe is normal, which is certainly not necessarily true. Even if it were true, proving it would be really, really difficult... we don't even know for certain whether pi is normal, for crying out loud, even if it really looks like it!
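
You can at least eyeball pi's digit frequencies in a few lines of Python (purely an empirical spot check, which proves nothing about normality; the 10,000-digit cutoff is arbitrary):

# Tally the first ~10,000 decimal digits of pi.
from collections import Counter
from mpmath import mp

mp.dps = 10010                    # working precision, with some slack
digits = mp.nstr(mp.pi, 10000)    # "3.14159..." as a string
counts = Counter(digits[2:])      # skip the leading "3."
for d in sorted(counts):
    print(d, counts[d])           # each digit lands near 1,000 if pi "looks normal"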

chaosbreather
Dec 9, 2001

Wry and wise,
but also very sexual.

Iunnrais posted:

Infinite does not necessarily imply a normal distribution. This would only hold true if we could also demonstrate that the universe is normal, which is certainly not necessarily true. Even if it was true, proving it would be really really difficult... we don't even know for certain if pi is normal for crying out loud, even if it really looks like it!

That's true, the cosmological principle may be unfounded, but it is still a physical axiom, so we're pretty sure about it. And if it's not, there are other physical and cosmological theories that would necessitate infinite copies, including eternal inflation, the Everett interpretation of quantum physics, brane theory, the cyclic model, singularity universes, and the mathematical ensemble. All peer-reviewed theories by established, accomplished scientists. Unlike anything that has ever been published on Less Wrong.

Lightanchor
Nov 2, 2012
Walter Kaufmann:

"Even if there were exceedingly few things in a finite space in an infinite time, they would not have to repeat in the same configurations. Suppose there were three wheels of equal size, rotating on the same axis, one point marked on the circumference of each wheel, and these three points lined up in one straight line. If the second wheel rotated twice as fast as the first, and if the speed of the third wheel was 1/π of the speed of the first, the initial line-up would never recur."

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

chaosbreather posted:

I had a thought about the whole "Beep boop, I'll simulate a thousand of you and torture you unless you do it and you don't know which one you are" thing. Now it seems to me that:

1. So far, extremely accurate measurements have confirmed that the curvature of the universe is flat, therefore boundless and infinite.
2. From observation, the probability of atoms coming together to form me exactly as I am now, and my environment, is non-zero.
3. An infinite number of chances for a non-zero probability mean an infinite number of occurrences.

Therefore, there are an infinite number of real mes who aren't simulations and can't be tortured by the AI. There are an infinite number of HALs too, but since each HAL can only run a finite number of sims, there will only ever be as many sims as non-sims, both equally infinite. Therefore there is nothing the computer can do to affect the likelihood that I am a simulation.

Furthermore, because there are an infinite number of permutations of us as well as duplicates, HAL can't even threaten me. There are already an infinite number of torture sims from alternate HALs, no matter how I choose.

Timeless decision theory is thus even more garbage than was previously shown.

I still think the best response to that torture thing is "and what was my sim's reply based on?" and just repeat that for each layer ("and what was that sim's sim's sim's reply based on?") until the AI admits that it was either lying or rolling dice at some layer, or claims to have used literally infinite layers. If it rolled dice, then all of its simulation is based on massive extrapolation from an assumption. If it has infinite layers, the AI already has literally infinite computational power and really couldn't improve on its capabilities by being released, so it has no real need to request release.

Also, smack it for lying if it's infinite, because that program can never halt.

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
Honestly I don't see how the possibility of an AI torturing me in the future would make me inclined to support said AI. Really, it seems to me like any decent person should/would oppose it at all costs...

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

The Iron Rose posted:

Honestly I don't see how the possibility of an AI torturing me in the future would make me inclined to support said AI. Really, it seems to me like any decent person should/would oppose it at all costs...
That's how it gets you!!! :byodood:

Dean of Swing
Feb 22, 2012
Roko's Ponzi.

abraham linksys
Sep 6, 2010

:darksouls:
I hadn't heard of LessWrong before, read the Slate article about Roko's Basilisk (via Twitter), then searched the forums for "Roko's Basilisk" to find out if anyone was talking about it. I'm not surprised there's a mock thread; this is the weirdest, dumbest thing I've ever heard of.

Don Gato
Apr 28, 2013

Actually a bipedal cat.
Grimey Drawer
The more I read about Roko's Basilisk, the more it sounds like someone's Matrix fanfiction gone wrong. It makes about as much sense under close observation.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Don Gato posted:

The more I read about Roko's Basilisk, the more it sounds like someone's Matrix fanfiction gone wrong. It makes about as much sense under close observation.
There we go.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

"Too good to be true" argues that the scientific community is deliberately suppressing evidence that vaccines cause autism.

Well, okay, the author doesn't actually seem to think vaccines cause autism (although he does end with an odd comment that even 3/43 studies showing vaccines cause autism would be surprisingly low, when in fact it would be surprisingly high, since that's over 5%). He just thinks that the evidence that vaccines don't cause autism is too great, and that some evidence that vaccines cause autism should have randomly appeared on its own. As further proof, he notes that studies establishing a link between vaccines and autism are less likely to be published than studies that don't. Which obviously proves that the evidence is being suppressed, because we rationalists all know correlation implies causation.

His proof is based on baseless assumptions, such as asserting without justification that all studies on a major hot-button issue are only using a 95% confidence threshold. (A quick search reveals that, surprise, that's not the case.) Even after stacking the deck with false assumptions, what he thinks is a resounding proof really isn't - he ends up with a p-value of .14 (which he misrepresents as .13), which is too high to say anything with even 95% confidence.
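
If you want to sanity-check that arithmetic yourself, the null-hypothesis numbers take a few lines of Python (assuming independent studies and the flat 5% false-positive rate the author himself posits):

# Under the null, how many of 43 honest studies "find" a link by chance?
from scipy.stats import binom

n, alpha = 43, 0.05
print(n * alpha)              # expected false positives: ~2.15
print(binom.sf(2, n, alpha))  # chance of 3 or more by luck alone: ~0.37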

All in all, pretty standard for lesswrong - bad math, false assumptions, and taking the anti-vaxxer side in the name of just asking questions. But what interested me was this line:

quote:

Having "95% confidence" tells you nothing about the chance that you're able to detect a link if it exists. It might be 50%. It might be 90%. This is the information black hole that priors disappear into when you use frequentist statistics.

Uh, no, that's information you don't have regardless of how you talk about your statistics. That information (the test's statistical power) depends on exactly what the link is and how strong it is, and since you don't know the exact strength and nature of the link, you can't find that information. This isn't a frequentist-vs-Bayesian issue; this is a "you just plain don't have that information" vs. "let's just make some poo poo up" issue.

What's with lesswrong's weird hate-on for frequentism?

The Vosgian Beast
Aug 13, 2011

Business is slow
I think it's a brand loyalty thing. They've invested enough in Bayes that they don't want to think they settled on the wrong interpretation of statistics.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

But there isn't a wrong interpretation. There's no reason for there to be a wrong interpretation.

In probability the different interpretations are just different ways of thinking about what probabilities mean. Sometimes one is more convenient for some situations than others, but neither is right or wrong, because they're just different approaches. It's not like, say, quantum mechanics interpretations, where arguably either there are many worlds or there aren't, so many-worlds is either right or wrong (but we can't tell which). Bayesianism and frequentism don't make that sort of objective claim about physical reality; they're just different approaches.

Yudkowsky claimed that his (wrong) solution to the "At least one of my children is male" problem was the natural result of Bayesianism and that the counterintuitive (correct) solution was the product of idiotic truth-denying frequentism. It's bizarre seeing them treat frequentism as a weird bogeyman.
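
For reference, the counterintuitive answer is 1/3, not 1/2, and you can brute-force it rather than take either camp's word for it (a minimal enumeration, treating the four birth orders as equally likely):

# "At least one of my two children is a boy; what are the odds both are?"
from itertools import product

families = list(product("BG", repeat=2))                  # BB, BG, GB, GG
at_least_one_boy = [f for f in families if "B" in f]      # rules out GG
both_boys = [f for f in at_least_one_boy if f == ("B", "B")]
print(len(both_boys), "/", len(at_least_one_boy))         # 1 / 3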

Telarra
Oct 9, 2012

Ah but you see, in that question the frequentist solution is wrong, because it doesn't psycho-analyze and then assign probabilities to why the person in the problem asked the question! Only Bayesianism can make use of all the information at hand! :pseudo:
