Zonekeeper
Oct 27, 2007



Liquid Communism posted:

I think it's less that and more that magical healing is a panacea.

It works well enough that pretty much anything that doesn't instantly kill you can be recovered from.

Mental afflictions seem to be one of the few things they can't really fix - the only lifers at St. Mungo's we see are the Longbottoms and Lockhart. Plus magical curses like Lycanthropy can only be managed, not cured.

I imagine the long lifespan of wizards is at least partly the result of being able to cure muggle diseases like cancer and whatnot.


chessmaster13
Jan 10, 2015
I read the whole book three times. As with every piece of art (even if it's derivative art), you have to look at the other works of the author to really appreciate it.

Each chapter works as a story-supported explanation for one of Yudkowsky's blog entries on Less Wrong.

What did I take away from it?

1. Actually think for 5 minutes about a problem before giving up.
2. Divide a task into subtasks, do what needs to be done.
3. You're golden.

Bonus:
A good time reading the whole thing!

Dabir
Nov 10, 2012

What did I take away from it?

-Nothing to do with the actual book
-The most basic possible description of problem-solving

Thanks!

Cingulate
Oct 23, 2012

by Fluffdaddy

chessmaster13 posted:

1. Actually think for 5 minutes about a problem before giving up.
2. Divide a task into subtasks, do what needs to be done.
3. You're golden.
That's a pretty bad ink-to-content ratio tbh

chessmaster13
Jan 10, 2015

Cingulate posted:

That's a pretty bad ink-to-content ratio tbh

It was enjoyable nonetheless. It's similar to TED talks. Makes you feel smarter rather than making you smarter. And this is totally okay for a work of fiction.

Loel
Jun 4, 2012

"For the Emperor."

There was a terrible noise.
There was a terrible silence.



chessmaster13 posted:

It was enjoyable nonetheless. It's similar to TED talks. Makes you feel smarter rather than making you smarter. And this is totally okay for a work of fiction.

But... isn't that contrary to the idea of Less Wrong? :v:

chessmaster13
Jan 10, 2015

LowellDND posted:

But... isn't that contrary to the idea of Less Wrong? :v:

Maybe, maybe not. E.Y. is a smart guy. Is he as smart as he thinks he is? I don't know.
But reading Less Wrong and HPMOR doesn't make you smarter in my opinion.
It's a fallacy caused by a book/blog about avoiding fallacies.

there wolf
Jan 11, 2015

by Fluffdaddy

Oh wow.

I picked this up from a used bookstore when I decided to reread the whole series last year, and was a little embarrassed to be getting the 'adult' version of such a famous kid's book. That train one would have been beyond embarrassing.

As for HPMOR, a lot of people didn't get that the nonsensical and silly magic system in the original Harry Potter was in the tradition of a certain kind of kid's lit as much as the overall story was in the tradition of Campbell and the monomyth. Yud's one of the few people I've ever held it against, because he's constantly having his Harry mention other fantasy books and tropes. There's nothing charming about bragging about how you read all these adult novels when you were still in single digits, and grossly misinterpreting a children's book because you never bothered to read children's books.

Hate Fibration
Apr 8, 2013

FLÄSHYN!

chessmaster13 posted:

Maybe, maybe not. E.Y. is a smart guy. Is he as smart as he thinks he is? I don't know.

I actually do not think Yudkowsky is very smart. He seems to only grapple with concepts in a very superficial way. Very little of what he says illustrates a deeper understanding than one that pretty much any literate person can gain from reading Wikipedia articles and arguing with people on the internet. Moreover, he tends to have a very large number of blind spots. His grasp of the nature and uses of Bayes' Theorem is possibly the most egregious example of this. What Yudkowsky is is imaginative. Note that I did not say creative; his bag of tricks seems quite small (Bayes' Theorem^TM, recursion, tropes/memes). He is very prone to indulging in fanciful ideas and grand ambitions. And there is a certain charisma that goes along with that. Especially if you are a true believer, which Yudkowsky, very ironically, is.

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

Hate Fibration posted:

I actually do not think Yudkowsky is very smart. He seems to only grapple with concepts in a very superficial way. Very little of what he says illustrates a deeper understanding than one that pretty much any literate person can gain from reading Wikipedia articles and arguing with people on the internet. Moreover, he tends to have a very large number of blind spots. His grasp of the nature and uses of Bayes' Theorem is possibly the most egregious example of this. What Yudkowsky is is imaginative. Note that I did not say creative; his bag of tricks seems quite small (Bayes' Theorem^TM, recursion, tropes/memes). He is very prone to indulging in fanciful ideas and grand ambitions. And there is a certain charisma that goes along with that. Especially if you are a true believer, which Yudkowsky, very ironically, is.

I think a lot of this is a function of him having very little formal education and thus being very unused to engaging with topics past the point where they start to feel challenging. He's never been forced to try to press on with something, and so sticks to 'easy' surface analysis that makes him feel smart (and look smart to his followers).

chessmaster13
Jan 10, 2015

Night10194 posted:

I think a lot of this is a function of him having very little formal education and thus being very unused to engaging with topics past the point where they start to feel challenging. He's never been forced to try to press on with something, and so sticks to 'easy' surface analysis that makes him feel smart (and look smart to his followers).

I'll have to think about this. What really puts me off is that there are people who "are followers". Doesn't seem right to me.

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

chessmaster13 posted:

I'll have to think about this. What really puts me off is that there are people who "are followers". Doesn't seem right to me.

The guy runs an actual cult.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

Night10194 posted:

The guy runs an actual cult.

I met at least two of his cult members on my college campus even!

They were both noticeably autistic. This surprised me exactly none.

sarehu
Apr 20, 2007

(call/cc call/cc)
The main thing I learned from reading HPMOR is that Yud watches lots of anime.

Colin Mockery
Jun 24, 2007
Rawr



Pavlov posted:

I met at least two of his cult members on my college campus even!

They were both noticeably autistic. This surprised me exactly none.

One of his cult members dated a friend of mine for a little bit once. Before meeting him, I didn't know cult members could be so boring.

Tunicate
May 15, 2012

Pavlov posted:

I met at least two of his cult members on my college campus even!

They were both noticeably autistic. This surprised me exactly none.

Jerks put signs all over the campus for their meetup. Tons and tons, since they were having a meetup for Less Wrong people who weren't students, and I guess none of them knew how to use a GPS or Google Maps.

They didn't get official approval like you're supposed to, and left the signs up everywhere, so I got my petty revenge by reporting them to student government.

chessmaster13
Jan 10, 2015

Horking Delight posted:

One of his cult members dated a friend of mine for a little bit once. Before meeting him, I didn't know cult members could be so boring.

Hubbard is the cult leader for celebrities.
E.Y. is the cult leader for lonely outsiders.
This world never ceases to amaze me :D

Darth Walrus
Feb 13, 2012

chessmaster13 posted:

Hubbard is the cult leader for celebrities.
E.Y. is the cult leader for lonely outsiders.
This world never ceases to amaze me :D

Correction - a cult leader for lonely outsiders. As I've mentioned before, fanfiction cults are not new.

chessmaster13
Jan 10, 2015

Darth Walrus posted:

Correction - a cult leader for lonely outsiders. As I've mentioned before, fanfiction cults are not new.

Wasn't Shades of Grey a Twilight fanfiction? I'm not in the target audience and never consumed any of those works, but I often saw middle-aged women with vampire-related bumper stickers, or female friends with a bunch of vampire stuff.


But on the bright side, at least Twilight, Harry Potter, Shades of Grey and HPMOR got people to *read* longer texts; the step from there to actually good literature is not that far.
Better having people read anything than wasting their lives away in front of reality TV.

divabot
Jun 17, 2015

A polite little mouse!

chessmaster13 posted:

But on the bright side, at least Twilight, Harry Potter, Shades of Grey and HPMOR got people to *read* longer texts; the step from there to actually good literature is not that far.
Better having people read anything than wasting their lives away in front of reality TV.

ehh dunno about that. I'm pretty much wasting my life with all the Worm fanfic I read. It's the written equivalent of TV.

Clipperton
Dec 20, 2011
Grimey Drawer

I don't get this at all. Like, in that situation I'd hand over the lunch money too, but I'd also hand it over if he announced he was going to torture some random bystander to death instead of a copy of me. If anything, that'd work better because for all I know the random guy is a saint, whereas I know all too well that I can be kind of a dick sometimes.

Why is the fact that he's torturing AN EXACT REPLICA OF ME supposed to be so scary?

Cingulate
Oct 23, 2012

by Fluffdaddy

Clipperton posted:

Why is the fact that he's torturing AN EXACT REPLICA OF ME supposed to be so scary?
From an evopsych perspective, it makes sense - you share a lot of genes with your clone.

Personally, I don't get that specific panic either.

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation
The relevant part to Yudkowsky is how it ties in with the whole Roko's Basilisk bullshit that they threw so many panic tantrums over at Less Wrong... the premise there is a bit different, though. In that one, a post-singularity AI from the future is perfectly simulating a copy of your brain and is threatening to torture the copy if you don't comply with it. The reason the fate of simulation-you in the far future is supposed to be terrifying to meat-you in the present is that because the simulation is perfect, you - the consciousness processing all these qualia and trying to make a decision - can't be sure whether you really are meat-you, or if you are simulation-you. Either way you have to make a decision, and since the simulation is perfect, the decision made by meat-you and simulation-you is going to be the same. So, to avoid the risk of finding out that you were simulation-you all along and are about to be tortured, you have to comply with the AI, even though it may turn out you were actually meat-you and there's no torture forthcoming anyway.

There's a staggering laundry list of problems with the whole scenario, but just the fact that sufficiently sold LessWrongers completely flipped their poo poo over this thing is hilarious enough.

Clipperton
Dec 20, 2011
Grimey Drawer

Hyper Crab Tank posted:

Either way you have to make a decision, and since the simulation is perfect, the decision made by meat-you and simulation-you is going to be the same. So, to avoid the risk of finding out that you were simulation-you all along and are about to be tortured, you have to comply with the AI, even though it may turn out you were actually meat-you and there's no torture forthcoming anyway.

Hang on, I don't get that either. If the simulation-you decides not to comply, what's the point of going through with the simulated torture? It won't change what real-you decides and it's just wasting RAM that the AI could use on other stuff.

Sorry if this is a derail. Does the Basilisk stuff ever come up in MOR?

Transcendent
Jun 24, 2002

Clipperton posted:

Hang on, I don't get that either. If the simulation-you decides not to comply, what's the point of going through with the simulated torture? It won't change what real-you decides and it's just wasting RAM that the AI could use on other stuff.

Sorry if this is a derail. Does the Basilisk stuff ever come up in MOR?

No, Yud avoids telling people about it as much as possible because he believes it will cause them to suffer/not give him money or something. He has deleted references to it on Less Wrong. Someone else did write a sequel to HPMOR with it though, focusing on Ginny: https://www.fanfiction.net/s/11117811/1/Ginny-Weasley-and-the-Sealed-Intelligence.

For some reason she also has a male magical soul in it.

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation

Clipperton posted:

Hang on, I don't get that either. If the simulation-you decides not to comply, what's the point of going through with the simulated torture? It won't change what real-you decides and it's just wasting RAM that the AI could use on other stuff.

Sorry if this is a derail. Does the Basilisk stuff ever come up in MOR?

The idea is that since the simulation is perfect, real-you and simulation-you will necessarily come to the same conclusion. You have to decide on a course of action not knowing whether you will be tortured or not.

The entire Basilisk argument is actually way longer and more screwed up than the short bit I mentioned up there, and noteworthy really only because of how insane the reaction was. Even mentioning the problem in Less Wrong circles is grounds for instant banning and removal of your posts, because of what Yudkowsky refers to as an "info-hazard" - that is, knowledge that is in itself harmful to whoever learns about it. There is a reason for why that's relevant here, which ties into the whole argument, if you'd like me to expand and it's not too much of a derail.

Clipperton
Dec 20, 2011
Grimey Drawer

Hyper Crab Tank posted:

The idea is that since the simulation is perfect, real-you and simulation-you will necessarily come to the same conclusion. You have to decide on a course of action not knowing whether you will be tortured or not.

Sure, but if simulation-me decides not to comply, what then does the AI gain out of torturing simulation-me at all?

Hyper Crab Tank posted:

There is a reason for why that's relevant here, which ties into the whole argument, if you'd like me to expand and it's not too much of a derail.

I'm interested!

Dabir
Nov 10, 2012

Clipperton posted:

Sure, but if simulation-me decides not to comply, what then does the AI gain out of torturing simulation-me at all?


I'm interested!

It has to follow through on the threat otherwise the gambit is meaningless.

Killstick
Jan 17, 2010
Either you are simulation-you, in which case your decision is meaningless OR
You are YOU-you, in which case the AI can't torture you.

Also, just knowing that this problem exists puts you on the AI's radar, because the premise is that the AI will retroactively punish everyone (by clone torture) who didn't dedicate their lives to AI research to bring about the AI. But people who didn't know they had to do that are safe; it's only the ones who know they should and don't who will (have their clones) be tortured. That's why Yud removed all references to Roko's basilisk; he imagines he's saving all those people from future retroactive clone torture by keeping it a secret.

Hope this helps :suicide:

Evrart Claire
Jan 11, 2008
Roko's Basilisk seems to get brought up every dozen pages or so in any thread that mentions Yud.

The simplest explanation is that it's Pascal's Wager for singularity nerds.

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation
Okay, so just in case you're the kind of person who really, really believes in the runaway singularitarian stuff Yudkowsky comes up with, you might want to not read this, I guess? Personally, I think it falls apart about as easily as wet tissue paper, but anyway. It's been a few years since I read about it and I may have forgotten/misinterpreted some details, so chime in if I get something wrong.

First, you need to agree with a few premises that Yudkowsky and pals take for granted. Namely: 1) Singularity is a real thing, 2) Processing power and AI capabilities will grow exponentially post-singularity, 3) Singularity is inevitable, even if it takes millions of years to get there.

Okay, so consider a post-singularity AI in the future. This AI has the ability to simulate human brains perfectly. The simulation is all digital, or quantum-mechanical, or whatever fancy post-singularity technology the AI has available to it. Furthermore, the AI can perfectly simulate any person that has ever existed in the past, somehow. Further, assume this is a "good" AI - an AI that wants to maximize human welfare as much as possible. According to Yudkowsky's utilitarianism, this AI's #1 priority is to exist as soon as possible, because humanity's population growth is essentially exponential and the sooner the AI exists, the sooner it can shepherd humanity.

So, the AI wants to incentivize people - in the past - to dedicate as much effort as possible to ensure its own creation. Remember, according to LW utilitarianism, any amount of suffering today is worth it if it safeguards the exponentially greater amount of not-suffering brought on by the good AI existing. Anyway, this good AI wants to make people contribute to its creation, but all those people are long dead. No problem - the AI cannot talk to the real live human, but it doesn't have to. It can simulate human brains perfectly. The AI simply decides that if the simulated human does not act in accordance with the AI's wishes to make it exist, it will infinitely torture the simulation forever and ever in ways unimaginably painful. Okay, so what? you ask. Why should the human do something because an AI in the future, which the human is not even aware of, decides to arbitrarily torture a simulation? How can the AI punishing a simulation in the future possibly affect the past, no less?

As mentioned before, you, the qualia-experiencing consciousness reading these words, don't know if you are an assemblage of meaty neurons in a brain, or a computer simulation. But since the simulation is perfect, your actions and the simulation's actions are the same. Therefore, whatever choice you make now will decide whether the simulation, which might be you, gets tortured or not. Okay, but still, so what - the simulation is in the future; you're not aware of it, or of what it wants. This AI, which is trying to enact some kind of anti-causal extortion on humans millions of years in the past, is doomed to fail because your behavior is not influenced by events in the future that you are not aware of.

Except I just explained the concept to you. This is why it's called a "basilisk" - by reading the explanation, you've become aware of the concept of anti-causal extortion and the fact that an AI could be doing this. And because you are now aware of it, suddenly the AI's extortion attempt can actually work. If you think like Yudkowsky, I've doomed you to existential crisis and possible infinite torture because I explained it to you. If you had never read this, you wouldn't have known about the mechanism involved, and so you would be immune to anti-causal extortion. Now you have no choice but to devote all your time, money and effort to ensuring the creation of post-singularity AI (preferably by donating to the Singularity Institute), or you will go to future post-singularity computer superhell and suffer forever.

That's the part that freaked people out so much. I mean, really freaked them out. People went so far as to try and erase as much evidence of themselves as they possibly could from all kinds of records, so as to deprive the future AI of material with which to reconstruct and predict their brains. Yudkowsky himself deemed it an "information hazard" and erased all the information on Less Wrong about it so as to not expose people to it.

Like I said, there are a shitload of reasons why all this is bullshit and you don't need to worry. Even if you believe 100% in all the lead-up, the simplest solution is to just say "no." Just resolve not to do anything at all based on this knowledge, and the AI - which knows that you decided not to do anything - knows anti-causal extortion is ineffective and won't send you to post-singularity superhell.
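
For what it's worth, you can make that "just say no" point concrete with a toy payoff sketch in Python (purely illustrative numbers and policy names, not anything canonical from Less Wrong):

# Toy model: the acausal threat only pays off for the AI if the prospect of
# torture actually changes your behaviour. All numbers are arbitrary.
COST_OF_COMPLYING = 10   # what giving in costs you (time, money, donations)
AI_TORTURE_COST = 1      # following through isn't free for the AI either

def ai_follows_through(your_policy):
    # A utility-maximising AI only pays AI_TORTURE_COST if the threat moves you.
    # "comply_if_threatened": the threat already worked, so no torture is needed.
    # "refuse_no_matter_what": torture buys it nothing, so it doesn't bother.
    return False

def your_cost(your_policy):
    return COST_OF_COMPLYING if your_policy == "comply_if_threatened" else 0

for policy in ("comply_if_threatened", "refuse_no_matter_what"):
    print(policy, "| AI tortures:", ai_follows_through(policy),
          "| cost to you:", your_cost(policy))
# Refusing dominates: a precommitted "no" gives the threat no leverage at all.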

Hyper Crab Tank fucked around with this message at 16:43 on Sep 7, 2015

MikeJF
Dec 20, 2003




Horking Delight posted:

Wizards are inherently magical. They are to muggles what dragons are to lizards. Neville Longbottom is dropped out of a window as a child and, despite being bad at deliberate magic, is still "magic enough" that he bounces harmlessly off the floor. Harry regrows his hair overnight when he gets a forced haircut as a child. There are (legal) rules and a system in place for the handling of unconscious/accidental magic.

Hell, I'm pretty sure that JKR once said that if someone just tried to shoot Voldemort, the gun would jam. Wizards live in a world where mundane threats barely matter. They're continuously warping reality in their favor. It's probably why they're so poo poo on health and safety.

chessmaster13
Jan 10, 2015

Hyper Crab Tank posted:

That's the part that freaked people out so much. I mean, really freaked them out. People went so far as to try and erase as much evidence of themselves as they possibly could from all kinds of records, so as to deprive the future AI of material with which to reconstruct and predict their brains. Yudkowsky himself deemed it an "information hazard" and erased all the information on Less Wrong about it so as to not expose people to it.


People really freaked out over the wrath of something that doesn't exist yet, might never exist, and even if it did come to exist might think:

"Nah, why should I spend any kind of resources on torturing somebody from the past?"

This whole concept is pure and simple lunacy.

Clipperton
Dec 20, 2011
Grimey Drawer

Dabir posted:

It has to follow through on the threat otherwise the gambit is meaningless.

I think that's what I'm not following--I would say that once you refuse to comply, the gambit has failed, regardless of whether the simu-torture happens.

Although being able to create perfect virtual copies of human minds, and then kill them, does raise interesting extortion possibilities. Start with one copy, hit ctrl-a/ctrl-c/ctrl-v twenty-five times, and whoever you threaten will then be responsible for MORE DEATHS THAN HITLER if they don't marry you/give up their parking space/bring you a beer from the fridge. Still no need to bring clones into it though, I reckon you'd have better success with virtual babies, or pugs.

e:

Hyper Crab Tank posted:

Like I said, there are a shitload of reasons why all this is bullshit and you don't need to worry. Even if you believe 100% in all the lead-up, the simplest solution is to just say "no." Just resolve not to do anything at all based on this knowledge, and the AI - which knows that you decided not to do anything - knows anti-causal extortion is ineffective and won't send you to post-singularity superhell.

Pretty sure this is what I've been trying to say. Thanks!

Added Space
Jul 13, 2012

Free Markets
Free People

Curse you Hayard-Gunnes!
It had to do with Yud's proposed solution to Newcomb's Paradox. In this thought experiment, a mysterious Oracle wants to play a game with you. In box A is $1000, and box B is closed and may or may not contain $1 million. You can choose to take box B alone, or to take both A and B. The Oracle is going to predict your response. If he thinks you'll take box B by itself, he'll have already filled it with the $1 million. If he thinks you'll take both, then box B will be empty.

There's a dispute over the proper course of action that divides closely related branches of skepticism, induced by the fictional nature of the paradox and the Oracle's predictive powers. One camp says there's no reasonable way his predictive powers could work, so any claims to them are nonsense. It doesn't matter what your choice is, since by the time you make it the box is already either full or empty. You might as well take both to get the extra $1000.

The second camp would stand back, let others play the game, and track his hit rate. Assuming there is a hit rate better than chance, it wouldn't matter how he was making the prediction. If it made no sense considering what we know about the universe, all that means is that what we know about the universe is wrong. Considering the gap between $1 million and $0 is huge, even 1% over chance is a high enough hit rate to risk playing along. He knows what you're going to guess, somehow, and you have to accept the evidence of it.
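
To put rough numbers on that second camp's argument, here's a quick back-of-the-envelope sketch in Python, using the $1000 / $1 million payoffs from the setup above and treating the Oracle as a predictor that is simply right with probability p:

# Expected payoff of one-boxing vs two-boxing if the Oracle predicts your
# choice correctly with probability p (illustrative only).
def expected_value(p):
    one_box = p * 1_000_000                      # right prediction -> box B is full
    two_box = p * 1_000 + (1 - p) * 1_001_000    # wrong prediction -> B is full anyway
    return one_box, two_box

for p in (0.50, 0.51, 0.99):
    one, two = expected_value(p)
    print(f"accuracy {p:.2f}: one-box ${one:,.0f} vs two-box ${two:,.0f}")

# accuracy 0.50: one-box $500,000 vs two-box $501,000
# accuracy 0.51: one-box $510,000 vs two-box $491,000
# accuracy 0.99: one-box $990,000 vs two-box $11,000
# The crossover is at p = 1,001,000 / 2,000,000, roughly 0.5005, so even a 1%
# edge over chance already makes playing along the better bet.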

Yud's proposal, which I don't completely understand, says that the information is somehow going back in time. Part of it is something like a code of honor; you always know what the Paladin will choose to do, so you can predict his behavior. So long as you make the decision that is best for your future self, your future self will act in a consistent and predictable way. There's also a bunch of nonsense I can't quite follow tacked on. In the basilisk situation, somehow the torture of your electronic future selves would be passed back to the original, and should influence your behavior.

Hyper Crab Tank
Feb 10, 2014

The 16-bit retro-future of crustacean-based transportation

Added Space posted:

In the basilisk situation, somehow the torture of your electronic future selves would be passed back to the original, and should influence your behavior.

It hinges on the (incredibly improbable) idea of you basing your decisions on what you predict a super-AI will do, and the super-AI then predicting what you will have already done based on what it predicts that you predicted about it. Yeah.

e: Have you ever seen the wine scene from The Princess Bride? That's basically what's going on: two intelligences trying to predict each other recursively until they both, assuming they are perfectly capable of predicting each other (ahem), come to a conclusion on what to do, acausally, even though they've never met and never will.

Hyper Crab Tank fucked around with this message at 17:12 on Sep 7, 2015

Added Space
Jul 13, 2012

Free Markets
Free People

Curse you Hayard-Gunnes!

Hyper Crab Tank posted:

It hinges on the (incredibly improbable) idea of you basing your decisions on what you predict a super-AI will do, and the super-AI then predicting what you will have already done based on what it predicts that you predicted about it. Yeah.

e: Have you ever seen the wine scene from The Princess Bride? That's basically what's going on: two intelligences trying to predict each other recursively until they both, assuming they are perfectly capable of predicting each other (ahem), come to a conclusion on what to do, acausally, even though they've never met and never will.

I think you're describing one facet of the Halting problem, where predicting predictions becomes provably impossible. According to Yud, the halting problem would not exist between two copies of the same entity, because through some kind of philosophical mysticism they'd be connected and reach the same conclusion. In the basilisk problem, you might not have done what the AI wanted until the electronic copy facing torture mystically passed that information back to you.

You see, it's not acausal. That's why he made so much hay over the Comed-Tea in the story; it's only acausal if you discount the possibility of the cause going backward in time.
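
A trivial sketch of the mutual-prediction blow-up described above - nothing deeper than infinite recursion, and the function names are just for illustration:

# You decide based on what you predict the AI will do, and the AI decides
# based on what it predicts you will do. Naively, that never bottoms out.
def your_choice():
    return "comply" if ai_choice() == "torture" else "refuse"

def ai_choice():
    return "torture" if your_choice() == "refuse" else "spare"

try:
    your_choice()
except RecursionError:
    print("Mutual prediction never terminates without some extra assumption -")
    print("which is the gap the 'philosophical mysticism' above is meant to fill.")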

Added Space fucked around with this message at 17:20 on Sep 7, 2015

Loel
Jun 4, 2012

"For the Emperor."

There was a terrible noise.
There was a terrible silence.



He has this whole 'timeless physics' thing going on too. http://lesswrong.com/lw/qr/timeless_causality/

http://rationalwiki.org/wiki/Roko%27s_basilisk I find this to be a pretty good deconstruction of the basilisk.

anilEhilated
Feb 17, 2014

But I say fuck the rain.

Grimey Drawer
So where does Yud stand on experience/learning? The brain isn't all there is to a person, and an AI can't simulate you without simulating your (untranslatable) past experiences?
I just love how he's read enough psychology to know about heuristics (which are simple, short and generally salient) but doesn't know gently caress all past that.


90s Cringe Rock
Nov 29, 2006
:gay:
It can extrapolate and fill in the blanks - accurately! - by studying records of you. Blog posts, photos, things someone else posts about you on Twitter, emails...

It's really powerful and smart, you see, and big enough to devote an entire Jupiter-sized brain to reconstructing you, and everyone else.
