 
Control Volume
Dec 31, 2008

During school breaks I had a 26-hour sleep schedule because my body hates me. What eventually happens is that you stall out at around 5am and crash, or you force yourself through and fall asleep in the middle of the day to reset back to a sort-of-normal schedule. Basically you're really tired a lot.

Maybe the singularity fixes this clear flaw of human behavior.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out
I knew he was poly, which, whatever, different strokes. But "more advanced culture" and having a "slave" are just so cringeworthy.

Microcline
Jul 27, 2012

Squashy posted:

Sooo...this guy is Ulillillia with a -slightly- above average IQ, basically. He just prefers Harry Potter to Knuckles and cons people into donating money rather than computer parts.

Comparing Ulillillia to Yudkowsky is like comparing Ulillillia to Chris-Chan. Ulillillia seems to have shown at least a basic grasp of computer science in his programming attempts and analysis of video game bugs, along with an understanding of the limits of his knowledge. Yudkowsky's theories aren't just incorrect, they're not even consistent with each other.

I know Dunning-Kruger gets overplayed on Something Awful, but Yudkowsky is the perfect intersection of Troper Dunning-Kruger and Bay Area startup/venture capital Dunning-Kruger.

Ugly In The Morning
Jul 1, 2010
Pillbug

AlbieQuirky posted:

I knew he was poly, which, whatever, different strokes. But "more advanced culture" and having a "slave" are just so cringeworthy.

His mention of getting off on sadism certainly puts the infinite AI torture in a new light, that's for drat sure.

PsychoInternetHawk
Apr 4, 2011

Perhaps, if one wishes to remain an individual in the midst of the teeming multitudes, one must make oneself grotesque.
Grimey Drawer

Djeser posted:

The AI is omnipotent within that world but is limiting its influence in order to make sim you unsure whether they're sim you or real you. It doesn't need to threaten sim people for sim money, it's doing it to threaten real people into giving it real money. Sim you has no way of telling if they're real you, but sim you is supposed to come to the same conclusion, that they're probably a sim and need to donate to avoid imminent torture.

What I don't understand about this is that there's no incentive for this future AI to actually go through with torturing sims, because we in the past have no way of knowing whether it's actually doing so or not. Like, I understand the idea that sure, our concept of the future determines our actions that then create it, but once that future actually arrives, whatever actions are taken won't change the past. If only half the potential amount donated towards Imaginary Future AI happens, that AI still has no incentive to torture sims, because doing so won't increase the amount already donated. The AI could do absolutely nothing and whatever effect the thought problem posed by it has is already done with.

Asking a sim to donate doesn't change anything: the AI already knows the sim's response, so if it goes through with the torture it's just being a dickbag. If I am a sim, and an accurate one, then if I won't donate there was nothing I could have done to the contrary and torture was inevitable. So why go through the formality of asking?

WickedHate
Aug 1, 2013

by Lowtax

Ugly In The Morning posted:

His mention of getting off on sadism certainly puts the infinite AI torture in a new light, that's for drat sure.

If you don't donate to his foundation, an AI will make a simulation of you and tickle it.

Pigbog
Apr 28, 2005

Unless that is Spider-man if Spider-man were a backyard wrestler or Kurt Cobain, your costume looks shitty.
It will program a simulation of you that inflates like a balloon, and also will look like a My Little Pony.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
I'm pretty sure the optimal situation for his AI torture routine is to not actually torture anyone but have about two-to-three-hundred years of dumbasses waiting in the ranks to donate because Roko's basilisk seemed convincing enough. This is optimal if you're an AI that somehow ends up created through these turns of events or if you're the thumbsitting owner of an AI research program that's promising to develop one sometime soon.

It's like the threat of hell without all of the moral implications of actually sending people to hell!

Lightanchor
Nov 2, 2012
Simulations of you can't feel anything and are not possible in the first place.

Particularly damning for the Timeless ridiculousness, and something a "rationalist" should accept: there's no reason to suppose that, given a state of the universe A, there is only one possible sequence of events that brings about state A.

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

Lightanchor posted:

Simulations of you can't feel anything and are not possible in the first place.

Remember, there's no such thing as a probability of 1 or 0 in Yudkowskyland, so 'possible but exceedingly unlikely' is the most we can say.

Which is to say, it's impossible to describe the probability of (a && ~a) or (a given a), but whatever.

Chamale
Jul 11, 2010

I'm helping!



PsychoInternetHawk posted:

What I don't understand about this is that there's no incentive for this future AI to actually go through with torturing sims, because we in the past have no way of knowing whether it's actually doing so or not. Like, I understand the idea that sure, our concept of the future determines our actions that then create it, but once that future actually arrives, whatever actions are taken won't change the past. If only half the potential amount donated towards Imaginary Future AI happens, that AI still has no incentive to torture sims, because doing so won't increase the amount already donated. The AI could do absolutely nothing and whatever effect the thought problem posed by it has is already done with.

Asking a sim to donate doesn't change anything: the AI already knows the sim's response, so if it goes through with the torture it's just being a dickbag. If I am a sim, and an accurate one, then if I won't donate there was nothing I could have done to the contrary and torture was inevitable. So why go through the formality of asking?

Perhaps the AI could create an AI that would always go through with the torture, even when it's not in the AI's interest, to create a credible threat. For more information on this topic please see Dr. Strangelove.

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.

Swan Oat posted:

30 hour sleep cycle?? Is he going to sleep thirty hours per day or what? :psyduck:

The Harry Potter of MoR has a natural 30-hour wake-sleep cycle and is completely incapable of functioning on a normal 24-hour schedule.

As far as I can tell there is no narrative reason for this, other than that it makes him extra-special.

Strategic Tea
Sep 1, 2012

Also it makes him into Old Snake. Which I thought was the kind of irrational, impossible thing Yudkowsky hated in the original books, but sacrifice of war :words:

Ugly In The Morning
Jul 1, 2010
Pillbug

AATREK CURES KIDS posted:

Perhaps the AI could create an AI that would always go through with the torture, even when it's not in the AI's interest, to create a credible threat. For more information on this topic please see Dr. Strangelove.

Yeah, but if it's a simulation, then whether or not it goes through with it isn't going to change the credibility of the threat. There's no way to verify if a simulation is actually being tortured, or if the AI is just claiming it's torturing a bunch of them. It's quite a bit different from being able to see missile silos and troop movements. The intangibility really fucks with the credibility.

Also, this guy was pretty much hosed the day his parents named him "Eliezer". The only career path for a name like that is "insufferable turbonerd".

90s Cringe Rock
Nov 29, 2006
:gay:
He could be torturing people right now if only his parents had named him Elaizer.

Djeser
Mar 22, 2013


it's crow time again

PsychoInternetHawk posted:

What I don't understand about this is that there's no incentive for this future AI to actually go through with torturing sims, because we in the past have no way of knowing whether it's actually doing so or not. Like, I understand the idea that sure, our concept of the future determines our actions that then create it, but once that future actually arrives, whatever actions are taken won't change the past. If only half the potential amount donated towards Imaginary Future AI happens, that AI still has no incentive to torture sims, because doing so won't increase the amount already donated. The AI could do absolutely nothing and whatever effect the thought problem posed by it has is already done with.

Asking a sim to donate doesn't change anything: the AI already knows the sim's response, so if it goes through with the torture it's just being a dickbag. If I am a sim, and an accurate one, then if I won't donate there was nothing I could have done to the contrary and torture was inevitable. So why go through the formality of asking?

I bolded that part because Timeless Decision Theory argues that future actions can and do affect the past. Not in the sense of "well a prediction affects what you do" but literally saying "the events of the future have a direct impact on you in the present as long as you can predict them with perfect accuracy". It's nonintuitive and loving dumb, but in the impossible case where you've got a perfectly accurate prediction, it works.

Djeser posted:

Timeless Decision Theory works on the idea of predictions being able to influence events. For instance, you can predict (or, as Less Wrong prefers, simulate) your best friend's interests, which allows you to buy him a present he really likes. You, in the past, simulated your friend getting the present you picked out, so in a sense, the future event (your friend getting a present he likes) affected the past event (you getting him a present) because you were able to accurately predict his actions.

TDT takes this further by assuming that you have a perfect simulation, as if you were able to look into a crystal ball and see exactly what the future was. For example, you bring your friend two presents, and you tell him that he can either take the box in your left hand, or both boxes. You tell him the box in your right hand has some candy in it, and the box in your left hand either has an iPod or nothing. You looked into your crystal ball, and if you saw he picked both boxes, then you put nothing in the left box, but if he picked the left box, you put an iPod in there.

If you didn't have a crystal ball, the obvious solution would be to pick both. His decision now doesn't affect the past, so the optimal strategy is to pick both. But since you have a crystal ball, his decision now affects the past, and he picks the left box, because otherwise you would have seen him picking two boxes, and he wouldn't get the iPod.

Remember, they also believe that any rational AI will also agree with their theories on Bayesian probability and Timeless Decision Theory.
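
For the curious: the candy/iPod setup in that quote is just Newcomb's problem, and a few lines of toy code make the "his decision now affects the past" step concrete. This is only a sketch under the post's own perfect-predictor assumption; the payoff numbers and names are arbitrary stand-ins, not anything from Less Wrong.

code:

# Newcomb-style game from the quote: the right box always holds candy,
# the left box holds an iPod only if the (perfect) predictor foresaw
# the friend taking the left box alone.
CANDY, IPOD = 1, 100  # arbitrary utility stand-ins

def payoff(choice, predicted_choice):
    """Friend's payoff given his actual choice and the predictor's forecast."""
    left_has_ipod = (predicted_choice == "left only")
    if choice == "left only":
        return IPOD if left_has_ipod else 0
    return CANDY + (IPOD if left_has_ipod else 0)  # "both boxes"

# A perfect predictor means the forecast always equals the actual choice,
# so only the diagonal outcomes are reachable:
for choice in ("left only", "both boxes"):
    print(choice, "->", payoff(choice, predicted_choice=choice))
# left only  -> 100   (one-boxing "retroactively" put the iPod there)
# both boxes -> 1     (two-boxing "retroactively" left the box empty)

The off-diagonal cases, where the predictor guesses wrong, are exactly what the perfect-predictor assumption rules out, and they're the only thing separating this from an ordinary prediction.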

pigletsquid
Oct 19, 2012
Someone needs to pretend they're a rational AI and tell the Yuddites they're all sims who'll get tortured if they don't stop being smug douchebags, because their smug douchebaggery is jeopardising the creation of a rational AI in some vaguely-defined butterfly effect way.

YggiDee
Sep 12, 2007

WASP CREW
Doesn't this have the same basic flaw as Pascal’s Wager, where it assumes that Yudkowsky's 'charity' is the only one that could result in a magic super AI? What if this one goes under and 20, 30 years down the road there's some other AI research tank, and every cent given to Yudkowsky is actually slowing us down?

Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!

YggiDee posted:

Doesn't this have the same basic flaw as Pascal’s Wager, where it assumes that Yudkowsky's 'charity' is the only one that could result in a magic super AI? What if this one goes under and 20, 30 years down the road there's some other AI research tank, and every cent given to Yudkowsky is actually slowing us down?

Don't be ridiculous. The premise where Yudkowsky fails at something he tries is clearly false.

Dean of Swing
Feb 22, 2012
Can Super AI create a simulation that even it can't torture? That's the real question.

The Vosgian Beast
Aug 13, 2011

Business is slow

Ugly In The Morning posted:

Also, this guy was pretty much hosed the day his parents named him "Eliezer". The only career path for a name like that is "insufferable turbonerd".

They were probably hoping for "rabbi"

Numerical Anxiety
Sep 2, 2011

Hello.
To be fair, the one true prophet of the imminent robogod is kinda that, but for the sense of humor required.

Shyrka
Feb 10, 2005

Small Boss likes to spin!
You know, sure it seems pretty bad that the super-AI is running gazillions of simulations of me being tortured, but that assumes super-AI is the only game in town. What if super-duper AI comes out later and to make amends for the cruelty of its predecessor runs bajillions of simulations of me engaging in fruitful and wholesome activities? What if in the deep future after all black holes have evaporated there are a few hojillion Boltzmann Brain simulations of me just kinda tooling about and not doing anything really bad or good?

Taking all of that together the odds that I'm in a simulation are certainly astronomical, but the odds I'm in a torture simulation are pretty low. Seems to me like these guys aren't getting the big picture.
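
The arithmetic behind that hand-wave, as a toy sketch: the counts below are made-up stand-ins for the gazillions/bajillions/hojillions, and the only point is the conditional probability.

code:

# Made-up counts standing in for "gazillions", "bajillions", "hojillions".
torture_sims   = 10**9    # basilisk-AI torture simulations of you
wholesome_sims = 10**12   # the nicer successor AI's simulations of you
boltzmann_sims = 10**15   # deep-future Boltzmann-brain copies of you, just tooling about
real_yous      = 1

sims = torture_sims + wholesome_sims + boltzmann_sims
total = sims + real_yous
print("P(you are a sim)              =", sims / total)          # ~1: "astronomical"
print("P(torture sim | you are a sim) =", torture_sims / sims)  # ~1e-6: "pretty low"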

90s Cringe Rock
Nov 29, 2006
:gay:

YggiDee posted:

Doesn't this have the same basic flaw as Pascal’s Wager, where it assumes that Yudkowsky's 'charity' is the only one that could result in a magic super AI? What if this one goes under and 20, 30 years down the road there's some other AI research tank, and every cent given to Yudkowsky is actually slowing us down?

No, no, his charity isn't going to build the AI. It exists to tell the people building AIs to make friendly ones, and stop people creating unfriendly ones by mistake in some unspecified way. Shut up and multiply. Read the sequences. Donate optimally. Donate now. A life is surely worth 12.5 cents of your money!

Bubble Bobby
Jan 28, 2005
This guy has a slave and is probably INTJ. Seems pretty cool to me.

Zohar
Jul 14, 2013

Good kitty
Apparently Eliezer Yudkowsky contributed a couple of chapters to a volume titled 'Global Catastrophic Risks'. Turns out there's also a chapter in it with sections about 'the singularity and techno-millennialism', 'techno-apocalypticism' and 'symptoms of dysfunctional millennialism' :ironicat:

LordSaturn
Aug 12, 2007

sadly unfunny

Somebody tell me about "the sequences". I don't know what they are and gently caress going to the wiki.

The Vosgian Beast
Aug 13, 2011

Business is slow

LordSaturn posted:

Somebody tell me about "the sequences". I don't know what they are and gently caress going to the wiki.

Yudkowsky's blog posts on Less Wrong. They are called sequences because many of them are grouped up by topic.

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.

The Vosgian Beast posted:

Yudkowsky's blog posts on Less Wrong. They are called sequences because many of them are grouped up by topic.

There's a long sequence on quantum mechanics that I read a few years back, before I knew who Yudkowsky was. I might go back through them when I get the time - I'll report to the thread if I do.

The one thing I specifically remember is that he has a pet solution to the wave-function collapse problem (basically a variant on many-worlds) that he presents as settled scientific fact, to the point that he specifically mocks the Copenhagen interpretation as nonsense.

I remember that because it was the point where the Bells of Cognitive Dissonance began ringing in my head: though by no means a physicist myself, I was pretty sure I would have heard if a definitive explanation had been reached for wave-function collapse. Up until that point I'd been assuming the author of the sequence was an expert of some kind. (I'm... a little gullible sometimes.)

Jazu
Jan 1, 2006

Looking for some URANIUM? CLICK HERE
The singularity assumes you can make an AI smarter than you. Then you can run multiple copies in parallel in a data center, and Moore's law applies, so they can improve themselves very quickly.

So now you have an all-knowing oracle in a box. You say, "what should we do?". And it gives you some answers. And then there is literally no way you can possibly know if those answers are right. You can't debug something that is supposed to completely transcend you.

So you can only make use of an infinitely intelligent AI if, given the correct answer to any question, you could be convinced it was true through a reasonably short rational argument.

:rimshot:

Goon Danton
May 24, 2012

Don't forget to show my shitposts to the people. They're well worth seeing.

Wait, so it's accepted as truth in LessWrong that 0 and 1 are not valid probabilities, and that it is possible to simulate a future event with perfect accuracy. How the gently caress does he square that circle? If I run a perfect simulation of a future event, the probability of that event happening as I predicted it is _______. Does he ever address this?

Djeser
Mar 22, 2013


it's crow time again

As far as I can tell, no, but that's because he considers Timeless Decision Theory to be applicable to all cases of predictions, including as low as a 65% chance of accurately predicting a yes or no style question. The problem with this is, as other people have pointed out, without perfect accuracy, it's not weird timetraveling predictive decision bullshit. It's just predictions, which are covered under normal theories of decision-making.

The Vosgian Beast
Aug 13, 2011

Business is slow

Alien Arcana posted:

There's a long sequence on quantum mechanics that I read a few years back, before I knew who Yudkowsky was. I might go back through them when I get the time - I'll report to the thread if I do.

The one thing I specifically remember is that he has a pet solution to the wave-function collapse problem (basically a variant on many-worlds) that he presents as settled scientific fact, to the point that he specifically mocks the Copenhagen interpretation as nonsense.

I remember that because it was the point where the Bells of Cognitive Dissonance began ringing in my head: though by no means a physicist myself, I was pretty sure I would have heard if a definitive explanation had been reached for wave-function collapse. Up until that point I'd been assuming the author of the sequence was an expert of some kind. (I'm... a little gullible sometimes.)

Really, it's a crying shame Yudkowsky has never been up for a Nobel prize in physics.

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe

Djeser posted:

As far as I can tell, no, but that's because he considers Timeless Decision Theory to be applicable to all cases of predictions, including as low as a 65% chance of accurately predicting a yes or no style question. The problem with this is, as other people have pointed out, without perfect accuracy, it's not weird timetraveling predictive decision bullshit. It's just predictions, which are covered under normal theories of decision-making.

But even with perfect accuracy, it's still just a prediction. A prediction that will come true 100% of the time, sure, but still a perfectly ordinary prediction. What does TDT do that ordinary decision theory doesn't?

Atmus
Mar 8, 2002

Mr. Sunshine posted:

But even with perfect accuracy, it's still just a prediction. A prediction that will come true 100% of the time, sure, but still a perfectly ordinary prediction. What does TDT do that ordinary decision theory doesn't?

Nothing.

Other than getting a weirdo neckbeard Internet-Famous

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
Hey guys, I've got this drawing of Yudkowsky here, and I'm gonna draw an X on it unless he pays me money. This is exactly the same as me torturing the drawing, because I say that the drawing feels pain and it thinks it is Yudkowsky (see the thought bubble in the upper left corner). And as far as drawings go this is an exact replica of Yudkowsky to the point where you can't really be sure he's not just a drawing and that I'm not gonna draw the X on HIM. So he has to give me money otherwise he might have an X drawn on him.

See attached on the back of the drawing my long list of rules (in easily visible crayon) for why drawings feel 3^^^^3 torturebux worth of pain when an x is drawn on them*. Also I can make photocopies if I need to.

* this is the truth.

ol qwerty bastard
Dec 13, 2005

If you want something done, do it yourself!

The Vosgian Beast posted:

Really, it's a crying shame Yudkowsky has never been up for a Nobel prize in physics.

Mentioning Nobel Prizes jogged my memory and reminded me of a part in Harry Potter and the Methods of Rationality where Eliezer reveals that he has some Views about race!

quote:

And now even within Ravenclaw, his only remaining competitors were Padma Patil (whose parents came from a non-English-speaking culture and thus had raised her with an actual work ethic), Anthony Goldstein (out of a certain tiny ethnic group that won 25% of the Nobel Prizes),[...]

What a shock he thinks his own ethnic group is inherently more intelligent

Evrart Claire
Jan 11, 2008
Some of the discussion comments on those pages are amazing.

quote:



Torture,

Consider three possibilities:

(a) A dust speck hits you with probability one.
(b) You face an additional probability 1/(3^^^3) of being tortured for 50 years.
(c) You must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.

Most people would pick (c) over (a). Yet 1/(3^^^3) is such a small number that by blinking your eyes one more time than you normally would, you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/(3^^^3). Thus, (b) must be better than (c). Consequently, most people should prefer (b) to (a).
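
Formalised, the comment's argument is a three-line transitivity chain. This is only a sketch: the disutility symbols below are illustrative placeholders, not anything from the original comment, and N stands for 3^^^3.

code:

% Sketch of the quoted transitivity argument; u(.) denotes disutility,
% the symbols are placeholders, and N = 3^^^3.
\begin{align*}
  \mathbb{E}[a] &= u_{\text{speck}} \\
  \mathbb{E}[b] &= \tfrac{1}{N}\, u_{\text{torture}} \\
  \mathbb{E}[c] &= u_{\text{blink}} + \varepsilon\, u_{\text{torture}},
                 \qquad \varepsilon > \tfrac{1}{N} \\
  \text{(c) preferred to (a)} &\Rightarrow \mathbb{E}[c] < \mathbb{E}[a], \qquad
  \varepsilon > \tfrac{1}{N} \Rightarrow \mathbb{E}[b] < \mathbb{E}[c] \\
  &\Rightarrow \mathbb{E}[b] < \mathbb{E}[a],
  \quad\text{i.e. (b) should be preferred to (a).}
\end{align*}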

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

ol qwerty bastard posted:

Mentioning Nobel Prizes jogged my memory and reminded me of a part in Harry Potter and the Methods of Rationality where Eliezer reveals that he has some Views about race!

Not race, dude. He's talking about 'culture' and 'ethnicity.' It's different, because if he was talking about race he'd be racist and words hurt his delicate lazy feelings.

potatocubed
Jul 26, 2012

*rathian noises*

Mr. Sunshine posted:

But even with perfect accuracy, it's still just a prediction. A prediction that will come true 100% of the time, sure, but still a perfectly ordinary prediction. What does TDT do that ordinary decision theory doesn't?

It does terrible things with causality and human psychology.

Thinking of the real world for a moment, consider the possibility that you live in a fully deterministic universe. Every decision you make is predetermined, and a sufficiently cunning person (or AI) with access to the right data could flawlessly model your entire life. None of this helps answer the question "What would you like for breakfast?"

This is because we have limited knowledge of the future, so even if the universe is fully determined we have to live on the assumption that it isn't.

Now, TDT takes that away. You now have 100% complete knowledge of the future. What do you have for breakfast?

The answer is easy - you look ahead to your next breakfast and see the confluence of events which led to you choosing toast. So you choose to eat toast. Now, as your prediction of the future is perfect you must eat toast. You're going to eat toast for breakfast for no other reason than because you chose to eat toast, and you chose to eat toast because you're going to eat toast, and so on ad infinitum.

Every decision becomes a stable time loop, the outcome only the outcome because you know what the outcome is. God knows what happens to a human consciousness exposed to this sort of thinking, but I expect it would implode in short order.

But for more fun, imagine what happens when two of these future-predicting TDT minds meet. Essentially, any interaction between them becomes an infinite recursion of '...if you do that then I'd do this...' on both sides. In some (most?) situations they might arrive at a stable equilibrium (let's split the cake 50/50) and they'd do it instantly, but it's fairly trivial to construct a game-theoretic position with no stable pure-strategy equilibrium -- in fact, I'll do it now.

quote:

AI Alice has two buttons marked X and Y. She wins if she and Bob press different buttons. AI Bob also has two buttons marked X and Y. He wins if they press the same buttons. The two AIs know about each other but cannot communicate.

Two TDT minds cannot solve that problem without establishing a wider context, i.e. one of them being okay with losing, or sabotaging the other's buttons or something. One TDT mind can, because its opponent will not have perfect knowledge of the future, but two are basically hosed.
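
A tiny sketch of why: the quoted game is the classic matching-pennies structure, and brute-forcing all four pure button profiles shows none of them is stable, so mutual perfect prediction has nothing to converge to. The code is only an illustration of the setup in the quote, nothing more.

code:

from itertools import product

# Alice wins iff the buttons differ; Bob wins iff they match.
def alice_wins(a, b): return a != b
def bob_wins(a, b):   return a == b

def other(button): return "Y" if button == "X" else "X"

for a, b in product("XY", "XY"):
    # A profile is stable only if neither player gains by switching alone.
    alice_defects = (not alice_wins(a, b)) and alice_wins(other(a), b)
    bob_defects   = (not bob_wins(a, b))   and bob_wins(a, other(b))
    print((a, b), "stable" if not (alice_defects or bob_defects) else "loser flips")
# Every profile prints "loser flips": whoever is losing always switches,
# so two perfect predictors just chase each other around forever.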

Of course, you could argue that we have free will - in which case perfect future predictions become impossible and so does TDT.
