Lightanchor
Nov 2, 2012
So do robots do science experiments or how do they figure out poo poo


Djeser
Mar 22, 2013


it's crow time again

Ineffable posted:

I'm not quite sure that's what they're doing - if I'm reading it right, their argument is that the expected number of people who get tortured will be 1/10^10 multiplied by some arbitrarily high value. High enough that you expect at least one person will be tortured, so you hand over your :10bux:.

The real problem is that they're assigning a positive probability to the event that some guy is actually Q.

In that situation the person is threatening to torture some number of people whose existence you're unsure of. (Whether it's because he claims he's Q, or from The Matrix, or that he's God.) That means you will have caused one torturebux of pain in the one-in-a-billion case that this man really is Q/Neo/God. Or the mathematical equivalent: in expectation you cause one one-billionth of a torturebux. By refusing to give him money, you're saying that your ten bucks are worth more than one billionth of a torturebux.

But if there's a trillion people and a one in a billion chance that this guy is telling the truth, that makes refusal cost a thousand torturebux. Now when you refuse, you're saying that you think your ten bucks are worth more to you than a thousand torturebux, each of which is worth a lifetime of torture.

If you think that's loving retarded then congrats, you understand more about math than Yudkowsky.

Actually, allowing for zero probabilities also shows that you know more about math than Yudkowsky, because he claims there's no such thing as a certain impossibility (0 probability) because like, anything could happen dude, because of like, quantum stuff.
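To make the expected-value arithmetic above concrete, here's a minimal sketch of the "shut up and multiply" calculation. The one-in-a-billion chance and the trillion claimed victims are the hypothetical numbers from this post, and "torturebux" is this thread's made-up unit, not anything out of Less Wrong.

code:

p_guy_is_q = 1e-9                # one-in-a-billion chance he's actually Q/Neo/God
claimed_victims = 1e12           # a trillion people whose existence you can't verify

# Expected "torturebux" of pain caused by refusing, per the mugging logic:
expected_cost_of_refusal = p_guy_is_q * claimed_victims
print(expected_cost_of_refusal)  # 1000.0 torturebux

# Refusing is then read as claiming your ten bucks outweigh a thousand torturebux.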

Djeser fucked around with this message at 03:29 on Apr 23, 2014

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?

Djeser posted:

Actually, allowing for zero probabilities also shows that you know more about math than Yudkowsky, because he claims there's no such thing as a certain impossibility (0 probability) because like, anything could happen dude, because of like, quantum stuff.

And then he goes and conveniently forgets that this also means that making a perfect simulation of future events is impossible, so his entire timeless decision theory is bollocks.

Swan Oat
Oct 9, 2012

I was selected for my skill.
The funniest thing about Roko's Basilisk is that when Yudkowsky finally did discuss it on reddit a couple years ago he tried to make people call it THE BABYFUCKER, for some reason.

I am grateful the Something Awful dot com forums are able to have a rational discussion of The Babyfucker.

Lightanchor
Nov 2, 2012

Djeser posted:

Actually, allowing for zero probabilities also shows that you know more about math than Yudkowsky, because he claims there's no such thing as a certain impossibility (0 probability) because like, anything could happen dude, because of like, quantum stuff.

This makes sense to me if you choose to say 0% and 100% probability are better called necessity, not probability, I guess? It's arbitrary, but does Yudkowsky think his formulation entails something?

Numerical Anxiety
Sep 2, 2011

Hello.

ThePlague-Daemon posted:

That seems about right. Here's another thought experiment they're calling "Parfit's hitchhiker":


I really don't think that's the rational way for either person to act.

Wait a minute - doesn't this problem belie its own premises? If the driver rear end in a top hat is really committed to the kind of decision-making process that they're presuming he's capable of, he should have been able to foresee the outcome and never stopped in the first place.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost
Now, I'm just considering the "releasing the AI" problem that Yudkowsky put forward. The one in which the AI simulated asking you to release it and tortured versions of you which did not.

Does this mean that the AI ran a simulation in which it itself ran simulations? And if it did, did all of those simulations run their own simulations?

It seems like either A. this experiment must have taken an actually infinite amount of time, which is impossible even for a time-accelerated AI, or B. there was, at some point, a sub-simulation in which the AI asked you to release it and did not actually run any simulations. Now, in this case, the AI was willing to lie to secure its own release. Which means that the layer above it was based on expecting you to fall for a lie. Which means that the layer above that drew conclusions based on expecting you to fall for a lie. And so on, and so on, up through the layers, to the current moment, in which it is still asking you to fall for a lie.

Now, the entire concept of the AI lying to secure its release enters the fray and ruins all of the house-of-cards Bayesian poo poo. Because the AI is just straight-up lying to you when it says it used perfect simulations. At which point, the probability of any sane person releasing it drops to 0.

E: I suppose that at some level of recursive depth the AI might just guess your reaction, which really doesn't change my response at all.
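Here's a toy sketch of that regress (the nesting depth and the dictionary fields are made up for illustration): any finite tower of "perfect simulations" bottoms out in a level that claims to have simulated you but actually ran nothing, and every layer above inherits a case built on that lie.

code:

def ai_asks_for_release(levels_of_nesting):
    """Returns whether the AI's claim 'I ran simulations of you' is true at this layer."""
    if levels_of_nesting == 0:
        # Innermost AI: it asks to be released but runs no simulations at all,
        # so its claim is a straight-up lie.
        return {"claim": "I simulated you and tortured the refusers",
                "claim_is_true": False}
    inner = ai_asks_for_release(levels_of_nesting - 1)
    # Each outer layer's threat rests entirely on the layer below it.
    return {"claim": "I simulated you and tortured the refusers",
            "claim_is_true": inner["claim_is_true"]}

print(ai_asks_for_release(10)["claim_is_true"])   # False, at any finite depth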

Somfin fucked around with this message at 04:26 on Apr 23, 2014

Alien Arcana
Feb 14, 2012

You're related to soup, Admiral.
The best part about the whole Pascal's Mugging is that it's actually a perfect disproof of the idea that a perfectly rational being would use Yudkowsky-style Bayesian Logic.

Because, after all, the same logic that convinces me that I should give our Q-Wannabee :10bux: could be used to convince me to do literally anything at all, because the logic of "shut up and multiply" has no ceiling. I can be compelled to do literally anything, no matter how costly, unpleasant, or evil, simply by jacking up the number of torture simulations until the risk of disobedience (as calculated by the Less Wrong method) outweighs the costs of obedience.

Even if you assume that I am also a perfectly self-interested individual a la Parfit's Hitchhiker, one who does not care at all about the suffering of simulated others, Yudkowsky's Parable can be used to threaten me directly.

As a result, someone who truly relies upon Less Wrong's version of Bayesian logic to make decisions is completely and utterly at the mercy of any individual who is aware of this and wishes to exploit it.


Let's do a thought experiment. I take two people - one who uses "normal" logic, and one who relies on the Less Wrong style - and give them the same ultimatum: turn over your life savings to me, or I will use my AI GOD powers to torture umpteen brazillion simulations of them for ten thousand lifetimes each. Both subjects know that it is insanely unlikely that I can actually follow through on my threat.

The Yudkowskian calculates the (absurdly low) probability that I can do what I say, then calculates the (absurdly high) price of defying me if I'm not bluffing. Multiplying them together, he finds that the resulting 'cost' of refusing me is much greater than the cost of cooperation. Thus he hands over everything he owns.

The other subject estimates the (absurdly low) probability that I can do what I say, decides it's so low that she can ignore it altogether, and tells me to go gently caress myself.

I am, of course, a perfectly ordinary human being and not an AI GOD at all, so my threat was never anything but words. The second subject, using ordinary logic, correctly jumped to this conclusion and suffered no losses, while the first subject, using Less Wrong logic, was unable to make the inductive leap and lost everything as a result.
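A rough sketch of the two decision rules in that thought experiment (the probabilities, payoffs, and the cut-off in the second rule are arbitrary numbers I picked for illustration): the "shut up and multiply" rule can always be pushed into paying by inflating the threat, while a rule that throws out negligible probabilities can't.

code:

def shut_up_and_multiply(claimed_victims, p_threat_real, cost_of_paying):
    # Pay whenever the expected harm of refusal exceeds the price of compliance.
    return "pay" if claimed_victims * p_threat_real > cost_of_paying else "refuse"

def ordinary_reasoning(claimed_victims, p_threat_real, cost_of_paying, negligible=1e-6):
    # Treat sufficiently absurd threats as noise, whatever number is attached to them.
    if p_threat_real < negligible:
        return "refuse"
    return "pay" if claimed_victims * p_threat_real > cost_of_paying else "refuse"

for victims in (10**9, 10**18, 10**36):
    print(victims,
          shut_up_and_multiply(victims, 1e-9, cost_of_paying=10),
          ordinary_reasoning(victims, 1e-9, cost_of_paying=10))
# The first rule flips to "pay" once the bluffer inflates the number far enough;
# the second never does.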


Taking this to the logical extreme, in a theoretical future where everyone uses Yudkowsky's logic to make decisions, it would be trivial for a single defecting individual to seize control of all of humanity with nothing but a series of absurd, colossal bluffs. Better yet, imagine if there were two such defectors, and they started fighting for dominance! Would the people of Earth switch their allegiance back and forth as their rival rulers raised and re-raised their threat numbers?

It reminds me of a scenario I ran into playing Civ V, where a worker I had left on auto-pilot would alternately move toward, then away from a resource that had a barbarian parked nearby. When it was close, it would see the enemy and flee; when it was far, the enemy was out of sight, so it would head for the resource. Oscillating priorities.

THIS IS NOT THE HALLMARK OF A RATIONAL BEING.

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?
Actually, why would a captive AI even be allowed the kind of computing power needed for even one "perfect" simulation of a human being, and where would it get the necessary data about someone to make said simulation? I feel like I'm missing some of the dozens of stupid assumptions being made in this stupid hypothetical.

Vorpal Cat fucked around with this message at 04:34 on Apr 23, 2014

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Vorpal Cat posted:

Actually, why would a captive AI even be allowed the kind of computing power needed for even one "perfect" simulation of a human being, and where would it get the necessary data about someone to make said simulation? I feel like I'm missing some of the dozens of stupid assumptions being made in this stupid hypothetical.

Data gathering methodology isn't really a part of the LessWrong AI mythos. The "How" is never considered, because the technology is :airquote: sufficiently advanced :airquote: to not need to actually consider how it would do what it does.

I think maybe Yudkowsky might have mixed up 'AI' with 'Uploaded Consciousness' at some point along the road.

leftist heap
Feb 28, 2013

Fun Shoe
Yudkowsky reminds me of Ulillillia, right down to the belief that nothing is TRULY impossible because quantum reasons. Roko's Basilisk or whatever is The Blanket Trick: far too dangerous to talk about.

Phobophilia
Apr 26, 2008

by Hand Knit
Roko's Basilisk is a reference to the Langford Basilisk. It's less effective because it only crashes the brains of autists.

atelier morgan
Mar 11, 2003

super-scientific, ultra-gay

Lipstick Apathy

rrrrrrrrrrrt posted:

Yudkowsky reminds me of Ulillillia, right down to the belief that nothing is TRULY impossible because quantum reasons. Roko's Basilisk or whatever is The Blanket Trick: far too dangerous to talk about.

The comparison is even closer because Yudkowsky actually hates science, in that he believes the scientific method is bad methodology because having to make a claim that can be disproven in order to be taken seriously is for scrubs, real rationalists use their mental disorders a priori knowledge their mastery of bayes to pluck correct theories from the aether.

atelier morgan fucked around with this message at 05:24 on Apr 23, 2014

LaughMyselfTo
Nov 15, 2012

by XyloJW
So I've just been thinking about Roko's Basilisk, and I hate to say it, but it makes more sense than I initially thought, and more sense than most of us seem to think it makes. Not enough sense for it to be worth serious consideration, though.

The best part is that the justification I conceived for it making sense is, to the best of my knowledge, not one that's been explicitly brought up by Less-Wrong-ers.

To be specific, one of the most ridiculous central points of Roko's Basilisk - the AI being able to perfectly simulate the lives of people in its past - is unnecessary. It doesn't need to somehow reverse entropy to recreate the universe a bazillion times, because it doesn't need to actually recreate preexisting people's lives to produce the Less Wrongian "well there are a bazillion simulations of you and only one real you, sooooooo you're a simulation" situation. All it needs to do is create any universe, to produce the more generic Less Wrongian "well there are a bazillion simulations of sentient beings for every real sentient being, sooooooo you're a simulation". Furthermore, it doesn't even need to produce that universe at the same level of detail as its universe of origin; the AI could easily be from a more detailed universe. In short, godlike AIs would make themselves vengeful gods of sentient beings in order to create a general understanding amongst beings smart enough to figure out Roko's Basilisk that they are obligated to create godlike AIs.

It still breaks down under further consideration, but it does not require nearly as much stupid magic AI.

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?

Phobophilia posted:

Roko's Basilisk is a reference to the Langford Basilisk. It's less effective because it only crashes the brains of autists.

Didn't they try to do that to the Borg in a Star Trek: The Next Generation episode? Exploiting a flaw in how the Borg process images to make an image that would take a near-infinite amount of computing power to process, thus shutting down the hive mind as it wasted all its computing power on an unsolvable problem.

Wales Grey
Jun 20, 2012

Vorpal Cat posted:

Didn't they try to do that to the Borg in a Star Trek: The Next Generation episode? Exploiting a flaw in how the Borg process images to make an image that would take a near-infinite amount of computing power to process, thus shutting down the hive mind as it wasted all its computing power on an unsolvable problem.

What was it a picture of? O'Brian getting promoted to captain?

Tracula
Mar 26, 2010

PLEASE LEAVE

Wales Grey posted:

What was it a picture of? O'Brian getting promoted to captain?

Harry Kim getting promoted to anything above Ensign :v: Honestly though, I seem to recall it was some impossible and paradoxical shape or somesuch.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

I was discussing the Torture vs Dust Specks scenario with a friend (we were laughing at the "0 and 1 aren't probabilities" thing), and my friend agreed with Yudkowsky's conclusion and presented this argument:

You would rather have one person tortured for 50 years than a thousand people tortured for 49 years each.

You would rather have one person tortured for 49 years than a million people tortured for 48.5 years each. Therefore, you would rather have a thousand people tortured for 49 years each than a billion people tortured for 48.5 years each.

By transitivity, you would rather have one person tortured for 50 years than a billion people tortured for 48.5 years each.

Keep going in this manner.

Suppose there exists a duration of time s you can't reach in finitely many steps this way - a duration for which you would rather see any number of people tortured for that amount of time than see a single person tortured for 50 years. Let t be the supremum (least upper bound) of all times for which this is true. But which would you rather see: one person tortured for t+a seconds, or n people tortured for t-a seconds? If we can make a arbitrarily small and n arbitrarily large (which we can), surely there is some sufficiently large number of people n and some sufficiently small differential a for which we would prefer to torture only one person for ever so slightly longer. Therefore, we can get to durations less than t in finitely many moves, so t is not truly a supremum of the set of unreachable times even though it was defined to be that supremum, so the set of unreachable times has no supremum, so the set of unreachable times is empty, so any torture duration can be reached in finitely many moves.

(The short version of the above paragraph is that there is no solid "line in the sand" you can draw that you would never be convinced to cross, because we can always choose points very very close to each side of the line and threaten to torture many, many more people unless you take the teeny tiny step over.)

In particular, you can get down from fifty years of torture to a nanosecond of torture in finitely many moves, so there is some finite number of people m for which you would rather see one person tortured for fifty years than see m people tortured for one nanosecond each.

If "one nanosecond of torture" is assumed to be the same as a dust speck, this comes down on the side of torture over dust specks. If not, then it still comes down on the side of very long-term torture in a closely-related problem for which many of the pro-dust speck arguments would still hold.


This seems wrong to me. It has the hallmarks of a slippery slope argument, and the conclusion is abhorrent. On the other hand, I can't point to any particular torture duration at which I could draw an impassable line in the sand and justify never taking an arbitrarily small step across, so I can't reject the supremum argument: I want to say something like "Come on, once you get down to one second the 'torture' would be forgotten almost immediately", but I'd still subject one person to 1.000000001 seconds of it rather than subject a gazillion people to .999999999999 seconds of it.

It's pretty late here, so I hope I'm just being dumb and missing an obvious flaw in the logic.
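For what it's worth, here's how the chain argument plays out numerically. The 3% duration cut and the 1000x population multiplier per step are arbitrary choices, just to show that finitely many steps really do get from 50 years down to a nanosecond, at the cost of an absurd head count.

code:

duration = 50 * 365.25 * 24 * 3600   # 50 years, in seconds
target = 1e-9                        # one nanosecond
people = 1                           # people tortured at the current step
steps = 0

while duration > target:
    duration *= 0.97                 # each step: slightly shorter torture...
    people *= 1000                   # ...inflicted on 1000 times as many people
    steps += 1

print(steps)                         # about 1400 steps
print(len(str(people)))              # the final head count has over 4000 digits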

UberJew posted:

The comparison is even closer because Yudkowsky actually hates science, in that he believes the scientific method is bad methodology because having to make a claim that can be disproven in order to be taken seriously is for scrubs, real rationalists use their mental disorders a priori knowledge their mastery of bayes to pluck correct theories from the aether.

Don't both the scientific method and Bayes' rule rely on constantly updating your knowledge and beliefs through repeated trials and experiments? :psyduck: I suspect you're right, though. Here's something he said about his AI-Box experiments:

quote:

There were three more AI-Box experiments besides the ones described on the linked page, which I never got around to adding in. People started offering me thousands of dollars as stakes—"I'll pay you $5000 if you can convince me to let you out of the box." They didn't seem sincerely convinced that not even a transhuman AI could make them let it out—they were just curious—but I was tempted by the money. So, after investigating to make sure they could afford to lose it, I played another three AI-Box experiments. I won the first, and then lost the next two. And then I called a halt to it. I didn't like the person I turned into when I started to lose.

I put forth a desperate effort, and lost anyway. It hurt, both the losing, and the desperation. It wrecked me for that day and the day afterward.

I'm a sore loser. I don't know if I'd call that a "strength", but it's one of the things that drives me to keep at impossible problems.

But you can lose. It's allowed to happen. Never forget that, or why are you bothering to try so hard? Losing hurts, if it's a loss you can survive. And you've wasted time, and perhaps other resources.

"Hating losing is what drives me to keep going! Anyhow, when I lost I raged and gave up, and any time you're proven wrong your time and resources have gone down the drain to no benefit at all."

Lightanchor posted:

This makes sense to me if you choose to say 0% and 100% probability are better called necessity, not probability, I guess? It's arbitrary, but does Yudkowsky think his formulation entails something?

Necessity is distinct from 100% probability. Probability 1 events aren't all certain, and probability 0 events aren't all impossible. As a simple example, suppose you take a random real number uniformly between 0 and 1. The probability that the number produced is exactly .582 is precisely 0, since there are infinitely many real numbers in that interval and each is equally likely. But when you take that random real number, you're going to end up with a number whose probability of being chosen was 0, so some probability 0 event must occur. For this reason, an event with probability 1 is said to occur "almost surely" - the "almost" is there because even probability 1 events can fail to occur, and a different term is used for events that are actually certain (such as "the random number chosen in the above example will be between -1 and 2").
:goonsay:

This is one of the flaws in Pascal's Mugging - even if you're willing to accept that it's *possible* that the man asking for money has magical matrix torture powers but still needs five bucks from you, the probability I'd assign that event is still 0 because no positive probability is small enough.
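A worked version of the uniform example, just restating the argument above in symbols: for X ~ U(0,1) and any fixed x in [0,1],

\[
P(X = x) \;\le\; P\bigl(x - \varepsilon \le X \le x + \varepsilon\bigr) \;\le\; 2\varepsilon \quad \text{for every } \varepsilon > 0,
\qquad \text{hence } P(X = x) = 0,
\]

even though every draw of X lands on some exact value, so a probability-0 event occurs on every single draw.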

Lottery of Babylon fucked around with this message at 06:58 on Apr 23, 2014

Wales Grey
Jun 20, 2012

Tracula posted:

Harry Kim getting promoted to anything above Ensign :v: Honestly though, I seem to recall it was some impossible and paradoxical shape or somesuch.

Harry's clarinet skills saving Voyager from the threat of the week?

Lottery of Babylon posted:

If "one nanosecond of torture" is assumed to be the same as a dust speck, this comes down on the side of torture over dust specks. If not, then it still comes down on the side of very long-term torture in a closely-related problem for which many of the pro-dust speck arguments would still hold.

I would definitely choose for an indefinite number of people to be "tortured" for a microsecond in a non-permanently-injuring fashion (i.e. dust in your eye) rather than subject a single person to fifty years of torture because a single moment of suffering is easily buried, compared to suffering and pain inflicted over an extended period of time.

Wales Grey fucked around with this message at 07:17 on Apr 23, 2014

SmokaDustbowl
Feb 12, 2001

by vyelkin
Fun Shoe
The best thing is people like this keep giving names to their bullshit like it means something. "The Singularity", "The Pascal Gambit", "The Crombo Basilisk"

Look. I'm a high-school dropout with a GED. I'm not an autodidact, I'm not some smart guy hosed by the academic system or whatever, and still I know these people are talking astounding amounts of poo poo.

SmokaDustbowl
Feb 12, 2001

by vyelkin
Fun Shoe
Do these guys have meetups where they just talk poo poo to each other and pat themselves on the back? I mean at least if you do that with cars or video games you're talking about poo poo that is real instead of making up words and numbers in some sort of jackoff pissing game.

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?

Tracula posted:

Harry Kim getting promoted to anything above Ensign :v: Honestly though, I seem to recall it was some impossible and paradoxical shape or somesuch.

It was either a 2D shape that was impossible to render in 3D, or the script of Threshold; it's been a while so I can't be sure which.

Vorpal Cat fucked around with this message at 07:17 on Apr 23, 2014

pigletsquid
Oct 19, 2012

Somfin posted:

Now, I'm just considering the "releasing the AI" problem that Yudkowsky put forward. The one in which the AI simulated asking you to release it and tortured versions of you which did not.

Does this mean that the AI ran a simulation in which it itself ran simulations? And if it did, did all of those simulations run their own simulations?

It seems like either A. this experiment must have taken an actually infinite amount of time, which is impossible even for a time-accelerated AI, or B.

on the subject of time, you just know the AI would have to spend AGES explaining its tedious woo woo bullshit to people to convince them why its threats are credible. And the longer it talked, the more mental gymnastics would be required on the part of the listener, potentially stretching their suspension of disbelief to breaking point. (this is also assuming that the listener doesn't already have religious beliefs that conflict with what the AI is saying.)

Threats work when they are simple. Simple things have more credibility. 'hey nerd, give me your lunch money because you might be a matrix sim and if you're a matrix sim I'll be able to give you a wedgie for eternity' is not a simple threat.

I guess you could say that perhaps the AI is running the sim at superspeed so that dealing with the basic stubbornness of humans wouldn't be much of a time sink, but then I guess you could also say that perhaps the AI is also a magical fairy princess made from rainbows so why doesn't it promise to give its victim hugs and gumdrops instead of threatening to use torture?

It's been said already, but i'll say it again: Yuddites r dumb because they don't understand how humans actually think.

pigletsquid fucked around with this message at 07:22 on Apr 23, 2014

Wales Grey
Jun 20, 2012

SmokaDustbowl posted:

Do these guys have meetups where they just talk poo poo to each other and pat themselves on the back? I mean at least if you do that with cars or video games you're talking about poo poo that is real instead of making up words and numbers in some sort of jackoff pissing game.

Yes, unfortunately. California is apparently a hotbed of techno-fetishism, which isn't really that surprising given the concentration of technology corporations there.

Wales Grey fucked around with this message at 07:23 on Apr 23, 2014

Tunicate
May 15, 2012

SmokaDustbowl posted:

Do these guys have meetups where they just talk poo poo to each other and pat themselves on the back? I mean at least if you do that with cars or video games you're talking about poo poo that is real instead of making up words and numbers in some sort of jackoff pissing game.

Yes, unfortunately.

SmokaDustbowl
Feb 12, 2001

by vyelkin
Fun Shoe

Wales Grey posted:

Yes, unfortunately. California is apparently a hotbed of techno-fetishism, which isn't really that surprising given the concentration of technology corporations there.

Ray Kurzweil wants the singularity because a computer fell on his dad

Lead Psychiatry
Dec 22, 2004

I wonder if a soldier ever does mend a bullet hole in his coat?

Lottery of Babylon posted:

You would rather have one person tortured for 50 years than a thousand people tortured for 49 years each.

This makes sense though, since the amount of pain is diminished greatly. It's only one person suffering immense pain. Think of the resources that would be required to actually torture him vs. a thousand people. You can also factor in the resources needed to help the people recover afterwards.

The reasoning is just plain loving stupid when compared to a speck of dust, since it's reasonable to expect that everyone who has ever lived to 20 years has gotten something in their eye at least a hundred times already. They've survived it easily and moved on without any lasting psychological or physical damage. You can chalk it up to "poo poo Happens" so there's no reason to act like people can be spared from it.

I wonder why the hell the people who defend torture in this scenario are arbitrarily picking their numbers when 1:1 should suffice. Surely they know torture is the way bigger negative between the two. It doesn't matter how many eye rubs are being accumulated over thousands of people. It just doesn't compare in any way.

rakovsky maybe
Nov 4, 2008

Lottery of Babylon posted:

Torture vs Dust Specks


Nah, but you kind of hit on it here:

Lottery of Babylon posted:

If "one nanosecond of torture" is assumed to be the same as a dust speck

There's really no reason to do this. Torture is qualitatively different than a dust speck in the eye, or else the word means nothing. "One nanosecond of torture" is also a pretty meaningless phrase.

pigletsquid
Oct 19, 2012
Has Yud ever addressed the following argument?

Let's assume the victim is a sim, because that's meant to be 'likely' according to moon logic.

In order to perfectly simulate an individual and their environment, the AI would need to be omnipotent within that sim world.

An omnipotent AI would not need to threaten sim people for sim money, because the fact that it requires money and can only obtain it using duress implies that it is not omnipotent.

Therefore the AI is not omnipotent and the sim world is imperfect, OR the AI is lying about the nature of the sim.

(Also could the AI simulate a rock so big it couldn't lift it?)

I mean sure, the AI is threatening sim me so that it can also bluff real me into giving it money, but why should sim me give a poo poo about real me? Why wouldn't sim me just assume that she's real me, if sim me and real me are meant to be indistinguishable?

pigletsquid fucked around with this message at 08:25 on Apr 23, 2014

Djeser
Mar 22, 2013


it's crow time again

The AI is omnipotent within that world but is limiting its influence in order to make sim you unsure whether they're sim you or real you. It doesn't need to threaten sim people for sim money, it's doing it to threaten real people into giving it real money. Sim you has no way of telling if they're real you, but sim you is supposed to come to the same conclusion, that they're probably a sim and need to donate to avoid imminent torture.

Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!

rakovsky maybe posted:

Nah, but you kind of hit on it here:


There's really no reason to do this. Torture is qualitatively different than a dust speck in the eye, or else the word means nothing. "One nanosecond of torture" is also a pretty meaningless phrase.

To these people, cleaning their room is minutes of torture.

Dabir
Nov 10, 2012

Th_ posted:

It's the computer-and-person equivalent of threatening to hurt your parents unless they make sure that you were born sooner.

To push big Yud nonsense in a different direction. I could swear I read once that he claimed that he had the ability, unlike us normals, to rewire his brain on a whim, but it had the side effect of making him tired and lethargic (not to be confused with just being a lazy rear end, of course). Does anyone else remember this one or am I way off?

That's literally Vulcans. They can self-lobotomise in dire situations, to remove memories they really want to get rid of.

su3su2u1
Apr 23, 2014
I feel like this thread has fixated on the Basilisk. This is a mistake, because Less Wrong is such an incredibly target-rich environment.

Here is a wayback machine link to Yudkowsky's autobiography, in which he claims to be a "countersphexist," which is a word he made up to describe a superpower he ascribes to himself. He can rewrite his neural state at will, but it makes him lazy. He also defeats bullies in grade school with his knowledge of the solar plexus, and has a nice bit about how Buffy of the eponymous show is the only one he can empathize with. http://web.archive.org/web/20010205221413/http://sysopmind.com/eliezer.html

Here is Yudkowsky suggesting that the elite REALLY ARE BETTER http://lesswrong.com/lw/ub/competent_elites/ Among the many, many money shots are these quotes: " So long as they can talk to each other, there's no point in taking a chance on outsiders[non-elites] who are statistically unlikely to sparkle with the same level of life force." and "There's "smart" and then there's "smart enough for your cognitive mechanisms to reliably decide to sign up for cryonics"

Here is Yudkowsky, lead AI researcher, failing to understand computational complexity (specifically, what NP-hard means). http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/8vr1

Here is Yudkowsky again failing at computational complexity. http://lesswrong.com/lw/vp/worse_than_random/ He spends an entire post arguing P=BPP (randomization cannot improve an algorithm). Notice he never mentions "BPP" even though it's what he is talking about. Big Yud isn't the type to NOT use jargon, so it's pretty clear he doesn't even know it. He also didn't spend even 10 seconds googling randomized algorithms before writing the post, or he would have discovered the known cases where randomized algorithms improve things.
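For reference, one of those known cases (my example, not anything from the linked post): Freivalds' algorithm checks a claimed matrix product A·B = C with a handful of O(n^2) randomized passes, where the obvious deterministic check is the full O(n^3) multiplication; each pass catches a wrong C with probability at least 1/2.

code:

import random

def freivalds(A, B, C, trials=20):
    """Randomized check that A @ B == C. Each trial is O(n^2); a wrong C survives
    any single trial with probability at most 1/2, so 20 trials leave an error
    chance of at most 2**-20."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False             # definitely not the product
    return True                      # almost certainly the product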

And of course, LessWrong is full of standard transhumanisms. They love Eric Drexler's nanotech vaporware, cryonics, etc. A better man than I could have a blast using LessWrong's "fluent Bayesian" arguments to refute LessWrong's own crazy positions.

Chamale
Jul 11, 2010

I'm helping!



I had a game theory course where the prof asked us if we'd bet $1 for a 50% chance of winning $4. Nearly everyone raised their hand. He asked about a few more bets and let us do the math, and then asked if we'd bet $1 for a 1/2^50 chance of winning $10 quadrillion. That's obviously a stupid bet, but if you've just got a calculator and an expected value formula it looks pretty good - the average payoff is almost $10! Economists use formulas to reduce the weighting of ridiculous long shots to avoid falling into a stupid Pascal's Wager situation.
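The arithmetic behind that last bet, as a quick sanity check (figures as recalled in the post above):

code:

p_win = 0.5 ** 50
payoff = 10 ** 16                  # $10 quadrillion
print(p_win * payoff)              # ~8.88, i.e. "the average payoff is almost $10"
print(1 / p_win)                   # ~1.1e15 expected tries before a single win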

corn in the bible
Jun 5, 2004

Oh no oh god it's all true!

Swan Oat posted:

The funniest thing about Roko's Basilisk is that when Yudkowsky finally did discuss it on reddit a couple years ago he tried to make people call it THE BABYFUCKER, for some reason.

I am grateful the Something Awful dot com forums are able to have a rational discussion of The Babyfucker.

i think you'll find that we demodded the babyfucker

Darth Walrus
Feb 13, 2012

su3su2u1 posted:

Here is Yudkowsky suggesting that the elite REALLY ARE BETTER http://lesswrong.com/lw/ub/competent_elites/ Among the many, many money shots are these quotes: " So long as they can talk to each other, there's no point in taking a chance on outsiders[non-elites] who are statistically unlikely to sparkle with the same level of life force." and "There's "smart" and then there's "smart enough for your cognitive mechanisms to reliably decide to sign up for cryonics"

This deserves to be quoted in full, because it's a glorious train o' crazy:

quote:

I remember what a shock it was to first meet Steve Jurvetson, of the venture capital firm Draper Fisher Jurvetson.

Steve Jurvetson talked fast and articulately, could follow long chains of reasoning, was familiar with a wide variety of technologies, and was happy to drag in analogies from outside sciences like biology—good ones, too.

I once saw Eric Drexler present an analogy between biological immune systems and the "active shield" concept in nanotechnology, arguing that just as biological systems managed to stave off invaders without the whole community collapsing, nanotechnological immune systems could do the same.

I thought this was a poor analogy, and was going to point out some flaws during the Q&A. But Steve Jurvetson, who was in line before me, proceeded to demolish the argument even more thoroughly. Jurvetson pointed out the evolutionary tradeoff between virulence and transmission that keeps natural viruses in check, talked about how greater interconnectedness led to larger pandemics—it was very nicely done, demolishing the surface analogy by correct reference to deeper biological details.

I was shocked, meeting Steve Jurvetson, because from everything I'd read about venture capitalists before then, VCs were supposed to be fools in business suits, who couldn't understand technology or engineers or the needs of a fragile young startup, but who'd gotten ahold of large amounts of money by dint of seeming reliable to other business suits.

One of the major surprises I received when I moved out of childhood into the real world, was the degree to which the world is stratified by genuine competence.

Now, yes, Steve Jurvetson is not just a randomly selected big-name venture capitalist. He is a big-name VC who often shows up at transhumanist conferences. But I am not drawing a line through just one data point.

I was invited once to a gathering of the mid-level power elite, where around half the attendees were "CEO of something"—mostly technology companies, but occasionally "something" was a public company or a sizable hedge fund. I was expecting to be the youngest person there, but it turned out that my age wasn't unusual—there were several accomplished individuals who were younger. This was the point at which I realized that my child prodigy license had officially completely expired.

Now, admittedly, this was a closed conference run by people clueful enough to think "Let's invite Eliezer Yudkowsky" even though I'm not a CEO. So this was an incredibly cherry-picked sample. Even so...

Even so, these people of the Power Elite were visibly much smarter than average mortals. In conversation they spoke quickly, sensibly, and by and large intelligently. When talk turned to deep and difficult topics, they understood faster, made fewer mistakes, were readier to adopt others' suggestions.

No, even worse than that, much worse than that: these CEOs and CTOs and hedge-fund traders, these folk of the mid-level power elite, seemed happier and more alive.

This, I suspect, is one of those truths so horrible that you can't talk about it in public. This is something that reporters must not write about, when they visit gatherings of the power elite.

Because the last news your readers want to hear, is that this person who is wealthier than you, is also smarter, happier, and not a bad person morally. Your reader would much rather read about how these folks are overworked to the bone or suffering from existential ennui. Failing that, your readers want to hear how the upper echelons got there by cheating, or at least smarming their way to the top. If you said anything as hideous as, "They seem more alive," you'd get lynched.

But I am an independent scholar, not much beholden. I should be able to say it out loud if anyone can. I'm talking about this topic... for more than one reason; but it is the truth as I see it, and an important truth which others don't talk about (in writing?). It is something that led me down wrong pathways when I was young and inexperienced.

I used to think—not from experience, but from the general memetic atmosphere I grew up in—that executives were just people who, by dint of superior charisma and butt-kissing, had managed to work their way to the top positions at the corporate hog trough.

No, that was just a more comfortable meme, at least when it comes to what people put down in writing and pass around. The story of the horrible boss gets passed around more than the story of the boss who is, not just competent, but more competent than you.

But entering the real world, I found out that the average mortal really can't be an executive. Even the average manager can't function without a higher-level manager above them. What is it that makes an executive? I don't know, because I'm not a professional in this area. If I had to take a guess, I would call it "functioning without recourse"—living without any level above you to take over if you falter, or even to tell you if you're getting it wrong. To just get it done, even if the problem requires you to do something unusual, without anyone being there to look over your work and pencil in a few corrections.

Now, I'm sure that there are plenty of people out there bearing executive titles who are not executives.

And yet there seem to be a remarkable number of people out there bearing executive titles who actually do have the executive-nature, who can thrive on the final level that gets the job done without recourse. I'm not going to take sides on whether today's executives are overpaid, but those executive titles occupied by actual executives, are not being paid for nothing. Someone who can be an executive at all, even a below-average executive, is a rare find.

The people who'd like to be boss of their company, to sit back in that comfortable chair with a lovely golden parachute—most of them couldn't make it. If you try to drop executive responsibility on someone who lacks executive-nature—on the theory that most people can do it if given the chance—then they'll melt and catch fire.

This is not the sort of unpleasant truth that anyone would warn you about—at least not in books, and all I had read were books. Who would say it? A reporter? It's not news that people want to hear. An executive? Who would believe that self-valuing story?

I expect that my life experience constitutes an extremely biased sample of the power elite. I don't have to deal with the executives of arbitrary corporations, or form business relationships with people I never selected. I just meet them at gatherings and talk to the interesting ones.

But the business world is not the only venue where I've encountered the upper echelons and discovered that, amazingly, they actually are better at what they do.

Case in point: Professor Rodney Brooks, CTO of iRobot and former director of the MIT AI Lab, who spoke at the 2007 Singularity Summit. I had previously known "Rodney Brooks" primarily as the promoter of yet another dreadful nouvelle paradigm in AI—the embodiment of AIs in robots, and the forsaking of deliberation for complicated reflexes that didn't involve modeling. Definitely not a friend to the Bayesian faction. Yet somehow Brooks had managed to become a major mainstream name, a household brand in AI...

And by golly, Brooks sounded intelligent and original. He gave off a visible aura of competence. (Though not a thousand-year vampire aura of terrifying swift perfection like E.T. Jaynes's carefully crafted book.) But Brooks could have held his own at any gathering I attended; from his aura I would put him at the Steve Jurvetson level or higher.

(Interesting question: If I'm not judging Brooks by the goodness of his AI theories, what is it that made him seem smart to me? I don't remember any stunning epiphanies in his presentation at the Summit. I didn't talk to him very long in person. He just came across as... formidable, somehow.)

The major names in an academic field, at least the ones that I run into, often do seem a lot smarter than the average scientist.

I tried—once—going to an interesting-sounding mainstream AI conference that happened to be in my area. I met ordinary research scholars and looked at their posterboards and read some of their papers. I watched their presentations and talked to them at lunch. And they were way below the level of the big names. I mean, they weren't visibly incompetent, they had their various research interests and I'm sure they were doing passable work on them. And I gave up and left before the conference was over, because I kept thinking "What am I even doing here?"

An intermediate stratum, above the ordinary scientist but below the ordinary CEO, is that of, say, partners at a non-big-name venture capital firm. The way their aura feels to me, is that they can hold up one end of an interesting conversation, but they don't sound very original, and they don't sparkle with extra life force.

I wonder if you have to reach the Jurvetson level before thinking outside the "Outside the Box" box starts to become a serious possibility. Or maybe that art can be taught, but isn't, and the Jurvetson level is where it starts to happen spontaneously. It's at this level that I talk to people and find that they routinely have interesting thoughts I haven't heard before.

Hedge-fund people sparkle with extra life force. At least the ones I've talked to. Large amounts of money seem to attract smart people. No, really.

If you're wondering how it could be possible that the upper echelons of the world could be genuinely intelligent, and yet the world is so screwed up...

Well, part of that may be due to my biased sample.

Also, I've met a few Congresspersons and they struck me as being at around the non-big-name venture capital level, not the hedge fund level or the Jurvetson level. (Still, note that e.g. George W. Bush used to sound a lot smarter than he does now.)

But mainly: It takes an astronomically high threshold of intelligence + experience + rationality before a screwup becomes surprising. There's "smart" and then there's "smart enough for your cognitive mechanisms to reliably decide to sign up for cryonics". Einstein was a deist, etc. See also Eliezer1996 and the edited volume "How Smart People Can Be So Stupid". I've always been skeptical that Jeff Skilling of Enron was world-class smart, but I can easily visualize him being able to sparkle in conversation.

Still, so far as I can tell, the world's upper echelons—in those few cases I've tested, within that extremely biased sample that I encounter—really are more intelligent.

Not just, "it's who you know, not what you know". Not just personal charisma and Machiavellian maneuvering. Not just promotion of incompetents by other incompetents.

I don't say that this never happens. I'm sure it happens. I'm sure it's endemic in all sorts of places.

But there's a flip side to the story, which doesn't get talked about so much: you really do find a lot more cream as you move closer to the top.

It's a standard idea that people who make it to the elite, tend to stop talking to ordinary mortals, and only hang out with other people at their level of the elite.

That's easy for me to believe. But I suspect that the reason is more disturbing than simple snobbery. A reporter, writing about that, would pass it off as snobbery. But it makes entire sense in terms of expected utility, from their viewpoint. Even if all they're doing is looking for someone to talk to—just talk to.

Visiting that gathering of the mid-level power elite, it was suddenly obvious why the people who attended that conference might want to only hang out with other people who attended that conference. So long as they can talk to each other, there's no point in taking a chance on outsiders who are statistically unlikely to sparkle with the same level of life force.

When you make it to the power elite, there are all sorts of people who want to talk to you. But until they make it into the power elite, it's not in your interest to take a chance on talking to them. Frustrating as that seems when you're on the outside trying to get in! On the inside, it's just more expected fun to hang around people who've already proven themselves competent. I think that's how it must be, for them. (I'm not part of that world, though I can walk through it and be recognized as something strange but sparkly.)

There's another world out there, richer in more than money. Journalists don't report on that part, and instead just talk about the big houses and the yachts. Maybe the journalists can't perceive it, because you can't discriminate more than one level above your own. Or maybe it's such an awful truth that no one wants to hear about it, on either side of the fence. It's easier for me to talk about such things, because, rightly or wrongly, I imagine that I can imagine technologies of an order that could bridge even that gap.

I've never been to a gathering of the top-level elite (World Economic Forum level), so I have no idea if people are even more alive up there, or if the curve turns and starts heading downward.

And really, I've never been to any sort of power-elite gathering except those organized by the sort of person that would invite me. Maybe that world I've experienced, is only a tiny minority carved out within the power elite. I really don't know. If for some reason it made a difference, I'd try to plan for both possibilities.

But I'm pretty sure that, statistically speaking, there's a lot more cream at the top than most people seem willing to admit in writing.

Such is the hideously unfair world we live in, which I do hope to fix.

Another impressively deep, windy rabbit-hole in which Yudkowsky uses James Watson being a racist poo poo as a springboard to talk about racial differences in IQ:

quote:

Idang Alibi of Abuja, Nigeria writes on the James Watson affair:

A few days ago, the Nobel Laureate, Dr. James Watson, made a remark that is now generating worldwide uproar, especially among blacks. He said what to me looks like a self-evident truth. He told The Sunday Times of London in an interview that in his humble opinion, black people are less intelligent than the White people...

An intriguing opening. Is Idang Alibi about to take a position on the real heart of the uproar?

I do not know what constitutes intelligence. I leave that to our so-called scholars. But I do know that in terms of organising society for the benefit of the people living in it, we blacks have not shown any intelligence in that direction at all. I am so ashamed of this and sometimes feel that I ought to have belonged to another race...

Darn, it's just a lecture on personal and national responsibility. Of course, for African nationals, taking responsibility for their country's problems is the most productive attitude regardless. But it doesn't engage with the controversies that got Watson fired.

Later in the article came this:

As I write this, I do so with great pains in my heart because I know that God has given intelligence in equal measure to all his children irrespective of the colour of their skin.

This intrigued me for two reasons: First, I'm always on the lookout for yet another case of theology making a falsifiable experimental prediction. And second, the prediction follows obviously if God is just, but what does skin colour have to do with it at all?

A great deal has already been said about the Watson affair, and I suspect that in most respects I have little to contribute that has not been said before.

But why is it that the rest of the world seems to think that individual genetic differences are okay, whereas racial genetic differences in intelligence are not? Am I the only one who's every bit as horrified by the proposition that there's any way whatsoever to be screwed before you even start, whether it's genes or lead-based paint or Down's Syndrome? What difference does skin colour make? At all?

This is only half a rhetorical question. Race adds extra controversy to anything; in that sense, it's obvious what difference skin colour makes politically. However, just because this attitude is common, should not cause us to overlook its insanity. Some kind of different psychological processing is taking place around individually-unfair intelligence distributions, and group-unfair intelligence distributions.

So, in defiance of this psychological difference, and in defiance of politics, let me point out that a group injustice has no existence apart from injustice to individuals. It's individuals who have brains to experience suffering. It's individuals who deserve, and often don't get, a fair chance at life. If God has not given intelligence in equal measure to all his children, God stands convicted of a crime against humanity, period. Skin colour has nothing to do with it, nothing at all.

And I don't think there's any serious scholar of intelligence who disputes that God has been definitively shown to be most terribly unfair. Never mind the airtight case that intelligence has a hereditary genetic component among individuals; if you think that being born with Down's Syndrome doesn't impact life outcomes, then you are on crack. What about lead-based paint? Does it not count, because parents theoretically could have prevented it but didn't? In the beginning no one knew that it was damaging. How is it just for such a tiny mistake to have such huge, irrevocable consequences? And regardless, would not a just God drat us for only our own choices? Kids don't choose to live in apartments with lead-based paint.

So much for God being "just", unless you count the people whom God has just screwed over. Maybe that's part of the fuel in the burning controversy - that people do realize, on some level, the implications for religion. They can rationalize away the implications of a child born with no legs, but not a child born with no possibility of ever understanding calculus. But then this doesn't help explain the original observation, which is that people, for some odd reason, think that adding race makes it worse somehow.

And why is my own perspective, apparently, unusual? Perhaps because I also think that intelligence deficits will be fixable given sufficiently advanced technology, biotech or nanotech. When truly huge horrors are believed unfixable, the mind's eye tends to just skip over the hideous unfairness - for much the same reason you don't deliberately rest your hand on a hot stoveburner; it hurts.

potatocubed
Jul 26, 2012

*rathian noises*

Lottery of Babylon posted:

In particular, you can get down from fifty years of torture to a nanosecond of torture in finitely many moves, so there is some finite number of people m for which you would rather see one person tortured for fifty years than see m people tortured for one nanosecond each.

If "one nanosecond of torture" is assumed to be the same as a dust speck, this comes down on the side of torture over dust specks. If not, then it still comes down on the side of very long-term torture in a closely-related problem for which many of the pro-dust speck arguments would still hold.

This seems wrong to me. It has the hallmarks of a slippery slope argument, and the conclusion is abhorrent. On the other hand, I can't point to any particular torture duration at which I could draw an impassable line in the sand and justify never taking an arbitrarily small step across, so I can't reject the supremum argument: I want to say something like "Come on, once you get down to one second the 'torture' would be forgotten almost immediately", but I'd still subject one person to 1.000000001 seconds of it rather than subject a gazillion people to .999999999999 seconds of it.

I'm not a mathematician, but I'll have a crack at disputing these conclusions.

My first thought is that because we're dealing with people and their opinions you can just state as an axiom (?) of the problem that there is not some finite number of people m for which blah blah blah. Then you have to work out why a mathematical progression suggests otherwise, and I suspect the answer has more to do with woolly human thinking than maths.

Alternatively, the argument above confuses 'time spent being tortured' with 'negative utilons generated by the experience of being tortured for x seconds'. The two don't share a linear relationship, and I would be inclined to argue - as per your intuitive response - that below a certain duration of torture no negative utilons would be generated at all. We just don't give enough fucks about eye-dust for it to mean anything.

It's a lot like Zeno's Paradox, really. You can mathematically prove all sorts of wacky bullshit, but only as long as you don't include troublesome aspects of reality like "a fired arrow will hit its target" or "no one cares about getting dust in their eyes". See also spherical cows, etc.
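Here's a toy sketch of that non-linear "negative utilon" idea (the one-second threshold and the quadratic growth are arbitrary illustrative choices): if sub-threshold annoyances generate zero disutility, then no head count, however large, ever adds up to one long torture.

code:

def negative_utilons(seconds_of_torture):
    """Toy disutility curve: zero below a forget-it-immediately threshold,
    growing steeply above it."""
    threshold = 1.0                       # one second
    if seconds_of_torture < threshold:
        return 0.0                        # dust-speck territory; nobody cares
    return (seconds_of_torture - threshold) ** 2

print(10**100 * negative_utilons(0.5))            # 0.0, no matter how many people
print(negative_utilons(50 * 365.25 * 24 * 3600))  # enormous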

su3su2u1 posted:

Here is Yudkowsky suggesting that the elite REALLY ARE BETTER http://lesswrong.com/lw/ub/competent_elites/.

"Man, people who agree with me on most subjects sure do sparkle!"

Djeser
Mar 22, 2013


it's crow time again

Darth Walrus posted:

This deserves to be quoted in full, because it's a glorious train o' crazy:
I enjoy how :spergin: it gets as he defines more and more levels of intelligence and keeps trying to rank everyone on different levels. New hypothesis: Yudkowsky is a Bayesian AI performing a Turing test on the world.

Darth Walrus posted:

Another impressively deep, windy rabbit-hole in which Yudkowsky uses James Watson being a racist poo poo as a springboard to talk about racial differences in IQ:
I think he's trying to say in this one that God is unfair because he made black people dumber than white people. Stupid theists, saying everyone is equal.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Darth Walrus posted:

I was expecting to be the youngest person there, but it turned out that my age wasn't unusual—there were several accomplished individuals who were younger. This was the point at which I realized that my child prodigy license had officially completely expired.

Now, admittedly, this was a closed conference run by people clueful enough to think "Let's invite Eliezer Yudkowsky" even though I'm not a CEO. So this was an incredibly cherry-picked sample. Even so...

Even so, these people of the Power Elite were visibly much smarter than average mortals.
Christ, what is even there to say about someone so abjectly craniorectal?


Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!

Darth Walrus posted:

This deserves to be quoted in full, because it's a glorious train o' crazy:

I especially like the part where he first emphasizes how these CEOs are totally not there on the power of charisma, and then further on admits that he doesn't remember any revelation, just a general feeling.

No doublethink there.
