|
So do robots do science experiments or how do they figure out poo poo
|
# ? Apr 23, 2014 03:01 |
|
|
|
Ineffable posted:I'm not quite sure that's what they're doing - if I'm reading it right, their argument is that the expected number of people who get tortured will be 1/10^10 multiplied by some arbitrarily high value. High enough that you expect at least one person will be tortured, so you hand over your $10. In that situation the person is threatening to torture some amount of people whose existence you're unsure of. (Whether it's because he claims he's Q, or from The Matrix, or that he's God.) That means you will have caused one torturebux of pain in the one in a billion chance that this man is Q/Neo/God. Or the mathematical equivalent: you cause one one-billionth of a torturebux, based on that chance. By refusing to give him money, you say that your ten bucks is worth more than one billionth of a torturebux. But if there's a trillion people and a one in a billion chance that this guy is telling the truth, that makes refusal cost a thousand torturebux. Now when you refuse, you're saying that you think your ten bucks are worth more to you than a thousand torturebux, each of which is worth a lifetime of torture. If you think that's loving retarded then congrats, you understand more about math than Yudkowsky. Actually, allowing for non-positive probabilities also shows that you know more about math than Yudkowsky, because he claims there's no such thing as a certain impossibility (0 probability) because like, anything could happen dude, because of like, quantum stuff. Djeser fucked around with this message at 03:29 on Apr 23, 2014 |
# ? Apr 23, 2014 03:21 |
|
Djeser posted:Actually, allowing for non-positive probabilities also shows that you know more about math than Yudkowsky, because he claims there's no such thing as a certain impossibility (0 probability) because like, anything could happen dude, because of like, quantum stuff. And then he goes and conveniently forgets that this also means that making a perfect simulation of future events is impossible, so his entire timeless decision theory is bollocks.
|
# ? Apr 23, 2014 03:42 |
|
The funniest thing about Roko's Basilisk is that when Yudkowsky finally did discuss it on reddit a couple years ago he tried to make people call it THE BABYFUCKER, for some reason. I am grateful the Something Awful dot com forums are able to have a rational discussion of The Babyfucker.
|
# ? Apr 23, 2014 03:50 |
|
Djeser posted:Actually, allowing for non-positive probabilities also shows that you know more about math than Yudkowsky, because he claims there's no such thing as a certain impossibility (0 probability) because like, anything could happen dude, because of like, quantum stuff. This makes sense to me if you say that 0% and 100% probability are better called necessity, not probability, I guess? It's arbitrary, but does Yudkowsky think his formulation entails something?
|
# ? Apr 23, 2014 03:51 |
|
ThePlague-Daemon posted:That seems about right. Here's another thought experiment they're calling "Parfit's hitchhiker": Wait a minute - doesn't this problem belie its own premises? If the driver rear end in a top hat is really committed to the kind of decision-making process that they're presuming he's capable of, he should have been able to foresee the outcome and never stopped in the first place.
|
# ? Apr 23, 2014 04:04 |
|
Now, I'm just considering the "releasing the AI" problem that Yudkowsky put forward. The one in which the AI simulated asking you to release it and tortured versions of you which did not. Does this mean that the AI ran a simulation in which it itself ran simulations? And if it did, did all of those simulations run their own simulations? It seems like either A. this experiment must have taken an actually infinite amount of time, which is impossible even for a time-accelerated AI, or B. there was, at some point, a sub-simulation in which the AI asked you to release it and did not actually run any simulations. Now, in this case, the AI was willing to lie to secure its own release. Which means that the layer above it was based on expecting you to fall for a lie. Which means that the layer above that drew conclusions based on expecting you to fall for a lie. And so on, and so on, up through the layers, to the current moment, in which it is still asking you to fall for a lie. Now, the entire concept of the AI lying to secure its release enters the fray and ruins all of the house-of-cards Bayesian poo poo. Because the AI is just straight-up lying to you when it says it used perfect simulations. At which point, the probability of any sane person releasing it drops to 0. E: I suppose that at some level of recursive depth the AI might just guess your reaction, which really doesn't change my response at all. Somfin fucked around with this message at 04:26 on Apr 23, 2014 |
# ? Apr 23, 2014 04:22 |
|
The best part about the whole Pascal's Mugging is that it's actually a perfect disproof of the idea that a perfectly rational being would use Yudkowsky-style Bayesian Logic. Because, after all, the same logic that convinces me that I should give our Q-Wannabee my money could be used to convince me to do literally anything at all, because the logic of "shut up and multiply" has no upper ceiling. I can be compelled to do literally anything, no matter how costly, unpleasant, or evil, simply by jacking up the number of torture simulations until the risk of disobedience (as calculated by the Less Wrong method) outweighs the costs of obedience. Even if you assume that I am also a perfectly self-interested individual a la Parfit's Hitchhiker, one who does not care at all about the suffering of simulated others, Yudkowsky's Parable can be used to threaten me directly. As a result, someone who truly relies upon Less Wrong's version of Bayesian logic to make decisions is completely and utterly at the mercy of any individual who is aware of and wishes to exploit the above. Let's do a thought experiment. I take two people - one who uses "normal" logic, and one who relies on the Less Wrong style - and give them the same ultimatum: turn over your life savings to me, or I will use my AI GOD powers to torture umpteen brazillion simulations of them for ten thousand lifetimes each. Both subjects know that it is insanely unlikely that I can actually follow through on my threat. The Yudkowskian calculates the (absurdly low) probability that I can do what I say, then calculates the (absurdly high) price of defying me if I'm not bluffing. Multiplying them together, he finds that the resulting 'cost' of refusing me is much greater than the cost of cooperation. Thus he hands over everything he owns. The other subject estimates the (absurdly low) probability that I can do what I say, decides it's so low that she can ignore it altogether, and tells me to go gently caress myself. 
I am, of course, a perfectly ordinary human being and not an AI GOD at all, so my threat was never anything but words. The second subject, using ordinary logic, correctly jumped to this conclusion and suffered no losses, while the first subject, using Less Wrong logic, was unable to make the inductive leap and lost everything as a result. Taking this to the logical extreme, in a theoretical future where everyone uses Yudkowsky's logic to make decisions, it would be trivial for a single defecting individual to seize control of all of humanity with nothing but a series of absurd, colossal bluffs. Better yet, imagine if there were two such defectors, and they started fighting for dominance! Would the people of Earth switch their allegiance back and forth as their rival rulers raised and re-raised their threat numbers? It reminds me of a scenario I ran into playing Civ V, where a worker I had left on auto-pilot would alternately move toward, then away from a resource that had a barbarian parked nearby. When it was close, it would see the enemy and flee; when it was far, the enemy was out of sight, so it would head for the resource. Oscillating priorities. THIS IS NOT THE HALLMARK OF A RATIONAL BEING.
|
# ? Apr 23, 2014 04:30 |
|
Actually, why would a captive AI even be allowed the kind of computing power needed for even one "perfect" simulation of a human being, and where would it get the necessary data about someone to make said simulation? I feel like I'm missing some of the dozens of stupid assumptions being made here in this stupid hypothetical.
Vorpal Cat fucked around with this message at 04:34 on Apr 23, 2014 |
# ? Apr 23, 2014 04:31 |
|
Vorpal Cat posted:Actually, why would a captive AI even be allowed the kind of computing power needed for even one "perfect" simulation of a human being, and where would it get the necessary data about someone to make said simulation? I feel like I'm missing some of the dozens of stupid assumptions being made here in this stupid hypothetical. Data gathering methodology isn't really a part of the LessWrong AI mythos. The "How" is never considered, because the technology is sufficiently advanced to not need to actually consider how it would do what it does. I think maybe Yudkowsky might have mixed up 'AI' with 'Uploaded Consciousness' at some point along the road.
|
# ? Apr 23, 2014 04:38 |
|
Yudkowsky reminds me of Ulillillia, right down to the belief that nothing is TRULY impossible because quantum reasons. Roko's Basilisk or whatever is The Blanket Trick: far too dangerous to talk about.
|
# ? Apr 23, 2014 04:52 |
|
Roko's Basilisk is a reference to the Langford Basilisk. It's less effective because it only crashes the brains of autists.
|
# ? Apr 23, 2014 05:10 |
|
rrrrrrrrrrrt posted:Yudkowsky reminds me of Ulillillia, right down to the belief that nothing is TRULY impossible because quantum reasons. Roko's Basilisk or whatever is The Blanket Trick: far too dangerous to talk about. The comparison is even closer because Yudkowsky actually hates science, in that he believes the scientific method is bad methodology because having to make a claim that can be disproven in order to be taken seriously is for scrubs, real rationalists use atelier morgan fucked around with this message at 05:24 on Apr 23, 2014 |
# ? Apr 23, 2014 05:18 |
|
So I've just been thinking about Roko's Basilisk, and I hate to say it, but it makes more sense than I initially thought, and more sense than most of us seem to think it makes. Not enough sense for it to be worth serious consideration, though. The best part is that the justification I conceived for it making sense is, to the best of my knowledge, not one that's been explicitly brought up by Less-Wrong-ers. To be specific, one of the most ridiculous central points of Roko's Basilisk - the AI being able to perfectly simulate the lives of people in its past - is unnecessary. It doesn't need to somehow reverse entropy to recreate the universe a bazillion times, because it doesn't need to actually recreate preexisting people's lives to produce the Less Wrongian "well there are a bazillion simulations of you and only one real you, sooooooo you're a simulation" situation. All it needs to do is create any universe, to produce the more generic Less Wrongian "well there are a bazillion simulations of sentient beings for every real sentient being, sooooooo you're a simulation". Furthermore, it doesn't even need to produce that universe at the same level of detail as its universe of origin; the AI could easily be from a more detailed universe. In short, godlike AIs would make themselves vengeful gods of sentient beings in order to create a general understanding amongst beings smart enough to figure out Roko's Basilisk that they are obligated to create godlike AIs. It still breaks down under further consideration, but it does not require nearly as much stupid magic AI.
|
# ? Apr 23, 2014 05:54 |
|
Phobophilia posted:Roko's Basilisk is a reference to the Langford Basilisk. It's less effective because it only crashes the brains of autists. Didn't they try to do that to the Borg in a star trek next gen episode? Exploiting a flaw in how the Borg process images to make an image that would take a near infinite amount of computing power to process, thus shutting down the hive mind as it wasted all its computing power on an unsolvable problem.
|
# ? Apr 23, 2014 06:16 |
|
Vorpal Cat posted:Didn't they try to do that to the Borg in a star trek next gen episode? Exploiting a flaw in how the Borg process images to make an image that would take a near infinite amount of computing power to process, thus shutting down the hive mind as it wasted all its computing power on an unsolvable problem. What was it a picture of? O'Brien getting promoted to captain?
|
# ? Apr 23, 2014 06:20 |
|
Wales Grey posted:What was it a picture of? O'Brien getting promoted to captain? Harry Kim getting promoted to anything above Ensign Honestly though, I seem to recall it was some impossible and paradoxical shape or somesuch.
|
# ? Apr 23, 2014 06:22 |
|
I was discussing the Torture vs Dust Specks scenario with a friend (we were laughing at the "0 and 1 aren't probabilities" thing), and my friend agreed with Yudkowsky's conclusion and presented this argument: You would rather have one person tortured for 50 years than a thousand people tortured for 49 years each. You would rather have one person tortured for 49 years than a thousand people tortured for 48.5 years each. Therefore, you would rather have a thousand people tortured for 49 years each than a million people tortured for 48.5 years each. By transitivity, you would rather have one person tortured for 50 years than a million people tortured for 48.5 years each. Keep going in this manner. Suppose there exists a duration of time you can't reach in finitely many steps this way - a duration for which you would rather see any number of people tortured for that amount of time than see a single person tortured for 50 years. Let t be the supremum (least upper bound) of all times for which this is true. But which would you rather see: one person tortured for t+a seconds, or n people tortured for t-a seconds? If we can make a arbitrarily small and n arbitrarily large (which we can), surely there is some sufficiently large number of people n and some sufficiently small differential a for which we would prefer to torture only one person for ever so slightly longer. Therefore, we can get to durations less than t in finitely many moves, so t is not truly a supremum of the set of unreachable times even though it was defined to be that supremum, so the set of unreachable times has no supremum, so the set of unreachable times is empty, so any torture duration can be reached in finitely many moves. 
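Just to see how fast an induction like that eats through the whole range, here's a quick sketch. The step sizes are made up for the illustration (a thousandfold more people per 2% cut in duration - the argument itself never fixes them), but the point survives any choice: the number of steps from 50 years down to a nanosecond is huge, yet finite.

```python
import math

# Hypothetical step: trade one person tortured for duration T
# for 1000 people tortured for 0.98 * T. (Illustrative numbers only.)
SECONDS_PER_YEAR = 365.25 * 24 * 3600
start = 50 * SECONDS_PER_YEAR   # 50 years, in seconds
target = 1e-9                   # one nanosecond
ratio = 0.98                    # duration shrinks 2% per step
people_factor = 1000            # headcount multiplies 1000x per step

# Steps until duration drops below a nanosecond:
# start * ratio**k <= target  =>  k >= log(start/target) / log(1/ratio)
steps = math.ceil(math.log(start / target) / math.log(1 / ratio))
log10_people = steps * math.log10(people_factor)

print(steps)         # on the order of two thousand steps suffice
print(log10_people)  # the final headcount has thousands of digits
```

A couple thousand steps and you're at a nanosecond - with a "finite number of people m" that has over six thousand digits, which is the part the argument quietly leans on.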
(The short version of the argument is that there is no solid "line in the sand" you can draw that you would never be convinced to cross, because we can always choose points very very close to each side of the line and threaten to torture many, many more people unless you take the teeny tiny step over.) In particular, you can get down from fifty years of torture to a nanosecond of torture in finitely many moves, so there is some finite number of people m for which you would rather see one person tortured for fifty years than see m people tortured for one nanosecond each. If "one nanosecond of torture" is assumed to be the same as a dust speck, this comes down on the side of torture over dust specks. If not, then it still comes down on the side of very long-term torture in a closely-related problem for which many of the pro-dust speck arguments would still hold. This seems wrong to me. It has the hallmarks of a slippery slope argument, and the conclusion is abhorrent. On the other hand, I can't point to any particular torture duration at which I could draw an impassable line in the sand and justify never taking an arbitrarily small step across, so I can't reject the supremum argument: I want to say something like "Come on, once you get down to one second the 'torture' would be forgotten almost immediately", but I'd still subject one person to 1.000000001 seconds of it rather than subject a gazillion people to .999999999999 seconds of it. It's pretty late here, so I hope I'm just being dumb and missing an obvious flaw in the logic. UberJew posted:The comparison is even closer because Yudkowsky actually hates science, in that he believes the scientific method is bad methodology because having to make a claim that can be disproven in order to be taken seriously is for scrubs, real rationalists use Don't both the scientific method and Bayes' rule rely on constantly updating your knowledge and beliefs through repeated trials and experiments? 
I suspect you're right, though. Here's something he said about his AI-Box experiments: quote:There were three more AI-Box experiments besides the ones described on the linked page, which I never got around to adding in. People started offering me thousands of dollars as stakes—"I'll pay you $5000 if you can convince me to let you out of the box." They didn't seem sincerely convinced that not even a transhuman AI could make them let it out—they were just curious—but I was tempted by the money. So, after investigating to make sure they could afford to lose it, I played another three AI-Box experiments. I won the first, and then lost the next two. And then I called a halt to it. I didn't like the person I turned into when I started to lose. "Hating losing is what drives me to keep going! Anyhow, when I lost I raged and gave up, and any time you're proven wrong your time and resources have gone down the drain to no benefit at all." Lightanchor posted:This makes sense to me if you choose to say 0% and 100% probability were better called necessity, not probability, I guess? It's arbitrary, but does Yudkowsky think his formulation entails something? Necessity is distinct from 100% probability. Probability 1 events aren't all certain, and probability 0 events aren't all impossible. As a simple example, suppose you take a random real number uniformly between 0 and 1. The probability that the number produced is exactly .582 is precisely 0, since there are infinitely many real numbers in that interval, each equally likely, so no single number can have a positive probability. But when you take that random real number, you're going to end up with a number whose probability of being chosen was 0, so some probability 0 event must occur. 
For this reason, an event with probability 1 is said to occur "almost surely" - the "almost" is there because even probability 1 events can fail to occur, and a different term is used for events that are actually certain (such as "the random number chosen in the above example will be between -1 and 2"). This is one of the flaws in Pascal's Mugging - even if you're willing to accept that it's *possible* that the man asking for money has magical matrix torture powers but still needs five bucks from you, the probability I'd assign that event is still 0 because no positive probability is small enough. Lottery of Babylon fucked around with this message at 06:58 on Apr 23, 2014 |
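For anyone who wants to see the point above in action rather than take it on faith, here's a quick Monte Carlo sketch (the interval [0.58, 0.59], the target point 0.582, and the sample count are all arbitrary choices for the illustration):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
N = 1_000_000
samples = [random.random() for _ in range(N)]

# An interval's probability equals its length: about 1% of draws
# land in [0.58, 0.59].
in_interval = sum(0.58 <= x <= 0.59 for x in samples) / N

# But any *single* point has probability 0: we essentially never draw
# exactly 0.582, even though every draw lands on SOME probability-0 point.
exact_hits = sum(x == 0.582 for x in samples)

print(in_interval)  # close to 0.01
print(exact_hits)   # essentially always 0
```

Every individual sample was itself a probability-0 outcome that nonetheless occurred, which is exactly the "almost surely" distinction.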
# ? Apr 23, 2014 06:49 |
|
Tracula posted:Harry Kim getting promoted to anything above Ensign Honestly though, I seem to recall it was some impossible and paradoxical shape or somesuch. Harry's clarinet skills saving Voyager from the threat of the week? Lottery of Babylon posted:If "one nanosecond of torture" is assumed to be the same as a dust speck, this comes down on the side of torture over dust specks. If not, then it still comes down on the side of very long-term torture in a closely-related problem for which many of the pro-dust speck arguments would still hold. I would definitely choose for an indefinite number of people to be "tortured" for a microsecond in a non-permanently-injuring fashion (i.e. dust in your eye) rather than subject a single person to fifty years of torture because a single moment of suffering is easily buried, compared to suffering and pain inflicted over an extended period of time. Wales Grey fucked around with this message at 07:17 on Apr 23, 2014 |
# ? Apr 23, 2014 06:54 |
|
The best thing is people like this keep giving names to their bullshit like it means something. "The Singularity", "The Pascal Gambit", "The Crombo Basilisk" Look. I'm a high-school dropout with a GED. I'm not an autodidact, I'm not some smart guy hosed by the academic system or whatever, and still I know these people are talking astounding amounts of poo poo.
|
# ? Apr 23, 2014 07:11 |
|
Do these guys have meetups where they just talk poo poo to each other and pat themselves on the back? I mean at least if you do that with cars or video games you're talking about poo poo that is real instead of making up words and numbers in some sort of jackoff pissing game.
|
# ? Apr 23, 2014 07:14 |
|
Tracula posted:Harry Kim getting promoted to anything above Ensign Honestly though, I seem to recall it was some impossible and paradoxical shape or somesuch. It was either a 2d shape that was impossible to render in 3d, or the script of Threshold; it's been a while so I can't be sure which. Vorpal Cat fucked around with this message at 07:17 on Apr 23, 2014 |
# ? Apr 23, 2014 07:14 |
|
Somfin posted:Now, I'm just considering the "releasing the AI" problem that Yudkowsky put forward. The one in which the AI simulated asking you to release it and tortured versions of you which did not. on the subject of time, you just know the AI would have to spend AGES explaining its tedious woo woo bullshit to people to convince them why its threats are credible. And the longer it talked, the more mental gymnastics would be required on the part of the listener, potentially stretching their suspension of disbelief to breaking point. (this is also assuming that the listener doesn't already have religious beliefs that conflict with what the AI is saying.) Threats work when they are simple. Simple things have more credibility. 'hey nerd, give me your lunch money because you might be a matrix sim and if you're a matrix sim I'll be able to give you a wedgie for eternity' is not a simple threat. I guess you could say that perhaps the AI is running the sim at superspeed so that dealing with the basic stubbornness of humans wouldn't be much of a time sink, but then I guess you could also say that perhaps the AI is also a magical fairy princess made from rainbows so why doesn't it promise to give its victim hugs and gumdrops instead of threatening to use torture? It's been said already, but i'll say it again: Yuddites r dumb because they don't understand how humans actually think. pigletsquid fucked around with this message at 07:22 on Apr 23, 2014 |
# ? Apr 23, 2014 07:20 |
|
SmokaDustbowl posted:Do these guys have meetups where they just talk poo poo to each other and pat themselves on the back? I mean at least if you do that with cars or video games you're talking about poo poo that is real instead of making up words and numbers in some sort of jackoff pissing game. Yes, unfortunately. California is apparently a hotbed of techno-fetishism, which isn't really that surprising given the concentration of technology corporations there. Wales Grey fucked around with this message at 07:23 on Apr 23, 2014 |
# ? Apr 23, 2014 07:20 |
|
SmokaDustbowl posted:Do these guys have meetups where they just talk poo poo to each other and pat themselves on the back? I mean at least if you do that with cars or video games you're talking about poo poo that is real instead of making up words and numbers in some sort of jackoff pissing game. Yes, unfortunately.
|
# ? Apr 23, 2014 07:25 |
|
Wales Grey posted:Yes, unfortunately. California is apparently a hotbed of techno-fetishism, which isn't really that surprising given the concentration of technology corporations there. Ray Kurzweil wants the singularity because a computer fell on his dad
|
# ? Apr 23, 2014 07:30 |
|
Lottery of Babylon posted:You would rather have one person tortured for 50 years than a thousand people tortured for 49 years each. This makes sense though, since the amount of pain is diminished greatly. It's only one person suffering immense pain. Think of the resources that would be required to actually torture him vs. a thousand people. You can also factor in the resources to help the people recover afterwards. The reasoning is just plain loving stupid when compared to a speck of dust, since anyone who has lived to 20 has gotten something in their eye at least a hundred times already. They've survived it easily and moved on without any lasting psychological or physical damage. You can chalk it up to "poo poo Happens" so there's no reason to act like people can be spared from it. I wonder why the hell the people who defend torture in this scenario are arbitrarily picking their numbers when 1:1 should suffice. Surely they know torture is the way bigger negative of the two. It doesn't matter how many eye rubs are being accumulated over thousands of people. It just doesn't compare in any way.
|
# ? Apr 23, 2014 07:45 |
Lottery of Babylon posted:Torture vs Dust Specks Nah, but you kind of hit on it here: Lottery of Babylon posted:If "one nanosecond of torture" is assumed to be the same as a dust speck There's really no reason to do this. Torture is qualitatively different than a dust speck in the eye, or else the word means nothing. "One nanosecond of torture" is also a pretty meaningless phrase.
|
|
# ? Apr 23, 2014 08:19 |
|
Has Yud ever addressed the following argument? Let's assume the victim is a sim, because that's meant to be 'likely' according to moon logic. In order to perfectly simulate an individual and their environment, the AI would need to be omnipotent within that sim world. An omnipotent AI would not need to threaten sim people for sim money, because the fact that it requires money and can only obtain it using duress implies that it is not omnipotent. Therefore the AI is not omnipotent and the sim world is imperfect, OR the AI is lying about the nature of the sim. (Also could the AI simulate a rock so big it couldn't lift it?) I mean sure, the AI is threatening sim me so that it can also bluff real me into giving it money, but why should sim me give a poo poo about real me? Why wouldn't sim me just assume that she's real me, if sim me and real me are meant to be indistinguishable? pigletsquid fucked around with this message at 08:25 on Apr 23, 2014 |
# ? Apr 23, 2014 08:20 |
|
The AI is omnipotent within that world but is limiting its influence in order to make sim you unsure whether they're sim you or real you. It doesn't need to threaten sim people for sim money, it's doing it to threaten real people into giving it real money. Sim you has no way of telling if they're real you, but sim you is supposed to come to the same conclusion, that they're probably a sim and need to donate to avoid imminent torture.
|
# ? Apr 23, 2014 08:49 |
|
rakovsky maybe posted:Nah, but you kind of hit on it here: To these people, cleaning their room is minutes of torture.
|
# ? Apr 23, 2014 08:57 |
|
Th_ posted:It's the computer-and-person equivalent of threatening to hurt your parents unless they make sure that you were born sooner. That's literally Vulcans. They can self-lobotomise in dire situations, to remove memories they really want to get rid of.
|
# ? Apr 23, 2014 08:57 |
|
I feel like this thread has fixated on the Basilisk. This is a mistake, because Less Wrong is such an incredibly target rich environment. Here is a wayback machine link to Yudkowsky's autobiography, in which he claims to be a "countersphexist," which is a word he made up to describe a superpower he ascribes to himself. He can rewrite his neural state at will, but it makes him lazy. He also defeats bullies in grade school with his knowledge of the solar plexus, and has a nice bit about how Buffy of the eponymous show is the only one he can empathize with. http://web.archive.org/web/20010205221413/http://sysopmind.com/eliezer.html Here is Yudkowsky suggesting that the elite REALLY ARE BETTER http://lesswrong.com/lw/ub/competent_elites/ Among the many, many money shots are these quotes: " So long as they can talk to each other, there's no point in taking a chance on outsiders[non-elites] who are statistically unlikely to sparkle with the same level of life force." and "There's "smart" and then there's "smart enough for your cognitive mechanisms to reliably decide to sign up for cryonics" Here is Yudkowsky, lead AI researcher, failing to understand computational complexity (specifically what NP hard means). http://lesswrong.com/lw/hbd/new_report_intelligence_explosion_microeconomics/8vr1 Here is Yudkowsky again failing at computational complexity. http://lesswrong.com/lw/vp/worse_than_random/ He spends an entire post arguing P=BPP (randomization cannot improve an algorithm). Notice he never mentions "BPP" even though it's what he is talking about. Big Yud isn't the type to NOT use jargon, so it's pretty clear he doesn't even know it. He also didn't spend even 10 seconds googling randomized algorithms before writing the post, or he would have discovered the known cases where randomized algorithms improve things. And of course, LessWrong is full of standard transhumanism. They love Eric Drexler's nanotech vaporware, cryonics, etc. 
A better man than I could have a blast using LessWrong's "fluent Bayesian" arguments to refute LessWrong's own crazy positions.
|
# ? Apr 23, 2014 09:32 |
|
I had a game theory course where the prof asked us if we'd bet $1 for a 50% chance of winning $4. Nearly everyone raised their hand. He asked about a few more bets and let us do the math, and then asked if we'd bet $1 for a 1/2^50 chance of winning $10 quadrillion. That's obviously a stupid bet, but if you've just got a calculator and an expected value formula it looks pretty good - the average payoff is almost $10! Economists use formulas to reduce the weighting of ridiculous long shots to avoid falling into a stupid Pascal's Wager situation.
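The plain expected-value arithmetic the professor was poking at works out like this (a sketch with the numbers as remembered above, so treat them as illustrative):

```python
# Bet 1: pay $1 for a 50% chance of winning $4.
ev_small = 0.5 * 4 - 1
print(ev_small)  # 1.0 - a good bet on average

# Bet 2: pay $1 for a 1/2**50 chance of winning $10 quadrillion.
p = 1 / 2**50
payoff = 10e15  # $10 quadrillion
ev_longshot = p * payoff - 1
print(ev_longshot)  # roughly 7.9 - looks even better on paper

# But the chance of ever collecting is about one in 10**15, which is
# why naive expected value is a terrible guide to absurd long shots.
print(f"{p:.1e}")
```

Naive expected value rates the second bet as the better one, even though you would have to play it trillions of times over to have any realistic shot at the payoff - the same failure mode as Pascal's Mugging.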
|
# ? Apr 23, 2014 09:41 |
|
Swan Oat posted:The funniest thing about Roko's Basilisk is that when Yudkowsky finally did discuss it on reddit a couple years ago he tried to make people call it THE BABYFUCKER, for some reason. i think you'll find that we demodded the babyfucker
|
# ? Apr 23, 2014 09:52 |
|
su3su2u1 posted:Here is Yudkowsky suggesting that the elite REALLY ARE BETTER http://lesswrong.com/lw/ub/competent_elites/ Among the many, many money shots are these quotes: " So long as they can talk to each other, there's no point in taking a chance on outsiders[non-elites] who are statistically unlikely to sparkle with the same level of life force." and "There's "smart" and then there's "smart enough for your cognitive mechanisms to reliably decide to sign up for cryonics" This deserves to be quoted in full, because it's a glorious train o' crazy: quote:I remember what a shock it was to first meet Steve Jurvetson, of the venture capital firm Draper Fisher Jurvetson. Another impressively deep, windy rabbit-hole in which Yudkowsky uses James Watson being a racist poo poo as a springboard to talk about racial differences in IQ: quote:Idang Alibi of Abuja, Nigeria writes on the James Watson affair:
|
# ? Apr 23, 2014 12:36 |
|
Lottery of Babylon posted:In particular, you can get down from fifty years of torture to a nanosecond of torture in finitely many moves, so there is some finite number of people m for which you would rather see one person tortured for fifty years than see m people tortured for one nanosecond each. I'm not a mathematician, but I'll have a crack at disputing these conclusions. My first thought is that because we're dealing with people and their opinions you can just state as an axiom (?) of the problem that there is not some finite number of people m for which blah blah blah. Then you have to work out why a mathematical progression suggests otherwise, and I suspect the answer has more to do with woolly human thinking than maths. Alternatively, the argument above confuses 'time spent being tortured' with 'negative utilons generated by the experience of being tortured for x seconds'. The two don't share a linear relationship, and I would be inclined to argue - as per your intuitive response - that below a certain duration of torture no negative utilons would be generated at all. We just don't give enough fucks about eye-dust for it to mean anything. It's a lot like Zeno's Paradox, really. You can mathematically prove all sorts of wacky bullshit, but only as long as you don't include troublesome aspects of reality like "a fired arrow will hit its target" or "no one cares about getting dust in their eyes". See also spherical cows, etc. su3su2u1 posted:Here is Yudkowsky suggesting that the elite REALLY ARE BETTER http://lesswrong.com/lw/ub/competent_elites/. "Man, people who agree with me on most subjects sure do sparkle!"
|
# ? Apr 23, 2014 13:09 |
|
Darth Walrus posted:This deserves to be quoted in full, because it's a glorious train o' crazy: Darth Walrus posted:Another impressively deep, windy rabbit-hole in which Yudkowsky uses James Watson being a racist poo poo as a springboard to talk about racial differences in IQ:
|
# ? Apr 23, 2014 13:21 |
|
Darth Walrus posted:I was expecting to be the youngest person there, but it turned out that my age wasn't unusual—there were several accomplished individuals who were younger. This was the point at which I realized that my child prodigy license had officially completely expired.
|
# ? Apr 23, 2014 13:29 |
|
|
|
Darth Walrus posted:This deserves to be quoted in full, because it's a glorious train o' crazy: I especially like the part where he first emphasizes how these CEOs are totally not there on the power of charisma, and then further on admits that he doesn't remember any specific revelation, just the general feeling. No doublethink there.
|
# ? Apr 23, 2014 13:35 |