Iunnrais
Jul 25, 2007

It's gaelic.
Yeah, that's the thing that gets me every time. For all his anti-religion, anti-deity stance, he is HIGHLY religious and simply thinks that the "singularity AI" is a god. He uses every theological argument that's been devised by monks for the past 2000-odd years, even (or especially) the most discredited ones, and just applies them to a "sufficiently advanced AI". It's a strange hypocrisy, and it baffles me that it inspires no cognitive dissonance in him; but then, since he disparages "religion", he's never actually studied any of it, so ironically he doesn't understand the weaknesses he claims to assault.

pentyne
Nov 7, 2012

Iunnrais posted:

Yeah, that's the thing that gets me every time. For all his anti-religion, anti-deity stance, he is HIGHLY religious and simply thinks that the "singularity AI" is a god. He uses every theological argument that's been devised by monks for the past 2000-odd years, even (or especially) the most discredited ones, and just applies them to a "sufficiently advanced AI". It's a strange hypocrisy, and it baffles me that it inspires no cognitive dissonance in him; but then, since he disparages "religion", he's never actually studied any of it, so ironically he doesn't understand the weaknesses he claims to assault.

Because once you try to argue anything tangentially related to religion or politics the LW community just smugly smiles and treats you like a precocious child.

You'd think the unofficial ban of any political discussion would be a major red flag that LW is incapable of serious debate that doesn't include "Well, you're completely wrong because you don't use Bayes, and if you did clearly you'd agree with me."

Darth Walrus
Feb 13, 2012

pentyne posted:

Because once you try to argue anything tangentially related to religion or politics the LW community just smugly smiles and treats you like a precocious child.

You'd think the unofficial ban of any political discussion would be a major red flag that LW is incapable of serious debate that doesn't include "Well, you're completely wrong because you don't use Bayes, and if you did clearly you'd agree with me."

Speaking of, this article was posted earlier, but seems relevant.

quote:

I'm no longer a skeptic, but still I can't resist the old skeptic urge to do a bit of debunking. After all, there are a lot of crackpots out there. There are people, for example, who believe that a superintelligent computer will arise in the next twenty years and then promptly either destroy humanity or cure death and set us free. There are people who believe that one of the best works of English literature is an unfinished Harry Potter fanfic by someone who can barely write a comprehensible English sentence. There are even people who believe the best thing you can do to help the poor and the starving is become a city stockbroker or Silicon Valley entrepreneur! And more often than not, the same people believe all these crazy things!

The striking thing about these people is that they are no ordinary kooks — some of them actually identify as skeptics themselves, and all of them claim to be committed rationalists. Many are even full-time evangelists for "rationality", and can justify in sound and impressive detail why all their beliefs are correct. And they're not just backed up by the laws of logic and mathematics, but also by some of the finest minds and fattest wallets in Silicon Valley. Who are these people? They are the members of the Elect group who have received into their minds and hearts the glorious truth of something called Bayes' Theorem.

BAYESIAN GRACE

Bayes' Theorem is a simple formula that relates the probabilities of two different events that are conditional upon each other. First proposed by the 18th-century Calvinist minister Thomas Bayes, the theorem languished in relative obscurity for nearly 200 years, an obscurity some would say was deserved. In its general form it follows trivially from the definitions of classical probability, which were formalised not long after Bayes' death by Pierre-Simon Laplace and others. From the time of Laplace until the 1960s, Bayes' Theorem barely merited a mention in statistics and probability textbooks.

The theorem owes its present-day notoriety to Cold-War-era research into statistical models of human behaviour [1], the same research movement that gave us the Prisoner's Dilemma, Mutually Assured Destruction and gently caress You, Buddy. Statisticians in the 1950s and 1960s, initially concentrated around the University of Chicago and the Harvard Business School, decided to interpret probability not as a measure of chance, but as a measure of the confidence an agent has in its subjective beliefs. Under this interpretation, Bayes' Theorem acquired great prescriptive power: it expressed how a perfectly rational agent should revise its beliefs upon obtaining new evidence. In other words, the researchers had discovered in Bayes' Theorem an elegant, one-line formula for how best to learn from experience — the formula for a perfect brain!
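
For the uninitiated, the theorem really is that small. Here it is as a toy calculation in Python (the function name and all the numbers are mine, invented purely for illustration):

code:
# Bayes' Theorem: P(H | E) = P(E | H) * P(H) / P(E)
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H | E) from a prior P(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# An agent starts 20% confident in a hypothesis, then sees evidence that is
# five times likelier if the hypothesis is true than if it is false.
print(round(bayes_update(prior=0.2, p_e_given_h=0.5, p_e_given_not_h=0.1), 3))  # 0.556

The prescriptive reading is simply that the posterior becomes the agent's new prior the next time evidence arrives.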

The researchers decided to reinterpret all statistics in this subjective, prescriptive and profoundly individualist sense, launching a revolution that set their field ablaze. The struggle that followed pitted "frequentists", who defended the old statistical ways, against "Bayesian" radicals — though since most of the Bayesians were ideological mates with Milton Friedman and John von Neumann, maybe "radical" isn't the right word. The Bayesians were excited by the potential applications of their new way of thinking in political, business and military spheres. Their opponents, meanwhile, kept pointing to Bayesianism's central difficulty: assigning a confidence value to a subjective belief is a whole lot more problematic than assigning a probability to the outcome of a coin toss.

The Bayesian revolution was not confined to the field of statistics: truly, it was a shot that was heard around the globe. Cognitive scientists seized on Bayes as a whole new way to introduce quantifiable crap into their models of human behaviour. Analytic philosophers expanded "Bayesianism" into a whole new area of empty pedantry. AI researchers used electronic Bayesian brains in a number of equally unimpressive "intelligent" systems — classifiers, learning systems, inference systems, rational agents. And right-wing economists saw in Bayesian agents — goal-directed, self-interested, anti-social creeps — a model of their ideal investor or consumer.

AMAZING BAYES

More pertinently to my interests, Bayesianism has also found a strong foothold in nerd culture. Bayes' theorem is now the stuff of gurus and conventions, T-shirts and fridge magnets, filking and fanfic. There are people who strive to live by its teachings; it's not an exaggeration to say that a cult has arisen around it. How did this formula create such a popular sensation? Why do so many people identify so strongly with it?

Perhaps the answer lies in the beguiling power of its prescriptive interpretation. In one simple line, Bayes' Theorem tells you how a perfectly rational being should use its observations to learn and improve itself: it's instructive, aspirational and universal. Other mathematical and scientific laws are merely the truth, but Bayes' Theorem is also the way and the light.

I must also admit that the theorem has some wondrous properties, of the kind that can readily inspire devotion. It's a small and simple formula, but it regularly works minor miracles. Its power often surprises; it has a habit of producing counterintuitive but correct results; it won't be fooled by those annoying trick questions posed by smug psychologists; it seems smarter than you are. It's not hard to see why people who discover Bayes' Theorem, like people who discover secret UFO files, gnostic texts, or Prolog programming, can think they've opened up a new world of deeper, strange, alien truths. And from there, it's a short step to become an eager disciple of Bayesianism.

Bayesianism has a particular attraction for nerds, who see in Bayes' Theorem the calculating badass they always imagined themselves to be. Much like the protagonist of a bad sci-fi novel, the theorem decisively uses the available evidence to attain the best possible results every time. It's no surprise that it wins fans among the milieu that idolises Ayn Rand, Robert Heinlein, Frank Herbert, and Ender Wiggin.

True believers can attribute amazing powers to Bayes' Theorem, even in places it doesn't belong. Here, for example, is a committed Bayesian who thinks he has used the theorem to prove that the sentence "absence of evidence is not evidence of absence" is a "logical fallacy". I find this striking for two reasons. Firstly, because I had no idea Bayes' Theorem had anything to say about this matter, and secondly, because I'm pretty sure the sentence in question is not a "fallacy", at least in the sense that the majority of level-headed English-speakers would interpret it. [2] But some Bayesians seem to believe they can conclude a falsehood from their own ignorance.
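
For the record, the Bayesian argument runs roughly as follows (my reconstruction, with made-up numbers): if evidence E is likelier under a hypothesis H than under not-H, then failing to observe E must lower your confidence in H at least a little.

code:
# If E is likelier under H than under not-H, then *not* seeing E lowers P(H).
def posterior_given_no_evidence(prior, p_e_given_h, p_e_given_not_h):
    p_no_e_given_h = 1 - p_e_given_h
    p_no_e_given_not_h = 1 - p_e_given_not_h
    p_no_e = p_no_e_given_h * prior + p_no_e_given_not_h * (1 - prior)
    return p_no_e_given_h * prior / p_no_e

print(round(posterior_given_no_evidence(0.5, 0.8, 0.3), 3))  # 0.222, down from 0.5

So absence of evidence is, in this bookkeeping, weak evidence of absence; whether that makes the proverb a "fallacy" in the everyday sense is another question entirely.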

Another over-enthusiastic Bayes fan is the historian and "New Atheist" loudmouth Richard Carrier, who in his book Proving History claims that any valid historiographic method should be reducible to Bayes' Theorem. In the general sense, this claim couldn't be less interesting: both Bayes and historians are concerned with getting at "truth" by processing "evidence", and pointing this out will enlighten no one. And in the specific sense, the claim couldn't be more stupid. The idea that historical evidence and theses could be reduced to probability values, and that plugging them into Bayes' Theorem would make historical research more accurate or reliable or rigorous, is not only the worst kind of technocrat fantasy, it's also completely unworkable.

In general practice, there's no way to come up with meaningful figures for the right-hand side of the Bayes equation, and so Bayesians inevitably end up choosing values that happen to justify their existing beliefs. The Reverend Bayes originally used his theorem to prove the existence of God, while in his next book, Carrier will apparently use the same theorem to disprove the existence of the historical Jesus. I myself have applied it to a thornier subject: the problem of uncovering academic frauds. In fact, by inputting the precise values for what I believe about academic frauds, their likelihood of spreading pseudoscientific bullshit, and the likelihood that pseudoscientific bullshit is precisely what Carrier is spreading, I have used Bayes' Theorem to calculate that Richard Carrier is almost certainly a fraud, to a confidence level of five sigma. (Calculations available upon request.)

LESS WRONG

The most visible group of Bayesians on the web, and my subject for the rest of this article, is the Singularity Institute of Berkeley, California. [3] This organisation is the Heaven's Gate of Bayesianism, a doomsday cult which preaches that a hostile AI will take over the world and destroy us all (in an event known as the singularity) unless we do something about it — like make regular donations to the Singularity Institute. The Institute is urgently working on our best hope of survival. So far, it has put a lot of windbags on websites.

The principal website associated with the Singularity Institute is lesswrong.com, which describes itself as "a community blog devoted to refining the art of human rationality". As far as I can tell, "refining the art of human rationality" involves using Bayes' Theorem to develop a series of Baloney-Detection-Kit-style heuristics you can slap over any argument. I'm still unsure as to why becoming Bayesian will help us against the singularity threat: maybe the superintelligent AI will then confuse us for one of its own? I've no doubt the answer is explained somewhere in the 100,000 "sequences" on lesswrong.com, but please don't ask me to comb them for it.

The "sequences" are a pompously-titled series of blog entries written by Lesswrong's central guru figure, the charismatic autodidact Eliezer Yudkowsky, in which the tenets of the cult are murkily explained. I must confess that I fail to see the appeal of this guy and his voluminous writings. I find the "sequences" largely impenetrable, thick with nerd references and homespun jargon, and written in a claggy, bombastic, inversion-heavy style that owes much to Tolkien's Return of the King and more still to Yoda. When I read stuff like "correct complexity is only possible when every step is pinned down overwhelmingly", or "They would need to do all the work of obtaining good evidence entangled with reality, and processing that evidence coherently, just to anticorrelate that reliably", I can't hit the back button quickly enough. The main purpose of this kind of writing is to mystify and to overawe. Like countless other gurus, Yudkowsky has the pose of someone trying to communicate his special insight, but the prose of someone trying to conceal his lack of it. I suspect this is the secret of much of his cult-leader charisma.

And I suspect the rest can be accounted for by the built-in charisma of the autodidact, which is part of Western cultural heritage. Here in Christendom, people are conditioned to admire the voice in the wilderness, the one who goes it alone, the self-made man, the pioneer. Popular scientific narratives are filled with tales of the "lone genius" who defies the doubters to make his great advances; these tales are mostly bullshit, but people in this post-Christian age desperately want to believe them. And Yudkowsky, a wannabe polymath with no formal education, whose ego is in inverse proportion to his achievements, has all the stylings of a self-made science Messiah. He's an earthly avatar for the kind of forsaken savants who think their IQ scores alone should entitle them to universal respect and admiration.

Yudkowsky isn't the only autodidact at lesswrong.com: his apostle Luke Muehlhauser is also proudly self-educated, boasting "I studied psychology in university but quickly found that I learn better and faster as an autodidact." I'll take his word on "faster", but I'll have to quibble about "better". The trouble with autodidacts is that they tend to suffer a severe loss of perspective. Never forced to confront ideas they don't want to learn, never forced to put what they've learned in a wider social context, they tend to construct a self-justifying and narcissistic body of knowledge, based on an idiosyncratic pick-and-mix of the world's philosophies. They become blinded to the incompleteness of their understanding, and prone to delusions of omniscience, writing off whole areas of inquiry as obviously pointless, declaring difficult and hotly-debated problems to have a simple and obvious answer. Yudkowsky and Muehlhauser exhibit all these symptoms in abundance, and now, surrounded by cultists and fanboys and uncritical admirers, they might well be hopeless cases.

BAYESIANISM IS THE MIND-KILLER

Many critics of the Singularity Institute focus on its cult-like nature: the way it presents itself as the only protection against an absurdly unlikely doomsday scenario; the way its members internalise a peculiar vocabulary that betrays itself when they step outside the cult confines; the way they keep pushing the work of their idolised cult guru on unwilling readers. (In particular, they keep cheerleading for Yudkowsky's endlessly dire Harry Potter fanfic, Mary Sue and the Methods of Rationality.) While all these criticisms are legitimate, and the cultish aspects of the Singularity Institute are an essential part of its power structure, I'm more concerned about the political views it disseminates under the guise of being stridently non-political.

One of Yudkowsky's constant refrains, appropriating language from Frank Herbert's Dune, is "Politics is the Mind-killer". Under this rallying cry, Lesswrong insiders attempt to purge discussions of any political opinions they disagree with. They strive to convince themselves and their followers that they are dealing in questions of pure, refined "rationality" with no political content. However, the version of "rationality" they preach is expressly politicised.

The Bayesian interpretation of statistics is in fact an expression of some heavily loaded political views. Bayesianism projects a neoliberal/libertarian view of reality: a world of competitive, goal-driven individuals all trying to optimise their subjective beliefs. Given the demographics of lesswrong.com, it's no surprise that its members have absorbed such a political outlook, or that they consistently push political views which are elitist, bigoted and reactionary.

Yudkowsky believes that "the world is stratified by genuine competence" and that today's elites have found their deserved place in the hierarchy. This is a comforting message for a cult that draws its membership from a social base of Web entrepreneurs, startup CEOs, STEM PhDs, Ivy leaguers, and assorted computer-savvy rich kids. Yudkowsky so thoroughly identifies himself with this milieu of one-percenters that even when discussing Bayesianism, he slips into the language of a heartless rentier. A belief should "pay the rent", he says, or be made to suffer: "If it turns deadbeat, evict it."

Members of Lesswrong are adept at rationalising away any threats to their privilege with a few quick "Bayesian Judo" chops. The sufferings caused by today's elites — the billions of people who are forced to endure lives of slavery, misery, poverty, famine, fear, abuse and disease for their benefit — are treated at best as an abstract problem, of slightly lesser importance than nailing down the priors of a Bayesian formula. While the theories of right-wing economists are accepted without argument, the theories of socialists, feminists, anti-racists, environmentalists, conservationists or anyone who might upset the Bayesian worldview are subjected to extended empty "rationalist" bloviating. On the subject of feminism, Muehlhauser adopts the tactics of an MRA concern troll, claiming to be a feminist but demanding a "rational" account of why objectification is a problem. Frankly, the Lesswrong brand of "rationality" is bigotry in disguise.

Lesswrong cultists are so careful at disguising their bigotry that it may not be obvious to casual readers of the site. For a bunch of straight-talking rationalists, Yudkowsky and friends are remarkably shifty and dishonest when it comes to expressing a forthright political opinion. Political issues surface all the time on their website, but the cult insiders hide their true political colours under a heavy oil slick of obfuscation. It's as if "Politics is the mind-killer" is a policy enforced to prevent casual readers — or prospective cult members — from realising what a bunch of far-out libertarian fanatics they are.

Take as an example Yudkowsky's comments on the James Watson controversy of 2007. Watson, one of the so-called fathers of DNA research, had told reporters he was "gloomy about the prospect of Africa" because "all our social policies are based on the fact that their intelligence is the same as ours — whereas all the testing says not really". Yudkowsky used this racist outburst as the occasion for some characteristically slippery Bayesian propagandising. In his essay, you'll note that he never objects to or even mentions the content of Watson's remarks — for some reason, he approaches the subject by sneering at the commentary of a Nigerian journalist — and neither does he question the purpose or validity of intelligence testing, or raise the possibility of inherent racism in such tests. Instead, he insinuates that anti-racists are appropriating the issue for their own nefarious ends:

"Race adds extra controversy to everything; in that sense, it's obvious what difference skin colour makes politically".

Yudkowsky appears to think that racism is an illusion or at best a distraction. He stresses the Bayesian dogma that only individuals matter:

"Group injustice has no existence apart from injustice to individuals. It's individuals who have brains to experience suffering. It's individuals who deserve, and often don't get, a fair chance at life. [...] Skin colour has nothing to do with it, nothing at all."

Here, he tells the victims of racial discrimination to forget the fact that their people have been systematically oppressed by a ruling elite for centuries, and face up to the radical idea that their suffering is their own individual problem. He then helpfully reassures them that none of it is their fault; they were screwed over at birth by being simply less intelligent than the creamy white guys at the top:

"Never mind the airtight case that intelligence has a hereditary genetic component among individuals; if you think that being born with Down's Syndrome doesn't impact life outcomes, then you are on crack."

Yudkowsky would reject the idea that these disadvantaged individuals could improve their lot by grouping together and engaging in political action: politics is the mind-killer, after all. The only thing that can save them is Yudkowsky's improbable fantasy tech. In the future, "intelligence deficits will be fixable given sufficiently advanced technology, biotech or nanotech." And until that comes about, the stupid oppressed masses should sit and bear their suffering, not rock the boat, and let the genuinely competent white guys get on with saving the world.

Social Darwinism is a background assumption among the lesswrong faithful. Cult members have convinced themselves the world's suffering is a necessary consequence of nature's laws, and absolved themselves from any blame for it. The strong will forever triumph over the weak, and mere humans can't do anything to change that. "Such is the hideously unfair world we live in", writes Yudkowsky, and while he likes to fantasise about eugenic solutions, and has hopes for "rational" philanthropy, the official line is that only singularity-level tech can solve the world's problems.

In common with many doomsday cults, singularitarians both dread and pray for their apocalypse; for while a bad singularity will be the end of humanity, a good singularity is our last best hope. The good singularity will unleash a perfect rationalist utopia: from each according to his whim, to each according to his IQ score. Death will be no more, everyone will have the libido of a 16-year-old horndog, and humankind will colonise the stars. In fact, a good singularity is so overwhelmingly beneficial that it makes all other concerns irrelevant: we should dedicate all our resources to bringing it about as soon as possible. Lesswrong cultists are already preparing for this event in their personal and private lives, by acting like it has already happened.

THE BRAVE NEW RATIONALIST UTOPIA: TRIGGER WARNINGS AHOY

To get an idea of what social relations, and in particular sexual relations, will be like in the singularitarian utopia, it helps to look at the utopian visions of libertarian-friendly authors like Ayn Rand, Robert Heinlein, or Poul Anderson, or the more embarrassing bits of Iain M. Banks. Suffice to say that when the computers are in charge, Lesswrong nerds will be getting a whole lot of sex with a whole lot of partners in a whole lot of permutations. The Lesswrongers who tumble into cuddle-puddles at their Bay Area meetups aren't just the Bright Young Things of a decadent culture; they're trailblazers of the transhuman morality.

You might think those cuddle-puddles are cute and fluffy, but it's too convenient to give the members of lesswrong.com a pass because they're into a bit of free love. (It should incidentally be noted that the ideology of "free love" has often been exploited by men in power — most notably, cult gurus — to pressure others into sleeping with them). Lesswrongers might see themselves as the vanguard of a new sexual revolution, but there's nothing new or revolutionary about a few rich kids having an orgy. Even the "sexual revolution" of the late 60s and 70s was only progressive to the extent that it promoted equality in sexual activity. Its lasting achievement was to undermine the old patriarchal concept of sex as an act performed by a powerful male against passive subordinates, and forward the concept of sex as a pleasure shared among equal willing partners. Judged by this standard, Lesswrong is if anything at the vanguard of a sexual counter-revolution.

Consider, for example, the fact that so many Lesswrong members are drawn to the de facto rape methodology known as Pick-Up Artistry. In this absurd but well-received comment, some guy calling himself "Hugh Ristik" tries to make a case for the compatibility of PUA and feminism, which includes the following remarkable insight:

"Both PUAs and feminists make some errors in assessing female preferences, but feminists are more wrong: I would give PUAs a B+ and feminists an F"

It's evident that "Hugh Ristik" sees himself as a kind of Bayes' Theorem on the pull, and that "female preferences" only factor into the equation to the extent that they affect his confidence in the belief he will get laid.

As another clue to the nature of Lesswrong's utopian sexual mores, consider that Yudkowsky has written a story about an idyllic future utopia in which it is revealed that rape is legal. The Lesswrong guru was bemused by the reaction to this particular story development; that people were making a big deal of it was "not as good as he hoped", because he had another story in mind in which rape was depicted in an even more positive light! Yudkowsky invites the outraged reader to imagine that his stand-in in the story might enjoy the idea of "receiving non-consensual sex", as if that should placate anyone. Once again we have a Bayesian individual generalising from his fantasies, apparently unmoved by the fact that "receiving non-consensual sex" is a horrible daily threat and reality for millions worldwide, or that people might find his casual treatment of the subject grossly disturbing and offensive.

All in all, I haven't seen anything on lesswrong.com to counter my impression that the "rational romantic relationships" its members advocate are mostly about reasserting the sexual rights of powerful males. After all, if you're a powerful male, such as a 21st-century nerd, then rationally, a warm receptacle should be available for your penis at all times, and rationally, such timeworn deflections as "I've got a headache" or "I'm already taken" or "I think you're a creep, stay away from me" simply don't cut it. Rationally, relationships are all about optimising your individual gently caress function, if necessary at others' expense — which coincidentally means adopting the politics of "gently caress everyone".

THIEL'S LITTLE LIBERTARIANS

The main reason to pay attention to the Lesswrong cult is that it has a lot of rich and powerful backers. The Singularity Institute is primarily bankrolled by Yudkowsky's billionaire friend Peter Thiel, the hedge fund operator and co-founder of PayPal, who has donated over a million dollars to the Institute throughout its existence [4]. Thiel, who was one of the principal backers of Ron Paul's 2012 presidential campaign, is a staunch libertarian and lifelong activist for right-wing causes. Back in his undergrad days, he co-founded Stanford University's pro-Reagan rag The Stanford Review, which became notorious for its anti-PC stance and its defences of hate speech. The Stanford experience seems to have marked Thiel with a lasting dislike of PC types and feminists and minorities and other people who tend to remind him what a poo poo he is. In 1995, he co-wrote a book called The Diversity Myth: 'Multiculturalism' and the Politics of Intolerance at Stanford, which was too breathtakingly right-wing even for Condi Rice; one of his projects today is the Thiel's Little Achievers Fellowship, which encourages students to drop out of university and start their own businesses, free from the corrupting influence of left-wing academics and activists.

Other figures who are or were associated with the Institute include such high-profile TED-talkers as Ray Kurzweil [5], the delusional "cyborg" crackpot; Aubrey De Grey, the delusional "Methuselah" crackpot; Jaan Tallinn, co-creator of Skype, the world's favourite backdoor Trojan; and Professor Nick Bostrom of Oxford University's generously-endowed Future of Humanity Institute, which is essentially a neoliberal think-tank in silly academic garb. Perhaps I will have more to say about this institution at another time.

Buoyed by Thiel's money, the Singularity Institute is undertaking a number of outreach ventures. One of these is the Center for Applied Rationality, which, among other things, runs Bayesian boot-camps for the children of industry. Here, deserving kids become indoctrinated with the lesswrong version of "rationality", which according to the centre's website is the sum of logic, probability (i.e. Bayesianism) and some neoliberal horror called "rational choice theory". The great example of "applied rationality" they want these kids to emulate? Intel's 1985 decision to pull out of the DRAM market and lay off a third of its workforce. I guess someone needs to inspire the next generation of corporate downsizers and asset-strippers.

Here we see a real purpose behind lesswrong.com. Ultimately it doesn't matter that people like Thiel or Kurzweil or Yudkowsky are pushing a crackpot idea like the singularity; what matters is that they are pushing the poisonous ideas that underlie Bayesianism. Thiel and others are funding an organisation that advances an ideological basis for their own predatory behaviour. Lesswrong and its sister sites preach a reductive concept of humanity that encourages an indifference to the world's suffering, that sees people as isolated, calculating individuals acting in their self-interest: a concept of humanity that serves and perpetuates the scum at the top.

ikanreed
Sep 25, 2009

I honestly I have no idea who cannibal[SIC] is and I do not know why I should know.

syq dude, just syq!
I always end up asking myself, "Am I giving this idea a fair shake? I mean, it'd be bad if I dogmatically dismissed their ideas like they dismissed others'." Then I remember that everyone's beliefs are based on assumptions, and theirs are pretty crazy.

pentyne
Nov 7, 2012

Darth Walrus posted:

Speaking of, this article was posted earlier, but seems relevant.

This makes it seem like sooner or later Yudkowsky will post some seriously racist stuff and the LW fans will fracture as some charge to defend him furiously and others who to this point have bought his Cult of Reason go "holy gently caress man you are a worthless human being".

Granted, the racist material will be something along the lines of

quote:

the reason most CEOs and tech visionaries are white is because the crucible that forges such talents is unknown to the Nubian, the Slav, the Oriental, and the Tribal. If they wholly abandon the misguided trappings of their mother culture then they too can reach their true potential, but it will be some time before they can be considered equals

So everyone already buying his bullshit will claim there's nothing inherently wrong with the statement, but anyone not a spoiled well off white libertarian will just give up.

uber_stoat
Jan 21, 2001



Pillbug
The most racist of the less-wrongers already split off and went to a site called More Right, where they are known as "Neo Reactionaries."

Djeser
Mar 22, 2013


it's crow time again

A quote from one of the Less Wrong members on aging:

quote:

Mr. Mowshowitz calls [advancing technology] escape velocity. “That’s where medicine is advancing so fast that I can’t age fast enough to die,” he explained. “I can’t live to 1,000 now, but by the time I’m 150, the technology will be that much better that I’ll live to 300. And by the time I’m 300, I’ll live to 600 and so on,” he said, a bit breathlessly. “So I can just . . . escape, right? And now I can watch the stars burn out in the Milky Way and do whatever I want to do.”

On Roko's Basilisk:

quote:

The Observer tried to ask the Less Wrong members at Ms. Vance’s party about it, but Mr. Mowshowitz quickly intervened. “You’ve said enough,” he said, squirming. “Stop. Stop.”

Someone else, on polyamory:

quote:

Asked whether polyamory was part of the New York scene as well, Ms. Vance said it was uncommon. “I’d certainly say that we don’t think ‘poly’ is morally wrong or anything,” she noted, adding that the California contingent had taken the idea quite a bit further. “In one of those [co-]houses, I saw a big white board on the wall with a ‘poly-graph,’ a big diagram of who was connected to whom,” she said. “It was a pretty big graph.”

That's the sort of poo poo that happens if you run goonhouse, I guess.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



"Actuarial escape velocity" is a concept from a Larry Niven story, and even he didn't extend it out to Literally Forever; the way he put it was that for every year you live, the average predicted lifespan goes up by more than a year. This does not necessarily mean YOU PERSONALLY will be immortal, of course.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

pentyne posted:

This makes it seem like sooner or later Yudkowsky will post some seriously racist stuff and the LW fans will fracture as some charge to defend him furiously and others who to this point have bought his Cult of Reason go "holy gently caress man you are a worthless human being".

Granted, the racist material will be something along the lines of


So everyone already buying his bullshit will claim there's nothing inherently wrong with the statement, but anyone not a spoiled well off white libertarian will just give up.

This already happened, and it didn't fracture LW at all; they all agreed with him. Here's an excerpt from Harry Potter and the Methods of Rationality. The context is Harry's thoughts as Draco Malfoy, who is eleven years old, idly chats about his plans to rape a ten-year-old girl.

Yudkowsky posted:

And in the slowed time of this slowed country, here and now as in the darkness-before-dawn prior to the Age of Reason, the son of a sufficiently powerful noble would simply take for granted that he was above the law. At least when it came to a little rape here and there. There were places in Muggle-land where it was still the same way, countries where that sort of nobility still existed and still thought like that, or even grimmer lands where it wasn't just the nobility. It was like that in every place and time that didn't descend directly from the Enlightenment. A line of descent, it seemed, which didn't quite include magical Britain, for all that there had been cross-cultural contamination of things like pop-top soda cans.

That's the revised, de-racist-ified version after someone outside his cult complained about the original version. Here's what was once there:

Yudkowsky posted:

There had been ten thousand societies over the history of the world where this conversation could have taken place. Even in Muggle-land it was probably still happening, somewhere in Saudi Arabia or the darkness of the Congo. It happened in every place and time that didn't descend directly from the Enlightenment.

But hey, if he changed it, he must have realized he was being racist as hell, right? Nope, this was his explanation for the change:

Yudkowsky posted:

"Most readers, it’s pretty clear, didn’t take that as racist, but it now also seems clear that if someone is told to *expect* racism and pointed at the chapter, that’s what they’ll see. Aren’t preconceptions lovely things?"

:smug: If you see any racist connotations in talking about the backwardness of "the darkness of the congo" and how the only true civilization comes from modern white western europe then maybe you're the real racist and your brain is being clouded by your mind-killing political biases

Oh, and while we're at it, here's what Yudkowsky thinks of the backstory between Lily and Snape:

Harry, Yudkowsky's mouthpiece posted:

Not that I've ever been through high school myself, but my books give me to understand that there's a certain kind of teenage girl who'll be outraged by a single insult if the boy is plain or poor, yet who can somehow find room in her heart to forgive a rich and handsome boy his bullying. She was shallow, in other words. Tell whoever it was that she wasn't worthy of him and he needs to get over it and move on and next time date girls who are deep instead of pretty.

Yudkowsky himself posted:

Those of you who are all like "Lily Evans was a good and virtuous woman and she broke off her friendship with Snape because he was studying Dark Arts, and she didn't go anywhere near James Potter until he stopped bullying", remember, Harry only knows the information Severus gave him. But also... a plain and poor boy is a good friend to a beautiful girl for years, and loves her, and pursues her fruitlessly; and then when the rich and handsome bully cleans up his act a bit, she goes home with him soon after... considering just how normal that is, I don't think you're allowed to write it and not have people at least wonder.

Remember that Snape was doing the wizard equivalent of hanging out at KKK rallies and calling Lily a friend of the family. But he was a ~nice guy~ so obviously she was a shallow vapid bitch for not sleeping with him.

ArchangeI
Jul 15, 2010
I find it amazing that he openly admits to not having read the books. HPMOR isn't fanfiction, it's just copyright infringement.

Also "I've never been through high school but I have studied human interaction from fiction books, and these realistic accounts clearly show..."

Chamale
Jul 11, 2010

I'm helping!



I've been rewatching the Harry Potter movies this summer, so I decided to read through HPMOR. There are points where something happens that directly contradicts the whole point of the books, like when Lily tries to use Avada Kedavra on Voldemort rather than putting down her wand and dying for Harry. Also, there's a scene where Harry makes fun of Atlas Shrugged, but I gave up reading it around chapter 50 because I was getting sick of the many "Who is John Galt?" speeches.

I feel like there were a couple of genuinely clever moments, but they weren't worth all the crap. There's a theme I noticed in another one of Yudkowsky's writings, Three Worlds Collide, where scientists should hide a terribly destructive secret (nuclear weapons, or a new technology that can blow up a star) because the government can't be trusted with it. It's an interesting idea, and I think Less Wrong hopes that's how they'll stop a theoretical evil AI from being built. I don't think Yudkowsky understands how hard it is to keep a secret like that, especially now that science is so interconnected; during the Manhattan Project, interested parties figured out independently that the Americans were working on some kind of project involving uranium and theoretical physics, even if they didn't have the details.

Qwertycoatl
Dec 31, 2008

AATREK CURES KIDS posted:

There's a theme I noticed in another one of Yudkowsky's writings, Three Worlds Collide, where scientists should hide a terribly destructive secret (nuclear weapons, or a new technology that can blow up a star) because the government can't be trusted with it. It's an interesting idea, and I think Less Wrong hopes that's how they'll stop a theoretical evil AI from being built. I don't think Yudkowsky understands how hard it is to keep a secret like that, especially now that science is so interconnected; during the Manhattan Project, interested parties figured out independently that the Americans were working on some kind of project involving uranium and theoretical physics, even if they didn't have the details.

Guess who has an AI research organisation that hasn't produced any useful AI research, but which occasionally alludes to research they totally have, honest, but it's so advanced it would be dangerous to let outsiders see?

Eliezer's institute's director posted:

The question of whether to keep research secret must be made on a case-by-case basis. In fact, next week I have a meeting (with Eliezer and a few others) about whether to publish a particular piece of research progress.

ArchangeI
Jul 15, 2010
Well, thank God no one else on this planet is capable of doing research and they are therefore able to keep the lid on it indefinitely instead of presenting their findings and working with others on a solution.

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

Huh. Seeing as how Bayesian statistics has its roots in a Calvinist minister, it makes an awful lot of sense if you look at the Less Wrongers as seeing themselves as the unconditional elect for when Judgement Day comes.

The Terminator version, not Biblical.

Improbable Lobster
Jan 6, 2012

"From each according to his ability" said Ares. It sounded like a quotation.
Buglord

Qwertycoatl posted:

Guess who has an AI research organisation that hasn't produced any useful AI research, but which occasionally alludes to research they totally have, honest, but it's so advanced it would be dangerous to let outsiders see?

This would be hilarious if it was coming from a young child.

SubG
Aug 19, 2004

It's a hard world for little things.

Qwertycoatl posted:

Guess who has an AI research organisation that hasn't produced any useful AI research, but which occasionally alludes to research they totally have, honest, but it's so advanced it would be dangerous to let outsiders see?

Eliezer's institute's director posted:

The question of whether to keep research secret must be made on a case-by-case basis. In fact, next week I have a meeting (with Eliezer and a few others) about whether to publish a particular piece of research progress.

At least from here their research institution looks a lot like a bunch of assholes doing Paranoia LARPing.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Anticheese posted:

Huh. Seeing as how Bayesian statistics has its roots in a Calvinist minister, it makes an awful lot of sense if you look at the Less Wrongers as seeing themselves as the unconditional elect for when Judgement Day comes.

The Terminator version, not Biblical.

Well, one theory of the inspiration behind Darwin's insight is that he applied Adam Smith's metaphors about economics to organic life over the course of millions of years. So it isn't necessarily a model that is inherently weighted with the flaws of Calvinism.

But their reinvention of religion will never stop being funny!

SolTerrasa
Sep 2, 2011


ArchangeI posted:

I find it amazing that he openly admits to not having read the books. HPMOR isn't fanfiction, it's just copyright infringement.

Also "I've never been through high school but I have studied human interaction from fiction books, and these realistic accounts clearly show..."

They actually do this often enough that there's a word for it. Well, a phrase, but they use it like a single lexical token; they call it "generalizing from fictional evidence", and here's why they think it's legit:

One of the things that really sucks about Bayes' Theorem is that you can (justifiably) make it arbitrarily hard for anyone to convince you of something just by being sufficiently sure that you were right the whole time. In order to use Bayes' Theorem for anything, you need to have a prior, which is your P(X) going in, based just on your assumptions, and which essentially represents how easy it is to convince you otherwise. Most people who have no information about something will choose what's called a "non-informative" prior, which is (1 / number of alternatives to X). So the non-informative prior for a coin flip is 0.5, the non-informative prior for an outcome of a d20 roll is 0.05, etc. (it'd be dumb to have priors for those but I just woke up)

I had never heard of a non-informative prior until I left LessWrong and got a real AI education, because they never use them. They prefer to go based on their instincts. Big Yud has a number of posts about refining instincts because they're essential to being a good Bayesian. So according to the God of Math it's fine to use fiction as the source of your gut instinct, but the thing is that they never assign reasonable confidence values to anything. In Bayes' Theorem, they use these stupid numbers like 1/3^^^3 instead. As a Bayesian, you can say "this coin flip will be heads with P(1-1/3^^^3)", so that you aren't really obligated to be convinced when the flip turns up tails: for tails to win out, the odds that your eyes deceive you would have to be substantially less than 1/3^^^3, and they aren't, so, well, it must have been heads, that's just the way the math works out.
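
To see how that plays out, here's a quick Python sketch (my own toy numbers, with 10**-30 standing in for 1/3^^^3, which wouldn't fit in any machine anyway):

code:
from fractions import Fraction

# 1/3^^^3 won't fit anywhere, so 10**-30 stands in for an absurdly tiny prior on tails.
eps = Fraction(1, 10**30)        # prior P(tails)
p_misread = Fraction(1, 10**6)   # chance your eyes deceive you about the result

# Posterior odds of heads after "seeing" tails:
odds_heads = ((1 - eps) * p_misread) / (eps * (1 - p_misread))
print(float(odds_heads))   # ~1e24 to 1 in favour of heads -- the observation changes nothing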

Combining these two things, evidence from fiction and absurdly high numbers, the followers of big yud have invented a mathematically sound way to live in a fantasy land.

Slime
Jan 3, 2007
Yud also seems to forget that you're allowed to update your priors based on observation. Hell, you're outright meant to update them based on observation. Yud on the other hand just takes that arbitrary number and declares that it's completely right forever because reasons.

For someone who treats Bayes like some sort of prophet, he sure doesn't understand how to apply the guy's work.

Absurd Alhazred
Mar 27, 2010

by Athanatos

Slime posted:

Yud also seems to forget that you're allowed to update your priors based on observation. Hell, you're outright meant to update them based on observation. Yud on the other hand just takes that arbitrary number and declares that it's completely right forever because reasons.

For someone who treats Bayes like some sort of prophet, he sure doesn't understand how to apply the guy's work.

I am going to reiterate that what he and his followers really don't understand more fundamentally is conditional probabilities. That's why they think they can keep the probability that the Bayesian robber is going to kill some innocents constant while raising the stakes until you have to give him money. Well, guess what, either the robber constantly upping the number of innocents means your posterior for his being genuine gets lower and lower each time, or to begin with, each order of magnitude of people tortured without additional proof makes your prior against them stronger and stronger, because you are not a dumb.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Absurd Alhazred posted:

because you are not a dumb.

I've found where this breaks down

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Absurd Alhazred posted:

I am going to reiterate that what he and his followers really don't understand more fundamentally is conditional probabilities. That's why they think they can keep the probability that the Bayesian robber is going to kill some innocents constant while raising the stakes until you have to give him money. Well, guess what, either the robber constantly upping the number of innocents means your posterior for his being genuine gets lower and lower each time, or to begin with, each order of magnitude of people tortured without additional proof makes your prior against them stronger and stronger, because you are not a dumb.

Yudkowsky never comes out and says it, but I think his reasoning internally is something like this:

If the robber can torture ten trillion people for fifty years, then the robber must be God an ascended AI with literally infinite computing power. (Remember that, despite ostensibly hating infinity, Yudkowsky believes that with positive probability immortality is possible and therefore infinite runtime is possible.) An ascended AI with infinite computing power is capable of torturing 3^^^^3 people for fifty years. Therefore if the robber can torture ten trillion people for fifty years, then the robber can torture 3^^^^3 people for fifty years. Therefore beyond a certain threshold the probability that the robber can act on its threat does not decrease as the magnitude of the threat increases. (Or, at the very least, there is some lower bound positive probability p such that for all n the probability that the robber can torture n people is not less than p, so all the robber needs to do is threaten (3^^^^3)/p people.)
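
If you grant that floor, the arithmetic does exactly what he needs it to do. A quick sketch (the floor value is something I invented for illustration):

code:
# Premise (his, reconstructed): past some threshold the credibility of the threat
# never drops below a fixed floor p, no matter how many people are threatened.
p_floor = 1e-20
for n in (1e12, 1e30, 1e100):
    print(f"{n:.0e} threatened -> {n * p_floor:.0e} expected victims")
# Once n exceeds 1/p_floor, the expected harm passes one whole victim and keeps
# growing without bound, so a naive expected-utility maximiser hands over the $20.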

I think it really comes back to his insane fear of death. He hates death so much that he assumes it must be possible to never die. And if life can be extended to infinity, then so can anything else, including the psychotic AI's torture capabilities. His picture of the future is a cyberboot stamping on a human face forever, and he refuses to learn anything that might contradict that because he finds that picture comforting.

Absurd Alhazred
Mar 27, 2010

by Athanatos

Lottery of Babylon posted:

Yudkowsky never comes out and says it, but I think his reasoning internally is something like this:

If the robber can torture ten trillion people for fifty years, then the robber must be God an ascended AI with literally infinite computing power. (Remember that, despite ostensibly hating infinity, Yudkowsky believes that with positive probability immortality is possible and therefore infinite runtime is possible.) An ascended AI with infinite computing power is capable of torturing 3^^^^3 people for fifty years. Therefore if the robber can torture ten trillion people for fifty years, then the robber can torture 3^^^^3 people for fifty years. Therefore beyond a certain threshold the probability that the robber can act on its threat does not decrease as the magnitude of the threat increases. (Or, at the very least, there is some lower bound positive probability p such that for all n the probability that the robber can torture n people is not less than p, so all the robber needs to do is threaten (3^^^^3)/p people.)

I think it really comes back to his insane fear of death. He hates death so much that he assumes it must be possible to never die. And if life can be extended to infinity, then so can anything else, including the psychotic AI's torture capabilities. His picture of the future is a cyberboot stamping on a human face forever, and he refuses to learn anything that might contradict that because he finds that picture comforting.

But again, you need to conditionalize that poo poo. The crazier the AI, the stronger the prior against. He's treating these two things as if they're independent, there's no going around it.

Don Gato
Apr 28, 2013

Actually a bipedal cat.
Grimey Drawer
Where does Yudkowsky say that he hates infinity? I don't doubt that he does because that's just the kind of stupid thing he would fixate on, but I just assumed he used 3^^^^3 as a big number because he thinks it makes him look smarter than just saying infinity.


SubG posted:

At least from here their research institution looks a lot like a bunch of assholes doing Paranoia LARPing.

So is Friend Computer their best case or worst case scenario?

Don Gato fucked around with this message at 03:01 on Aug 10, 2014

Absurd Alhazred
Mar 27, 2010

by Athanatos

Don Gato posted:

So is Friend Computer their best case or worst case scenario?

That sounds like a question a mutant would ask.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Absurd Alhazred posted:

But again, you need to conditionalize that poo poo. The crazier the AI, the stronger the prior against. He's treating these two things as if they're independent, there's no going around it.

I know, but I think Yudkowsky would argue that lim_(n-->infinity) P(robber's claim is credible | robber claims to be able to torture n people) = p > 0 because he believes in the possibility (and therefore nonzero probability, because he thinks probabilities can't be zero) of immortal deusmachinas with infinite computing power. He won't actually phrase it that way because that would involve learning a bit of math, but that's how his thought process works.

Don Gato posted:

Where does Yudkowsky say that he hates infinity? I don't doubt that he does because that's just the kind of stupid thing he would fixate on, but I just assumed he used 3^^^^3 as a big number because he thinks it makes him look smarter than just saying infinity.

He always uses oddly specific, incredibly large numbers like 3^^^^3 instead of infinity; he says that human minds aren't capable of working with infinity; he says that 0 and 1 aren't actually probabilities (and that anyone who uses them is an idiot) on the grounds that if you convert them to log odds they go to infinity and infinity isn't a number; and he brings up the finite size and divisibility of the observable universe occasionally. I don't think he's ever said he hates infinity in so many words, but he certainly doesn't seem to get along with it very well.
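
The log-odds point, at least, is easy to check for yourself; here's a quick Python illustration (mine, not his):

code:
from math import log

def log_odds(p):
    # log-odds blow up as p approaches 0 or 1, which is the whole
    # "0 and 1 aren't really probabilities" argument in a nutshell
    return log(p / (1 - p))

for p in (0.5, 0.9, 0.999, 0.999999):
    print(p, round(log_odds(p), 2))
# p = 1.0 raises ZeroDivisionError and p = 0.0 a math domain error:
# in log-odds land, total certainty sits infinitely far from any evidence.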

Absurd Alhazred
Mar 27, 2010

by Athanatos

Lottery of Babylon posted:

I know, but I think Yudkowsky would argue that lim_(n-->infinity) P(robber's claim is credible | robber claims to be able to torture n people) = p > 0 because he believes in the possibility (and therefore nonzero probability, because he thinks probabilities can't be zero) of immortal deusmachinas with infinite computing power. He won't actually phrase it that way because that would involve learning a bit of math, but that's how his thought process works.

But the limit we care about isn't the probability at "infinity", but the expectation value at "infinity". I think we should safely be able to say that:

p(cred | claim n) < 1 / (3^^^3 * n^(1 + 1/3^^^^^^^^3))

(I mean, we're starting with a small probability for "will torture a random person", right?), so as n goes to "infinity", n * p(cred | claim n) becomes negligible - say, fewer negative utilons than a speck of dust in the eye. Surely you can gain more utility from $20 than minus a speck in the eye. QED, ffs.
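
And numerically, with constants invented just to show the shape of the thing:

code:
# If credibility decays even a hair faster than 1/n, the expected harm stays
# pinned at speck-of-dust level no matter how hard the mugger escalates.
delta = 1e-3                                  # any positive exponent will do
def credibility(n):
    return 1.0 / (1e6 * n ** (1 + delta))     # toy version of the bound above

for n in (1e6, 1e12, 1e30):
    print(f"{n:.0e} threatened -> {n * credibility(n):.3e} expected victims")
# Every product stays below 1e-6 and shrinks as n grows: the bigger the claim,
# the smaller the expected loss, so you keep the $20.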

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Absurd Alhazred posted:

But the limit we care about isn't the probability at "infinity", but the expectation value at "infinity".

As long as the probability "at infinity" (or rather the limit as n-->infinity) is greater than zero, the expected value of the number of people tortured goes to infinity.

Absurd Alhazred posted:

I think we should safely be able to say that:

p(cred | claim n) < 1 / (3^^^3 * n^(1 + 1/3^^^^^^^^3))

(I mean, we're starting with a small probability for "will torture a random person", right?), so as n goes to "infinity", n * p(cred | claim n) becomes negligible - say, fewer negative utilons than a speck of dust in the eye. Surely you can gain more utility from $20 than minus a speck in the eye. QED, ffs.

See, I agree with you, but Yudkowsky doesn't accept your premise that the probability the claim is credible goes to zero as the number of people threatened goes to infinity. And without that premise, he reaches the opposite conclusion. This is what he said addressing your solution:

Yudkowsky posted:

Should we penalize computations with large space and time requirements? This is a hack that solves the problem, but is it true? Are computationally costly explanations less likely? Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is exponentially cheaper than real quantum physics? Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute?

Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?

"If you think the guy threatening to torture 3^^^^3 people is just some crazy guy and not a cunningly-disguised god, then you must also believe all of reality is a lie in order to be logically consistent" - a thing Yudkowsky literally believes.

This quote happens to also demonstrate nicely his failure to understand that your beliefs need to be updated in response to evidence. Yes, before looking at any evidence, it does make sense to start out thinking that stars are likely fixed lights on a painted backdrop - but then once you examine the evidence more closely and find that, for example, stellar parallax does measurably occur, you should update your beliefs and conclude that no, they're not fixed points on a flat backdrop.

He makes the same mistake in his argument for why immortality is probably possible. Simple physics are more likely (per Occam's razor), and a universal prior should be biased towards them; Conway's Game of Life has simple physics; Conway's Game of Life permits immortality; therefore immortality can be possible with simple physics; therefore, he argues, immortality is possible with nontrivial probability. Yes, if you're presented with an arbitrary universe about which you know nothing, you might initially think immortality is plausible. But once you've examined the universe more closely, you need to update your beliefs in response to the evidence. We've examined our universe quite extensively and made a lot of observations, and we've seen a lot of evidence that immortality is not possible. But Yudkowsky wants us to discard all of our accumulated evidence and stick with our initial prior forever.
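
To make the "update on evidence" point concrete, here's a toy Bayes update (every number in it is invented for illustration; the only point is the direction the posterior moves). Even a prior that starts out 99.9% sure of the painted-backdrop theory collapses after a handful of observations that the backdrop theory says should almost never happen:

code:
# Toy Bayesian update: start out heavily favoring "stars are lights on a
# painted backdrop", then update on repeated parallax measurements.
# All numbers are invented for illustration.

posterior_backdrop = 0.999        # strong initial belief in the backdrop

# Likelihood of observing measurable stellar parallax under each hypothesis
p_obs_given_backdrop = 0.001      # backdrop theory: parallax shouldn't happen
p_obs_given_real_sky = 0.95       # real-3D-sky theory: parallax expected

for observation in range(1, 6):   # five independent parallax measurements
    numerator = posterior_backdrop * p_obs_given_backdrop
    denominator = numerator + (1.0 - posterior_backdrop) * p_obs_given_real_sky
    posterior_backdrop = numerator / denominator
    print(f"after observation {observation}: P(backdrop) = {posterior_backdrop:.6f}")

Refusing to run the update - keeping the 0.999 forever - is exactly the move being criticized here.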

Lottery of Babylon fucked around with this message at 07:18 on Aug 10, 2014

Absurd Alhazred
Mar 27, 2010

by Athanatos

Lottery of Babylon posted:

As long as the probability "at infinity" (or rather the limit as n-->infinity) is greater than zero, the expected value of the number of people tortured goes to infinity.


See, I agree with you, but Yudkowsky doesn't accept your premise that the probability the claim is credible goes to zero as the number of people threatened goes to infinity. And without that premise, he reaches the opposite conclusion. This is what he said addressing your solution:


"If you don't think the guy threatening to torture 3^^^^3 people is just some crazy guy and not a cunningly-disguised god, then you must also believe all of reality is a lie in order to be logically consistent" - a thing Yudkowsky literally believes.

This quote happens to also demonstrate nicely his failure to understand that your beliefs need to be updated in response to evidence. Yes, before looking at any evidence, it does make sense to start out thinking that stars are likely fixed lights on a painted backdrop - but then once you examine the evidence more closely and find that, for example, stellar parallax does measurably occur, you should update your beliefs and conclude that no, they're not fixed points on a flat backdrop.

He makes the same mistake in his argument for why immortality is probably possible. Simple physics are more likely (per Occam's razor), and a universal prior should be biased towards them; Conway's Game of Life has simple physics; Conway's Game of Life permits immortality; therefore immortality can be possible with simple physics; therefore, he argues, immortality is possible with nontrivial probability. Yes, if you're presented with an arbitrary universe about which you know nothing, you might initially think immortality is plausible. But once you've examined the universe more closely, you need to update your beliefs in response to the evidence. We've examined our universe quite extensively and made a lot of observations, and we've seen a lot of evidence that immortality is not possible. But Yudkowsky wants us to discard all of our accumulated evidence and stick with our initial prior forever.

I see what you mean. The prior is the posterior, forever after. Amen.

Peel
Dec 3, 2007

I don't normally like these comparisons, but it is remarkable how much this looks like rationalist metaphysics. Why, if you just make these obvious* assumptions, we can prove from first principles the existence of God and the immortality of the soul. But I'd be astonished if these guys didn't denigrate philosophy as unscientific and irrational and certainly not something they're doing, so, like Ayn Rand before them (who did think she was doing philosophy, but that she didn't need to study anyone else's work), they make a storm of elementary errors the rest of the world hashed out decades or centuries ago.

*highly contestable

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Peel posted:

I don't normally like these comparisons, but it is remarkable how much this looks like rationalist metaphysics. Why, if you just make these obvious* assumptions, we can prove from first principles the existence of God and the immortality of the soul. But I'd be astonished if these guys didn't denigrate philosophy as unscientific and irrational and certainly not something they're doing, so, like Ayn Rand before them (who did think she was doing philosophy, but that she didn't need to study anyone else's work), they make a storm of elementary errors the rest of the world hashed out decades or centuries ago.

*highly contestable
It would seem more accurate to say that they are trying to reinvent rationalist metaphysics that are in accord with the sort of cultural zeitgeist of American religious perspectives, but without using any of their personal trigger words, such as "god" or "soul."

pentyne
Nov 7, 2012

Peel posted:

I don't normally like these comparisons, but it is remarkable how much this looks like rationalist metaphysics. Why, if you just make these obvious* assumptions, we can prove from first principles the existence of God and the immortality of the soul. But I'd be astonished if these guys didn't denigrate philosophy as unscientific and irrational and certainly not something they're doing, so, like Ayn Rand before them (who did think she was doing philosophy, but that she didn't need to study anyone else's work), they make a storm of elementary errors the rest of the world hashed out decades or centuries ago.

*highly contestable

They're closer to Rand than they would ever admit. Rand's shining triumph, in her eyes, was her solution to the classic is-ought problem, where she ignored centuries of discussion and more or less said "it is this way because I say it is. MORAL RELATIVISM WINS AGAIN!"

Kind of similar to the "Well, Bayes logic (their version of it) answers this question completely, so why debate or discuss it?"

GottaPayDaTrollToll
Dec 3, 2009

by Lowtax
I looked at one of their "rationality quotes" threads to see what sort of things resonate with the general LessWrong community. #1 was from a 'filk' song. #2 was from My Little Pony fanfiction. About what I expected.

Freemason Rush Week
Apr 22, 2006

ArchangeI posted:

I find it amazing that he openly admits to not having read the books. HPMOR isn't fanfiction, it's just copyright infringement.

Also "I've never been through high school but I have studied human interaction from fiction books, and these realistic accounts clearly show..."

It's like that one time on 4chan when I saw someone recommend Nano to Kaoru - a manga about a frog boy who creepily manipulates a submissive girl into doing kinky things for his own pleasure - as a good way to see how healthy BDSM worked "in real life." Oh god, Yudkowsky is into BDSM and animoo, that was him wasn't it :negative:

Full disclosure: I'm the target audience for this crap, and for a few months I was buying into it hardcore. But after a while a lot of the things others have brought up in this thread - the lack of actual output or empirical evidence, the grandiose verbiage in lieu of a concrete plan of action, and the cult-like "us against the world" mentality that's front and center in many of his posts - drove me away. And I have only a basic understanding of programming (and zero formal training), so if my ignorant rear end could figure out the scam then there's hope for the people posting there today. :unsmith:

(still don't think death brings meaning to life though, that's dumb post-hoc rationalization)

HapiMerchant
Apr 22, 2014

Mr. Horrible posted:

It's like that one time on 4chan when I saw someone recommend Nano to Kaoru - a manga about a frog boy who creepily manipulates a submissive girl into doing kinky things for his own pleasure - as a good way to see how healthy BDSM worked "in real life." Oh god, Yudkowsky is into BDSM and animoo, that was him wasn't it :negative:

Full disclosure: I'm the target audience for this crap, and for a few months I was buying into it hardcore. But after a while a lot of the things others have brought up in this thread - the lack of actual output or empirical evidence, the grandiose verbiage in lieu of a concrete plan of action, and the cult-like "us against the world" mentality that's front and center in many of his posts - drove me away. And I have only a basic understanding of programming (and zero formal training), so if my ignorant rear end could figure out the scam then there's hope for the people posting there today. :unsmith:

(still don't think death brings meaning to life though, that's dumb post-hoc rationalization)

hey! NtK isn't about only one side getting something outta it, it's a touching love story of a dom and a sub who---- never mind, it's pretty drat creepy, on reflection.

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving
And something has got to give

Mr. Horrible posted:

(still don't think death brings meaning to life though, that's dumb post-hoc rationalization)

"Death brings meaning to life" may be post-hoc rationalization, but it's not like death is a lovely tech product we can pull off the market if the marketing guys can't find a good line to justify it -- death is reality, and at least right now, death is inevitable. Does death suck? Sure it does. Would it be awesome to eradicate death? Probably (although doing so would require us to radically alter the function of human society, but that's an aside). The problem is that, at our current level of technology, the LessWrong dude's answers to death don't work, and being an emotionally healthy adult means coming to terms with that, not hiding under a rock and waiting for the singularity to save us. Coming to terms with death means coming to post-hoc rationalizations of it as a method of acceptance.

Of course, typing this out, it occurs to me that cryogenics are basically the LW crowd's pseudo-religious version of an afterlife myth. "Future scientists can, and will, resurrect your loved ones' frozen corpses and allow them to live in the future techno-utopia" is about as purely faith-based as "your loved ones' souls have departed their bodies to dwell in a spirit realm of perfect happiness," but I can see it providing the same comfort, and it even has substantial doctrinal resemblances to religions that dictate certain bodily conditions for the dead to be successfully resurrected.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Antivehicular posted:

Of course, typing this out, it occurs to me that cryogenics are basically the LW crowd's pseudo-religious version of an afterlife myth. "Future scientists can, and will, resurrect your loved ones' frozen corpses and allow them to live in the future techno-utopia" is about as purely faith-based as "your loved ones' souls have departed their bodies to dwell in a spirit realm of perfect happiness," but I can see it providing the same comfort, and it even has substantial doctrinal resemblances to religions that dictate certain bodily conditions for the dead to be successfully resurrected.
Yeah, it's almost as if they were guided, either by wishful thinking or habits of intellect, into finding the solutions they'd already learned about, saving them a great deal of effort and intellectual calories.

As for death justifying life, I feel a bit as if these guys are pleading that their horror of death is SO STRONG that the fact that others might, for instance, out of ancient weary compassion, say "euthanasia of a terminal cancer patient is perhaps acceptable", or even move into another stage of grieving, is taken as a failure, as opposed to, you know, reconciling with an unhappy reality.

The Vosgian Beast
Aug 13, 2011

Business is slow
Roko's Basilisk was probably the first time I've seen LW's intellectual commitments lead them to an unpalatable conclusion. Everything else exists to make them feel good about their fate, place in the world, and their own intelligence.

Strategic Tea
Sep 1, 2012

Keep in mind I don't think they actually advocate the basilisk as part of their ideology. Big Yud is terrified of the thing and tries to make sure it's never discussed. Because if an AI heard it, it might give it ideas :spooky:

Exit Strategy
Dec 10, 2010

by sebmojo
All I know is that my next villain for an Eclipse Phase campaign is going to be a resimulated version of Yudkowsky as a straight-up Roko's Basilisk. I'll see how many of my players want to punch me when that comes down the pipe.
