  • Locked thread
Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

holy wow, just....whoa

quote:

I spent a weekend at Google talking with nerds about charity. I came away … worried.
Updated by Dylan Matthews on August 10, 2015, 10:00 a.m. ET


"There's one thing that I have in common with every person in this room. We're all trying really hard to figure out how to save the world."

The speaker, Cat Lavigne, paused for a second, and then she repeated herself. "We're trying to change the world!"

Lavigne was addressing attendees of the Effective Altruism Global conference, which she helped organize at Google's Quad Campus in Mountain View the weekend of July 31 to August 2. Effective altruists think that past attempts to do good — by giving to charity, or working for nonprofits or government agencies — have been largely ineffective, in part because they've been driven too much by the desire to feel good and too little by the cold, hard data necessary to prove what actually does good.

It's a powerful idea, and one that has already saved lives. GiveWell, the charity evaluating organization to which effective altruism can trace its origins, has pushed philanthropy toward evidence and away from giving based on personal whims and sentiment. Effective altruists have also been remarkably forward-thinking on factory farming, taking the problem of animal suffering seriously without collapsing into PETA-style posturing and sanctimony.

Effective altruism (or EA, as proponents refer to it) is more than a belief, though. It's a movement, and like any movement, it has begun to develop a culture, and a set of powerful stakeholders, and a certain range of worrying pathologies. At the moment, EA is very white, very male, and dominated by tech industry workers. And it is increasingly obsessed with ideas and data that reflect the class position and interests of the movement's members rather than a desire to help actual people.

In the beginning, EA was mostly about fighting global poverty. Now it's becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse. At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research. Compared to that, multiple attendees said, global poverty is a "rounding error."

I identify as an effective altruist: I think it's important to do good with your life, and doing as much good as possible is a noble goal. I even think AI risk is a real challenge worth addressing. But speaking as a white male nerd on the autism spectrum, effective altruism can't just be for white male nerds on the autism spectrum. Declaring that global poverty is a "rounding error" and everyone really ought to be doing computer science research is a great way to ensure that the movement remains dangerously homogenous and, ultimately, irrelevant.

Should we care about the world today at all?

EA Global was dominated by talk of existential risks, or X-risks. The idea is that human extinction is far, far worse than anything that could happen to real, living humans today.

To hear effective altruists explain it, it comes down to simple math. About 108 billion people have lived to date, but if humanity lasts another 50 million years, and current trends hold, the total number of humans who will ever live is more like 3 quadrillion. Humans living during or before 2015 would thus make up only 0.0036 percent of all humans ever.
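The percentage here is easy to verify with a couple of lines of Python. The 108 billion and 3 quadrillion figures are the article's own estimates, taken as given:

```python
# Sanity-check the article's figure: humans alive through 2015 as a
# share of all humans who will ever live, under the article's estimates.
people_so_far = 108e9  # ~108 billion people born to date
people_ever = 3e15     # ~3 quadrillion if humanity lasts 50 million years

share = people_so_far / people_ever * 100  # as a percentage
print(f"{share:.4f}%")  # prints 0.0036%
```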

The numbers get even bigger when you consider — as X-risk advocates are wont to do — the possibility of interstellar travel. Nick Bostrom — the Oxford philosopher who popularized the concept of existential risk — estimates that about 10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers.

Even if we give this 10^54 estimate "a mere 1% chance of being correct," Bostrom writes, "we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives."

Put another way: The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people. That argues, in the judgment of Bostrom and others, for prioritizing efforts to prevent human extinction above other endeavors. This is what X-risk obsessives mean when they claim ending world poverty would be a "rounding error."

Why Silicon Valley is scared its own creations will destroy humanity

There are a number of potential candidates for most threatening X-risk. Personally I worry most about global pandemics, both because things like the Black Death and the Spanish flu have caused massive death before, and because globalization and the dawn of synthetic biology have made diseases both easier to spread and easier to tweak (intentionally or not) for maximum lethality. But I'm in the minority on that. The only X-risk basically anyone wanted to talk about at the conference was artificial intelligence.

The specific concern — expressed by representatives from groups like the Machine Intelligence Research Institute (MIRI) in Berkeley and Bostrom's Future of Humanity Institute at Oxford — is over the possibility of an "intelligence explosion." If humans are able to create an AI as smart as humans, the theory goes, then it stands to reason that that AI would be smart enough to create itself, and to make itself even smarter. That'd set up a process of exponential growth in intelligence until we get an AI so smart that it would almost certainly be able to control the world if it wanted to. And there's no guarantee that it'd allow humans to keep existing once it got that powerful. "It looks quite difficult to design a seed AI such that its preferences, if fully implemented, would be consistent with the survival of humans and the things we care about," Bostrom told me in an interview last year.

This is not a fringe viewpoint in Silicon Valley. MIRI's top donor is the Thiel Foundation, funded by PayPal and Palantir cofounder and billionaire angel investor Peter Thiel, which has given $1.627 million to date. Jaan Tallinn, the developer of Skype and Kazaa, is both a major MIRI donor and the co-founder of two groups — the Future of Life Institute and the Center for the Study of Existential Risk — working on related issues. And earlier this year, the Future of Life Institute got $10 million from Thiel's PayPal buddy, Tesla Motors/SpaceX CEO Elon Musk, who grew concerned about AI risk after reading Bostrom's book Superintelligence.

And indeed, the AI risk panel — featuring Musk, Bostrom, MIRI's executive director Nate Soares, and the legendary UC Berkeley AI researcher Stuart Russell — was the most hyped event at EA Global. Musk naturally hammed it up for the crowd. At one point, Russell set about rebutting AI researcher Andrew Ng's comment that worrying about AI risk is like "worrying about overpopulation on Mars," countering, "Imagine if the world's governments and universities and corporations were spending billions on a plan to populate Mars." Musk looked up bashfully, put his hand on his chin, and smirked, as if to ask, "Who says I'm not?"

Russell's contribution was the most useful, as it confirmed this really is a problem that serious people in the field worry about. The analogy he used was with nuclear research. Just as nuclear scientists developed norms of ethics and best practices that have so far helped ensure that no bombs have been used in attacks for 70 years, AI researchers, he urged, should embrace a similar ethic, and not just make cool things for the sake of making cool things.

What if the AI danger argument is too clever by half?

What was most concerning was the vehemence with which AI worriers asserted the cause's priority over other cause areas. For one thing, we have such profound uncertainty about AI — whether general intelligence is even possible, whether intelligence is really all a computer needs to take over society, whether artificial intelligence will have an independent will and agency the way humans do or whether it'll just remain a tool, what it would mean to develop a "friendly" versus "malevolent" AI — that it's hard to think of ways to tackle this problem today other than doing more AI research, which itself might increase the likelihood of the very apocalypse this camp frets over.

The common response I got to this was, "Yes, sure, but even if there's a very, very, very small likelihood of us decreasing AI risk, that still trumps global poverty, because infinitesimally increasing the odds that 10^52 people in the future exist saves way more lives than poverty reduction ever could."

The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, "Humans are about to go extinct unless you give me $10 to cast a magical spell." Even if you only think there's a, say, 0.00000000000000001 percent chance that he's right, you should still, under this reasoning, give him the $10, because the expected value is that you're saving 10^32 lives.

Bostrom calls this scenario "Pascal's Mugging," and it's a huge problem for anyone trying to defend efforts to reduce human risk of extinction to the exclusion of anything else. These arguments give a false sense of statistical precision by slapping probability values on beliefs. But those probability values are literally just made up. Maybe giving $1,000 to the Machine Intelligence Research Institute will reduce the probability of AI killing us all by 0.00000000000000001. Or maybe it'll only cut the odds by 0.00000000000000000000000000000000000000000000000000000000000000001. If the latter's true, it's not a smart donation; if you multiply the odds by 10^52, you've saved an expected 0.0000000000001 lives, which is pretty miserable. But if the former's true, it's a brilliant donation, and you've saved an expected 100,000,000,000,000,000,000,000,000,000,000,000 lives.

I don't have any faith that we understand these risks with enough precision to tell if an AI risk charity can cut our odds of doom by 0.00000000000000001 or by only 0.00000000000000000000000000000000000000000000000000000000000000001. And yet for the argument to work, you need to be able to make those kinds of distinctions.
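The expected-value arithmetic in the two scenarios above is straightforward to reproduce. The 10^52 figure is Bostrom's estimate of future lives; the two risk-reduction numbers are the article's hypotheticals:

```python
# Reproduce the article's expected-value arithmetic for a donation that
# shifts the probability of extinction, given 10^52 possible future lives.
future_lives = 1e52

optimistic_reduction = 1e-17   # 0.00000000000000001
pessimistic_reduction = 1e-65  # the 65-decimal-place figure from the article

print(f"{optimistic_reduction * future_lives:.0e}")   # prints 1e+35
print(f"{pessimistic_reduction * future_lives:.0e}")  # prints 1e-13
```

The two estimates of lives saved differ by 48 orders of magnitude, which is the article's point: the conclusion flips entirely depending on a probability nobody can actually measure.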

The other problem is that the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today. That's by no means an obvious position, and tons of philosophers dispute it. Among other things, it implies what's known as the Repugnant Conclusion: the idea that the world should keep increasing its population until the absolute maximum number of humans are alive, living lives that are just barely worth living. But if you say that people who only might exist count less than people who really do or really will exist, you avoid that conclusion, and the case for caring only about the far future becomes considerably weaker (though still reasonably compelling).

Doing good through aggressive self-promotion

To be fair, the AI folks weren't the only game in town. Another group emphasized "meta-charity," or giving to and working for effective altruist groups. The idea is that more good can be done if effective altruists try to expand the movement and get more people on board than if they focus on first-order projects like fighting poverty.

This is obviously true to an extent. There's a reason that charities buy ads. But ultimately you have to stop being meta. As Jeff Kaufman — a developer in Cambridge who's famous among effective altruists for, along with his wife Julia Wise, donating half their household's income to effective charities — argued in a talk about why global poverty should be a major focus, if you take meta-charity too far, you get a movement that's really good at expanding itself but not necessarily good at actually helping people.

And you have to do meta-charity well — and the more EA grows obsessed with AI, the harder it is to do that. The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession. And it's hard to imagine that yoking EA to one of the whitest and most male fields (tech) and academic subjects (computer science) will do much to bring more people from diverse backgrounds into the fold.

The self-congratulatory tone of the event didn't help matters either. I physically recoiled during the introductory session when Kerry Vaughan, one of the event's organizers, declared, "I really do believe that effective altruism could be the last social movement we ever need." In the annals of sentences that could only be said with a straight face by white men, that one might take the cake.

Effective altruism is a useful framework for thinking through how to do good through one's career, or through political advocacy, or through charitable giving. It is not a replacement for movements through which marginalized peoples seek their own liberation. If EA is to have any hope of getting more buy-in from women and people of color, it has to at least acknowledge that.

There's hope

I don't mean to be unduly negative. EA Global was also full of people doing innovative projects that really do help people — and not just in global poverty either. Nick Cooney, the director of education for Mercy for Animals, argued convincingly that corporate campaigns for better treatment of farm animals could be an effective intervention. One conducted by the Humane League pushed food services companies — the firms that supply cafeterias, food courts, and the like — to commit to never using eggs from chickens confined to brutal battery cages. That resulted in corporate pledges sparing 5 million animals a year, and when the cost of the campaign was tallied up, it cost less than 2 cents per animal in the first year alone.

Another push got Walmart and Starbucks to not use pigs from farms that deploy "gestation crates" which make it impossible for pregnant pigs to turn around or take more than a couple of steps. That cost about 5 cents for each of the 18 million animals spared. The Humane Society of the United States' campaigns for state laws that restrict battery cages, gestation crates, and other inhumane practices spared 40 million animals at a cost of 40 cents each.

This is exactly the sort of thing effective altruists should be looking at. Cooney was speaking our language: heavy on quantitative measurement, with an emphasis on effectiveness and a minimum of emotional appeals. He even identified as "not an animal person." "I never had pets growing up, and I have no interest in getting them today," he emphasized. But he was also helping make the case that EA principles can work in areas outside of global poverty. He was growing the movement the way it ought to be grown, in a way that can attract activists with different core principles rather than alienating them.

If effective altruism does a lot more of that, it can transform philanthropy and provide a revolutionary model for rigorous, empirically minded advocacy. But if it gets too impressed with its own cleverness, the future is far bleaker.

Correction: This article originally stated that the Machine Intelligence Research Institute is in Oakland; it's in Berkeley.


pram
Jun 10, 2001
didnt read gas thread ban op

GameCube
Nov 21, 2006


your post was too long. i didn't read it

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

As a Millennial I posted:

your post was too long. i didn't read it

sorry

e: its actually quite "funy comoputer" but i respect that your time is valuable

H.P. Hovercraft
Jan 12, 2004

one thing a computer can do that most humans can't is be sealed up in a cardboard box and sit in a warehouse
Slippery Tilde
OP i'm gonna need some bolding here for the funy computer parts

Silver Alicorn
Mar 30, 2008

𝓪 𝓻𝓮𝓭 𝓹𝓪𝓷𝓭𝓪 𝓲𝓼 𝓪 𝓬𝓾𝓻𝓲𝓸𝓾𝓼 𝓼𝓸𝓻𝓽 𝓸𝓯 𝓬𝓻𝓮𝓪𝓽𝓾𝓻𝓮
this sure is a thread

GameCube
Nov 21, 2006

oh i found it

quote:

Correction: This article originally stated that the Machine Intelligence Research Institute is in Oakland; it's in Berkeley.
:xd:

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

H.P. Hovercraft posted:

OP i'm gonna need some bolding here for the funy computer parts

Effective altruism (or EA, as proponents refer to it) is more than a belief, though. It's a movement, and like any movement, it has begun to develop a culture, and a set of powerful stakeholders, and a certain range of worrying pathologies. At the moment, EA is very white, very male, and dominated by tech industry workers. And it is increasingly obsessed with ideas and data that reflect the class position and interests of the movement's members rather than a desire to help actual people.

In the beginning, EA was mostly about fighting global poverty. Now it's becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse. At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research. Compared to that, multiple attendees said, global poverty is a "rounding error."

I identify as an effective altruist: I think it's important to do good with your life, and doing as much good as possible is a noble goal. I even think AI risk is a real challenge worth addressing. But speaking as a white male nerd on the autism spectrum, effective altruism can't just be for white male nerds on the autism spectrum.

pram
Jun 10, 2001

Lutha Mahtin posted:

Effective altruism (or EA, as proponents refer to it) is more than a belief, though. It's a movement, and like any movement, it has begun to develop a culture, and a set of powerful stakeholders, and a certain range of worrying pathologies. At the moment, EA is very white, very male, and dominated by tech industry workers. And it is increasingly obsessed with ideas and data that reflect the class position and interests of the movement's members rather than a desire to help actual people.

In the beginning, EA was mostly about fighting global poverty. Now it's becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse. At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research. Compared to that, multiple attendees said, global poverty is a "rounding error."

I identify as an effective altruist: I think it's important to do good with your life, and doing as much good as possible is a noble goal. I even think AI risk is a real challenge worth addressing. But speaking as a white male nerd on the autism spectrum, effective altruism can't just be for white male nerds on the autism spectrum.

gonna have to be shorter than this buddy

pram
Jun 10, 2001
iuf your thoughts cant fit into 140 chars then llol #smdh #getaload

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

the rest of the article is filled with stuff like that, its not very long even :cmon:

stupid twitter generation can't even read a 2 page article :corsair:

H.P. Hovercraft
Jan 12, 2004

one thing a computer can do that most humans can't is be sealed up in a cardboard box and sit in a warehouse
Slippery Tilde

Lutha Mahtin posted:

I identify as an effective altruist: I think it's important to do good with your life, and doing as much good as possible is a noble goal. I even think AI risk is a real challenge worth addressing. But speaking as a white male nerd on the autism spectrum, effective altruism can't just be for white male nerds on the autism spectrum.

lol

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
i read the whole thing and lol kill all nerds

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
so is effective altruism the same people as lesswrong

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

Hammerite posted:

so is effective altruism the same people as lesswrong


i had only ever heard of the term before when i ran across a link to this one guy (literally named jeff k lol) who donates a bigger than average cut of his salary to data-driven poverty eradication groups. but then i ran across this article and just....lol

the jeff guy gives money for like, anti-malaria nets and parasite treatments because he read research that found you can save lives this way for like 100bux a life. holy poo poo, that's pretty good right? nope, not good enough, gotta apply the soylent guy's ideas about "efficiency" and consider the billions of theoretical future people who might one day face down a matrix-style AI bent on the extermination of humans. prototype research robots are almost getting halfway decent at loading a dishwasher now, it's obviously only a few years until skynet, DUH

Cat Face Joe
Feb 20, 2005

goth vegan crossfit mom who vapes



pram posted:

iuf your thoughts cant fit into 140 chars then llol #smdh #getaload

nerds create charity, believe stopping the robot apocalypse is more important than world hunger

pram
Jun 10, 2001
thx #lol #killnerds

Space-Pope
Aug 13, 2003

by zen death robot
i wish i could waste millions of dollars on naive futurist bullshit

Space-Pope
Aug 13, 2003

by zen death robot
"oh no guys watch out AI is coming to kill us!"
*the best AI in the world beats a chess grandmaster. sometimes.*

PleasingFungus
Oct 10, 2012
idiot asshole bitch who should fuck off

Hammerite posted:

so is effective altruism the same people as lesswrong

I'm told: yes

shocking, isn't it?

eonwe
Aug 11, 2008



Lipstick Apathy
its amazing they found so many words to write the idea that they dont care about poor people

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

Eonwe posted:

its amazing they found so many words to write the idea that they dont care about poor people

It's not that they don't care about poor people. it's that the only poor people they have ever paid more than a moment's thought to are fictional characters from science fiction films

blugu64
Jul 17, 2006

Do you realize that fluoridation is the most monstrously conceived and dangerous communist plot we have ever had to face?

quote:

The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people.

lol that's a hop skip and a jump away from advocating genocide of 1 billion people for the greater good.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

blugu64 posted:

lol that's a hop skip and a jump away from advocating genocide of 1 billion people for the greater good.

how can we minimise the number of people living in poverty? well,

overeager overeater
Oct 16, 2011

"The cosmonauts were transfixed with wonderment as the sun set - over the Earth - there lucklessly, untethered Comrade Todd on fire."



Lutha Mahtin posted:

the soylent guy's ideas about "efficiency"

idea: a challenge to raise awareness of water availability and water usage

reality: a man stuffing himself into a plastic alibaba flightsuit, showering himself with an artisanal blend of dirt and bacteria, rounding out his diet with regular Rifaximin doses to ensure his turds are as dense as they are resistant to antibiotics

pram
Jun 10, 2001

Vlad the Retailer posted:

idea: a challenge to raise awareness of water availability and water usage

reality: a man stuffing himself into a plastic alibaba flightsuit, showering himself with an artisanal blend of dirt and bacteria, rounding out his diet with regular Rifaximin doses to ensure his turds are as dense as they are resistant to antibiotics

what the gently caress

pram
Jun 10, 2001
why does he think that flightsuit doesnt need to be cleaned lol

GameCube
Nov 21, 2006

pram posted:

why does he think that flightsuit doesnt need to be cleaned lol

oh god... didn't you see

you're in for a treat http://genius.it/robrhinehart.com?p=1331

GameCube
Nov 21, 2006

whoops that's the annotated version

here's the spoiler, but i recommend experiencing it yourself: he donates the old one and buys a new one

bump_fn
Apr 12, 2004

two of them

Lutha Mahtin posted:

It's not that they don't care about poor people. it's that the only poor people they have ever paid more than a moment's thought to are fictional characters from science fiction films

pram
Jun 10, 2001

As a Millennial I posted:

whoops that's the annotated version

here's the spoiler, but i recommend experiencing it yourself: he donates the old one and buys a new one

lmao

GameCube
Nov 21, 2006

quote:

On the production side I lean mostly toward polyester, which is not only easily recyclable, it has a negligible water footprint. For energy, a polyester t shirt takes about 12.5MJ to make, which is 3.5kWh, less than half a cycle in a washer and dryer. So if you’re not washing a bunch of clothes at once it can be more efficient to make new clothes than to wash them.

GameCube
Nov 21, 2006

tbh if i had no friends and didn't share an office, maybe i'd go soylent. i'm that boring of a person

angry_keebler
Jul 16, 2006

In His presence the mountains quake and the hills melt away; the earth trembles and its people are destroyed. Who can stand before His fierce anger?
i've always wondered what the marginal resource cost of a bunch of paper plates is versus the average life of your ordinary set of 8 ceramic plates

H.P. Hovercraft
Jan 12, 2004

one thing a computer can do that most humans can't is be sealed up in a cardboard box and sit in a warehouse
Slippery Tilde

As a Millennial I posted:

tbh if i had no friends and didn't share an office, maybe i'd go soylent. i'm that boring of a person

ensure plus is designed by actual professionals and is subject to food production regulations

also it is cheaper and comes in flavors

GameCube
Nov 21, 2006

but... is it made from algae

qirex
Feb 15, 2001

I think it's time we recognize Universal Expert Disorder [aka the assumption of transitive competence] as a harmful mental illness

pram
Jun 10, 2001
pic of qirex locked up in an institution w/ bruce willis and brad pitt

qirex
Feb 15, 2001

I am pretty good at math and logic therefore I must be good at solving all the world's problems


H.P. Hovercraft
Jan 12, 2004

one thing a computer can do that most humans can't is be sealed up in a cardboard box and sit in a warehouse
Slippery Tilde
i am p good at being a surgeon therefore i must be great at being an investor
