DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
:siren::siren: WARNING. This is a no Doomposting zone!! :siren::siren:

We will be talking about some topics that can get pretty heavy, especially in this era, the era where a global catastrophic risk event has actually happened. If contemplating the end of life on earth, the human race, or 21st century civilization causes you to experience feelings of despair and hopelessness, please either seek purpose in activism or consult the Mental Health thread, or seek help in other ways. You are important, your life has value!!

----------------------------------------------------------
What IS Existential and Global Catastrophic Risk?

So! With that out of the way, let's talk about Existential and Global Catastrophic Risk. What are these, you may ask? These are terms that get bandied about a bit nowadays, even by politicians like Alexandria Ocasio-Cortez. "Existential threat" or "existential risk" has entered the popular lexicon as "a really, really big threat".

But the phrase actually got started in academia, in a 2002 paper by Oxford philosopher Nick Bostrom. In it, he defines "existential risk" as:

quote:

Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

An existential risk is one where humankind as a whole is imperiled. Existential disasters have major adverse consequences for the course of human civilization for all time to come.

Whereas a "global catastrophic risk" would be later defined in a paper that he wrote in collaboration with the astronomer Milan Cirkovic in Global Catastrophic Risks:

quote:

...A global catastrophic risk is of either global or trans-generational scope, and of either endurable or terminal intensity.


This was based on the following diagram, a taxonomy of risks plotted along two axes, scope and intensity:

[diagram not shown: a grid of risk scope versus intensity, with existential risks occupying the broadest-scope, terminal-intensity corner]

Although risks to the existence of human civilization and of humanity itself have been a subject of discussion since time immemorial, the topic only began to be contemplated with scientific rigor during the Cold War, with its projections of "megadeaths", the Bulletin of the Atomic Scientists, and Sagan, Turco et al.'s paper on nuclear winter. It unfurled into its full fruition as a nascent field of research - "existential risk studies" - with the publication of Bostrom's 2002 paper, and even more so with his book, Superintelligence: Paths, Dangers, Strategies.

Ok, why do we care?

It's a legitimate field of study, and one very relevant to the dangers we face as a species in the 21st century. One of the main takeaways from existential risk research is the large and imminent threat posed to human and planetary survival by humanity's own technological and scientific discoveries and by the activities of industrial civilization.

You only need to read the newspapers to see daily coverage of melting permafrost, Arctic glaciers melting far faster than we had realized, the collapse of ecological and biogeochemical systems, scientific discoveries of rapidly accelerating positive-feedback 'tipping points', and so on. Aside from the obvious environmental issues caused by human civilization and human technology, there are also existential risk concerns raised by pervasive global surveillance, the rise of easily accessible bioengineering with CRISPR-Cas9 and synthetic biology, the collapse of consensus reality from social media disinformation and internet echo chambers, and the potential - someday - for artificial general intelligence.

Because of all this, existential risk research is having something of a renaissance right now, and existential-risk-focused think tanks are beginning to have an influence on both government and corporate policy.

This would be great, if only it weren't for one rather problematic thing -- the fact that Bostrom is a transhumanist, and his ideas had a major influence on the transhumanist community in Silicon Valley, which has had a continuing effect on the direction that a lot of the talk about existential risk has taken. Bostrom had a conversation with fedora-wearing Internet "polymath" Eliezer Yudkowsky, and wrote a book about it. This book has had an outsized impact on other writers in the field, inspiring a transhumanist, AI-focused trend in existential and global catastrophic risk research that, I feel, has distracted it from thinking about the far more immediate and present dangers posed to our future by abrupt climate change and ecological collapse.

Moreover, if you look at the cast of players in the X-risk field, you'd find that they are overwhelmingly White and male, reflecting the same trends in STEM and especially in computer science (the large East Asian/South Asian presence in tech notwithstanding), the fields from which X-risk research tends to draw many of its luminaries. Given the strong crossover between this community, the futurist crowd, and powerful figures in Silicon Valley such as Elon Musk, if these people are going to have a strong influence on society's future decisions and priorities, the danger is that their views and projections may be blinkered and limited by their lived experiences and biases. The X-risk research community has consequently leaned towards techno-optimism, and even techno-fetishism, and critiques from a leftist or anti-capitalist perspective are basically non-existent.

As a person of color, and a leftist, I am concerned that a community that purports to impress its views upon civilization over the super-long-term may not share cultural views and values that match my own, and those of my community. Furthermore, diversity breeds different approaches, methodologies, and ideas; a more diverse existential risk community would be able to foresee potential existential and catastrophic risks to which a white-majority community would be blind.

It's a problem, and I think it's a call for more people of color and more gender, sexual, and ability minorities to participate in and criticize the work produced by the X-risk and futurology community.

So, what is this thread for?

A general place to discuss existential and catastrophic risks! Some topics:

  • Theories for potential global catastrophic risks/existential risks
  • Long-term potential futures
  • Strategies for human civilization to survive
  • Prospects for survival over the next century
  • Problems you have with the current thought-leaders in existential risk
  • Killer robots and AI: Are they possible??

Also, feel free to mock, criticize, and make fun of some of the things that people deep within the X-risk and futurist world twist themselves into knots over: Roko's Basilisk, spicy drama on the LessWrong wiki, bizarre subcultures like the cryonics crowd, endless pondering about the nature of consciousness and the feasibility of human brain simulations, and so on. There's really a lot of truly weird stuff out there.

Important takeaways

Existential Risk specifically means "Something that makes all the sentient beings die". A catastrophe could kill 5 billion people, and it would be hideously, indescribably bad - suffering on a scale unknown to human history (though not, perhaps, prehistory). But it would not be an existential risk because it would not kill 100% of us. Many X-risk people think that a superintelligent AI - an AI smarter than human beings - could pose an existential risk, ending us in the same way that we ended most of our fellow hominid competitors.

Global Catastrophic Risk is what most disasters in the public mindset actually are. For example, COVID-19 has been a global catastrophic event, because it has set back the global economy substantially, sickened tens of millions, and killed over a million people worldwide. Most scenarios of climate change would fit under this category, as it is extremely difficult to foresee a physically possible climate change scenario that would occur quickly enough to result in the death of every single human being in every single biome where we are presently found.
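
To make the distinction concrete, here's a minimal Python sketch of the scope/intensity taxonomy from the Bostrom & Cirkovic quote earlier in the post. The example classifications are my own illustrative guesses, not anything from the literature:

code:

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    scope: str      # "personal", "local", "global", or "trans-generational"
    intensity: str  # "endurable" or "terminal"

    @property
    def is_global_catastrophic(self) -> bool:
        # Per the definition quoted above: global or trans-generational scope,
        # endurable or terminal intensity.
        return self.scope in ("global", "trans-generational")

    @property
    def is_existential(self) -> bool:
        # The worst corner of the grid: the broadest scope at terminal intensity.
        return self.scope == "trans-generational" and self.intensity == "terminal"

examples = [
    Risk("car crash", "personal", "terminal"),
    Risk("regional famine", "local", "terminal"),
    Risk("COVID-19 pandemic", "global", "endurable"),
    Risk("unaligned superintelligence", "trans-generational", "terminal"),
]
for r in examples:
    print(f"{r.name}: GCR={r.is_global_catastrophic}, XR={r.is_existential}")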

Some Figures in Existential Risk

Nick Bostrom

Books: Superintelligence: Paths, Dangers, Strategies
Founder of the Future of Humanity Institute. Possibly a robot. I've already written about him earlier in this thread.

Martin Rees

Books: On the Future
Founder of the Centre for the Study of Existential Risk. An astronomer who has written at some length on existential risks.

Phil Torres

Books: Morality, Foresight, and Human Flourishing: An introduction to existential risk
Also a naturalist, biologist, and science communicator. I actually really enjoy Torres's writing and views; he seems to be one of the few in the X-Risk field who writes about environmental problems, climate change, and sustainability, and he has even called out the racial biases of some in the field.

Ray Kurzweil

Books: The Singularity is Near
The futurist par excellence. Largely responsible for advancing the quasi-religion of Singularitarianism, which seems to survive mostly by having a lot of cachet with the founders of Google and other Silicon Valley billionaires who want to live forever. In a nutshell, he extrapolates Moore's Law from computing to technological change in general, predicting that technology will soon pass a point where it explodes exponentially, meaning humans will, within the 21st century, be able to upload their consciousnesses into computers and live forever. Currently trying to prolong his life by eating lots of vitamins every day.

sigh.... yeah... Eliezer Yudkowsky

Books: Harry Potter and the Methods of Rationality :laffo:
Other goons have spoken at length about how this guy is a complete crank. The basic idea is that he founded the "LessWrong" community, purportedly about advancing rationality in thinking but mostly about internet fedora wearers wanking over the idea of superintelligent AI. Somehow very influential in X-risk circles, despite having no research to his name and not even an undergraduate degree. Founder of the Machine Intelligence Research Institute.

Related threads
The SPAAAAAAAAAACE thread: Adjacent discussions of the Fermi Paradox, the Great Filter, and the like. This thread was made as kind of a silo for some of the existential risk-related ideas that came up there when contemplating the cosmological question of "where is everybody?"
The Climate Change Thread: Doomposting-ok zone. People there are pretty good at recommending things to do if you feel powerless.

DrSunshine fucked around with this message at 23:04 on Oct 15, 2020

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

bootmanj posted:

So if we get climate change right as a species how are we going to move 1.5 billion people? Many of them will be moving from countries that won't reasonably support human life even in a good climate change scenario.
https://www.woodwellclimate.org/the-coming-redistribution-of-the-human-population/

It's important not to imagine some kind of climate switch flipping in a year, resulting in a tide of billions of brown people -- which, I imagine, is what right-wing ecofascists picture. Instead, the answer is probably more prosaic. There would probably be some form of large-scale international cooperation to take in and accommodate an influx of refugees, alongside international aid focused on building robust adaptation systems within the affected areas. It's not something that would be done all at once, like a giant airlift, but a gradual emigration over several decades.

I'm not sure where I could find a map of "days of extreme heat" for other countries, but here's one for the USA: https://ucsusa.maps.arcgis.com/apps/MapSeries/index.html?appid=e4e9082a1ec343c794d27f3e12dd006d

In the worst-case late-century scenario, there would be somewhere between 10-20 "off the charts" heat days per year. Keep in mind that the article you linked mentioned "mean" yearly temperatures. A future business-as-usual scenario (likely, in my opinion) of 4-6°C hotter than now would still have seasons and days that aren't lethal; there would just be a much higher frequency of lethal days. In that case, and knowing that it's coming down the pipeline, I can see risk mitigation strategies being deployed by these countries - including evacuation, creation of mass heat-shelters, wider use of AC, and so on. Governments might invest a lot into infrastructure or collective housing and working arrangements where many millions of people could live either in contained, cooled buildings, or within very close proximity to some kind of shelter where they could dwell during lethal heat waves. At the same time, emigration could be facilitated, so that people leave the country over time, while those who need or wish to stay for whatever reason can find shelter in safe places.

It's also worth noting that diversity exists within affected countries as well. For the example of India, you could see a planned trend of relocating to higher elevation areas near the foothills of the Himalayas, where the weather is cooler.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

Aramis posted:

One aspect of this that keeps me up at night is that acknowledging a risk as being existential opens the door to corrective measures that would be otherwise ethically inconceivable.

The real problem for me arises when you turn this around: People who would like to push for inhuman policies, say genocide, have a vested interest in letting risks that are potentially existential, such as global warming, exacerbate themselves until the point where they can push their agenda.

What I would like is to somehow make today's denialists materially responsible for the actions that they might have a hand in making necessary. But the legwork for this needs to be started now, before the risk becomes clearly existential in the first place. And that's not fair either, since greed-based or optimism-based denialism, while still bad, do not warrant the type of hell I'd wish on a theoretical genocidal denialist.

Is there a good ethics framework out there that can tackle this kind of stuff from a reasonably practical standpoint?

Phil Torres writes a lot about this in his Morality, Foresight and Human Flourishing, actually!! His concern is moral philosophy as it pertains to existential risk, and one of the sections of the book concerns itself with outlining the potential actions of "omnicidal agents" - he calls them "radical negative utilitarians". Basically, it's possible to define for oneself a moral position where the greater good of eliminating suffering, human or animal, or protecting the biosphere, leads one to advocate for the extermination of human life. I think this idea is morally repugnant, and seems to miss the forest for the trees, since it would be sufficient to protect the planet's biosphere if all humans were relocated off world somehow, or (if it's possible) downloaded into digital consciousness.

(here is a paper by him that summarizes this)

At any rate, you may want to look into Consequentialism for an ethical framework.

EDIT:

How are u posted:

Oh yeah, genetic engineering is a total wildcard in all of this as far as I can tell. I wouldn't be surprised to see attempts to engineer whole ecosystems and biospheres as things continue to get worse and worse more quickly. Who the hell knows what things will look like in 30, 40, 50 years.

True, that could also be a potential climate change adaptation. If it were possible to engineer ourselves to survive temperatures above 45°C for sustained periods without dying of heat exhaustion, perhaps that's a tack some vulnerable countries could take.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Another thing I wanted to mention is that I think existential and global catastrophic risks are intersectional issues. I haven't really seen any discussion of this in the academic literature, which speaks to the degree to which the X-risk community is blind to these concerns.

But it's really quite obvious if you think about it. If something threatens the livelihood of billions of people - many of whom are poor, many of whom are non-white, with the burdens of home and family care falling disproportionately on those who identify as women, on the most vulnerable - then of course it is an intersectional issue. What could be more disempowering and alienating than the wiping-out of the future?

It's important to note that existential risk - the risk to the future existence of sentient life on Earth - is a social justice issue, because it robs those marginalized groups of the chance to contribute to the future flourishing of life.

To wit - it would be, literally, a cosmic injustice to allow Elon Musk and a few thousand white and Asian Silicon Valley tech magnates to colonize the entire future light cone of humanity from the surface of Mars, while allowing billions to perish and suffer on Earth.

awesmoe posted:

how is 'pervasive global surveillance' an existential risk? I'd have thought it was transgenerational/endurable.
(obviously, genocidal actors determined to destroy all life on earth could be _aided_ by surveillance, but in that case i'd suggest the first thing is the problem)

So, Bostrom writes about the concept of a singleton - a single entity with total decision-making power. Total global surveillance would be one of the powers enjoyed by a singleton, or could possibly enable the creation of one. The threat this poses is more of a long-term existential threat -- a global singleton committed to enforcing a single totalitarian, rigid ideology (say, Christian dominionism) might cause the human race to stagnate to the point where a natural existential risk takes us out, or mismanage affairs to the degree that it causes mass death. In fact, if it were guided by certain millenarian ideologies, it might explicitly attempt to cause human extinction. Global surveillance enacted by or enabling a singleton would have a chilling effect on technological and democratic progress that would stifle our potential in the long run.

DrSunshine fucked around with this message at 00:49 on Oct 16, 2020

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

A Buttery Pastry posted:

First of all, this thread takes me back. It's like a 2014 D&D thread or something. :v: Like, it actually asks the reader to consider things from a more meta/philosophical perspective.

This seems... optimistic. Europe was brought to a breaking point when about 300k refugees arrived, and "Let them drown in the Med" is gonna be the majority opinion next time around if it isn't already. Compared to that, even a gradual emigration of the likely number of refugees is gonna seem like a flood. Frankly, I'd be surprised if Europe could deal with internal refugees alone without the EU breaking down, and those are gonna be far fewer in number and not as obviously foreign. The most likely compromise is gonna be maintaining freedom of movement internally in return for the EU army to expand into the most well-armed border patrol force the world has ever seen. Likely funded by completely withdrawing foreign aid.

Of course, it's not like Europe (or America) is the sole source of progress and competence, the response to the COVID pandemic indicates about the opposite. Europe can't even deal with a recession, it's not surprising it's loving up the response to coronavirus too. I guess what I'm getting at is; it's good that a lot of non-Western governments are relatively competent and serious, because the countries best situated from a purely climatic perspective are basket cases, completely unable to comprehend the idea of any sort of large-scale risk. Like, if you put that diagram/graph from the OP in front of a bunch of Western politicians I'm not sure most of them would even understand it, or be able to add additional issues to it, like their world view was essentially post-catastrophic threat. Or the existential risks would be poo poo like universal healthcare and taxes.

Well, bootmanj did write "So if we get climate change right as a species" so I took that as a cue to take the speculation in an optimistic route. I understood that as meaning "Assuming we make the changes necessary to mitigate civilization-level risk from climate change, what are the kinds of changes that might need to be made to adapt to a future environment where many presently-inhabited areas become uninhabitable?"

I still hold out hope that large-scale systemic changes (e.g. revolution) can deliver the paradigm shifts required for civilizational risk mitigation strategies that will let us pass through the birthing pangs of a post-scarcity society. I suppose I am an optimist in that regard. For this, I look to the example of history: social upheavals have happened that enacted broad-scale changes in societies almost overnight. Societies seem to go through long periods of stability punctuated by extremely rapid change, and studies have indicated that it only takes the mobilization of 3.5% of a society to enact a nonviolent revolution. And what is a government, society, or economic system anyway? It's simply a matter of humans changing their minds about how they choose to participate in society - a matter of ideology and belief, of collectively held memes.

Nothing physically or physiologically dooms humanity to live under late-capitalism forever. As a materialist, and someone with a background in the physical sciences, I tend to view what we are capable of in terms of what is simply physically possible. In that respect, I don't see any real reason why we cannot guarantee a flourishing life for every human being, equal rights for all, and a prosperous and diverse biosphere. That may require moving most of the human population off world in the long-term and transforming the earth into a kind of nature preserve, which, I feel, would accomplish what the anti-natalists and anarcho-primitivists have been advocating all this time, without genocide.

EDIT: To get back to the subject of your post - that's a great observation! Indeed, the future may rest with Asia and Africa, peoples who were once colonized by the West rightfully reasserting their role in history. Too often even leftist environmentalists in the West bemoan the impending doom of the world's brown peoples, who inhabit the parts of the world that will be most affected by abrupt climate changes that are already in the pipeline, without realizing that the leadership and citizens of the so-called "developing world" are well-aware of the problems that their nations face, and are currently working hard to address them endogenously*. It's a kind of modern, liberal version of the "White Man's Burden".

*For example, see how China is rapidly increasing the number of nuclear power plants it's building. While supposedly-advanced nations like Germany are actually increasing their CO2 emissions by voting against nuclear power and trying to push solar in a country that gets as much sunlight as Seattle, WA!

DrSunshine fucked around with this message at 16:34 on Oct 17, 2020

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

Raenir Salazar posted:

I was tempted to make a thread but it probably falls under this thread: is the working class approaching obsolescence? CGP makes a pretty compelling argument that when insurance rates make robots/automation more competitive than human labour, workers - primarily blue collar labour worldwide - are going to be rapidly phased out for machines that don't complain and don't unionize.

The development of automation has slowed down a bit since that video, but given enough time I find it difficult to imagine that the "working class" will continue to exist as we know it, rather than becoming comprised of grunt coders and what is commonly referred to as the "precariat": people in precarious sociofinancial positions but not necessarily labour or blue collar positions.

I think the downwards pressure imposed by technological progress and innovation is going to push hundreds of millions of people out of the middle class over the next few decades.

Yes, this is definitely a concern. It's also something that's been gradually happening throughout the history of modern industrial capitalism. Race Against the Machine by Brynjolfsson and McAfee is a good place to start on this.

However, there have been critiques of this line of thinking which point out that it may not take into account the displacement of the labor force from developed countries to developing and middle-income countries like Bangladesh, China, India, Mexico, and Malaysia, or the fact that automation technologies have tended to simply be used to demand more productivity out of workers without necessarily changing their employment.

On the opposite tack, the rise of bullshit make-work jobs seems to indicate that much of the work now being done in the Western world is actually an artifact of existing social conditions, and we might not really need many people to be working at all. It could be that a large fraction of the middle class is already living in a post-scarcity world, and the economic and political conditions simply haven't caught up to that fact yet.

I anticipate that UBI could become a kind of palliative band-aid for this growing problem. With modern economies depending more and more on consumers' ability to buy products, I could imagine late-capitalist societies struggling with the "How are you going to get them to buy Fords if they have no money?" problem implementing a UBI just to keep the demand side of the capitalist equation from falling apart.

DrSunshine fucked around with this message at 16:52 on Oct 17, 2020

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

Yeowch!!! My Balls!!! posted:

also the intended audience for the message is notoriously quite good at writing off mass death of the browner peoples of the earth as "sometimes you gotta break a few eggs"

trying to win their sympathy with the suffering of disadvantaged populations traditionally gets you polite dismissal at best and a "GUESS THEY SHOULDNT HAVE TRIED TO COME HERE HEE HEE HAW" at worst

In that sense, it's no different from any other field dominated by White male elites that underrepresented or marginalized groups are trying to break into. :v:


Aramis posted:

It really depends on where you draw the line between existential and quasi-existential risk.

Global warming is a good example of that. It's possibly (and arguably likely) an existential risk that will be eventually "downgraded" to a risk that will be existential for a portion of the population, but not humanity as a whole. And the division is certainly intersectional in nature. This becomes immediately relevant because intersectionality will certainly be involved in the process of determining what actions should/will be taken to attempt mitigation of the existential risk.

This is a great expansion of the taxonomy that I want to delve deeper into, and you make a point that's exactly what I'm getting at. Global Catastrophic Risk (GCR) mitigation approaches will definitely, and must absolutely take into account intersectionality, both in pondering which groups may be most affected, and in possible response methods. It does no good to, for example, head off local or global extinction from climate change if the resulting solution is one which perpetuates racial, social, or economic injustices, or which would require the perpetuation of conditions that Bostrom would classify as "hellish" for an eternity of possible human lives.

Anyway, a distinction that I've added to the taxonomy of XRs in my mind is conditional existential risk versus final existential risk. Expressed in the terminology of probability, P(A|B) is the probability of XR A given conditional risk B, while P(A) is the total probability of the XR. An example of a conditional XR would be, again, abrupt global climate change, which raises the overall odds of extinction while being a somewhat unlikely candidate for causing extinction on its own; a final XR would be a Ceres-sized asteroid crashing into the Earth. I think it's worth making this distinction because, while not all GCRs are XRs, some GCRs could conditionally become XRs, either on their own or by enhancing the risk of subsequent GCRs that push us over into total extinction.
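
To put that in symbols (just restating the paragraph above, with A = extinction and B = the conditional risk, e.g. abrupt climate change), the total risk decomposes as:

$$P(A) = P(A \mid B)\,P(B) + P(A \mid \neg B)\,P(\neg B)$$

A conditional XR is a B that pushes P(A|B) well above the baseline P(A|¬B) without being terminal on its own; a final XR is one where P(A|B) is essentially 1.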


Interesting! I've started reading this paper, will give my thoughts.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

Aramis posted:

This is an interesting distinction, but I think it needs to be partnered with a separate "mitigability" axis in order to be of any real use. final existential risk contains too many events that are fundamentally conversation-ending beyond discussions about acceptance. I'd contend that it consists mostly of such events. The fact that you instinctively went for "Ceres-sized asteroid crashing into the Earth" as a representative example of the category kind of attests to that.

That makes a lot of logical sense. You could give mitigability a range from "easily addressable" down to "impossible to change" - things like vacuum collapse, a gamma-ray burst, or a massive asteroid impact. Scaling it would then pretty much be a matter of what % of global GDP would have to be invested to mitigate a given disaster.
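
Something like this, say - a sketch where both the mitigability scores and the costs are invented placeholders, purely to show the shape of the idea:

code:

# Hypothetical entries: (name, mitigability 0.0 = impossible .. 1.0 = easily addressable,
#                        rough mitigation cost as % of global GDP per year, or None)
risks = [
    ("asteroid detection and deflection", 0.8, 0.01),
    ("engineered pandemics",              0.5, 0.5),
    ("abrupt climate change",             0.4, 2.0),
    ("vacuum collapse / gamma-ray burst", 0.0, None),  # nothing to invest in
]

# Rank the addressable risks by how much mitigability each % of GDP buys.
addressable = [r for r in risks if r[2] is not None and r[1] > 0]
for name, mitigability, cost in sorted(addressable, key=lambda r: r[1] / r[2], reverse=True):
    print(f"{name}: {mitigability / cost:.1f} 'mitigability points' per % of GDP")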

I'm compiling a list of books to get into Existential Risk. The OP has some already, but there's a lot more out there.

How to get into existential risk
Nick Bostrom
Superintelligence: Paths, Dangers, Strategies
The defining book on the subject of existential risk posed by superintelligent AGI (ASI). I regard it as a mostly speculative book, since many experts in machine intelligence and neuroscience agree that some fundamental questions about what consciousness and intelligence actually are need to be resolved before we can even approach making an AGI, and we are nowhere near doing this. However, it was the first book I ever read on the subject of existential and global catastrophic risk, and it serves as a good introduction to the language and ways of thinking used in the field.

Nick Bostrom & Milan Cirkovic, Ed.
Global Catastrophic Risks
An excellent book with a collection of essays on many different global catastrophic risk-related topics, such as how to price in the cost of catastrophic risks, and a large section on risks from nuclear war as well as natural risks from astronomical events.

Toby Ord
The Precipice: Existential Risk and the Future of Humanity

Martin Rees
On the Future

Phil Torres
Morality, Foresight, and Human Flourishing: An Introduction to Existential Risk

I want to put down a few suggestions for books on long-term thinking and so on as well, and would love suggestions.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
:justpost:

Geoengineering does come up as both a possible response to global catastrophic risks and a cause of global catastrophic risks. There are many who argue that we shouldn't recklessly embark on geoengineering solutions to fix climate change, because of the potential unexpected outcomes of large-scale projects. It would also serve as a disincentive to cut CO2 emissions or reduce deforestation since you could just "kick the can down the road" by doing more geoengineering. Nevertheless, as CO2 emissions continue apace, I don't doubt we'll need to do some form of geoengineering just to keep it from getting worse - alongside cutting CO2. I'm a big supporter of rewilding, for example.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

Bar Ran Dun posted:

Another thing: when we talk about ends, either individually or collectively, we are talking about telos, our meaning. “The Owl of Minerva spreads its wings with the falling of dusk.” It is at ends that meaning (or the absence of meaning) is determined.

To me when we talk about potential ends, we must also be talking about the potential meaning (or absence) in our lives right now. Think of it this way, if climate change eradicated us all. The story that some post us thinking creature would tell looking at us and our end would be determined and shaped by the climate change that offed us. The events looking backwards would get interpreted in light of the nature of the end that occurred.

What an incredible post. I had to take a while to think about this before responding!

So, I think the point you're raising is about the meaning of our actions (and thus their morality) as perceived by an observer at their end-point. The point of minimizing existential risk may have to be interpreted in that light -- would a viewer at the end of time be grateful for their chance to exist? Would whatever actions we took to bring that observer into existence seem, to them, to have been worth it? Do the ends justify the means?

I would think, yes. The reason is that, I think, life declares itself to be worth living by the mere act of living. All living things declare their existence to be meaningful to them simply by struggling to survive, rather than by lying down and awaiting death. All life values itself by the action of living. In the same light, and by extension, if we consider humanity to be a natural phenomenon -- human society being a reflection of human behavior, human civilization no different from the complex societies of ants -- then it is no great leap to deduce that humanity declares itself to be worth existing simply by engaging in the activities that bring it life. Human activity cannot be separated from the phenomena of nature; we are part of nature, as it is part of us.

To the extent that the biosphere self-regulates in order to keep the conditions of the Earth amenable to life's existence, one could take this argument one step further and say that the biosphere's telos is simply to continue to exist. In that light, existential risk reduction, and the study thereof, makes the moral declaration that biospheric continuation is worthwhile.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
So you guys, let's have it out right now. How many of you think that the human race/most mammalian life will be extinct before the end of the century? And why?

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

How are u posted:

Nah, I don't believe that we're too far gone. I am trying to live a life where I'm plugged into and doing work to help fix the problems. I know things are very bad, and the science indicates that there may be possibilities where things could rapidly get worse to the extent that the OP was talking about in their prompt.

But, personally, for me if everything we could ever hope to accomplish w/r/t mitigating climate change could end up meaning jack poo poo and we're all doomed regardless then I'd prefer to just not dwell on it and continue to have some hope and work hard towards doing whatever we can.

That's what's working for me, and though all I have is my personal experience with choosing to go with some Optimism, it sure is better than I was 4 or 5 years ago when I was in full climate-doom nothing matters headspace and deeply, clinically depressed.

Same. Actually, the reason why I started this thread, and why I took a real deep dive into reading, thinking about, and criticizing the literature on Existential Risk, was that I had gone through a similar period of climate doom. My coping method was to read about existential risk issues, starting with Nick Bostrom's Superintelligence, which got me thinking about issues of survival and the existence of intelligence on a much longer timescale - what the community calls long-termism. I feel that contemplating existential risk actually helps with feelings of climate anxiety and doom, because it broadens your perspective and helps you consider the issue of long-term species survival from a more objective and value-independent perspective. I'm hoping that this thread can help get people to read up more on this subject and have the same helpful effect on them that it had on me.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
I don't really want to get too deep into this because it's more of a personal inclinations/drives thing, but I've made posts in the SPAAACE!! thread in this regard.

DrSunshine posted:

I have the complete 180 degree opposite view. I think that to resign ourselves to eventual extinction in 400-500 million years would make it all pointless and terribly sad. What the Fermi Paradox says to me is that we are, as far as we know right now, quite alone in the universe, which makes Earthly life very very precious. We are one gamma ray burst away from a cold, dead, lifeless, thoughtless universe that will have no meaning or purpose whatsoever after we are gone. The evolution of sentient life on Earth brought into being a new layer of existence, superimposed onto the physical reality -- the sphere of the experienced world. Qualia, the ineffable units of experienced life, came into being as soon as there were beings complex enough to have experiences. And to me, the fact of experiences existing, justifies itself.

In that respect, and assuming based on our present knowledge of the world, it is our moral duty to ensure that terrestrial life continues, and to establish as many habitats for terrestrial life as possible, as backups. As long as thermodynamic potential gradients exist in the universe, we civilization-building life forms should ensure that they are taken advantage of to foster habitats for life forms. I think that we should make every effort to convert every single speck of matter in the universe into places for Earthly life to exist.

DrSunshine posted:

But how do you know that? There's no guarantee that something else will evolve in a few million years that will be able to develop a space program. What if humanity is the only species on earth that ever manages to do it? Say humans vanished from the world tomorrow, and nothing - not some descendant of elephants or whales or chimps - nothing ever does it again. Life continues as it always has for the past 500 million years or so, reproducing and evolving, and then as the sun gets hotter, plants will be unable to cope and the whole planetary ecosystem collapses. Then, after another billion years or so, the sun will swallow the earth and even bacteria will be gone.

What will the point of any of it be? How can this be a good thing that you look forward to? All of the suffering, all of the evolution, all of the work that people have done to try to protect one species or another, all of it -- none of it will have mattered.

I hold the view that it's unacceptable to accept that the species -- and by extension, the entire ecosystem -- may cease to exist trillions upon trillions of years before its potential life-span. I believe in life-extension on an ecological scale. The universe, as I see it, will continue to be habitable for many orders of magnitude greater than the life-span of our sun. And since I believe in the innate value of lived experiences of sentient beings (not just humans but all life, and all potentially sapient beings that might descend from them), it would be a crime of literally astronomical proportions to deny future sentient beings the right to exist without having made all the uttermost attempts at bringing them into being. Accepting human extinction, giving up, is morally unjustifiable to me.

You may have made peace with your own death. That is fine. So have I. But I think it's a totally different class of question altogether when one thinks of the death of the entire species, and the entire biosphere. Allowing humanity, the Earth's best shot thus far at reproducing its own biosphere, to go extinct, would doom the Earth's biosphere, and all the myriad life on it, all its "endless forms most beautiful" to certain doom in less than 1 billion years.

For references to the kind of thinking that I draw from:

https://www.vox.com/future-perfect/2018/10/26/18023366/far-future-effective-altruism-existential-risk-doing-good

https://www.effectivealtruism.org/articles/ea-global-2018-psychology-of-existential-risk/

https://www.eaglobal.org/talks/psychology-of-existential-risk-and-long-termism/

DrSunshine fucked around with this message at 16:14 on Nov 13, 2020

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

Aramis posted:

Biological life is shockingly effective at increasing entropy, to the point where abiogenesis can be seen as a thermodynamic evolutionary strategy. On top of that, the more complex the life, the more efficient said life is at that conversion. What I'm getting at is that there is a very real argument to be made that life, as well as consciousness, is an attempt by the universe to hasten its inevitable heat death.

It's not a mistake, it's a suicide attempt.

I don't think this is a very enlightening statement. All you've done is make an observation about life as a negentropic process and equate the second law of thermodynamics with suicide, just to give it that wooo dark and edgy nihilistic vibe. It's poetic but ultimately fatuous. Are you saying that a universe full of lifeless rocks and gas would be preferable? Moreover, using the terms "attempt" and "suicide" attributes agency to the universe, when all it is doing is acting out the laws of physics. Furthermore, if we take the strong anthropic principle to be sound, it would appear that life (and perhaps, by extension, consciousness) in a universe with our given arrangement of physical constants is inevitable - just another physical process guaranteed to occur in a universe that happened to form the way ours has. In that sense you couldn't ascribe any moral or subjective value to life's existence; it simply is, in the same sense that black holes are.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
I mean, that only enhances my point in a way, and the point of others who want to increase the number of worlds colonized by sentient beings. If consciousness is the highest expression of life's entropy-maximization drive, then if we wish to hasten the heat death of the universe, we should maximize consciousness.

EDIT: Also I'd argue that stars and black holes are far better entropy maximizers than living beings are. :goonsay:

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

Bug Squash posted:

If we're going to be tolerating edgelord jrpg villain speeches, I'll be muting the thread. Just got no interest in that noise.

We'll only tolerate them if they're being made by a superintelligent AGI, since it'll obviously know best and, if it does so, will have a pretty good reason for coming to that conclusion.

Here's an interesting article that suggests that "agency" - seemingly-intelligent behavior, or behavior that appears purposive - may be a trait that arises from physics and systems that process information: https://aeon.co/essays/the-biological-research-putting-purpose-back-into-life?utm_source=pocket-newtab

quote:

How, though, does an agent ever find the way to achieve its goal, if it doesn’t come preprogrammed for every eventuality it will encounter? For humans, that often tends to come from a mixture of deliberation, experience and instinct: heavyweight cogitation, in other words. Yet it seems that even ‘minimal agents’ can find inventive strategies, without any real cognition at all. In 2013, the computer scientists Alex Wissner-Gross at Harvard University and Cameron Freer, now at the Massachusetts Institute of Technology, showed that a simple optimisation rule can generate remarkably lifelike behaviour in simple objects devoid of biological content: for example, inducing them to collaborate to achieve a task or apparently to use other objects as tools.

Wissner-Gross and Freer carried out computer simulations of disks that moved around in a two-dimensional space, a little like cells or bacteria swimming on a microscope slide. The disk could follow any path through the space, but subject to a simple overarching rule: the disk’s movements and interactions had to maximise the entropy it generated over a specified window of time. Crudely speaking, this tended to entail keeping open the largest number of options for how the object might move – for example, it might elect to stay in open areas and avoid getting trapped in confined spaces. This requirement acted like a force – what Wissner-Gross and Freer dubbed an ‘entropic force’ – that guided the object’s movements.

Oddly, the resulting behaviours looked like intelligent choices, made to secure a goal. In one example, a large disk ‘used’ a small disk to extract a second small disk from a narrow tube – a process that looked remarkably like tool use. In another example, two disks in separate compartments synchronised their movements to manipulate a larger disk into a position where they could interact with it – behaviour that looked like social cooperation.
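
For fun, here's a toy version of that "keep your options open" idea in Python. To be clear, this is not the actual causal-entropy calculation from Wissner-Gross and Freer's paper - it just uses "number of distinct cells reachable within a fixed horizon" as a crude stand-in for future path entropy, which is enough to make a dot on a grid walk itself out of a dead end:

code:

from collections import deque

GRID = [
    "############",
    "#..........#",
    "#..........#",
    "#..........#",
    "#..........#",
    "#####.######",
    "##....######",
    "############",
]
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def open_cell(pos):
    r, c = pos
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "."

def reachable_count(start, horizon):
    # Breadth-first count of distinct open cells reachable within `horizon` steps.
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        (r, c), d = frontier.popleft()
        if d == horizon:
            continue
        for dr, dc in MOVES:
            nxt = (r + dr, c + dc)
            if open_cell(nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(seen)

def step(pos, horizon=5):
    # Greedily pick the legal move whose resulting position keeps the most cells reachable.
    r, c = pos
    candidates = [(r + dr, c + dc) for dr, dc in MOVES if open_cell((r + dr, c + dc))]
    return max(candidates, key=lambda p: reachable_count(p, horizon)) if candidates else pos

pos = (6, 2)   # start at the far end of the dead-end corridor at the bottom
path = [pos]
for _ in range(10):
    pos = step(pos)
    path.append(pos)
print(path)    # the agent walks out of the corridor, through the doorway, into the open room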

DrSunshine fucked around with this message at 02:50 on Nov 16, 2020

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Here's a good video to watch on this subject:

https://www.youtube.com/watch?v=Htf0XR6W9WQ

He goes into how we think about Existential Risk, and there's a pretty neat picture too: https://store.dftba.com/collections/domain-of-science/products/map-of-doom-poster

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

A big flaming stink posted:

This guy undersells climate change like crazy

I'm glad that it's at least mentioned as an XR.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Concerning the Doomsday Argument, my problem with it is that it gives the same result - "Doom soon" - with similar confidence no matter when in time an observer does the calculation. For example, say I was a philosopher in 8000 BC, at the dawn of agriculture, employing this reasoning. If I somehow knew that about a million humans had been born before me, then I would, by this logic, reason that it's much more likely that I'm one of only 10 million humans who will ever live than one of 7.7 billion or more.

Of course, we know from history that this early person's prediction would be wildly off. We would have reached 10 million people ever born by sometime before 1 CE.
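
For reference, here's the back-of-the-envelope Gott-style version of that calculation in Python (a sketch of the usual textbook formulation, using the hypothetical numbers from my example above):

code:

# Assumption: your birth rank r among all N humans who will ever be born is
# uniformly distributed, so P(r/N > 0.05) = 0.95, i.e. N < r / 0.05 = 20r
# with 95% confidence.
r = 1_000_000          # humans born before our hypothetical 8000 BC philosopher
confidence = 0.95
upper_bound = r / (1 - confidence)
print(f"With {confidence:.0%} confidence, no more than {upper_bound:,.0f} humans will ever be born")
# -> 20,000,000: a bound that actual history blew past long ago, which is the point.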

EDIT: Then there's the question of our reference class being "human". What counts as a human in our reasoning here? Further - what counts as "extinction"? For example, say at some point in the near future we gain the ability to upload our consciousnesses, and do so en masse, so that genetically modern humans cease to exist. Could we then be said to have gone extinct?

DrSunshine fucked around with this message at 00:42 on Jan 27, 2021

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

axeil posted:

Great thread! This is something I think about a fair amount although I haven't gone off the deep end like the Less Wrong people.

I usually end up thinking of it in context of the Fermi Paradox. For those not in the know, the Fermi Paradox, simply stated is: "where are all the aliens??"

More complexly stated its the inherent contradiction between the multitudes of habitable worlds in the galaxy (depends on how you calculate it but its at least in the high millions), versus how many sentient species we've seen in the galaxy (so far just the 1).

Our planet isn't particularly remarkable and it formed rather "late" in the overall timeline of the Universe. So if we look up in the night sky we should be seeing lots of alien life based on the prevalence of habitable worlds...but we aren't. Why?

There's a plethora of answers some very mundane (we just can't detect them), some :tinfoil: (they're already here and are lizard people!) but I want to focus on the much more interesting group: that our assumption is wrong and intelligent life is in fact, much, much rarer than we intuitively think based on the number of habitable systems.


I want to return to this point and piggyback off of it into something I've pondered about. Here's a possible Fermi paradox-adjacent question that I don't think I've seen stated anywhere else. It has a bit to do with some Anthropic reasoning.

So, many of those in the futurist community (Isaac Arthur et al) believe - as I do - that the universe in its long tail end might be more habitable, and offer more chances to harbor nascent intelligent civilizations, than it is in its early era. This is out of sheer statistics: an older universe with more quiet red dwarf stars that can burn stably for trillions of years gives many, many more chances for intelligent life to arise that can do things like observe the universe with astronomy and wonder why it exists.

So why is it that we observe a (fairly) young universe? As far as we can tell, the universe is only about 14 billion years old, out of a potential habitable range of tens of trillions of years. If the universe will be more amenable to life arising in the distant future, trillions of years from now, then the overwhelming probability is that we should find ourselves in that late period, not in just the first 14 billion years of its existence.
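
Just to put a rough number on that tension (a back-of-the-envelope sketch, assuming - which is exactly what's in question - that observers are spread uniformly over the universe's habitable lifetime):

code:

current_age_gyr = 14         # approximate present age of the universe, in billions of years
habitable_span_gyr = 20_000  # "tens of trillions of years" of red dwarf habitability, per above
p_this_early = current_age_gyr / habitable_span_gyr
print(f"P(a random observer appears in the first 14 Gyr) ~ {p_this_early:.2%}")  # ~0.07%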

This brings up some rather disturbing possible answers:

1) Something about the red dwarf era is inimical to the rise of intelligent life.

2) Intelligent life ceases to exist long before that era.

And a related conclusion from this line of reasoning: We live in the temporal habitable zone. Intelligent life arises as soon as it's possible: something about the ratio of metallicity in the 2nd or 3rd generation of stars that formed after the Big Bang, the conditions of stellar formation and universe expansion, etc, makes the period in which our solar system formed the most habitable that the universe could possibly be.

The above conclusion could be a potential Fermi Paradox answer - the reason why we don't see a universe full of ancient alien civilizations or the remains of their colossal megastructures is that all intelligent civilizations, us included, are at around the same level of advancement and just haven't had the time to reach each other yet. We are among the first, and all of us began around the same time: as soon as it became possible.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Someone has apparently made a documentary about the Simulation Hypothesis.

It's pretty interesting stuff if you're into Nick Bostrom and his ideas: https://www.simulation-argument.com/simulation.html

It's also recently had a pretty interesting counterargument from physicist David Kipping, which you can see here:
https://www.youtube.com/watch?v=HA5YuwvJkpQ

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Let's talk about Artificial Superintelligence (ASI), and how the XR community has a blind-spot about it and other potential X-risk technologies.

So, first let's address the question of just how feasible ASI is - is it worth all the hand-wringing that Silicon Valley and adjacent geeks seem to make of it, ever since Nick Bostrom popularized the idea in Superintelligence: Paths, Dangers, Strategies? The short answer is: as far as we can tell from what we have so far, we have no idea. The development of an artificial general intelligence rests on our solving certain philosophical questions about what consciousness, intelligence, and reasoning actually are, and right now our understanding of consciousness, cognitive science, neuroscience, and intelligence is woefully inadequate.

So it's probably a long way off. The weak AI that we have right now, which already poses rather dire questions about the nature of human work, automation, labor, and privacy, is probably not the path through which we will eventually produce a conscious, intelligent machine that can reason at the level of a human. Perhaps neuromorphic computing will be the path forward.

Nevertheless, no matter how far it is practically, we shouldn't write it off as impossible -- we know that at least human level intelligence can exist, because, well, we exist. If human-level intelligence can exist, it's possible that some physical process could be arranged such that it would display behavior that is more capable than humans. There's nothing about the physical laws of the universe that should prevent that from being the case.

To avoid getting bogged down in technical minutia, let's just call ASI and other potential humanity-ending technologies "X-risk tech". This includes potential future ASI, self-replicating nanobots, deadly genetically engineered bacteria, and so on. Properties that characterize X-risk tech are:

  • Low industrial footprint - Unlike a nuclear weapon, which requires massive industrial production chains and carries a large footprint in terms of human expertise, X-risk techs can be easily replicated or have the ability to self-replicate, which means they can be developed by multiple independent actors in the world.
  • Large impact - X-risk technologies enable the controller to enact their agenda on the world at a great multiple of their own personal reach. Arguably this is what makes them X-risk technologies, because an accident with extremely powerful or impactful technologies carries the risk of human extinction.

I think the X-risk community is right to worry about the proliferation of X-risk techs. But their criticisms restrict the space of concerns to the first level of control and mitigation - "How do we develop friendly AI? How do we develop control and error-correction mechanisms for self-replicating nanotechnology?" - or extend to a second-level question of game theory and strategy, essentially Cold War MAD strategy - "How do we ensure a strategic environment that's conducive to X-tech detente?"

I would like to propose a third-level of reasoning in regards to X-risk tech: to address the concern at the root cause. The cause is this: a socio-economic-political regime that incentivizes short-term gains in a context of multiple selfish actors operating under conditions of scarcity. Think about it. What entities have an incentive to develop an X-risk tech? We have self-interested nation-state actors that want a geopolitical advantage against regional or global rivals - think about the USA, China, Russia, Iran, Saudi Arabia, or North Korea. Furthermore, in a capitalist environment, we also have the presence of oligarchical interest groups that can command large amounts of economic power and political influence thanks to their control over a significant percentage of the means of production: hyper-wealthy individuals like Elon Musk and Jeff Bezos, large corporations, and financial organizations like hedge funds.

All of these contribute to a multipolar risk environment that could potentially deliver huge power benefits to the actors who are the first to develop X-risk techs. If an ASI were to be developed by a corporation, for example, it would be under a tremendous incentive to use that ASI's abilities to deliver profits. If an ASI were developed by some oligarchic interest group, it could deploy that tech to ransom the world and establish a singleton (a state in which it has unilateral freedom to act) and remake the future to its own benefit and not to the greater good.

Furthermore, the existence of a liberal capitalist world order actually incentivizes self-interested actors to develop X-tech, simply because of the enormous leverage someone who controls an X-tech could wield. This context of mutual zero-sum competition means that every group capable of investing in developing X-techs should rationally be making efforts to do so, because of the payoffs inherent in achieving them.
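
The incentive structure here is basically a prisoner's dilemma. Here's a toy sketch with made-up payoff numbers, just to show why "develop" ends up as the dominant strategy for every self-interested actor under those conditions:

code:

import itertools

players = ("A", "B")
actions = ("develop", "abstain")

# payoffs[(A's action, B's action)] = (payoff to A, payoff to B) -- invented numbers:
# being the sole developer wins big (+3), being the sole abstainer is a disaster (-3),
# mutual development courts accidents (-1 each), mutual restraint is safe but plain (0).
payoffs = {
    ("develop", "develop"): (-1, -1),
    ("develop", "abstain"): (3, -3),
    ("abstain", "develop"): (-3, 3),
    ("abstain", "abstain"): (0, 0),
}

def best_response(player, other_action):
    # The action that maximizes this player's payoff, holding the other player's action fixed.
    idx = players.index(player)
    def payoff(action):
        profile = (action, other_action) if idx == 0 else (other_action, action)
        return payoffs[profile][idx]
    return max(actions, key=payoff)

# A profile is a Nash equilibrium if each action is a best response to the other.
equilibria = [(a, b) for a, b in itertools.product(actions, repeat=2)
              if best_response("A", b) == a and best_response("B", a) == b]
print(equilibria)  # [('develop', 'develop')] -- mutual development is the only equilibrium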

On the opposite tack, contrast with what a system with democratic control of the means of production could accomplish. Under a world order of mutualism and class solidarity, society could collectively choose to prioritize techs that would benefit humanity in the long run, and collectively act to reduce X-risks, be that by colonizing space, progressing towards digital immortality, star-lifting, collective genetic uplift, and so on. Without a need to pull down one's neighbor in order to get ahead, a solidaristic society could afford to simply not develop X-techs in the first place, rather than being subject to perverse incentives to mitigate personal existential risks at the expense of collective existential risk.

It's clear to me, following this reasoning, that much of the concern with X-techs could be mitigated by advocating for and working toward the abolition of the capitalist system and the creation of a new one that works for the benefit of all.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

archduke.iago posted:

The fact that these various technologies need to be bundled into a catch-all category of X-technology should be a red flag: the framework you're describing is essentially identical to Millenarianism, the type of thinking that results in cults, everything from Jonestown to cargo cults. I don't think it's a coincidence that conceptual super-AI systems share many of the properties of God: all powerful, all knowing, and either able to bestow infinite pleasure or torture. As someone who actually researches/publishes on applications of AI, the discourse around AGI/ASI is pretty damaging.

First off, the premise motivating action doesn't make sense: advocates try to write off the minuscule probability of these technologies (it's telling that very few computer/data scientists are on the AGI train) by multiplying against "all the lives that will ever go on to exist." But this i) doesn't hold mathematically, since we don't know the comparative order of magnitude of each value and ii) this gets used as a bludgeon to justify why work in this area is of paramount importance, at the expense of everyday people and concerns (who, by the way, definitely exist).

Second, I don't think the ethical frameworks that the AGI nerds are working with generalize to the wider population. Their concern about what an AGI would do when given power is motivated by what they imagine they themselves would do, if given power. It's no coincidence that many Silicon Valley types speak of their companies revolutionizing society or maximizing impact in such a sociopathic manner.

Because these hypotheses are impossible to test, the discourse in this space ends up descending into punditry, with the most successful pundits being the ones whose message is most appealing to those in power. Since it's people like Thiel and Musk funding these cranks, it's inevitable that the message they've come out with is that tech nerds like themselves hold the future of humanity in their hands, that this work is of singular importance, and that whatever harm they might do to people's lives today pales in importance by comparison.

Agreed, very much, on all your points. I think the singular focus of many figures in existential risk research on ASI/AGI is really problematic, for all the same reasons you illustrate. It's also very problematic that so many of them are upper-class or upper-middle-class white men from Western countries. This field is starting to grow in prominence thanks to popular concern (rightful, in my opinion) over the survival of the species in the next century, and the fact that it is so totally dominated by such a limited demographic suggests to me that its priorities and focus are being skewed by ideological and cultural biases, just when it could be contributing a great deal to the conversation on climate change and socioeconomic inequality.

My own concerns are much more centered around sustainability and the survival of the human race as part of a planetary ecology, and also as a person of color, I'm very concerned that the biases of the existential risk research community will warp its potential contributions in directions that only end up reinforcing the entrenched liberal Silicon Valley mythos. Existential risk research needs to be wrenched away from libertarians, tech fetishists, Singularitarian cultists, and Silicon Valley elitists, and I think it's important to contribute non-white, non-male, non-capitalist voices to the discussion.

EDIT:

archduke.iago posted:

As someone who actually researches/publishes on applications of AI, the discourse around AGI/ASI is pretty damaging.

I'm not an AI researcher! Could you go into more detail, with some examples? I'd be interested to see how it affects or warps your own field.

DrSunshine fucked around with this message at 02:20 on Feb 27, 2021

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
That has also been a tangential worry for me - if, someday in the distant future, we create an AGI, what if we just end up creating a new race of sentient beings to exploit? We already have no problem treating real-life humans as objects, let alone actual machines that don't even inhabit a flesh-and-blood body. If we engineer an AGI that is bound to serve us, wouldn't that be akin to creating a sentient slave race? The thought is horrifying.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
There's no real reason to believe that an AGI would necessarily have any of the potentially godlike powers that many ASI/LessWrong theorists ascribe to it, unless we engineered it that way. Accidents with AGI would more likely resemble other historical industrial accidents, where a highly engineered and designed system goes haywire due to human negligence, safety failures from rushed planning, random external events, or some mixture of all those factors.

The larger problem, rather than the fact that AGI exists at all, would be the environment into which AGI is created. I would compare it to the Cold War and nuclear proliferation. In that case, the various global actors had an incentive both to develop nuclear weapons as a countermeasure to everyone else's, and to build up their arsenals quickly, to shrink the window in which they were vulnerable, with no recourse, to an adversary's first strike. This is a recipe for disaster with any sufficiently powerful technology, because it increases the chances of accidents caused by negligence.

Now carry that over to an AGI born into our present late-capitalist world order -- potentially a technology that needs nothing more than computer chips and software -- and you have a situation where any actor capable of developing AGI stands to lose significant profit, market share, or strategic power to whoever gets there first. The incentive -- and I would argue it's already present today -- is to develop AGI as soon as possible. I argue that we could reduce the chances of AGI accidents caused by human negligence by removing the profit and power upsides from that context.

As an aside, I definitely agree with archduke.iago that a lot of ASI talk ends up sounding like a sci-fi'ed up version of medieval scholars talking about God; see, for example, Pascal's Wager. ASI thought experiments like Roko's Basilisk are just Pascal's Wager with "ASI" substituted for God, almost one for one.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

alexandriao posted:

It's a fancy term created by rich people to abstract over and let them ignore the fact that they aren't doing anything tangible with their riches.

Why do you say that? Is it inherently bourgeois to contemplate human extinction? We do risk assessment and risk analysis based on probability all the time -- for insurance against disasters, preventing chemical leaks, hardening IT infrastructure against cyber attacks, and dealing with epidemic disease. Why should extending that to possible threats to human survival be tarred just because it's fashionable among Silicon Valley techbros?

I would argue that threats to civilization and human survival are too important to be left to bourgeois philosophers.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
It's funny looking at the Silicon Valley Titans of Industry who are Very Concerned about ASI, because they are so very close to getting it: the bogeyman they fear is precisely the kind of ASI that they themselves would create were the technology available today. Of course an amoral capitalist would create an intelligence whose capabilities are totally orthogonal to human conceptions of morality and values. That concept is, in itself, the very essence of the "rational optimizer" postulated by idealized classical capitalist economics.

EDIT: I myself have no philosophical issue with the idea that intelligence greater than humans' might be possible, and could be instantiated in architectures other than wetware. After all, we exist, and some humans are much more intelligent than others. If we accept that human intelligence is physical, and that evolution is a happy chemical accident, there's no reason why some kind of intelligent behavior couldn't arise in a different material substrate and inherit all the physical advantages and properties of that substrate. Where I take issue is that a lot of ASI philosophizing takes Nick Bostrom's axiom that "intelligence is orthogonal to values" as a given -- but we know so little about what "intelligence" truly comprises that it's far too early to accept that hypothesis, and any reasoning built on it may ultimately turn out to be flawed.

DrSunshine fucked around with this message at 15:09 on Mar 20, 2021

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

:golfclap:

This is a really good analysis here. And it’s one of the reasons why I made this thread! Thanks!

EDIT:

quote:

Even within this thread -- there are tangible works that could be read and enacted to improve the lives of those living locally, that would do more to fend off tangible threats like a neoconservative revolution, or the lifelong health effects of poverty and stress.

The Black Panther Party, in the mid-20th century, organized local community-run groups to feed children in the neighborhood. Some of those are still running, and are preventing children from starving, thus ensuring people have better immune systems going forward. That is a tangible goal that has a net positive impact on society right now and a tangible effect on certain classes of future risk. Mutual aid groups do more to stave off catastrophe not only by actually helping people, but also by teaching people how to support each other and how to organize future efforts towards an economic revolution -- a revolution which ultimately will (hopefully, depending on myriad factors) help to mitigate climate change, lift people out of poverty, and ensure people have access to clean water.


Sure! Of course. I am not saying "don't do that". My point is twofold:

1) That there's a legitimate reason to apply a left-wing analysis to the space of X-risk issues commonly brought up by the LessWrong types, issues which they seem to find unresolvable because they're blind to materialist and Marxist analyses.

2) There's a benefit to recasting present-day left actions and agitation in terms of larger-scale X-risks. Actions like mutual aid on a community level benefit people in the here and now, but the stated aim, the ultimate goal, should be to reduce X-risk to humanity, and spread life and consciousness across the entire observable universe.

DrSunshine fucked around with this message at 19:58 on Mar 20, 2021

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Necroing my own topic because this seems to really be blowing up. The Effective Altruism movement has a lot of ties to the Existential Risk community.

https://www.vox.com/future-perfect/...y-crytocurrency

quote:

It’s safe to say that effective altruism is no longer the small, eclectic club of philosophers, charity researchers, and do-gooders it was just a decade ago. It’s an idea, and group of people, with roughly $26.6 billion in resources behind them, real and growing political power, and an increasing ability to noticeably change the world.

EA, as a subculture, has always been characterized by relentless, sometimes navel-gazing self-criticism and questioning of assumptions, so this development has prompted no small amount of internal consternation. A frequent lament in EA circles these days is that there’s just too much money, and not enough effective causes to spend it on. Bankman-Fried, who got interested in EA as an undergrad at MIT, “earned to give” through crypto trading so hard that he’s now worth about $12.8 billion as of this writing, almost all of which he has said he plans to give away to EA-aligned causes. (Disclosure: Future Perfect, which is partly supported through philanthropic giving, received a project grant from Building a Stronger Future, Bankman-Fried’s philanthropic arm.)

Along with the size of its collective bank account, EA’s priorities have also changed. For a long time, much of the movement’s focus was on “near-termist” goals: reducing poverty or preventable death or factory farming abuses right now, so humans and animals can live better lives in the near-term.

But as the movement has grown richer, it is also increasingly becoming “longtermist.” That means embracing an argument that because so many more humans and other intelligent beings could live in the future than live today, the most important thing for altruistic people to do in the present moment is to ensure that that future comes to be at all by preventing existential risks — and that it’s as good as possible. The impending release of What We Owe to the Future, an anticipated treatise on longtermism by Oxford philosopher and EA co-founder Will MacAskill, is indicative of the shift.


The movement has also become more political — or, rather, its main benefactors have become more political. Bankman-Fried was one of the biggest donors to Joe Biden’s 2020 campaign, as were Cari Tuna and Dustin Moskovitz, the Facebook/Asana billionaires who before Bankman-Fried were by far the dominant financial contributors to EA causes. More recently, Bankman-Fried spent $10 million in an unsuccessful attempt to get Carrick Flynn, a longtime EA activist, elected to Congress from Oregon. Bankman-Fried has said he’ll spend “north of $100 million” on the 2024 elections, spread across a range of races; when asked in an interview with podcast host Jacob Goldstein if he would donate “a lot of money” to the candidate running against Trump, he replied, “That’s a pretty decent guess.”

But his motivations aren’t those of an ordinary Democratic donor — Bankman-Fried told Goldstein that fighting Trump was less about promoting Democrats than ensuring “sane governance” in the US, which could have “massive, massive, ripple effects on what the future looks like.” Indeed, Bankman-Fried is somewhat bipartisan in his giving. While the vast majority of his political donations have gone to Democrats, 16 of the 39 candidates endorsed by the Bankman-Fried-funded Guarding Against Pandemics PAC are Republicans as of this writing.

Effective altruism in 2022 is richer, weirder, and wields more political power than effective altruism 10, or even five years ago. It’s changing and gaining in importance at a rapid pace. The changes represent a huge opportunity — and also novel dangers that could threaten the sustainability and health of the movement. More importantly, the changes could either massively expand or massively undermine effective altruism’s ability to improve the broader world.
The origins of effective altruism

The term “effective altruism,” and the movement as a whole, can be traced to a small group of people based at Oxford University about 12 years ago.

In November 2009, two philosophers at the university, Toby Ord and Will MacAskill, started a group called Giving What We Can, which promoted a pledge whose takers commit to donating 10 percent of their income to effective charities every year (several Voxxers, including me, have signed the pledge).

In 2011, MacAskill and Oxford student Ben Todd co-founded a similar group called 80,000 Hours, which meant to complement Giving What We Can’s focus on how to give most effectively with a focus on how to choose careers where one can do a lot of good. Later in 2011, Giving What We Can and 80,000 Hours wanted to incorporate as a formal charity, and needed a name. About 17 people involved in the group, per MacAskill’s recollection, voted on various names, like the “Rational Altruist Community” or the “Evidence-based Charity Association.”

The winner was “Centre for Effective Altruism.” This was the first time the term took on broad usage to refer to this constellation of ideas.

The movement blended a few major intellectual sources. The first, unsurprisingly, came from philosophy. Over decades, Peter Singer and Peter Unger had developed an argument that people in rich countries are morally obligated to donate a large share of their income to help people in poorer countries. Singer memorably analogized declining to donate large shares of your income to charity to letting a child drowning in a pond die because you don’t want to muddy your clothes rescuing him. Hoarding wealth rather than donating it to the world’s poorest, as Unger put it, amounts to “living high and letting die.” Altruism, in other words, wasn’t an option for a good life — it was an obligation.

Ord told me his path toward founding effective altruism began in 2005, when he was completing his BPhil, Oxford’s infamously demanding version of a philosophy master’s. The degree requires that students write six 5,000-word, publication-worthy philosophy papers on pre-assigned topics, each over the course of a few months. One of the topics listed Ord’s year was, “Ought I to forgo some luxury whenever I can thereby enable someone else’s life to be saved?” That led him to Singer and Unger’s work, and soon the question — ought I forgo luxuries? which ones? how much? — began to consume his thoughts.

Then, Ord’s friend Jason Matheny (then a colleague at Oxford, today CEO of the Rand Corporation) pointed him to a project called DCP2. DCP stands for “Disease Control Priorities” and originated with a 1993 report published by the World Bank that sought to measure how many years of life could be saved by various public health projects. Ord was struck by just how vast the difference in cost-effectiveness between the interventions in the report was. “The best interventions studied were about 10,000 times better than the least good ones,” he notes.

It occurred to him that if residents of rich countries are morally obligated to help residents of less wealthy ones, they might be equally obligated to find the most cost-effective ways to help. Spending $50,000 on the most efficient project saved 10 times as many life-years as spending $50 million on the least efficient project would. Directing resources toward the former, then, would vastly increase the amount of good that rich-world donors could do. It’s not enough merely for EAs to give — they must give effectively.

Ord and his friends at Oxford weren’t the only ones obsessing over cost-effectiveness. Over in New York, an organization called GiveWell was taking shape. Founded in 2007 by Holden Karnofsky and Elie Hassenfeld, both alums of the eccentric hedge fund Bridgewater Associates, the group sought to identify the most cost-effective giving opportunities for individual donors. At the time, such a service was unheard of — charity evaluators at that point, like Charity Navigator, focused more on ensuring that nonprofits were transparent and spent little on overhead. By making judgments about which nonprofits to give to — a dollar to the global poor was far better than, say, a museum — GiveWell ushered in a sea change in charity evaluation.

Those opportunities were overwhelmingly found outside developed countries, primarily in global health. By 2011, the group had settled on recommending international global health charities focused on sub-Saharan Africa.

“Even the lowest-income people in the U.S. have (generally speaking) far greater material wealth and living standards than the developing-world poor,” the group explains today. “We haven’t found any US poverty-targeting intervention that compares favorably to our international priority programs” in terms of quality of evidence or cost-effectiveness. If the First Commandment of EA is to give, and the Second Commandment is to do so effectively, the Third Commandment is to do so where the problem is tractable, meaning that it’s actually possible to change the underlying problem by devoting more time and resources to it. And as recent massive improvements in life expectancy suggest, global health is highly tractable.

Before long, it was clear that Ord and his friends in Oxford were doing something very similar to what Hassenfeld and Karnofsky were doing in Brooklyn, and the two groups began talking (and, of course, digging into each other’s cost-effectiveness analyses, which in EA is often the same thing). That connection would prove immensely important to effective altruism’s first surge in funding.

In 2011, the GiveWell team made two very important new friends: Cari Tuna and Dustin Moskovitz.

The latter was a co-founder of Facebook; today he runs the productivity software company Asana. He and his wife Tuna, a retired journalist, command some $13.8 billion as of this writing, and they intend to give almost all of it away to highly effective charities. As of July 2022, their foundation has given out over $1.7 billion in publicly listed grants.

After connecting with GiveWell, they wound up using the organization as a home base to develop what is now Open Philanthropy, a spinoff group whose primary task is finding the most effective recipients for Tuna and Moskovitz’s fortune. Because of the vastness of that fortune, Open Phil’s comparatively long history (relative to, say, FTX Future Fund), and the detail and rigor of its research reports on areas it’s considering funding, the group has become by far the most powerful single entity in the EA world.

Tuna and Moskovitz were the first tech fortune in EA, but they would not be the last. Bankman-Fried, the child of two “utilitarian leaning” Stanford Law professors, embraced EA ideas as an undergraduate at MIT, and decided to “earn to give.”

After graduation in 2014, he went to a small firm called Jane Street Capital, then founded the trading firm Alameda Research and later FTX, an exchange for buying and selling crypto and crypto-related assets, like futures. By 2021, FTX was valued at $18 billion, making the then-29-year-old a billionaire many times over. He has promised multiple times to give almost that entire fortune away.

"It’s safe to say that effective altruism is no longer the small, eclectic club of philosophers, charity researchers, and do-gooders it was just a decade ago."

The steady stream of billionaires embracing EA has left it in an odd situation: It has a lot of money, and substantial uncertainty about where to put it all, uncertainty which tends to grow rather than ebb with the movement’s fortunes. In July 2021, Ben Todd, who co-founded and runs 80,000 Hours, estimated that the movement had, very roughly, $46 billion at its disposal, an amount that had grown by 37 percent a year since 2015. And only 1 percent of that was being spent every year.

Moreover, the sudden wealth altered the role longtime, but less wealthy, EAs play in the movement. Traditionally, a key role of many EAs was donating to maximize funding to effective causes. Jeff Kaufman, one of the EAs engaged in earning-to-give who I profiled back in 2013, until recently worked as a software engineer at Google. In 2021, he and his wife Julia Wise (an even bigger figure in EA as the full-time community liaison for the Center for Effective Altruism) earned $782,158 and donated $400,000 (they make all these numbers public for transparency).

That’s hugely admirable, and much, much more than I donated last year. But that same year, Open Phil distributed over $440 million (actually over $480 million due to late grants, a spokesperson told me). Tuna and Moskovitz alone had the funding capacity of over a thousand less-wealthy EAs, even high-profile EAs dedicated to the movement who worked at competitive, six-figure jobs. Earlier this year, Kaufman announced he was leaving Google, and opting out of “earning to give” as a strategy, to do direct work for the Nucleic Acid Observatory, a group that seeks to use wastewater samples to detect future pandemics early. Part of his reasoning, he wrote on his blog, was that “There is substantially more funding available within effective altruism, and so the importance of earning to give has continued to decrease relative to doing things that aren’t mediated by donations.”

That said, the new funding comes with a lot of uncertainty and risk attached. Given how exposed EA is to the financial fortunes of a handful of wealthy individuals, swings in the markets can greatly affect the movement’s short-term funding conditions.

In June 2022, the crypto market crashed, and Bankman-Fried’s net worth, as estimated by Bloomberg, crashed with it. He peaked at $25.9 billion on March 29, and as of June 30 was down more than two-thirds to $8.1 billion; it’s since rebounded to $12.8 billion. That’s obviously nothing to sneeze at, and his standard of living isn’t affected at all. (Bankman-Fried is the kind of vegan billionaire known for eating frozen Beyond Burgers, driving a Corolla, and sleeping on a bean bag chair.) But you don’t need to have Bankman-Fried’s math skills to know that $25.9 billion can do a lot more good than $12.8 billion.

Tuna and Moskovitz, for their part, still hold much of their wealth in Facebook stock, which has been sliding for months. Moskovitz’s Bloomberg-estimated net worth peaked at $29 billion last year. Today it stands at $13.8 billion. “I’ve discovered ways of losing money I never even knew I had in me,” he jokingly tweeted on June 19.

But markets change fast, crypto could surge again, and in any case Moskovitz and Bankman-Fried’s combined net worth of $26.5 billion is still a lot of money, especially in philanthropic terms. The Ford Foundation, one of America’s longest-running and most prominent philanthropies, is only worth $17.4 billion. EA now commands one of the largest financial arsenals in all of US philanthropy. And the sheer bounty of funding is leading to a frantic search for places to put it.

One option for that bounty is to look to the future — the far future. In February 2022, the FTX Foundation, a philanthropic entity founded chiefly by Bankman-Fried, along with his FTX colleagues Gary Wang and Nishad Singh and his Alameda colleague Caroline Ellison, announced its “Future Fund”: a project meant to donate money to “improve humanity’s long-term prospects” through the “safe development of artificial intelligence, reducing catastrophic biorisk, improving institutions, economic growth,” and more.

The fund announced it was looking to spend at least $100 million in 2022 alone, and it already has: On June 30, barely more than four months after the fund’s launch, it stated that it had already given out $132 million. Giving money out that fast is hard. Doing so required giving in big quantities ($109 million was spent on grants over $500,000 each), as well as unusual methods like “regranting” — giving over 100 individuals trusted by the Future Fund budgets of hundreds of thousands or even millions of dollars each, and letting them distribute it as they like.

The rush of money led to something of a gold-rush vibe in the EA world, enough so that Nick Beckstead, CEO of the FTX Foundation and a longtime grant-maker for Open Philanthropy, posted an update in May clarifying the group’s methods. “Some people seem to think that our procedure for approving grants is roughly ‘YOLO #sendit,’” he wrote. “This impression isn’t accurate.”

But that impression nonetheless led to significant soul-searching in the EA community. The second most popular post ever on the EA Forum, the highly active message board where EAs share ideas in minute detail, is grimly titled, “Free-spending EA might be a big problem for optics and epistemics.” Author George Rosenfeld, a founder of the charitable fundraising group Raise, worried that the big surge in EA funding could lead to free-spending habits that alter the movement’s culture — and damage its reputation by making it look like EAs are using billionaires’ money to fund a cushy lifestyle for themselves, rather than sacrificing themselves to help others.

Rosenfeld’s is the second most popular post on the EA Forum. The most popular post is a partial response to him on the same topic by Will MacAskill, one of EA’s founders. MacAskill is now deeply involved in helping decide where the funding goes. Not only is he the movement’s leading intellectual, he’s on staff at the FTX Future Fund and an advisor at the EA grant-maker Longview Philanthropy.

He began, appropriately: “Well, things have gotten weird, haven’t they?”
The shift to longtermism

Comparing charities fighting global poverty is really hard. But it’s also, in a way, EA-on-easy-mode. You can actually run experiments and see if distributing bed nets saves lives (it does, by the way). The outcomes of interest are relatively short-term and the interventions evaluated can be rigorously tested, with little chance that giving will do more harm than good.

Hard mode comes in when you expand the group of people you’re aiming to help from humans alive right now to include humans (and other animals) alive thousands or millions of years from now.

From 2015 to the present, Open Philanthropy distributed over $480 million to causes it considers related to “longtermism.” All $132 million given to date by the FTX Future Fund is, at least in theory, meant to promote longtermist ideas and goals.

Which raises an obvious question: What the gently caress is longtermism?

The basic idea is simple: We could be at the very, very start of human history. Homo sapiens emerged some 200,000-300,000 years ago. If we destroy ourselves now, through nuclear war or climate change or a mass pandemic or out-of-control AI, or fail to prevent a natural existential catastrophe, those 300,000 years could be it.
"He began, appropriately: “Well, things have gotten weird, haven’t they?” "

But if we don’t destroy ourselves, they could just be the beginning. Typical mammal species last 1 million years — and some last much longer. Economist Max Roser at Our World in Data has estimated that if (as the UN expects) the world population stabilizes at 11 billion, greater wealth and nutrition lead average life expectancy to rise to 88, and humanity lasts another 800,000 years (in line with other mammals), there could be 100 trillion potential people in humanity’s future.

By contrast, only about 117 billion humans have ever lived, according to calculations by demographers Toshiko Kaneda and Carl Haub. In other words, if we stay alive for the duration of a typical mammalian species’ tenure on Earth, that means 99.9 percent of the humans who will ever live have yet to live.

And those people, obviously, have virtually no voice in our current society, no vote for Congress or president, no union and no lobbyist. Effective altruists love finding causes that are important and neglected: What could be more important, and more neglected, than the trillions of intelligent beings in humanity’s future?

In 1984, Oxford philosopher Derek Parfit published his classic book on ethics, Reasons and Persons, which ended with a meditation on nuclear war. He asked readers to consider three scenarios:

1. Peace.
2. A nuclear war that kills 99 percent of the world’s existing population.
3. A nuclear war that kills 100 percent.

Obviously 2 and 3 are worse than 1. But Parfit argued that the difference between 1 and 2 paled in comparison to the difference between 2 and 3. “Civilization began only a few thousand years ago,” he noted, “If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history.” Scenario 3 isn’t just worse than 2, it’s dramatically worse, because by killing off the final 1 percent of humanity, scenario 3 destroys humanity’s whole future.

This line of thinking has led EAs to foreground existential threats as an especially consequential cause area. Even before Covid-19, EAs were early in being deeply concerned about the risk of a global pandemic, especially a human-made one coming about due to ever-cheaper biotech tools like CRISPR, which could be far worse than anything nature can cook up. Open Philanthropy spent over $65 million on the issue, including seven- and eight-figure grants to the Johns Hopkins Center for Health Security and the Nuclear Threat Initiative’s biodefense team, before 2020. It’s added another $70 million since. More recently, Bankman-Fried has funded a group led by his brother, Gabe, called Guarding Against Pandemics, which lobbies Congress to fund future pandemic prevention more aggressively.

Nuclear war has gotten some attention too: Longview Philanthropy, an EA-aligned grant-maker supported by both Open Philanthropy and FTX, recently hired Carl Robichaud, a longtime nuclear policy grant-maker, partly in reaction to more traditional donors like the MacArthur Foundation pulling back from trying to prevent nuclear war.

But it is AI that has been a dominant focus in EA over the last decade. In part this reflects the very real belief among many AI researchers that human-level AI could be coming soon — and could be a threat to humanity.

This is in no way a universal belief, but it’s a common enough one to be worrisome. A poll this year found that leading AI researchers put around 50-50 odds on AI surpassing humans “in all tasks” by 2059 — and that was before some of the biggest strides in recent AI research over the last five years. I will be 71 years old in 2061. It’s not even the long-term future; it’s within my expected lifetime. If you really believe superintelligent, perhaps impossible-to-control machines are coming in your lifetime, it makes sense to panic and spend big.

That said, the AI argument strikes many outside EA as deeply wrong-headed, even offensive. If you care so much about the long term, why focus on this when climate change is actually happening right now? And why care so much about the long term when there is still desperate poverty around the world? The most vociferous critics see the longtermist argument as a con, an excuse to do interesting computer science research rather than work directly in the Global South to solve actual people’s problems. The more temperate see longtermism as dangerously alienating effective altruists from the day-to-day practice of helping others.

I know this because I used to be one of these critics. I think, in retrospect, I was wrong, and I was wrong for a silly reason: I thought the idea of a super-intelligent AI was ridiculous, that these kind of nerdy charity folks had read too much sci-fi and were fantasizing wildly.

I don’t think that anymore. The pace of improvement in AI has gotten too rapid to ignore, and the damage that even dumb AI systems can do, when given too much societal control, is extreme. But I empathize deeply with people who have the reaction I did in 2015: who look at EA and see people who talked themselves out of giving money to poor people and into giving money to software engineers.

Moreover, while I buy the argument that AI safety is an urgent, important problem, I have much less faith that anyone has a tractable strategy for addressing it. (I’m not alone in that uncertainty — in a podcast interview with 80,000 Hours, Bankman-Fried said of AI risk, “I think it’s super important and I also don’t feel extremely confident on what the right thing to do is.”)

That, on its own, might not be a reason for inaction: If you have no reliable way to address a problem you really want to address, it sometimes makes sense to experiment and fund a bunch of different approaches in hopes that one of them will work. This is what funders like Open Phil have done to date.

But that approach doesn’t necessarily work when there’s huge “sign uncertainty” — when an intervention has a reasonable chance of making things better or worse.

This is a particularly relevant concern for AI. One of Open Phil’s early investments was a $30 million grant in 2017 to OpenAI, which has since emerged as one of the world’s leading AI labs. It has created the popular GPT-3 language model and DALL-E visual model, both major steps forward for machine learning models. The grant was intended to help by “creating an environment in which people can effectively do technical research on AI safety.” It may have done that — but it also may have simply accelerated the pace of progress toward advanced AI in a way that amplifies the dangers such AI represents. We just don’t know.

Partially for those reasons, I haven’t started giving to AI or longtermist causes just yet. When I donate to buy bed nets, I know for sure that I’m actually helping, not hurting. Our impact on the far future, though, is always less certain, no matter our intentions.
The move to politics

EA’s new wealth has also allowed it vastly more influence in an arena where the movement is bound to gain more attention and make new enemies: politics.

EA has always been about getting the best bang for your buck, and one of the best ways for philanthropists to get what they want has always been through politics. A philanthropist can donate $5 million to start their own school … or they can donate $5 million to lobby for education reforms that mold existing schools more like their ideal. The latter almost certainly will affect more students than the former.

So from at least the mid-2010s, EAs, and particularly EA donors, embraced political change as a lever, and they have some successes to show for it. The late 2010s shift of the Federal Reserve toward caring more about unemployment and less about inflation owes a substantial amount to advocacy from groups like Fed Up and Employ America — groups for which Open Philanthropy was the principal funder.

Tuna and Moskovitz have been major Democratic donors since 2016, when they spent $20 million for the party in an attempt to beat Donald Trump. The two gave even more, nearly $50 million, in 2020, largely through the super-PAC Future Forward. Moskovitz was the group’s dominant donor, but former Google CEO Eric Schmidt, Twitter co-founder Evan Williams, and Bankman-Fried supported it too. The watchdog group OpenSecrets listed Tuna as the 7th biggest donor to outside spending groups involved in the 2020 election — below the likes of the late Sheldon Adelson or Michael Bloomberg, but far above big-name donors like George Soros or Reid Hoffman. Bankman-Fried took 47th place, above the likes of Illinois governor and billionaire J.B. Pritzker and Steven Spielberg.

As in philanthropy, the EA political donor world has focused obsessively on maximizing impact per dollar. David Shor, the famous Democratic pollster, has consulted for Future Forward and similar groups for years; one of my first in-person interactions with him was at an EA Global conference in 2018, where he was trying to understand these people who were suddenly very interested in funding Democratic polling. He told me that Moskovitz’s team was the first he had ever seen who even asked how many votes-per-dollar a given ad buy or field operation would produce.

Bankman-Fried has been, if anything, more enthusiastic about getting into politics than Tuna and Moskovitz. His mother, Stanford Law professor Barbara Fried, helps lead the multi-million dollar Democratic donor group Mind the Gap. The pandemic prevention lobbying effort led by his brother Gabe was one of his first big philanthropic projects. And Protect Our Future, a super PAC he’s the primary supporter of that’s led by longtime Shor colleague and dedicated EA Michael Sadowsky, has spent big on the 2022 midterms already. That includes $10 million supporting Carrick Flynn, a longtime EA who co-founded the Center for the Governance of AI at Oxford, in his unsuccessful run for Congress in Oregon.

That intervention made perfect sense if you’re immersed in the EA world. Flynn is a true believer; he’s obsessed with issues like AI safety and pandemic prevention. Getting someone like him in Congress would give the body a champion for those causes, which are largely orphaned within the House and Senate right now, and could go far with a member monomaniacally focused on them.

But to Oregon voters, little of it made sense. Willamette Week, the state’s big alt-weekly, published a cover-story exposé portraying the bid as a Bahamas-based crypto baron’s attempt to buy a seat in Congress, presumably to further crypto interests. It didn’t help that Bankman-Fried had made several recent trips to testify before Congress and argue for his preferred model of crypto regulation in the US — or that he prominently appeared at an FTX-sponsored crypto event in the Bahamas with Bill Clinton and Tony Blair, in a flex of his new wealth and influence. Bankman-Fried is lobbying Congress on crypto, he’s bankrolling some guy’s campaign for Congress — and he expects the world to believe that he isn’t doing that to get what he wants on crypto?

It was a big optical blunder, one that threatened to make not just Bankman-Fried but all of EA look like a craven cover for crypto interests. The Flynn campaign was a reminder of just how much of a culture gap remains between EA and the wider world, and in particular the world of politics.

And that gap could widen still more, and become more problematic as longtermism, with all its strangeness, becomes a bigger part of EA. “We should spend more to save people in poor countries from preventable diseases” is an intelligible, if not particularly widely held, position in American politics. “We should be representing the trillions of people who could be living millions of years from now” is not.
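
As an aside, the headline numbers in the longtermist pitch quoted above are easy to sanity-check. Here is a rough back-of-envelope sketch in Python; every input is one of the article's stated assumptions (a stable population of 11 billion, 88-year lifespans, another 800,000 years of humanity), and this is just one simple way to get from those inputs to the quoted outputs -- Roser's own method may differ in detail.

code:

# Back-of-envelope reproduction of the figures quoted above.
stable_population = 11e9       # UN-projected stable world population (article's assumption)
average_lifespan_years = 88    # assumed future life expectancy (article's assumption)
years_remaining = 800_000      # a typical mammal species' remaining run (article's assumption)

# Roughly one full population turnover per average lifespan:
future_people = stable_population * (years_remaining / average_lifespan_years)
past_people = 117e9            # Kaneda and Haub's estimate of humans ever born

print(f"future people: {future_people:.1e}")   # ~1.0e+14, i.e. about 100 trillion
share_yet_to_live = future_people / (future_people + past_people)
print(f"share of all humans yet to live: {share_yet_to_live:.1%}")   # ~99.9%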


An article in the New Yorker about Will MacAskill, whose new book just came out:
https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism

quote:

The philosopher William MacAskill credits his personal transfiguration to an undergraduate seminar at Cambridge. Before this shift, MacAskill liked to drink too many pints of beer and frolic about in the nude, climbing pitched roofs by night for the life-affirming flush; he was the saxophonist in a campus funk band that played the May Balls, and was known as a hopeless romantic. But at eighteen, when he was first exposed to “Famine, Affluence, and Morality,” a 1972 essay by the radical utilitarian Peter Singer, MacAskill felt a slight click as he was shunted onto a track of rigorous and uncompromising moralism. Singer, prompted by widespread and eradicable hunger in what’s now Bangladesh, proposed a simple thought experiment: if you stroll by a child drowning in a shallow pond, presumably you don’t worry too much about soiling your clothes before you wade in to help; given the irrelevance of the child’s location—in an actual pond nearby or in a metaphorical pond six thousand miles away—devoting resources to superfluous goods is tantamount to allowing a child to drown for the sake of a dry cleaner’s bill. For about four decades, Singer’s essay was assigned predominantly as a philosophical exercise: his moral theory was so onerous that it had to rest on a shaky foundation, and bright students were instructed to identify the flaws that might absolve us of its demands. MacAskill, however, could find nothing wrong with it.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Existential risk philosopher Phil Torres (whom I reviewed most favorably in my OP) wrote a Current Affairs article that clearly sums up a lot of my criticisms of the "longtermist/EA/XR" community's philosophical assumptions:

https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk

quote:

Longtermism should not be confused with “long-term thinking.” It goes way beyond the observation that our society is dangerously myopic, and that we should care about future generations no less than present ones. At the heart of this worldview, as delineated by Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.

This is what “our potential” consists of, and it constitutes the ultimate aim toward which humanity as a whole, and each of us as individuals, are morally obligated to strive. An existential risk, then, is any event that would destroy this “vast and glorious” potential, as Toby Ord, a philosopher at the Future of Humanity Institute, writes in his 2020 book The Precipice, which draws heavily from earlier work in outlining the longtermist paradigm. (Note that Noam Chomsky just published a book also titled The Precipice.)

The point is that when one takes the cosmic view, it becomes clear that our civilization could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years. So, imagine a situation in which you could either lift 1 billion present people out of extreme poverty or benefit 0.00000000001 percent of the 10^23 biological humans who Bostrom calculates could exist if we were to colonize our cosmic neighborhood, the Virgo Supercluster. Which option should you pick? For longtermists, the answer is obvious: you should pick the latter. Why? Well, just crunch the numbers: 0.00000000001 percent of 10^23 people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As the FHI longtermists Hilary Greaves and Will MacAskill—the latter of whom is said to have cofounded the Effective Altruism movement with Toby Ord—write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”

...

All of this is to say that I’m not especially optimistic about convincing longtermists that their obsession with our “vast and glorious” potential (quoting Ord again) could have profoundly harmful consequences if it were to guide actual policy in the world. As the Swedish scholar Olle Häggström has disquietingly noted, if political leaders were to take seriously the claim that saving billions of living, breathing, actual people today is morally equivalent to negligible reductions in existential risk, who knows what atrocities this might excuse? If the ends justify the means, and the “end” in this case is a veritable techno-Utopian playground full of 10^58 simulated posthumans awash in “the pulsing ecstasy of love,” as Bostrom writes in his grandiloquent “Letter from Utopia,” would any means be off-limits? While some longtermists have recently suggested that there should be constraints on which actions we can take for the far future, others like Bostrom have literally argued that preemptive violence and even a global surveillance system should remain options for ensuring the realization of “our potential.” It’s not difficult to see how this way of thinking could have genocidally catastrophic consequences if political actors were to “[take] Bostrom’s argument to heart,” in Häggström’s words.
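
Just to spell out the arithmetic Torres is criticizing, it really is a single multiplication. A quick sketch using only the numbers quoted above:

code:

# The expected-value comparison quoted above, spelled out.
future_people = 1e23                   # Bostrom's figure for a colonized Virgo Supercluster
tiny_fraction = 0.00000000001 / 100    # "0.00000000001 percent", as a proportion

far_future_beneficiaries = future_people * tiny_fraction
present_beneficiaries = 1e9            # lifting 1 billion people out of extreme poverty

print(f"far-future beneficiaries: {far_future_beneficiaries:.0e}")   # 1e+10, i.e. 10 billion
print(f"ratio: {far_future_beneficiaries / present_beneficiaries:.0f}x")   # 10x

The whole conclusion is driven by the size you assume for that far-future population, which is exactly the problem.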

They're also behaving like a creepy mind-control cult:

quote:

In fact, numerous people have come forward, both publicly and privately, over the past few years with stories of being intimidated, silenced, or “canceled.” (Yes, “cancel culture” is a real problem here.) I personally have had three colleagues back out of collaborations with me after I self-published a short critique of longtermism, not because they wanted to, but because they were pressured to do so from longtermists in the community. Others have expressed worries about the personal repercussions of openly criticizing Effective Altruism or the longtermist ideology. For example, the moral philosopher Simon Knutsson wrote a critique several years ago in which he notes, among other things, that Bostrom appears to have repeatedly misrepresented his academic achievements in claiming that, as he wrote on his website in 2006, “my performance as an undergraduate set a national record in Sweden.” (There is no evidence that this is true.) The point is that, after doing this, Knutsson reports that he became “concerned about his safety” given past efforts to censure certain ideas by longtermists with clout in the community.

EDIT:

Given that OpenAI, which has recently been in the news with DALL-E, has received substantial funding from Open Philanthropy, an organization ostensibly concerned with AI safety and existential risk, I feel like there's almost a kind of dialectical irony in this. Just as Marx wrote in the Communist Manifesto:

quote:

The development of modern industry, therefore, cuts from under its feet the very foundation on which the bourgeoisie produces and appropriates products. What the bourgeoisie therefore produces, above all, are its own grave diggers.

I can't help but wonder, given the incredibly creepy advances OpenAI has made recently, whether AI safety research into AGI risks instantiating the very thing it fears most - an Unfriendly AI, or some sort of immortal, posthuman oligarchy formed from currently existing billionaires. I fear that the longtermist movement is becoming humanity's own grave diggers.

DrSunshine fucked around with this message at 18:13 on Aug 19, 2022

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

A big flaming stink posted:

Christ it always comes down to rokos basilisk doesn't it.

E: also to equally weigh potential lives with current lives just says to me your utility function is dog poo poo

It's basilisks all the way down!

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

alexandriao posted:

Wasn't literally started by billionaires lol

Yeah... about that...

Look, altruism is very effective when you give the money to yourself.
