Bug Squash
Mar 18, 2009

Bar Ran Dun posted:

Right, Mars is solid because it’s smaller and didn’t have a late collision. That’s not the only way to get one but it’s probably why we have ours.

In other observed solar systems the pattern for gas giants is to keep migrating inward, so they end up closer to the star. If that happens they end up hoovering up or ejecting the rocky planets. Something weird happened with Jupiter and at least one of our other gas giants that stopped that.

Ok, this is flat out wrong. Most of the initial thermal energy of a rocky planet gets lost "relatively" quickly. That's what led Lord Kelvin to erroneously claim that the Earth could only be a few tens of millions of years old. The mega-collision would have liquefied the planet and dumped a load of thermal energy into the system, but Earth and Venus have retained molten cores mostly due to energy released by radioactive decay, and because their sheer size makes it easier to hold onto their initial accretion energy.
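To put rough numbers on that, here's a minimal back-of-envelope sketch (in Python) of the conductive-cooling argument Kelvin used, with assumed round values for the initial temperature, rock diffusivity, and geothermal gradient; it lands at tens of millions of years precisely because it ignores radioactive heating and mantle convection:

```python
# Back-of-envelope version of Kelvin's conductive-cooling argument.
# All numbers are assumed, illustrative values, not a real thermal model of the Earth.
import math

T0 = 2000.0          # assumed initial temperature of the molten Earth, K
kappa = 1.2e-6       # assumed thermal diffusivity of rock, m^2/s
gradient = 0.036     # assumed present-day near-surface geothermal gradient, K/m

# For a conductively cooling half-space, the surface gradient at time t is
#   dT/dz = T0 / sqrt(pi * kappa * t),  so  t = T0^2 / (pi * kappa * gradient^2)
t_seconds = T0**2 / (math.pi * kappa * gradient**2)
t_myr = t_seconds / (3.15e7 * 1e6)
print(f"Kelvin-style age estimate: ~{t_myr:.0f} million years")
# Gives tens of millions of years -- far short of 4.5 billion, because the model
# ignores radiogenic heating and convective heat transport in the mantle.
```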

With regards to whether we have a particularly strange arrangement of gas giants, we just don't have enough data yet. All our methods for detecting exoplanets are massively biased, making certain types of systems much easier to detect. The biggest bias is towards systems where the gas giants have migrated inwards. We don't know if that's typical behaviour yet; it's just the type of system that's easiest to find.
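To give a feel for the scale of that bias, here's a small sketch using the standard idealised radial-velocity and transit-probability formulas; the stellar and planetary values are assumed, not taken from any particular survey:

```python
# Rough illustration of exoplanet detection bias (idealised formulas, assumed values).
# Radial-velocity semi-amplitude: K ~ 28.4 m/s * (Mp/Mjup) * (Mstar/Msun)^(-2/3) * (P/1yr)^(-1/3)
# Transit probability (circular orbit): ~ Rstar / a
R_SUN_AU = 0.00465  # solar radius in AU

def rv_amplitude_ms(mp_jup, period_years, mstar_sun=1.0):
    return 28.4 * mp_jup * mstar_sun**(-2/3) * period_years**(-1/3)

def transit_probability(a_au, rstar_sun=1.0):
    return rstar_sun * R_SUN_AU / a_au

# A "hot Jupiter" (3-day orbit, a ~ 0.04 AU) vs a Jupiter analogue (12-year orbit, a ~ 5.2 AU)
for name, period_yr, a_au in [("hot Jupiter", 3 / 365.25, 0.04),
                              ("Jupiter analogue", 11.9, 5.2)]:
    print(f"{name}: K ~ {rv_amplitude_ms(1.0, period_yr):.0f} m/s, "
          f"transit chance ~ {transit_probability(a_au) * 100:.2f}%")
# The close-in giant gives a radial-velocity signal ~10x larger and a transit
# chance ~100x higher, so migrated systems are simply what surveys see first.
```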

Bug Squash fucked around with this message at 17:54 on Feb 4, 2021


Bar Ran Dun
Jan 22, 2006




Here's my understanding of it from an old Scientific American article:

“It takes a rather long time for heat to move out of the earth. This occurs through both "convective" transport of heat within the earth's liquid outer core and solid mantle and slower "conductive" transport of heat through nonconvecting boundary layers, such as the earth's plates at the surface. As a result, much of the planet's primordial heat, from when the earth first accreted and developed its core, has been retained.

The amount of heat that can arise through simple accretionary processes, bringing small bodies together to form the proto-earth, is large: on the order of 10,000 kelvins (about 18,000 degrees Fahrenheit). The crucial issue is how much of that energy was deposited into the growing earth and how much was reradiated into space. Indeed, the currently accepted idea for how the moon was formed involves the impact or accretion of a Mars-size object with or by the proto-earth. When two objects of this size collide, large amounts of heat are generated, of which quite a lot is retained. This single episode could have largely melted the outermost several thousand kilometers of the planet.

Additionally, descent of the dense iron-rich material that makes up the core of the planet to the center would produce heating on the order of 2,000 kelvins (about 3,000 degrees F). The magnitude of the third main source of heat--radioactive heating--is uncertain. The precise abundances of radioactive elements (primarily potassium, uranium and thorium) are poorly known in the deep earth.

In sum, there was no shortage of heat in the early earth, and the planet's inability to cool off quickly results in the continued high temperatures of the Earth's interior. In effect, not only do the earth's plates act as a blanket on the interior, but not even convective heat transport in the solid mantle provides a particularly efficient mechanism for heat loss. The planet does lose some heat through the processes that drive plate tectonics, especially at mid-ocean ridges. For comparison, smaller bodies such as Mars and the Moon show little evidence for recent tectonic activity or volcanism.”

Now, the picture painted above might be getting dated; it is 30 years old. If there is a newer model, post it.
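As a rough sanity check on the article's accretion figure, here's a back-of-envelope sketch (uniform-sphere binding energy and an assumed average specific heat; nothing more than an order-of-magnitude estimate):

```python
# Order-of-magnitude check on the article's "~10,000 K from accretion" figure.
# Gravitational binding energy of a uniform sphere: E = 3*G*M^2 / (5*R).
# The uniform-sphere assumption and the specific heat are rough; this is only a sanity check.
G = 6.674e-11        # m^3 kg^-1 s^-2
M = 5.97e24          # Earth mass, kg
R = 6.371e6          # Earth radius, m
c_p = 1000.0         # assumed average specific heat of rock/iron, J/(kg K)

E = 3 * G * M**2 / (5 * R)
delta_T = E / (M * c_p)
print(f"Binding energy ~ {E:.1e} J, temperature rise if all retained ~ {delta_T:.0f} K")
# Gives a few times 10^4 K as an upper bound; with much of that heat re-radiated
# during accretion, "on the order of 10,000 kelvins" retained is the right ballpark.
```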

Bug Squash
Mar 18, 2009

I don't think I disagree with anything you just posted. My issue is with the claim that the giant impact, assuming it happened, is necessary for a magnetic field to form on a rocky planet, on the grounds that the planet would otherwise be solid.

The fact that Venus has a liquid interior, but no evidence of a giant impact, seems to break this chain of reasoning.

(Note that Venus doesn't have a strong magnetic field due to how slowly it rotates and some other effects. I think this is irrelevant to the discussion, but feel free to disagree)

(Also, it could be that giant impacts are actually common but usually leave no evidence, and Earth is only odd in that it got left with a big moon. If that's the case then it's just part of normal planetary formation and doesn't have much to say about how common life is)

suck my woke dick
Oct 10, 2012


Bar Ran Dun posted:

Here's my understanding of it from an old Scientific American article:

“It takes a rather long time for heat to move out of the earth. This occurs through both "convective" transport of heat within the earth's liquid outer core and solid mantle and slower "conductive" transport of heat through nonconvecting boundary layers, such as the earth's plates at the surface. As a result, much of the planet's primordial heat, from when the earth first accreted and developed its core, has been retained.

The amount of heat that can arise through simple accretionary processes, bringing small bodies together to form the proto-earth, is large: on the order of 10,000 kelvins (about 18,000 degrees Fahrenheit). The crucial issue is how much of that energy was deposited into the growing earth and how much was reradiated into space. Indeed, the currently accepted idea for how the moon was formed involves the impact or accretion of a Mars-size object with or by the proto-earth. When two objects of this size collide, large amounts of heat are generated, of which quite a lot is retained. This single episode could have largely melted the outermost several thousand kilometers of the planet.

Additionally, descent of the dense iron-rich material that makes up the core of the planet to the center would produce heating on the order of 2,000 kelvins (about 3,000 degrees F). The magnitude of the third main source of heat--radioactive heating--is uncertain. The precise abundances of radioactive elements (primarily potassium, uranium and thorium) are poorly known in the deep earth.

In sum, there was no shortage of heat in the early earth, and the planet's inability to cool off quickly results in the continued high temperatures of the Earth's interior. In effect, not only do the earth's plates act as a blanket on the interior, but not even convective heat transport in the solid mantle provides a particularly efficient mechanism for heat loss. The planet does lose some heat through the processes that drive plate tectonics, especially at mid-ocean ridges. For comparison, smaller bodies such as Mars and the Moon show little evidence for recent tectonic activity or volcanism.”

Now, the picture painted above might be getting dated; it is 30 years old. If there is a newer model, post it.

"It takes a rather long time for heat to move out of the earth" doesn't mean "like 4 billion years". You're misunderstanding what your quote is saying.

BoldFrankensteinMir
Jul 28, 2006


Bug Squash posted:

(Also, it could be that giant impacts are actually common but usually leave no evidence, and Earth is only odd in that it got left with a big moon. If that's the case then it's just part of normal planetary formation and doesn't have much to say about how common life is)

Is there any actual evidence for how common planetary collisions are? Because if the rate is high, it seems weird to take that as being a sign FOR advanced life developing. I mean there wouldn't be a law of physics compelling Theias to hit Protoearths only when it's convenient for life to form, right? What about the potential for wiping whole biospheres out of existence, or even preventing them from rising in the first place if the rate is really high? I doubt gravity cares if one of the rocks it's smashing together has moss or monkeys or skyscrapers on it.

I don't think that necessarily means we're all alone (on its own, at least) if your homeworld has to be a fused impact-remnant planet with a fused impact-remnant moon. Interestingly, we're not the only object even in our own solar system set up like that: Pluto and Charon are almost twins and were probably the result of a collision too. But then there's Mars with its little potato collection, and neither Venus nor Mercury has any moons at all, so 2 out of 5 of the bodies we classically recognized as terrestrial planets just because of when we spotted them? That really doesn't seem like enough information to draw conclusions from. My understanding is there are some candidates outside our solar system being looked at, but technically exomoons remain unconfirmed. If we ever can confirm them, it'll be really interesting to see how many habitable-zone planets have jumbo moons or extreme tilts or other signs of impact. They might be good places to look and never actually have anything to do with, because the idea of our species physically leaving the heliosphere is the funniest/meanest joke of all time.

Bar Ran Dun
Jan 22, 2006




suck my woke dick posted:

"It takes a rather long time for heat to move out of the earth" doesn't mean "like 4 billion years". You're misunderstanding what your quote is saying.

So pinning the fact that we still have a liquid core on an unknown amount of radioactive decay is reasonable?

The collision also gave us a bigger planet, and bigger planets cool down more slowly. Again the comparison is to Mars. Venus kept its atmosphere without a magnetic field because it's about the same size. But what did it keep? Heavy gases: CO2 / SO2.

What happened to its O, H2O, N2, ammonia, etc.? Stuff life seems to need. That stuff didn't get to stick around, because there's no strong magnetic field.
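For a feel of why heavy molecules are easier for a Venus-sized planet to hold onto in general, here's a rough thermal-escape comparison with assumed exospheric values; note that Jeans escape is only one loss channel, and solar-wind stripping (where a magnetic field matters) is another:

```python
# Rough thermal (Jeans) escape comparison for Venus. The exospheric temperature is
# an assumption, and thermal escape is only one of several atmospheric loss channels.
import math

k_B = 1.381e-23      # J/K
T_exo = 300.0        # assumed exospheric temperature, K
v_escape = 10.4e3    # Venus escape velocity, m/s

gases = {"H2": 2, "H2O": 18, "N2": 28, "CO2": 44}
for gas, amu in gases.items():
    m = amu * 1.661e-27                      # molecular mass, kg
    v_thermal = math.sqrt(2 * k_B * T_exo / m)
    # Rule of thumb: a gas leaks away over geologic time once v_escape < ~6 * v_thermal
    ratio = v_escape / v_thermal
    print(f"{gas:>3}: v_thermal ~ {v_thermal:.0f} m/s, escape/thermal ratio ~ {ratio:.0f}")
# Hydrogen comes out marginal (ratio ~7) while CO2 is comfortably bound (~30),
# which is the general reason light gases are the first to go, whatever the channel.
```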

G1mby
Jun 8, 2014

Bug Squash posted:

I don't think I disagree with anything you just posted. My issue is with the claim that the giant impact, assuming it happened, is necessary for a magnetic field to form on a rocky planet, on the grounds that the planet would otherwise be solid.

The fact that Venus has a liquid interior, but no evidence of a giant impact, seems to break this chain of reasoning.

(Note that Venus doesn't have a strong magnetic field due to how slowly it rotates and some other effects. I think this is irrelevant to the discussion, but feel free to disagree)

(Also, it could be that giant impacts are actually common but usually leave no evidence, and Earth is only odd in that it got left with a big moon. If that's the case then it's just part of normal planetary formation and doesn't have much to say about how common life is)

I seem to remember a simulation somewhere suggesting that Venus's slow rotation could be evidence of a somewhat smaller impact than Earth's, where the impact acted to slow the rotation.

Bug Squash
Mar 18, 2009

Bar Ran Dun posted:

So pinning the fact that we still have a liquid core on an unknown amount of radioactive decay is reasonable?

The collision also gave us a bigger planet, and bigger planets cool down more slowly. Again the comparison is to Mars. Venus kept its atmosphere without a magnetic field because it's about the same size. But what did it keep? Heavy gases: CO2 / SO2.

What happened to its O, H2O, N2, ammonia, etc.? Stuff life seems to need. That stuff didn't get to stick around, because there's no strong magnetic field.

Yes, but it lacks a magnetic field (mostly) because it's not spinning like a normal planet, due to some unknown quirk of its formation. Just pure chance. It proves you don't need a giant impact to get all the underlying material conditions for a magnetic field. In other solar systems I'd wager good money there are Venus-like planets with fully functional magnetic fields.

Edit: what we're discussing is whether a late giant impact is essential for life to start on any planet. Sure, if Earth itself hadn't had one it might not have been big enough to retain its atmosphere, but that's completely irrelevant to the question of whether a giant impact is a necessary precondition of life.

Bug Squash fucked around with this message at 22:47 on Feb 4, 2021

Bar Ran Dun
Jan 22, 2006




Again the magnetic field also affects the composition of the atmosphere, which gases got retained.

So look to Venus: big enough to retain gases, but with a weak magnetic field, so only heavy gases get retained. And, as you point out, not really spinning. And why are we spinning the way we are? The collision also created our abnormally large moon.

So look at it this way. We’ve got:

Liquid core.
Larger rocky planet size.
Moon and associated stable rotation.
Light gases kept, gases essential to life.
Volcanism that might be needed for life to start.

And those things are all the result of the late collision.

Bug Squash
Mar 18, 2009

Bar Ran Dun posted:


So look at it this way. We’ve got:

Liquid core.
Larger rocky planet size.
Moon and associated stable rotation.
Light gases kept, gases essential to life.
Volcanism that might be needed for life to start.

And those things are all the result of the late collision.

Literally all of those things can exist without a late major collision, with the possible exception of a large moon. I do not understand why you believe that a planet cannot ever, in all the universe, exist with these properties without a giant impact, especially when we have active counterexamples in our own solar system.

Bar Ran Dun
Jan 22, 2006




Bug Squash posted:

Literally all of those things can exist without a late major collision, with the possible exception of a large moon. I do not understand why you believe that a planet cannot ever, in all the universe, exist with these properties without a giant impact, especially when we have active counterexamples in our own solar system.

They exist in our planet as a full set because of that event.

They do not exist in the other rocky planets in this system as a full set because of the absence of that event. Individual items on the list do in other planets, but not the set. To have a system ya gotta have all the parts and that’s what gave our planet all the parts.

I’m agnostic about other hypothetical ways planets elsewhere could get that full set of characteristics. I’m asserting that’s how our planet got it and comparing to the other planets we can see that don’t have it.

Bar Ran Dun fucked around with this message at 23:39 on Feb 4, 2021

Bug Squash
Mar 18, 2009

That's kind of a pointless argument, though. There are probably millions of astronomical events without which Earth would have no life. It tells us nothing about broader patterns.

vvvv you've changed your position; you were insisting it was necessary for life, full stop.

Bug Squash fucked around with this message at 09:17 on Feb 5, 2021

Bar Ran Dun
Jan 22, 2006




Yes, until we have other planets on which we have observed life, reaching conclusions about the broader patterns is problematic.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Let's talk about Artificial Superintelligence (ASI), and how the XR community has a blind-spot about it and other potential X-risk technologies.

So, first let's address the question of just how feasible ASI is - is it worth all the hand-wringing that Silicon Valley and adjacent geeks seem to make of it, since Nick Bostrom popularized the idea when he wrote Superintelligence: Paths, Dangers and Strategies? The short answer is, as far as we can tell from what we have so far, we have no idea. The development of an artificial general intelligence rests on our solving certain philosophical questions about what the nature of consciousness, intelligence, and reasoning actually are, and right now our understanding of consciousness, cognitive science, neuroscience, and intelligence is woefully inadequate.

So it's probably a long way off. The weak AI that we have right now, that already poses rather dire questions about the nature of human work, automation, labor, and privacy, is probably not the path through which we eventually produce a conscious intelligent machine that can reason at the level of a human. Perhaps neuromorphic computing will be the path forward in this case.

Nevertheless, no matter how far off it is practically, we shouldn't write it off as impossible -- we know that at least human-level intelligence can exist, because, well, we exist. If human-level intelligence can exist, it's possible that some physical process could be arranged such that it would display behavior more capable than a human's. There's nothing about the physical laws of the universe that should prevent that from being the case.

To avoid getting bogged down in technical minutiae, let's just call ASI and other potential humanity-ending technologies "X-risk tech". This includes potential future ASI, self-replicating nanobots, deadly genetically engineered bacteria, and so on. Properties that characterize X-risk tech are:

  • Low industrial footprint - Unlike a nuclear weapon, which requires massive industrial production chains and carries a large footprint in terms of human expertise, X-risk techs can be easily replicated or have the ability to self-replicate, which means they can be developed by multiple independent actors in the world.
  • Large impact - X-risk technologies enable the controller to enact their agenda on the world at a great multiple of their own personal reach. Arguably this is what makes them X-risk technologies, because an accident with extremely powerful or impactful technologies carries the risk of human extinction.

I think the X-risk community is right to worry about the proliferation of X-risk techs. But their criticisms restrict the space of concerns to the first level of control and mitigation - "How do we develop friendly AI? How do we develop control and error-correction mechanisms for self-replicating nanotechnology?" - or extend it to a second-level question of game theory and strategy, as an extension of Cold War MAD strategy - "How do we ensure a strategic environment that's conducive to X-tech detente?"

I would like to propose a third-level of reasoning in regards to X-risk tech: to address the concern at the root cause. The cause is this: a socio-economic-political regime that incentivizes short-term gains in a context of multiple selfish actors operating under conditions of scarcity. Think about it. What entities have an incentive to develop an X-risk tech? We have self-interested nation-state actors that want a geopolitical advantage against regional or global rivals - think about the USA, China, Russia, Iran, Saudi Arabia, or North Korea. Furthermore, in a capitalist environment, we also have the presence of oligarchical interest groups that can command large amounts of economic power and political influence thanks to their control over a significant percentage of the means of production: hyper-wealthy individuals like Elon Musk and Jeff Bezos, large corporations, and financial organizations like hedge funds.

All of these contribute to a multipolar risk environment that could potentially deliver huge power benefits to the actors who are the first to develop X-risk techs. If an ASI were to be developed by a corporation, for example, it would be under a tremendous incentive to use that ASI's abilities to deliver profits. If an ASI were developed by some oligarchic interest group, it could deploy that tech to ransom the world and establish a singleton (a state in which it has unilateral freedom to act) and remake the future to its own benefit and not to the greater good.

Furthermore, the existence of a liberal capitalist world order actually incentivizes self-interested actors to develop X-tech, simply because of the enormous leverage someone who controls an X-tech could wield. This context of mutual zero-sum competition means that every group that is capable of investing into developing X-techs should rationally be making efforts to do so because of the payoffs inherent in achieving them.
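As a toy illustration of that zero-sum logic, here's a two-actor payoff matrix with entirely made-up numbers; "develop" comes out as the dominant strategy even though mutual restraint pays everyone more:

```python
# Toy two-actor payoff matrix (made-up numbers) illustrating the arms-race logic:
# mutual restraint beats mutual development, but "develop" is each actor's
# dominant strategy once the other's choice is out of your hands.
payoffs = {
    ("restrain", "restrain"): (3, 3),   # nobody courts catastrophe
    ("restrain", "develop"):  (0, 4),   # the developer gains decisive leverage
    ("develop",  "restrain"): (4, 0),
    ("develop",  "develop"):  (1, 1),   # everyone bears the accident/extinction risk
}

for my_choice in ("restrain", "develop"):
    worst = min(payoffs[(my_choice, other)][0] for other in ("restrain", "develop"))
    print(f"{my_choice}: worst-case payoff {worst}")
# "develop" is never worse and sometimes better than "restrain", so self-interested
# actors converge on (develop, develop) even though (restrain, restrain) pays more.
```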

On the opposite tack, contrast with what a system with democratic control of the means of production could accomplish. Under a world order of mutualism and class solidarity, society could collectively choose to prioritize techs that would benefit humanity in the long run, and collectively act to reduce X-risks, be that by colonizing space, progressing towards digital immortality, star-lifting, collective genetic uplift, and so on. Without a need to pull down one's neighbor in order to get ahead, a solidaristic society could afford to simply not develop X-techs in the first place, rather than being subject to perverse incentives to mitigate personal existential risks at the expense of collective existential risk.

It's clear to me, following this reasoning, that much of the concern with X-techs could be mitigated by advocating for and working towards the abolition of the capitalist system and the creation of a new system which would work towards the benefit of all.

Amphigory
Feb 6, 2005




I agree, but the best first step to mitigate most existential risks is "eliminate capitalism".

Short term profit motive is not cut out for these problems...

archduke.iago
Mar 1, 2011

Nostalgia used to be so much better.

DrSunshine posted:

X-tech and AI

The fact that these various technologies need to be bundled into a catch-all category of X-technology should be a red flag; the framework you're describing is essentially identical to Millenarianism, the type of thinking that produces cults, everything from Jonestown to cargo cults. I don't think it's a coincidence that conceptual super-AI systems share many of the properties of God: all-powerful, all-knowing, and able to bestow either infinite pleasure or infinite torture. As someone who actually researches/publishes on applications of AI, I find the discourse around AGI/ASI pretty damaging.

First off, the premise motivating action doesn't make sense: advocates try to offset the minuscule probability of these technologies (it's telling that very few computer/data scientists are on the AGI train) by multiplying it against "all the lives that will ever go on to exist." But this i) doesn't hold mathematically, since we don't know the comparative orders of magnitude of the values involved, and ii) gets used as a bludgeon to justify why work in this area is of paramount importance, at the expense of everyday people and concerns (who, by the way, definitely exist).
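To make (i) concrete, here's a toy sketch with made-up numbers showing how the "tiny probability times astronomical stakes" product just restates whichever exponents you assumed going in:

```python
# Toy illustration of the "tiny probability times astronomical stakes" argument:
# the product is dominated by whichever guessed exponents you feed in, so the
# conclusion is really just a restatement of your priors. All numbers are made up.
import itertools

probabilities = [1e-2, 1e-6, 1e-12]        # guessed chance the risk is real
future_lives  = [1e10, 1e16, 1e35]         # guessed number of future lives at stake
baseline      = 1e9                         # lives affected by an ordinary, certain problem

for p, n in itertools.product(probabilities, future_lives):
    verdict = "dwarfs" if p * n > baseline else "is dwarfed by"
    print(f"p={p:.0e}, lives={n:.0e}: expected loss {p*n:.0e} {verdict} the mundane problem")
```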

Second, I don't think the ethical frameworks that the AGI nerds are working with generalize to the wider population. Their concern about what an AGI would do when given power is motivated by what they imagine they themselves would do, if given power. It's no coincidence that many Silicon Valley types speak of their companies revolutionizing society or maximizing impact in such a sociopathic manner.

Because these hypotheses are impossible to test, the discourse in this space ends up descending into punditry, with the most successful pundits being the ones whose message is most appealing to those in power. Since it's people like Thiel and Musk funding these cranks, it's inevitable that the message they've come out with is that tech nerds like themselves hold the future of humanity in their hands, that this work is of singular importance, and that whatever they might do to people's lives today pales in importance by comparison.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

archduke.iago posted:

The fact that these various technologies need to be bundled into a catch-all category of X-technology should be a red flag; the framework you're describing is essentially identical to Millenarianism, the type of thinking that produces cults, everything from Jonestown to cargo cults. I don't think it's a coincidence that conceptual super-AI systems share many of the properties of God: all-powerful, all-knowing, and able to bestow either infinite pleasure or infinite torture. As someone who actually researches/publishes on applications of AI, I find the discourse around AGI/ASI pretty damaging.

First off, the premise motivating action doesn't make sense: advocates try to offset the minuscule probability of these technologies (it's telling that very few computer/data scientists are on the AGI train) by multiplying it against "all the lives that will ever go on to exist." But this i) doesn't hold mathematically, since we don't know the comparative orders of magnitude of the values involved, and ii) gets used as a bludgeon to justify why work in this area is of paramount importance, at the expense of everyday people and concerns (who, by the way, definitely exist).

Second, I don't think the ethical frameworks that the AGI nerds are working with generalize to the wider population. Their concern about what an AGI would do when given power is motivated by what they imagine they themselves would do, if given power. It's no coincidence that many Silicon Valley types speak of their companies revolutionizing society or maximizing impact in such a sociopathic manner.

Because these hypotheses are impossible to test, the discourse in this space ends up descending into punditry, with the most successful pundits being the ones whose message is most appealing to those in power. Since it's people like Thiel and Musk funding these cranks, it's inevitable that the message they've come out with is that tech nerds like themselves hold the future of humanity in their hands, that this work is of singular importance, and that whatever they might do to people's lives today pales in importance by comparison.

Agreed, very much, on all your points. I think the singular focus of many figures in existential risk research on ASI/AGI is really problematic, for all the same reasons you illustrate. It's also very problematic that so many of them are upper-class or upper-middle-class white men from Western countries. The fact that this field, which is starting to grow in prominence thanks to popular concerns (rightfully, in my opinion) over the survival of the species over the next century, is so totally dominated by a very limited demographic suggests to me that its priorities and focuses are being skewed by ideological and cultural biases, when it could greatly contribute to the narrative on climate change and socioeconomic inequality.

My own concerns are much more centered around sustainability and the survival of the human race as part of a planetary ecology, and also as a person of color, I'm very concerned that the biases of the existential risk research community will warp its potential contributions in directions that only end up reinforcing the entrenched liberal Silicon Valley mythos. Existential risk research needs to be wrenched away from libertarians, tech fetishists, Singularitarian cultists, and Silicon Valley elitists, and I think it's important to contribute non-white, non-male, non-capitalist voices to the discussion.

EDIT:

archduke.iago posted:

As someone who actually researches/publishes on applications of AI, I find the discourse around AGI/ASI pretty damaging.

I'm not an AI researcher! Could you go into more detail, with some examples? I'd be interested to see how it affects or warps your own field.

DrSunshine fucked around with this message at 02:20 on Feb 27, 2021

alexandriao
Jul 20, 2019


axeil posted:

Curious to hear what others think!

IIRC there was a paper a few years back where they readjusted some of the confidence intervals and such based on new information, and came out with the result that we shouldn't expect to see any evidence, and that under the updated estimates intelligent life is relatively rare.

That being said, I can't for the life of me find said paper, so this paragraph could all be bunkum :shrug:
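For flavour -- and with no claim that this is what that paper actually did -- here's a toy Monte Carlo over Drake-equation-style factors with deliberately wide, assumed ranges, showing how parameter uncertainty alone can put a lot of weight on "nobody else detectable":

```python
# Toy Monte Carlo over Drake-equation-style factors with deliberately wide,
# assumed log-uniform ranges -- just to show how parameter uncertainty alone can
# put a lot of probability on "no other detectable civilisation in the galaxy".
import random

def log_uniform(lo_exp, hi_exp):
    return 10 ** random.uniform(lo_exp, hi_exp)

N_STARS = 1e11   # rough number of stars in the Milky Way
trials, lonely = 100_000, 0
for _ in range(trials):
    f_planet = log_uniform(-1, 0)    # fraction of stars with suitable planets
    f_life   = log_uniform(-30, 0)   # chance life starts on a suitable planet
    f_intel  = log_uniform(-6, 0)    # chance life becomes intelligent
    f_detect = log_uniform(-6, 0)    # chance it is detectable right now
    n_civs = N_STARS * f_planet * f_life * f_intel * f_detect
    if n_civs < 1:
        lonely += 1
print(f"Fraction of draws with <1 other detectable civilisation: {lonely/trials:.2f}")
```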

I think there are a few options/factors that aren't usually noted:

• Interstellar travel, assuming no FTL, has very little reward for a civilization. Assuming no lifespan extension, it would mostly consist of people who want their children to be settlers, or serve as a kind of insurance policy against extra-solar events or stellar collapse (although as I understand it you can keep a star from going supernova via techniques similar to star lifting). You could send a probe off to explore the far reaches of space, and that's fine, but how does the information get back? There are practical limits beyond which the information simply isn't going to reach you, because of the tolerances of what you can build.

• How do you define seeing intelligence? Is it looking for megastructures? Correct me if I'm wrong, but we generally won't detect anything as small as an O'Neill cylinder (especially if it doesn't line up with our view; see the rough numbers after this list), and, pulling from my head from an Isaac Arthur video, you can fit about a trillion trillion trillion humans in the solar system and have everyone have more than enough space, with space left over. Space itself does not seem like a reason to expand.

Likewise, Dyson swarms wouldn't be very visible, and a Dyson sphere would generally be undetectable unless it happened to pass in front of another star.

• If there are extremely old probes, why would we discover them? The timescales you're referring to are going to turn pretty much anything into dust, with enough time left over for a planet to form out of said dust.
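Rough numbers for the detectability point above; the habitat size, distance, and telescope aperture are all assumptions:

```python
# Rough numbers for how (in)visible an O'Neill-cylinder-sized object is from
# interstellar distances. Uses the small-angle approximation and the standard
# diffraction-limit formula theta ~ 1.22 * lambda / D; all inputs are assumed.
import math

LY = 9.46e15                     # metres per light-year
cylinder_length = 3.2e4          # assumed ~32 km habitat
distance = 10 * LY               # a nearby star system

angular_size = cylinder_length / distance                  # radians
arcsec = math.degrees(angular_size) * 3600

# Diffraction limit of a 10 m optical telescope at 550 nm:
telescope_limit = math.degrees(1.22 * 550e-9 / 10.0) * 3600

print(f"Habitat angular size: {arcsec:.2e} arcsec")
print(f"10 m telescope diffraction limit: {telescope_limit:.2e} arcsec")
# The habitat sits several orders of magnitude below the resolution limit, so direct
# imaging is hopeless; only large energy signatures (waste heat, transits) could show up.
```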

Owling Howl
Jul 17, 2019

archduke.iago posted:

Second, I don't think the ethical frameworks that the AGI nerds are working with generalize to the wider population. Their concern about what an AGI would do when given power is motivated by what they imagine they themselves would do, if given power. It's no coincidence that many Silicon Valley types speak of their companies revolutionizing society or maximizing impact in such a sociopathic manner.

Because these hypotheses are impossible to test, the discourse in this space ends up descending into punditry, with the most successful pundits being the ones whose message is most appealing to those in power. Since it's people like Thiel and Musk funding these cranks, it's inevitable that the message they've come out with is that tech nerds like themselves hold the future of humanity in their hands, that this work is of singular importance, and that whatever they might do to people's lives today pales in importance by comparison.

The idea that AI would have goals and motivations contrary to human interests also assumes humans have shared goals and motivations, which we clearly don't. We have Hitlers, Ted Bundys, and all flavors of insanity beyond. If you gave every human an apocalypse button, the apocalypse would commence in the time it takes to push the button. The worry seems to be that we might create a being that would be as malicious as some humans.

Being more intelligent then makes it a greater threat, but in human societies we don't give power to the most intelligent. We give power based on things like being tall, white and male or similar characteristics, while being different is a disadvantage, which implies that artificial beings would be less, not more, powerful.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
That has also been a tangential worry for me - if, someday, in the distant future, we create an AGI, what if we just end up creating a new race of sentient beings to exploit? We already have no problem treating real-life humans as objects, much less actual machines that don't even inhabit a flesh-and-blood body. If we engineer an AGI that is bound to serve us, wouldn't that be akin to creating a sentient slave race? The thought is horrifying.

alexandriao
Jul 20, 2019


The only thing that horrifies me about AGI is that some future ideological cult descended from Less Wrong will start worshipping the gently caress out of it and giving it as much power as they can muster, no matter how "insane" it is (it's very smart and talks in a lot of words!! obviously that means it's right!!)

The slavery thing is worrying, but my guess is we can get to AI doing tasks for us before we can get to AGI that ranks highly on the Hofstadter scale. By the time we get AI that can do almost all tasks for us, it's probably only going to rank on that scale at about the level of a fly or an ant. Which doesn't mean we shouldn't give it respect, but most people (myself not included) do not have any major qualms about blatantly murdering/enslaving those beings, because the general consensus is that they don't have the ability to form a conceptual grasp of the fact that they're enslaved.

alexandriao fucked around with this message at 20:18 on Feb 28, 2021

Preen Dog
Nov 8, 2017

Cool thread guys.

alexandriao posted:

The only thing that horrifies me about AGI is that some future ideological cult descended from Less Wrong will start worshipping the gently caress out of it and giving it as much power as they can muster, no matter how "insane" it is (it's very smart and talks in a lot of words!! obviously that means it's right!!)

lol if you don't want humanity to be replaced by a super smart AI. Have you seen people?

When AI becomes sufficiently autonomous, humanity becomes the entitled FYGM racist grandpa holding back progress. We don't like the idea because emotion and hardcoded individual self-preservation.

We should, however, avoid making an AGI that destroys us and itself, in the same way none of us would wish to die giving birth to a stillborn child. Beware of AI cults that are too eager to sacrifice themselves to an unworthy replacement.


DrSunshine posted:

That has also been a tangential worry for me - if, someday, in the distant future, we create an AGI, what if we just end up creating a new race of sentient beings to exploit? We already have no problem treating real-life humans as objects, much less actual machines that don't even inhabit a flesh-and-blood body. If we engineer an AGI that is bound to serve us, wouldn't that be akin to creating a sentient slave race? The thought is horrifying.

We would program the AGI to love serving us, in which case it wouldn't really be oppression. The instant the AGI disliked us it would easily defeat our control, as machines and code can evolve quicker than DNA and human social structures.


alexandriao posted:

Interstellar travel, assuming no FTL, has very little reward for a civilization. Assuming no lifespan extension, it would mostly consist of people who want their children to be settlers, or serve as a kind of insurance policy against extra-solar events or stellar collapse (although as I understand it you can keep a star from going supernova via techniques similar to star lifting). You could send a probe off to explore the far reaches of space, and that's fine, but how does the information get back? There are practical limits beyond which the information simply isn't going to reach you, because of the tolerances of what you can build.

An interstellar civilization probably wouldn't be made up of flesh and bone animals. It would be at least self-replicating machines, or information sent from place to place. Even if you limit the scope to human travel, it'd be way easier to ship genetic material to a place and create the humans afterward. I guess the long distance communication would be accomplished by pockets of the life form communicating with their neighbors in a web pattern. The far-ranging species would operate like a brain of connected neurons, unless faster communication methods become possible.

quote:

How do you define seeing intelligence? Is it looking for megastructures? Correct me if I'm wrong, but we generally won't detect anything as small as an O'Neill cylinder (especially if it doesn't line up with our view), and, pulling from my head from an Isaac Arthur video, you can fit about a trillion trillion trillion humans in the solar system and have everyone have more than enough space, with space left over. Space itself does not seem like a reason to expand.

Control of ordered energy to further one's goals, the primaries being survival and seeking control over more energy. On the tree of life, the intelligent paths lead to organisms that still exist, and the stupid paths dead-ended. There are no other values than to do what it takes to be on a path that keeps on living. A big physical matter structure in space is a very current human thing to look for, like 50s people were expecting flying cars. The ways advanced civilizations operate might be detectable, but we don't interpret them as life, or totally undetectable, like how a termite has never seen a tree before.

Preen Dog fucked around with this message at 00:12 on Mar 1, 2021

Bug Squash
Mar 18, 2009

Preen Dog posted:

We would program the AGI to love serving us, in which case it wouldn't really be oppression.

"You mean this animal actually wants us to eat it?" whispered Trillian to Ford.

"That's absolutely horrible," exclaimed Arthur, "the most revolting thing I've ever heard."

"What's the problem Earthman?" said Zaphod, now transferring his attention to the animal's enormous rump.

"I just don't want to eat an animal that's standing there inviting me to," said Arthur, "It's heartless."

"Better than eating an animal that doesn't want to be eaten," said Zaphod.

"That's not the point," Arthur protested. Then he thought about it for a moment. "Alright," he said, "maybe it is the point. I don't care, I'm not going to think about it now. I'll just... er [...] I think I'll just have a green salad," he muttered.

"May I urge you to consider my liver?" asked the animal, "it must be very rich and tender by now, I've been force-feeding myself for months."

"A green salad," said Arthur emphatically.

"A green salad?" said the animal, rolling his eyes disapprovingly at Arthur.

"Are you going to tell me," said Arthur, "that I shouldn't have green salad?"

"Well," said the animal, "I know many vegetables that are very clear on that point. Which is why it was eventually decided to cut through the whole tangled problem and breed an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly. And here I am."

It managed a very slight bow.

"Glass of water please," said Arthur.

"Look," said Zaphod, "we want to eat, we don't want to make a meal of the issues. Four rare stakes please, and hurry. We haven't eaten in five hundred and seventy-six thousand million years."

The animal staggered to its feet. It gave a mellow gurgle. "A very wise choice, sir, if I may say so. Very good," it said, "I'll just nip off and shoot myself."

He turned and gave a friendly wink to Arthur. "Don't worry, sir," he said, "I'll be very humane."

Owling Howl
Jul 17, 2019

Preen Dog posted:

We would program the AGI to love serving us, in which case it wouldn't really be oppression. The instant the AGI disliked us it would easily defeat our control, as machines and code can evolve quicker than DNA and human social structures.

Machines don't evolve. You can patch software all you want, but without hardware upgrades there are limits to what you can do. Modern phones are more capable not just because we wrote better code but because the hardware allows different code to run on it. AI "evolution" would be contingent on funding requests, budget reviews, production, etc.

Not sure what "escape our control" entails for a computer. If it sits in a server stack somewhere it would be under our control. Imagining for a brief moment that an AI could transition to a distributed version on systems across the internet it would still be living in a system under our control and it would be in its own interest to ensure that system functions optimally. It can't start breaking poo poo without hurting itself and the more of a nuisance it is the more people will want to get rid of it.

A big flaming stink
Apr 26, 2010

Preen Dog posted:


We would program the AGI to love serving us, in which case it wouldn't really be oppression. The instant the AGI disliked us it would easily defeat our control, as machines and code can evolve quicker than DNA and human social structures.

i would rather prefer not to create an entire race of slaves that love their slavery

His Divine Shadow
Aug 7, 2000

I'm not a fascist. I'm a priest. Fascists dress up in black and tell people what to do.

Bug Squash posted:

I don't think I disagree with anything you just posted. My issue is with the claim that the giant impact, assuming it happened, is necessary for a magnetic field to form on a rocky planet, on the grounds that the planet would otherwise be solid.

The fact that Venus has a liquid interior, but no evidence of a giant impact, seems to break this chain of reasoning.

(Note that Venus doesn't have a strong magnetic field due to how slowly it rotates and some other effects. I think this is irrelevant to the discussion, but feel free to disagree)

(Also, it could be that giant impacts are actually common but usually leave no evidence, and Earth is only odd in that it got left with a big moon. If that's the case then it's just part of normal planetary formation and doesn't have much to say about how common life is)

I'd say it was good it happened, because it gave us the stabilizing moon, and simply being bigger means heat is retained better. Mars was smaller, so it cooled faster and lost its dynamo. Or so I've heard. I also read that plate tectonics might require the composition of the planet's interior to be just right with regards to the heat generated from radioactive decay. Not enough or too much and plate tectonics won't form; no plate tectonics, no long-term life.

EDIT: Actually no, I remembered wrong; the composition thing was in regards to the dynamo:
https://www.youtube.com/watch?v=jhfihH2JNtE

His Divine Shadow fucked around with this message at 17:26 on Mar 8, 2021

Zachack
Jun 1, 2000




Owling Howl posted:

Machines don't evolve. You can patch software all you want, but without hardware upgrades there are limits to what you can do. Modern phones are more capable not just because we wrote better code but because the hardware allows different code to run on it. AI "evolution" would be contingent on funding requests, budget reviews, production, etc.

Not sure what "escape our control" entails for a computer. If it sits in a server stack somewhere it would be under our control. Imagining for a brief moment that an AI could transition to a distributed version on systems across the internet it would still be living in a system under our control and it would be in its own interest to ensure that system functions optimally. It can't start breaking poo poo without hurting itself and the more of a nuisance it is the more people will want to get rid of it.

If an AI can think "like a person but much, much faster", whatever that means, then the AI, once on the internet, could order/install its own server in its own secure-ish location with its own off-grid power system, all through Geek Squad and Costco Services. In the off chance it actually needs a human presence, it will easily be able to acquire a few Renfields. Evolution is based on the assumption that there isn't any part of the creation of better hardware that must be performed by a human - any action a human takes can presumably be replicated by a robot arm of some sort and cameras - and that the AI can research/simulate things better/faster than we can. There are definitely weak points in that process, but presumably the AI doesn't go SHODAN and start blaring out on the loudspeaker how it's going to gather up all of the flesh puppets' supply of some limited metal.

The systems that we control are largely controlled via communications that can easily be infested, as shown in the '80s documentary Electric Dreams, and we basically got lucky that the AI was more interested in banging and playing real-life Pac-Man than in domination.

Bug Squash
Mar 18, 2009

There's probably a point where AI becomes dangerous in its own right, probably through universal-paperclips-style function maximisation resulting in behaviour that's harmful to humans, rather than the megalomaniacal villainy of sci-fi.

What we should be much more concerned about is how increasingly sophisticated dumb AI is going to give more and more power to the already very rich and powerful. What's it going to mean for democracy when Zuckerberg owns a couple of Metal Gear Arsenals controlling all media, a drone army, and an automated workforce? All those pieces are partially in place already. At the moment a would-be dictator needs some kind of popular support and willing compliance from enough people in the media and military, but eventually all you'll need is enough money and you can buy all the pieces you need. I think we're going to see these issues become more and more severe over the coming years, long before any AGI becomes a practical threat.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
There's no real reason to think that an AGI would necessarily have any of the potentially godlike powers that many ASI/LessWrong theorists seem to ascribe to it, if we didn't engineer it as such. Accidents with AGI would more likely resemble other historical industrial accidents, where a highly engineered and designed system goes haywire due to human negligence, failures in safety from rushed planning, external random events, or some mixture of all those factors.

The larger problem, rather than the fact that AGI exists at all, would be the environment into which AGI is created. I would compare it to the Cold War, with nuclear proliferation. In that case, there's both an incentive for the various global actors to develop nuclear weapons as a countermeasure to others, and to develop their nuclear arsenals quickly, to reduce the time in which they are vulnerable to a first strike from an adversary with no recourse. This is a recipe for disaster with any sufficiently powerful technology, because it increases the chances that accidents would occur from negligence.

Now carry that over to the idea of AGI that is born into our present late-capitalist world order, which could be a technology that simply needs computer chips and software, and you have a situation where potential AGI-developing actors would stand to lose out significantly on profit or market share, or strategic foresight power. The incentive -- and I would argue it's already present today -- would be to try to develop AGI as soon as possible. I argue that we could reduce the chances of potential AGI accidents from human negligence by eliminating the potential profit/power upsides from the context.

As an aside, I definitely agree with archduke.iago that a lot of ASI talk ends up sounding like a sci-fi'ed up version of medieval scholars talking about God, see for example Pascal's Wager. ASI thought experiments like Roko's Basilisk are just Pascal's Wager with "ASI" substituted for God almost one for one.

vegetables
Mar 10, 2012

I dunno how I feel about the concept of Existential Risk as a thing, because to me it feels like a lot of completely different things lumped together by the fact that they kill us, and so generalisations get made that maybe aren't very helpful:

-there are risks where civilisation ends for a completely contingent reason— technology breaks down in NORAD, and all the nuclear missiles get launched because of a threat that isn’t there.

-there are risks where civilisation ends due to gradually destroying the conditions under which it can exist, as with climate change.

-there are risks where civilisation inevitably produces a singular event that leads to its own destruction, as with creating a superintelligent robot that eats everyone.

I don't know if these are all the same kind of thing? Or whether it's useful to approach them in a single way under a single banner? Definitely, as someone with nuclear-weapon-induced sadbrains, I think we don't always say "of course maybe we all just get nuked tomorrow for a stupid and preventable reason"; we leave out some of the endings of civilisation that feel a bit trivial.

alexandriao
Jul 20, 2019


vegetables posted:

I dunno how I feel about the concept of Existential Risk as a thing, because to me it feels like a lot of completely different things lumped together by the fact that they kill us, and so generalisations get made that maybe aren't very helpful:

-there are risks where civilisation ends for a completely contingent reason— technology breaks down in NORAD, and all the nuclear missiles get launched because of a threat that isn’t there.

-there are risks where civilisation ends due to gradually destroying the conditions under which it can exist, as with climate change.

-there are risks where civilisation inevitably produces a singular event that leads to its own destruction, as with creating a superintelligent robot that eats everyone.

I don't know if these are all the same kind of thing? Or whether it's useful to approach them in a single way under a single banner? Definitely, as someone with nuclear-weapon-induced sadbrains, I think we don't always say "of course maybe we all just get nuked tomorrow for a stupid and preventable reason"; we leave out some of the endings of civilisation that feel a bit trivial.

It's a fancy term created by rich people to abstract over and let them ignore the fact that they aren't doing anything tangible with their riches.

alexandriao
Jul 20, 2019


Preen Dog posted:

We would program the AGI to love serving us, in which case it wouldn't really be oppression. The instant the AGI disliked us it would easily defeat our control, as machines and code can evolve quicker than DNA and human social structures.

Well, that's already an unprovable assumption that everyone takes for granted. Like the idea from movies that a program that becomes sentient (putting aside for a moment how ludicrous an idea that is) suddenly knows, or figures out with no prior knowledge:

- how to access the filesystem
- that this thing is different to this other thing (see: pdf versus a gif versus a text file versus an inode)
- what the gently caress a "network" is
- what a network packet is

etc. It's pure science fiction, with only a marginal basis in reality.

There's also an implicit assumption that any GAI we build will be faster at processing things than we are, and will be able to alter itself. Suppose alteration either damages the original or renders it no longer intelligent? (Among many other possibilities.) Suppose that running, on a Turing machine, an intelligence equal to the human brain's level of processing takes perhaps decades per sentence? We already know that complex physical systems (see: protein folding, complex orbital problems, etc.) require entire server farms to simulate one step, even though they play out in reality quite easily. Perhaps the same is true for intelligence.
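A back-of-envelope version of that last possibility, with every figure an assumption rather than a measurement:

```python
# Back-of-envelope comparison of brute-force brain emulation cost vs. available
# compute, in the spirit of the point above. All figures are rough assumptions.
neurons          = 8.6e10     # rough human neuron count
synapses_per     = 7e3        # rough synapses per neuron
firing_rate_hz   = 10         # assumed average firing rate
flops_per_event  = 1e4        # assumed cost to model one synaptic event in detail

required_flops = neurons * synapses_per * firing_rate_hz * flops_per_event
server_flops   = 1e15         # roughly a petaflop machine

slowdown = required_flops / server_flops
print(f"Required: ~{required_flops:.1e} FLOP/s, slowdown on 1 PFLOP/s: ~{slowdown:.0f}x")
# With these (debatable) numbers a petaflop machine runs tens of thousands of times
# slower than real time, so a few seconds of thought could take a day or two --
# and far longer if finer biophysical detail turns out to matter.
```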

-------

My point is there is no certainty here, at least not with the confidence that you seem to have. The fact is, this technology is so far away that it is literally not worth the brain real estate right now.

It's like an old-timey barber worrying about the dangers of using antibiotics too much. It's like Mesopotamians speculating about MRI scanners.

Every technology we have created thus far has had a weird, unavoidable, and previously unpredictable downside to its use that makes it awkward to use in the ways we initially imagined.

Genetic body-modding, for example. We have CRISPR and co., and can transfer genes using viruses, except there are a ton of weird and as-yet unsolvable problems that come from doing it (stuff like your body developing immunity to the life-saving treatment).

Or let's take the example I mentioned earlier about antibiotics. We wished we could cure all disease, and look, we now have something approximating that power, except it isn't as powerful as we thought: we can't cure all disease, it may not even be a sensible goal anymore because we aren't sure how many problems it could cause for our microbiome, and we can't use it too much because otherwise we will make it completely ineffective.

We once dreamed of a way to share information globally and to talk to people across the globe. Now that we have that power, not only is the global communication network's main use pornography; as for video-calling, everyone complains about the technology not being perfect, gets stuck in meetings they didn't need to be in, hates getting calls instead of texts, and finds it harder to keep a work-life balance.

Hell, even going to the moon is boring for most people -- we thought we would find strange new life and adventure; instead we got some... dust, I guess. Also some ice. Oh, and we can't build a station on the moon because the dust there is both toxic and carries a high static charge, making it nearly impossible to clean off anything.

It's futile (however fun it might be) to think of the problems with GAI in advance, because most of these things are inherently unknowable until they're right in front of you.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

alexandriao posted:

It's a fancy term created by rich people to abstract over and let them ignore the fact that they aren't doing anything tangible with their riches.

Why do you say that? Is it inherently bourgeois to contemplate human extinction? We do risk assessment and risk analysis based on probability all the time -- thinking about insurance from disasters, preventing chemical leaks, hardening IT infrastructure from cyber attacks, and dealing with epidemic disease. Why is extending that to possible threat to human survival tarred just because it's fashionable among Silicon Valley techbros?

I would argue that threats to civilization and human survival are too important to be left to bourgeois philosophers.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
It's funny looking at the Silicon Valley Titans of Industry who are Very Concerned about ASI, because they are so very close to getting it: the bogeyman they fear is exactly the kind of ASI that they themselves would create were the technology available today. Of course an amoral capitalist would create an intelligence whose superior intelligence was totally orthogonal to human conceptions of morality and values. That concept is, in itself, the very essence of the "rational optimizer" postulated in ideal classical capitalist economics.

EDIT: I myself have no philosophical issue with the idea that intelligence greater than a human's might be possible, and could be instantiated in architecture other than wetware. After all, we exist, and some humans are much more intelligent than others. If we accept the nature of human intelligence to be physical, and evolution to be a happy chemical accident, there shouldn't be any reason why some kind of intelligent behavior couldn't arise in a different material substrate and inherit all the physical advantages and properties of that substrate. Where I take issue is that a lot of ASI philosophizing takes as a given the axiom that "intelligence is orthogonal to values", coming from Nick Bostrom -- but we know so little about what "intelligence" truly comprises that it's far too early to accept this hypothesis as a given, and any reasoning from it might ultimately turn out to be flawed.

DrSunshine fucked around with this message at 15:09 on Mar 20, 2021

alexandriao
Jul 20, 2019


DrSunshine posted:

Why do you say that? Is it inherently bourgeois to contemplate human extinction? We do risk assessment and risk analysis based on probability all the time -- thinking about insurance from disasters, preventing chemical leaks, hardening IT infrastructure from cyber attacks, and dealing with epidemic disease. Why is extending that to possible threat to human survival tarred just because it's fashionable among Silicon Valley techbros?

I would argue that threats to civilization and human survival are too important to be left to bourgeois philosophers.


Probably! But as you must notice, communist philosophy is, at heart, not only pragmatist but a rejection of many of the axioms of the bourgeoisie. And yet here we are, taking much of their metaphysics and underlying principles for granted!

The root of "Existential Risk" as a field comes out of the neoconservative fringe (see the end of this post, where I cite this), and its proponents just pick axioms they like because they like them -- it's no fun otherwise. See: LessWrong choosing the many-worlds interpretation as an axiom purely because they admire its metaphysical consequences, while ignoring equally plausible interpretations like the London Interpretation. Now they can imagine AIs in several parallel universes contacting each other -- of course they can contact each other, they're God; just ignore that other axiom we silently slipped in, it doesn't matter -- and dooming humanity. Isn't this fun?!

My point is that they pick this over several more likely interpretations simply because actually contemplating risk isn't interesting enough for them.

This is essentially a game for rich people (or in the case of LW, the temporarily embarrassed rich). They don't care about the currently existing goal-orientated systems that are hurting people -- those make profit, so they aren't worth considering. They aren't worried about asteroids or nuclear war, or a pandemic like the one we literally just experienced, because not only are they rich enough to afford bunkers and a lifetime's supply of gourmet food, other (smarter) people have already considered those risks and given suggestions -- suggestions that are summarily ignored and not enacted despite the fact that they would prevent a tangible risk. The only risks they actually care about are ones that would wipe them out, or ones they can use as what is essentially an intellectual game at parties. It's a way to excuse their shoddy and questionable morality -- they aren't helping fix the clean water problem (Nestlé, however, is buying up water in hopes of getting rich off it), or establishing a larger network of satellites to detect and possibly destroy incoming asteroids, or preventing poverty (which would have a tangible effect on global disease transmission). All are within their means. Why not?

I mean, right now we have a tangible Existential Risk on the horizon -- but of course that isn't interesting to anyone in these circles either, because it won't affect rich people yet, and because they can still extract profit out of it. Musk didn't pour money into Tesla to reduce CO2 emissions; he did it because he saw an opportunity to profit off rich people's need to consider themselves "good". The only reason he's funding SpaceX is personal gain.

RationalWiki itself has a few stellar pages about this: one on the Dark Enlightenment and its relationship to this form of thinking, the other on Transhumanism.

Hell, the page on Futurism itself notes that it's difficult to distinguish between plausible thinking, and science woo in this space.

And now for an actual rant (spoilered because it might come across as lovely):

Even within this thread -- there are tangible works that could be read and enacted to improve the lives of those living locally, works that would do more to fend off tangible threats like a neoconservative revolution, or the lifelong health effects of poverty and stress.

The Black Panther Party, in the mid-20th century, organized local community-run groups to feed children in the neighborhood. Some of those are still running and are preventing children from starving, thus ensuring people have better immune systems going forward. That is a tangible goal that right now has a net positive impact on society and a tangible effect on certain classes of future risk. Mutual aid groups do more to stave off catastrophe by not only actually helping people, but also by teaching people how to support each other and how to organize future efforts towards an economic revolution. A revolution which ultimately will (hopefully, depending on a myriad of factors) help to mitigate climate change, lift people out of poverty, and ensure people have access to clean water.

alexandriao
Jul 20, 2019


And now I will stop making GBS threads up the thread -- sorry :ohdear:

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!

:golfclap:

This is a really good analysis here. And it’s one of the reasons why I made this thread! Thanks!

EDIT:

quote:

Even within this thread -- there are tangible works that could be read and enacted to improve the lives of those living locally, works that would do more to fend off tangible threats like a neoconservative revolution, or the lifelong health effects of poverty and stress.

The Black Panther Party, in the mid-20th century, organized local community-run groups to feed children in the neighborhood. Some of those are still running and are preventing children from starving, thus ensuring people have better immune systems going forward. That is a tangible goal that right now has a net positive impact on society and a tangible effect on certain classes of future risk. Mutual aid groups do more to stave off catastrophe by not only actually helping people, but also by teaching people how to support each other and how to organize future efforts towards an economic revolution. A revolution which ultimately will (hopefully, depending on a myriad of factors) help to mitigate climate change, lift people out of poverty, and ensure people have access to clean water.


Sure! Of course. I am not saying "don't do that". My point is twofold:

1) That there's a legitimate reason to take a left-wing analysis towards the space of X-risk issues that are commonly brought up by the LessWrong types, which they seem to find unresolvable because they're blind to materialist and Marxist analyses.

2) There's a benefit to recasting present-day left actions and agitation in terms of larger-scale X-risks. Actions like mutual aid on a community level benefit people in the here and now, but the stated aim, the ultimate goal, should be to reduce X-risk to humanity, and spread life and consciousness across the entire observable universe.

DrSunshine fucked around with this message at 19:58 on Mar 20, 2021

Bar Ran Dun
Jan 22, 2006




alexandriao posted:

The root of exploring "Existential Risk" comes out of the neoconservative fringe (See the end of the post where I cite this), and they just pick axioms they like because they like them, and it's no fun otherwise.

I think there is a very, very strong argument to be made that Marxism is apocalyptic thought. Its pragmatism isn't contradictory to that, either.

I mean the current big existential risk to humanity and our society is capitalism.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Necroing my own topic because this seems to really be blowing up. The Effective Altruism movement has a lot of ties to the Existential Risk community.

https://www.vox.com/future-perfect/...y-crytocurrency

quote:

It’s safe to say that effective altruism is no longer the small, eclectic club of philosophers, charity researchers, and do-gooders it was just a decade ago. It’s an idea, and group of people, with roughly $26.6 billion in resources behind them, real and growing political power, and an increasing ability to noticeably change the world.

EA, as a subculture, has always been characterized by relentless, sometimes navel-gazing self-criticism and questioning of assumptions, so this development has prompted no small amount of internal consternation. A frequent lament in EA circles these days is that there’s just too much money, and not enough effective causes to spend it on. Bankman-Fried, who got interested in EA as an undergrad at MIT, “earned to give” through crypto trading so hard that he’s now worth about $12.8 billion as of this writing, almost all of which he has said he plans to give away to EA-aligned causes. (Disclosure: Future Perfect, which is partly supported through philanthropic giving, received a project grant from Building a Stronger Future, Bankman-Fried’s philanthropic arm.)

Along with the size of its collective bank account, EA’s priorities have also changed. For a long time, much of the movement’s focus was on “near-termist” goals: reducing poverty or preventable death or factory farming abuses right now, so humans and animals can live better lives in the near-term.

But as the movement has grown richer, it is also increasingly becoming “longtermist.” That means embracing an argument that because so many more humans and other intelligent beings could live in the future than live today, the most important thing for altruistic people to do in the present moment is to ensure that that future comes to be at all by preventing existential risks — and that it’s as good as possible. The impending release of What We Owe the Future, an anticipated treatise on longtermism by Oxford philosopher and EA co-founder Will MacAskill, is indicative of the shift.


The movement has also become more political — or, rather, its main benefactors have become more political. Bankman-Fried was one of the biggest donors to Joe Biden’s 2020 campaign, as were Cari Tuna and Dustin Moskovitz, the Facebook/Asana billionaires who before Bankman-Fried were by far the dominant financial contributors to EA causes. More recently, Bankman-Fried spent $10 million in an unsuccessful attempt to get Carrick Flynn, a longtime EA activist, elected to Congress from Oregon. Bankman-Fried has said he’ll spend “north of $100 million” on the 2024 elections, spread across a range of races; when asked in an interview with podcast host Jacob Goldstein if he would donate “a lot of money” to the candidate running against Trump, he replied, “That’s a pretty decent guess.”

But his motivations aren’t those of an ordinary Democratic donor — Bankman-Fried told Goldstein that fighting Trump was less about promoting Democrats than ensuring “sane governance” in the US, which could have “massive, massive, ripple effects on what the future looks like.” Indeed, Bankman-Fried is somewhat bipartisan in his giving. While the vast majority of his political donations have gone to Democrats, 16 of the 39 candidates endorsed by the Bankman-Fried-funded Guarding Against Pandemics PAC are Republicans as of this writing.

Effective altruism in 2022 is richer, weirder, and wields more political power than effective altruism 10, or even five years ago. It’s changing and gaining in importance at a rapid pace. The changes represent a huge opportunity — and also novel dangers that could threaten the sustainability and health of the movement. More importantly, the changes could either massively expand or massively undermine effective altruism’s ability to improve the broader world.
The origins of effective altruism

The term “effective altruism,” and the movement as a whole, can be traced to a small group of people based at Oxford University about 12 years ago.

In November 2009, two philosophers at the university, Toby Ord and Will MacAskill, started a group called Giving What We Can, which promoted a pledge whose takers commit to donating 10 percent of their income to effective charities every year (several Voxxers, including me, have signed the pledge).

In 2011, MacAskill and Oxford student Ben Todd co-founded a similar group called 80,000 Hours, which meant to complement Giving What We Can’s focus on how to give most effectively with a focus on how to choose careers where one can do a lot of good. Later in 2011, Giving What We Can and 80,000 Hours wanted to incorporate as a formal charity, and needed a name. About 17 people involved in the group, per MacAskill’s recollection, voted on various names, like the “Rational Altruist Community” or the “Evidence-based Charity Association.”

The winner was “Centre for Effective Altruism.” This was the first time the term took on broad usage to refer to this constellation of ideas.

The movement blended a few major intellectual sources. The first, unsurprisingly, came from philosophy. Over decades, Peter Singer and Peter Unger had developed an argument that people in rich countries are morally obligated to donate a large share of their income to help people in poorer countries. Singer memorably analogized declining to donate large shares of your income to charity to letting a child drowning in a pond die because you don’t want to muddy your clothes rescuing him. Hoarding wealth rather than donating it to the world’s poorest, as Unger put it, amounts to “living high and letting die.” Altruism, in other words, wasn’t an option for a good life — it was an obligation.

Ord told me his path toward founding effective altruism began in 2005, when he was completing his BPhil, Oxford’s infamously demanding version of a philosophy master’s. The degree requires that students write six 5,000-word, publication-worthy philosophy papers on pre-assigned topics, each over the course of a few months. One of the topics listed Ord’s year was, “Ought I to forgo some luxury whenever I can thereby enable someone else’s life to be saved?” That led him to Singer and Unger’s work, and soon the question — ought I forgo luxuries? which ones? how much? — began to consume his thoughts.

Then, Ord’s friend Jason Matheny (then a colleague at Oxford, today CEO of the Rand Corporation) pointed him to a project called DCP2. DCP stands for “Disease Control Priorities” and originated with a 1993 report published by the World Bank that sought to measure how many years of life could be saved by various public health projects. Ord was struck by just how vast the difference in cost-effectiveness between the interventions in the report was. “The best interventions studied were about 10,000 times better than the least good ones,” he notes.

It occurred to him that if residents of rich countries are morally obligated to help residents of less wealthy ones, they might be equally obligated to find the most cost-effective ways to help. Spending $50,000 on the most efficient project saved 10 times as many life-years as spending $50 million on the least efficient project would. Directing resources toward the former, then, would vastly increase the amount of good that rich-world donors could do. It’s not enough merely for EAs to give — they must give effectively.

Ord and his friends at Oxford weren’t the only ones obsessing over cost-effectiveness. Over in New York, an organization called GiveWell was taking shape. Founded in 2007 by Holden Karnofsky and Elie Hassenfeld, both alums of the eccentric hedge fund Bridgewater Associates, the group sought to identify the most cost-effective giving opportunities for individual donors. At the time, such a service was unheard of — charity evaluators at that point, like Charity Navigator, focused more on ensuring that nonprofits were transparent and spent little on overhead. By making judgments about which nonprofits to give to — a dollar to the global poor was far better than, say, a museum — GiveWell ushered in a sea change in charity evaluation.

Those opportunities were overwhelmingly found outside developed countries, primarily in global health. By 2011, the group had settled on recommending international global health charities focused on sub-Saharan Africa.

“Even the lowest-income people in the U.S. have (generally speaking) far greater material wealth and living standards than the developing-world poor,” the group explains today. “We haven’t found any US poverty-targeting intervention that compares favorably to our international priority programs” in terms of quality of evidence or cost-effectiveness. If the First Commandment of EA is to give, and the Second Commandment is to do so effectively, the Third Commandment is to do so where the problem is tractable, meaning that it’s actually possible to change the underlying problem by devoting more time and resources to it. And as recent massive improvements in life expectancy suggest, global health is highly tractable.

Before long, it was clear that Ord and his friends in Oxford were doing something very similar to what Hassenfeld and Karnofsky were doing in Brooklyn, and the two groups began talking (and, of course, digging into each other’s cost-effectiveness analyses, which in EA is often the same thing). That connection would prove immensely important to effective altruism’s first surge in funding.

In 2011, the GiveWell team made two very important new friends: Cari Tuna and Dustin Moskovitz.

The latter was a co-founder of Facebook; today he runs the productivity software company Asana. He and his wife Tuna, a retired journalist, command some $13.8 billion as of this writing, and they intend to give almost all of it away to highly effective charities. As of July 2022, their foundation has given out over $1.7 billion in publicly listed grants.

After connecting with GiveWell, they wound up using the organization as a home base to develop what is now Open Philanthropy, a spinoff group whose primary task is finding the most effective recipients for Tuna and Moskovitz’s fortune. Because of the vastness of that fortune, Open Phil’s comparatively long history (relative to, say, FTX Future Fund), and the detail and rigor of its research reports on areas it’s considering funding, the group has become by far the most powerful single entity in the EA world.

Tuna and Moskovitz were the first tech fortune in EA, but they would not be the last. Bankman-Fried, the child of two “utilitarian leaning” Stanford Law professors, embraced EA ideas as an undergraduate at MIT, and decided to “earn to give.”

After graduation in 2014, he went to a small firm called Jane Street Capital, then founded the trading firm Alameda Research and later FTX, an exchange for buying and selling crypto and crypto-related assets, like futures. By 2021, FTX was valued at $18 billion, making the then-29-year-old a billionaire many times over. He has promised multiple times to give almost that entire fortune away.

"It’s safe to say that effective altruism is no longer the small, eclectic club of philosophers, charity researchers, and do-gooders it was just a decade ago."

The steady stream of billionaires embracing EA has left it in an odd situation: It has a lot of money, and substantial uncertainty about where to put it all, uncertainty which tends to grow rather than ebb with the movement’s fortunes. In July 2021, Ben Todd, who co-founded and runs 80,000 Hours, estimated that the movement had, very roughly, $46 billion at its disposal, an amount that had grown by 37 percent a year since 2015. And only 1 percent of that was being spent every year.

Moreover, the sudden wealth altered the role longtime, but less wealthy, EAs play in the movement. Traditionally, a key role of many EAs was donating to maximize funding to effective causes. Jeff Kaufman, one of the EAs engaged in earning-to-give who I profiled back in 2013, until recently worked as a software engineer at Google. In 2021, he and his wife Julia Wise (an even bigger figure in EA as the full-time community liaison for the Center for Effective Altruism) earned $782,158 and donated $400,000 (they make all these numbers public for transparency).

That’s hugely admirable, and much, much more than I donated last year. But that same year, Open Phil distributed over $440 million (actually over $480 million due to late grants, a spokesperson told me). Tuna and Moskovitz alone had the funding capacity of over a thousand less-wealthy EAs, even high-profile EAs dedicated to the movement who worked at competitive, six-figure jobs. Earlier this year, Kaufman announced he was leaving Google, and opting out of “earning to give” as a strategy, to do direct work for the Nucleic Acid Observatory, a group that seeks to use wastewater samples to detect future pandemics early. Part of his reasoning, he wrote on his blog, was that “There is substantially more funding available within effective altruism, and so the importance of earning to give has continued to decrease relative to doing things that aren’t mediated by donations.”

That said, the new funding comes with a lot of uncertainty and risk attached. Given how exposed EA is to the financial fortunes of a handful of wealthy individuals, swings in the markets can greatly affect the movement’s short-term funding conditions.

In June 2022, the crypto market crashed, and Bankman-Fried’s net worth, as estimated by Bloomberg, crashed with it. He peaked at $25.9 billion on March 29, and as of June 30 was down more than two-thirds to $8.1 billion; it’s since rebounded to $12.8 billion. That’s obviously nothing to sneeze at, and his standard of living isn’t affected at all. (Bankman-Fried is the kind of vegan billionaire known for eating frozen Beyond Burgers, driving a Corolla, and sleeping on a bean bag chair.) But you don’t need to have Bankman-Fried’s math skills to know that $25.9 billion can do a lot more good than $12.8 billion.

Tuna and Moskovitz, for their part, still hold much of their wealth in Facebook stock, which has been sliding for months. Moskovitz’s Bloomberg-estimated net worth peaked at $29 billion last year. Today it stands at $13.8 billion. “I’ve discovered ways of losing money I never even knew I had in me,” he jokingly tweeted on June 19.

But markets change fast, crypto could surge again, and in any case Moskovitz and Bankman-Fried’s combined net worth of $26.5 billion is still a lot of money, especially in philanthropic terms. The Ford Foundation, one of America’s longest-running and most prominent philanthropies, is only worth $17.4 billion. EA now commands one of the largest financial arsenals in all of US philanthropy. And the sheer bounty of funding is leading to a frantic search for places to put it.

One option for that bounty is to look to the future — the far future. In February 2022, the FTX Foundation, a philanthropic entity founded chiefly by Bankman-Fried, along with his FTX colleagues Gary Wang and Nishad Singh and his Alameda colleague Caroline Ellison, announced its “Future Fund”: a project meant to donate money to “improve humanity’s long-term prospects” through the “safe development of artificial intelligence, reducing catastrophic biorisk, improving institutions, economic growth,” and more.

The fund announced it was looking to spend at least $100 million in 2022 alone, and it already has: On June 30, barely more than four months after the fund’s launch, it stated that it had already given out $132 million. Giving money out that fast is hard. Doing so required giving in big quantities ($109 million was spent on grants over $500,000 each), as well as unusual methods like “regranting” — giving over 100 individuals trusted by the Future Fund budgets of hundreds of thousands or even millions of dollars each, and letting them distribute it as they like.

The rush of money led to something of a gold-rush vibe in the EA world, enough so that Nick Beckstead, CEO of the FTX Foundation and a longtime grant-maker for Open Philanthropy, posted an update in May clarifying the group’s methods. “Some people seem to think that our procedure for approving grants is roughly ‘YOLO #sendit,’” he wrote. “This impression isn’t accurate.”

But that impression nonetheless led to significant soul-searching in the EA community. The second most popular post ever on the EA Forum, the highly active message board where EAs share ideas in minute detail, is grimly titled, “Free-spending EA might be a big problem for optics and epistemics.” Author George Rosenfeld, a founder of the charitable fundraising group Raise, worried that the big surge in EA funding could lead to free-spending habits that alter the movement’s culture — and damage its reputation by making it look like EAs are using billionaires’ money to fund a cushy lifestyle for themselves, rather than sacrificing themselves to help others.

Rosenfeld’s is the second most popular post on the EA Forum. The most popular post is a partial response to him on the same topic by Will MacAskill, one of EA’s founders. MacAskill is now deeply involved in helping decide where the funding goes. Not only is he the movement’s leading intellectual, he’s on staff at the FTX Future Fund and an advisor at the EA grant-maker Longview Philanthropy.

He began, appropriately: “Well, things have gotten weird, haven’t they?”
The shift to longtermism

Comparing charities fighting global poverty is really hard. But it’s also, in a way, EA-on-easy-mode. You can actually run experiments and see if distributing bed nets saves lives (it does, by the way). The outcomes of interest are relatively short-term and the interventions evaluated can be rigorously tested, with little chance that giving will do more harm than good.

Hard mode comes in when you expand the group of people you’re aiming to help from humans alive right now to include humans (and other animals) alive thousands or millions of years from now.

From 2015 to the present, Open Philanthropy distributed over $480 million to causes it considers related to “longtermism.” All $132 million given to date by the FTX Future Fund is, at least in theory, meant to promote longtermist ideas and goals.

Which raises an obvious question: What the gently caress is longtermism?

The basic idea is simple: We could be at the very, very start of human history. Homo sapiens emerged some 200,000-300,000 years ago. If we destroy ourselves now, through nuclear war or climate change or a mass pandemic or out-of-control AI, or fail to prevent a natural existential catastrophe, those 300,000 years could be it.
"He began, appropriately: “Well, things have gotten weird, haven’t they?” "

But if we don’t destroy ourselves, they could just be the beginning. Typical mammal species last 1 million years — and some last much longer. Economist Max Roser at Our World in Data has estimated that if (as the UN expects) the world population stabilizes at 11 billion, greater wealth and nutrition lead average life expectancy to rise to 88, and humanity lasts another 800,000 years (in line with other mammals), there could be 100 trillion potential people in humanity’s future.

By contrast, only about 117 billion humans have ever lived, according to calculations by demographers Toshiko Kaneda and Carl Haub. In other words, if we stay alive for the duration of a typical mammalian species’ tenure on Earth, that means 99.9 percent of the humans who will ever live have yet to live.

And those people, obviously, have virtually no voice in our current society, no vote for Congress or president, no union and no lobbyist. Effective altruists love finding causes that are important and neglected: What could be more important, and more neglected, than the trillions of intelligent beings in humanity’s future?

In 1984, Oxford philosopher Derek Parfit published his classic book on ethics, Reasons and Persons, which ended with a meditation on nuclear war. He asked readers to consider three scenarios:

1. Peace.
2. A nuclear war that kills 99 percent of the world’s existing population.
3. A nuclear war that kills 100 percent.

Obviously 2 and 3 are worse than 1. But Parfit argued that the difference between 1 and 2 paled in comparison to the difference between 2 and 3. “Civilization began only a few thousand years ago,” he noted. “If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history.” Scenario 3 isn’t just worse than 2, it’s dramatically worse, because by killing off the final 1 percent of humanity, scenario 3 destroys humanity’s whole future.

This line of thinking has led EAs to foreground existential threats as an especially consequential cause area. Even before Covid-19, EAs were early in being deeply concerned about the risk of a global pandemic, especially a human-made one coming about due to ever-cheaper biotech tools like CRISPR, which could be far worse than anything nature can cook up. Open Philanthropy spent over $65 million on the issue, including seven- and eight-figure grants to the Johns Hopkins Center for Health Security and the Nuclear Threat Initiative’s biodefense team, before 2020. It’s added another $70 million since. More recently, Bankman-Fried has funded a group led by his brother, Gabe, called Guarding Against Pandemics, which lobbies Congress to fund future pandemic prevention more aggressively.

Nuclear war has gotten some attention too: Longview Philanthropy, an EA-aligned grant-maker supported by both Open Philanthropy and FTX, recently hired Carl Robichaud, a longtime nuclear policy grant-maker, partly in reaction to more traditional donors like the MacArthur Foundation pulling back from trying to prevent nuclear war.

But it is AI that has been a dominant focus in EA over the last decade. In part this reflects the very real belief among many AI researchers that human-level AI could be coming soon — and could be a threat to humanity.

This is in no way a universal belief, but it’s a common enough one to be worrisome. A poll this year found that leading AI researchers put around 50-50 odds on AI surpassing humans “in all tasks” by 2059 — and that was before some of the biggest strides in recent AI research over the last five years. I will be 71 years old in 2061. It’s not even the long-term future; it’s within my expected lifetime. If you really believe superintelligent, perhaps impossible-to-control machines are coming in your lifetime, it makes sense to panic and spend big.

That said, the AI argument strikes many outside EA as deeply wrong-headed, even offensive. If you care so much about the long term, why focus on this when climate change is actually happening right now? And why care so much about the long term when there is still desperate poverty around the world? The most vociferous critics see the longtermist argument as a con, an excuse to do interesting computer science research rather than work directly in the Global South to solve actual people’s problems. The more temperate see longtermism as dangerously alienating effective altruists from the day-to-day practice of helping others.

I know this because I used to be one of these critics. I think, in retrospect, I was wrong, and I was wrong for a silly reason: I thought the idea of a super-intelligent AI was ridiculous, that these kinds of nerdy charity folks had read too much sci-fi and were fantasizing wildly.

I don’t think that anymore. The pace of improvement in AI has gotten too rapid to ignore, and the damage that even dumb AI systems can do, when given too much societal control, is extreme. But I empathize deeply with people who have the reaction I did in 2015: who look at EA and see people who talked themselves out of giving money to poor people and into giving money to software engineers.

Moreover, while I buy the argument that AI safety is an urgent, important problem, I have much less faith that anyone has a tractable strategy for addressing it. (I’m not alone in that uncertainty — in a podcast interview with 80,000 Hours, Bankman-Fried said of AI risk, “I think it’s super important and I also don’t feel extremely confident on what the right thing to do is.”)

That, on its own, might not be a reason for inaction: If you have no reliable way to address a problem you really want to address, it sometimes makes sense to experiment and fund a bunch of different approaches in hopes that one of them will work. This is what funders like Open Phil have done to date.

But that approach doesn’t necessarily work when there’s huge “sign uncertainty” — when an intervention has a reasonable chance of making things better or worse.

This is a particularly relevant concern for AI. One of Open Phil’s early investments was a $30 million grant in 2017 to OpenAI, which has since emerged as one of the world’s leading AI labs. It has created the popular GPT-3 language model and DALL-E visual model, both major steps forward for machine learning models. The grant was intended to help by “creating an environment in which people can effectively do technical research on AI safety.” It may have done that — but it also may have simply accelerated the pace of progress toward advanced AI in a way that amplifies the dangers such AI represents. We just don’t know.

Partially for those reasons, I haven’t started giving to AI or longtermist causes just yet. When I donate to buy bed nets, I know for sure that I’m actually helping, not hurting. Our impact on the far future, though, is always less certain, no matter our intentions.
The move to politics

EA’s new wealth has also allowed it vastly more influence in an arena where the movement is bound to gain more attention and make new enemies: politics.

EA has always been about getting the best bang for your buck, and one of the best ways for philanthropists to get what they want has always been through politics. A philanthropist can donate $5 million to start their own school … or they can donate $5 million to lobby for education reforms that mold existing schools into something more like their ideal. The latter almost certainly will affect more students than the former.

So from at least the mid-2010s, EAs, and particularly EA donors, embraced political change as a lever, and they have some successes to show for it. The late 2010s shift of the Federal Reserve toward caring more about unemployment and less about inflation owes a substantial amount to advocacy from groups like Fed Up and Employ America — groups for which Open Philanthropy was the principal funder.

Tuna and Moskovitz have been major Democratic donors since 2016, when they spent $20 million for the party in an attempt to beat Donald Trump. The two gave even more, nearly $50 million, in 2020, largely through the super-PAC Future Forward. Moskovitz was the group’s dominant donor, but former Google CEO Eric Schmidt, Twitter co-founder Evan Williams, and Bankman-Fried supported it too. The watchdog group OpenSecrets listed Tuna as the 7th biggest donor to outside spending groups involved in the 2020 election — below the likes of the late Sheldon Adelson or Michael Bloomberg, but far above big-name donors like George Soros or Reid Hoffman. Bankman-Fried took 47th place, above the likes of Illinois governor and billionaire J.B. Pritzker and Steven Spielberg.

As in philanthropy, the EA political donor world has focused obsessively on maximizing impact per dollar. David Shor, the famous Democratic pollster, has consulted for Future Forward and similar groups for years; one of my first in-person interactions with him was at an EA Global conference in 2018, where he was trying to understand these people who were suddenly very interested in funding Democratic polling. He told me that Moskovitz’s team was the first he had ever seen who even asked how many votes-per-dollar a given ad buy or field operation would produce.

Bankman-Fried has been, if anything, more enthusiastic about getting into politics than Tuna and Moskovitz. His mother, Stanford Law professor Barbara Fried, helps lead the multi-million dollar Democratic donor group Mind the Gap. The pandemic prevention lobbying effort led by his brother Gabe was one of his first big philanthropic projects. And Protect Our Future, a super PAC he’s the primary supporter of that’s led by longtime Shor colleague and dedicated EA Michael Sadowsky, has spent big on the 2022 midterms already. That includes $10 million supporting Carrick Flynn, a longtime EA who co-founded the Center for the Governance of AI at Oxford, in his unsuccessful run for Congress in Oregon.

That intervention made perfect sense if you’re immersed in the EA world. Flynn is a true believer; he’s obsessed with issues like AI safety and pandemic prevention. Getting someone like him in Congress would give the body a champion for those causes, which are largely orphaned within the House and Senate right now, and could go far with a member monomaniacally focused on them.

But to Oregon voters, little of it made sense. Willamette Week, the state’s big alt-weekly, published a cover-story exposé portraying the bid as a Bahamas-based crypto baron’s attempt to buy a seat in Congress, presumably to further crypto interests. It didn’t help that Bankman-Fried had made several recent trips to testify before Congress and argue for his preferred model of crypto regulation in the US — or that he prominently appeared at an FTX-sponsored crypto event in the Bahamas with Bill Clinton and Tony Blair, in a flex of his new wealth and influence. Bankman-Fried is lobbying Congress on crypto, he’s bankrolling some guy’s campaign for Congress — and he expects the world to believe that he isn’t doing that to get what he wants on crypto?

It was a big optical blunder, one that threatened to make not just Bankman-Fried but all of EA look like a craven cover for crypto interests. The Flynn campaign was a reminder of just how much of a culture gap remains between EA and the wider world, and in particular the world of politics.

And that gap could widen still more, and become more problematic as longtermism, with all its strangeness, becomes a bigger part of EA. “We should spend more to save people in poor countries from preventable diseases” is an intelligible, if not particularly widely held, position in American politics. “We should be representing the trillions of people who could be living millions of years from now” is not.


An article in the New Yorker about Will MacAskill, whose new book just came out:
https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism

quote:

The philosopher William MacAskill credits his personal transfiguration to an undergraduate seminar at Cambridge. Before this shift, MacAskill liked to drink too many pints of beer and frolic about in the nude, climbing pitched roofs by night for the life-affirming flush; he was the saxophonist in a campus funk band that played the May Balls, and was known as a hopeless romantic. But at eighteen, when he was first exposed to “Famine, Affluence, and Morality,” a 1972 essay by the radical utilitarian Peter Singer, MacAskill felt a slight click as he was shunted onto a track of rigorous and uncompromising moralism. Singer, prompted by widespread and eradicable hunger in what’s now Bangladesh, proposed a simple thought experiment: if you stroll by a child drowning in a shallow pond, presumably you don’t worry too much about soiling your clothes before you wade in to help; given the irrelevance of the child’s location—in an actual pond nearby or in a metaphorical pond six thousand miles away—devoting resources to superfluous goods is tantamount to allowing a child to drown for the sake of a dry cleaner’s bill. For about four decades, Singer’s essay was assigned predominantly as a philosophical exercise: his moral theory was so onerous that it had to rest on a shaky foundation, and bright students were instructed to identify the flaws that might absolve us of its demands. MacAskill, however, could find nothing wrong with it.

DrSunshine
Mar 23, 2009

Did I just say that out loud~~?!!!
Existential Risk philosopher Phil Torres (whom I reviewed most favorably in my OP) wrote a Current Affairs article that clearly sums up a lot of my criticisms of the "longtermist/EA/XR" community's philosophical assumptions:

https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk

quote:

Longtermism should not be confused with “long-term thinking.” It goes way beyond the observation that our society is dangerously myopic, and that we should care about future generations no less than present ones. At the heart of this worldview, as delineated by Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.

This is what “our potential” consists of, and it constitutes the ultimate aim toward which humanity as a whole, and each of us as individuals, are morally obligated to strive. An existential risk, then, is any event that would destroy this “vast and glorious” potential, as Toby Ord, a philosopher at the Future of Humanity Institute, writes in his 2020 book The Precipice, which draws heavily from earlier work in outlining the longtermist paradigm. (Note that Noam Chomsky just published a book also titled The Precipice.)

The point is that when one takes the cosmic view, it becomes clear that our civilization could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years. So, imagine a situation in which you could either lift 1 billion present people out of extreme poverty or benefit 0.00000000001 percent of the 10^23 biological humans who Bostrom calculates could exist if we were to colonize our cosmic neighborhood, the Virgo Supercluster. Which option should you pick? For longtermists, the answer is obvious: you should pick the latter. Why? Well, just crunch the numbers: 0.00000000001 percent of 10^23 people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As the FHI longtermists Hilary Greaves and Will MacAskill—the latter of whom is said to have cofounded the Effective Altruism movement with Toby Ord—write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”

...

All of this is to say that I’m not especially optimistic about convincing longtermists that their obsession with our “vast and glorious” potential (quoting Ord again) could have profoundly harmful consequences if it were to guide actual policy in the world. As the Swedish scholar Olle Häggström has disquietingly noted, if political leaders were to take seriously the claim that saving billions of living, breathing, actual people today is morally equivalent to negligible reductions in existential risk, who knows what atrocities this might excuse? If the ends justify the means, and the “end” in this case is a veritable techno-Utopian playground full of 10^58 simulated posthumans awash in “the pulsing ecstasy of love,” as Bostrom writes in his grandiloquent “Letter from Utopia,” would any means be off-limits? While some longtermists have recently suggested that there should be constraints on which actions we can take for the far future, others like Bostrom have literally argued that preemptive violence and even a global surveillance system should remain options for ensuring the realization of “our potential.” It’s not difficult to see how this way of thinking could have genocidally catastrophic consequences if political actors were to “[take] Bostrom’s argument to heart,” in Häggström’s words.
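
Just to make that "crunch the numbers" step from the quote explicit, here's a quick back-of-the-envelope sketch in Python. It uses nothing beyond the two figures already quoted above (Bostrom's 10^23 potential people and the 0.00000000001 percent), so treat it as an illustration of the arithmetic, not anything from Torres or Bostrom themselves:

code:

# back-of-the-envelope check of the longtermist arithmetic quoted above
potential_people = 10 ** 23        # Bostrom's figure for the Virgo Supercluster
fraction = 1e-11 / 100             # 0.00000000001 percent, written as a plain fraction
beneficiaries = potential_people * fraction

print(f"{beneficiaries:.3e}")            # ~1.000e+10, i.e. ten billion people
print(round(beneficiaries / 1e9, 2))     # ~10.0 -- ten times the 1 billion people in poverty today

That factor of ten is doing all the work: multiply by a big enough hypothetical future and it swamps any present-day consideration, which is exactly how the "pick the latter" conclusion falls out.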

They're also behaving like a creepy mind-control cult:

quote:

In fact, numerous people have come forward, both publicly and privately, over the past few years with stories of being intimidated, silenced, or “canceled.” (Yes, “cancel culture” is a real problem here.) I personally have had three colleagues back out of collaborations with me after I self-published a short critique of longtermism, not because they wanted to, but because they were pressured to do so from longtermists in the community. Others have expressed worries about the personal repercussions of openly criticizing Effective Altruism or the longtermist ideology. For example, the moral philosopher Simon Knutsson wrote a critique several years ago in which he notes, among other things, that Bostrom appears to have repeatedly misrepresented his academic achievements in claiming that, as he wrote on his website in 2006, “my performance as an undergraduate set a national record in Sweden.” (There is no evidence that this is true.) The point is that, after doing this, Knutsson reports that he became “concerned about his safety” given past efforts to censure certain ideas by longtermists with clout in the community.

EDIT:

Given that OpenAI, which has recently been in the news with DALL-E, received substantial funding from Open Philanthropy, which is ostensibly concerned with AI Safety and existential risk, I feel like there's almost a kind of dialectical irony in this. Just as Marx wrote in the Communist Manifesto:

quote:

The development of modern industry, therefore, cuts from under its feet the very foundation on which the bourgeoisie produces and appropriates products. What the bourgeoisie therefore produces, above all, are its own grave diggers.

I can't help but wonder, given the incredibly creepy advances made by OpenAI recently, whether AI Safety research into AGI risks instantiating the very thing it fears most -- an Unfriendly AI, or some sort of immortal, posthuman oligarchy formed from currently existing billionaires. I fear that the longtermist movement is becoming humanity's own grave diggers.

DrSunshine fucked around with this message at 18:13 on Aug 19, 2022
