Aramis
Sep 22, 2009



One aspect of this that keeps me up at night is that acknowledging a risk as existential opens the door to corrective measures that would otherwise be ethically inconceivable.

The real problem for me arises when you turn this around: people who would like to push for inhuman policies, say genocide, have a vested interest in letting potentially existential risks, such as global warming, worsen to the point where they can push their agenda.

What I would like is to somehow make today's denialists materially responsible for the actions they might have a hand in making necessary. But the legwork for this needs to start now, before the risk becomes clearly existential in the first place. And that's not fair either, since greed-based or optimism-based denialism, while still bad, does not warrant the type of hell I'd wish on a theoretical genocidal denialist.

Is there a good ethics framework out there that can tackle this kind of stuff from a reasonably practical standpoint?

Aramis fucked around with this message at 23:22 on Oct 15, 2020


Aramis
Sep 22, 2009



Raenir Salazar posted:

I was tempted to make a thread but it probably falls under this thread: is the working class approaching obsolescence? CGP makes a pretty compelling argument that once insurance rates make robots/automation more cost-competitive than human workers, labour, primarily blue-collar labour, is going to be rapidly phased out worldwide in favour of machines that don't complain and don't unionize.

The development of automation has slowed down a bit since that video, but given enough time I find it difficult to imagine that the "working class" will continue to exist as we know it, rather than becoming composed of grunt coders and what is commonly referred to as the "precariat": people in precarious socio-financial positions but not necessarily labour or blue-collar positions.

I think the downwards pressure imposed by technological progress and innovation is going to push hundreds of millions of people out of the middle class over the next few decades.

People like to point out the falling cost of automation, and how it's poised to replace workers, but I think the picture being painted is a little misleading. Customised automation has, and will for a long time retain, a very large upfront cost that is amortised over time, making it suitable only for heavily repeated work. On the flip side, general-purpose automation, usable for small-batch tasks, is much harder to develop and doesn't bring anything to the table for large-scale manufacturing that bespoke automation doesn't already provide.
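To put very rough numbers on that amortisation point, here's a toy break-even sketch in Python. Every figure is a made-up assumption; the point is only the shape of the tradeoff.

```python
# Toy break-even model: bespoke automation vs. human labour.
# All numbers are invented assumptions, not real cost data.

UPFRONT_COST = 500_000.0       # one-time cost to develop/install a bespoke cell
MACHINE_COST_PER_UNIT = 0.50   # marginal cost per part once the cell is running
HUMAN_COST_PER_UNIT = 4.00     # fully loaded labour cost per part

def amortised_cost(volume: int) -> float:
    """Per-part cost of the automated cell once spread over `volume` parts."""
    return UPFRONT_COST / volume + MACHINE_COST_PER_UNIT

# Automation wins only once the amortised cost drops below the labour cost:
#   UPFRONT/volume + machine < human  =>  volume > UPFRONT / (human - machine)
break_even = UPFRONT_COST / (HUMAN_COST_PER_UNIT - MACHINE_COST_PER_UNIT)
print(f"break-even volume: ~{break_even:,.0f} parts")

for volume in (10_000, 100_000, 1_000_000):
    print(f"{volume:>9,} parts -> ${amortised_cost(volume):.2f}/part "
          f"(vs ${HUMAN_COST_PER_UNIT:.2f}/part for labour)")
```

Below that break-even volume the bespoke cell is strictly worse than just paying people, which is exactly why it only gets built for heavily repeated work.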

This creates a tension where the R&D purse-holders don't "really" have an interest in general-purpose automation, since the ROI doesn't make sense over the timescales they care about. This leaves us with effectively hobbyist R&D driving general-purpose automation. Progress is still being made, but at a pace several orders of magnitude slower than industrial R&D.

A good example of this is 3D printing, ostensibly the most successful general-purpose automation tool out there. Almost all of the "serious" 3D printers in use are prototyping machines (as opposed to production machines), with the notable exception of the medical field, where they are legitimately the only way to manufacture certain things (and at that point, it's not automation anymore, since there is no alternative).

What I'm getting at is that I'm fairly confident that the scope of tasks poised to be replaced by automation is vastly overestimated, because the R&D investment for a large portion of it has an effective ceiling.

As usual, there could be some massive breakthrough that changes everything, but I wouldn't rely on the expectation that it's bound to happen any day now.

Aramis fucked around with this message at 15:29 on Oct 19, 2020

Aramis
Sep 22, 2009



suck my woke dick posted:

I think the answer that current existential risk people would give is that the risk is so large that intersectionality doesn't matter. Who cares about whether the apocalypse kills marginalised groups first if privileged groups are merely next in line to die.

It really depends on where you draw the line between existential and quasi-existential risk.

Global warming is a good example of that. It's possibly (and arguably likely) an existential risk that will eventually be "downgraded" to a risk that is existential for a portion of the population, but not humanity as a whole. And that division is certainly intersectional in nature. This matters immediately, because intersectionality will certainly be involved in determining what actions should, and will, be taken to try to mitigate the existential risk.

Aramis fucked around with this message at 17:46 on Oct 18, 2020

Aramis
Sep 22, 2009



DrSunshine posted:

Anyway, a distinction that I've added to the taxonomy of XRs in my mind is conditional existential risk versus final existential risk. Expressed in the terminology of probability, P(A|B) is the probability of XR A given conditional risk B, while P(A) is the total probability of XR A. An example of a conditional XR would be, again, abrupt global climate change, which enhances overall factors for extinction while being a somewhat unlikely candidate for causing extinction on its own; a final XR would be a Ceres-sized asteroid crashing into the Earth. I think it's worth making this distinction because while not all GCRs are XRs, some GCRs could conditionally become XRs, either on their own or by enhancing the risk of subsequent GCRs that push the overall risk into total extinction.

This is an interesting distinction, but I think it needs to be paired with a separate "mitigability" axis in order to be of any real use. "Final existential risk" contains too many events that are fundamentally conversation-ending beyond discussions of acceptance; I'd contend that it consists mostly of such events. The fact that you instinctively went for "Ceres-sized asteroid crashing into the Earth" as a representative example of the category kind of attests to that.
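To make the conditional/final distinction concrete in DrSunshine's P(A|B) notation, here's a toy law-of-total-probability calculation. Every probability is invented purely for illustration.

```python
# Toy decomposition of extinction risk into a "conditional" and a "final" pathway.
# All probabilities are invented for illustration; none of this is an estimate.

# Final XR: directly conversation-ending if it happens at all.
p_asteroid = 1e-4               # P(Ceres-sized impact)

# Conditional XR: a GCR that mostly raises extinction risk via follow-on effects.
p_climate = 0.5                 # P(B): abrupt climate change occurs
p_ext_given_climate = 0.02      # P(A|B): extinction via cascading GCRs
p_ext_given_no_climate = 0.001  # P(A|not B): baseline without that stressor

# Law of total probability: P(A) = P(A|B)P(B) + P(A|~B)P(~B)
p_ext_via_climate = (p_ext_given_climate * p_climate
                     + p_ext_given_no_climate * (1.0 - p_climate))

print(f"extinction risk via climate pathway: {p_ext_via_climate:.4f}")
print(f"extinction risk via asteroid:        {p_asteroid:.4f}")
```

The mitigability axis shows up naturally here: the conditional pathway has two levers you can pull, P(B) and P(A|B), whereas nothing in the asteroid term moves short of a deflection program.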

Aramis
Sep 22, 2009



Crumbskull posted:

Personally I believe consciousness was a cosmic mistake, a wound inflicted on a universe that does not deserve it, and it will be no great loss if it is gone. I get why this is a minority opinion though lol.

Biological life is shockingly effective at increasing entropy, to the point where abiogenesis can be seen as a thermodynamic evolutionary strategy. On top of that, the more complex the life, the more efficient it is at increasing entropy. What I'm getting at is that there is a very real argument to be made that life, as well as consciousness, is an attempt by the universe to hasten its inevitable heat death.

It's not a mistake, it's a suicide attempt.

Aramis
Sep 22, 2009



DrSunshine posted:

I don't think this is a very enlightening statement. All you've done is make an observation about life as a negentropic process and equate the second law of thermodynamics to suicide, just to give it that wooo dark and edgy nihilistic vibe. It's poetic but ultimately fatuous. Are you saying that a universe full of lifeless rocks and gas would be preferable? Moreover, using the terms "attempt" and "suicide" attributes agency to the universe, when all it is doing is acting out the laws of physics. Furthermore, if we take the strong anthropic principle to be sound, it would appear that life (and perhaps, by extension, consciousness) in a universe with our given arrangement of physical constants would be inevitable: just another physical process guaranteed to occur in a universe that happened to form the way it has. In that sense you couldn't ascribe any moral or subjective value to life's existence; it simply is, in the same sense that black holes are.

My post was specifically targeted at refuting Crumbskull's assertion that life and consciousness are pure happenstance. I'm not actually ascribing intent to the universe here, any more than Darwinian evolution "attempts" to adapt species to their environment.

The suicide comment was, indeed, just poetic edginess.

If you want the actual argument free of flourishes, it's simple: on the one hand, the universe appears to be biased in favour of processes that increase entropy; on the other, biological life as we know it is very good at increasing entropy within the temperature window around the melting point of water. As a result, one could argue that the emergence of life, and its subsequent constant complexification, is directly linked to the universe's bias towards higher entropy.
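For a rough sense of the bookkeeping, here's the standard back-of-the-envelope Clausius calculation for energy flowing through Earth's biosphere. The temperatures are textbook values; the takeaway is the ratio, not the exact figures.

```python
# Back-of-the-envelope entropy bookkeeping for sunlight degraded to waste heat.
# Textbook temperatures; the point is the ratio, not the exact figures.

T_SUN = 5800.0    # K, effective temperature of incoming sunlight
T_EARTH = 290.0   # K, rough temperature at which Earth re-radiates

def entropy_produced(joules: float) -> float:
    """Clausius entropy change (J/K): energy arrives hot, leaves cold.
    dS = Q/T_cold - Q/T_hot > 0."""
    return joules / T_EARTH - joules / T_SUN

# Every joule degraded from sunlight to ~290 K waste heat multiplies
# the entropy carried by that energy by roughly T_SUN / T_EARTH:
print(f"entropy produced per joule: {entropy_produced(1.0):.4f} J/K")
print(f"amplification factor: ~{T_SUN / T_EARTH:.0f}x")
```

Anything on the surface produces entropy this way, of course; the argument is that life is an unusually efficient channel for it, and complex life more so.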

If anything, I find the thought motivating. Living a meaningful life despite the seemingly inevitable end of things is an act of rebellion, and doing so when the closest thing to a purpose you have is to hasten that end is even more rebellious.


Aramis
Sep 22, 2009



I guess I should have clarified that I'm not just talking out of my rear end here; I was relaying (and grossly oversimplifying) some actual research that I really should have linked in my original post.

Sorry about that, here are some links:

https://www.quantamagazine.org/a-new-thermodynamics-theory-of-the-origin-of-life-20140122
https://www.quantamagazine.org/first-support-for-a-physics-theory-of-life-20170726/
