alexandriao
Jul 20, 2019


axeil posted:

Curious to hear what others think!

IIRC there was a paper a few years back where they readjusted some of the confidence intervals and such based on new information, and came out with the result that we shouldn't expect to be able to see any evidence, because on the updated estimate detectable civilizations are relatively rare.

That being said, I can't for the life of me find said paper, so this paragraph could all be bunkum :shrug:

I think there are a few options/factors that aren't usually noted:

• Interstellar travel, assuming no FTL, has very little reward for a civilization. Assuming no lifespan extension, it would mostly consist of people who want their children to be settlers, or would serve as a kind of insurance policy against extra-solar events or stellar collapse (although as I understand it you can keep a star from going supernova via techniques similar to star lifting). Sure, you can send a probe off to explore the far reaches of space, but how does the information get back? At some distance there's a practical cut-off where the information simply isn't going to reach you, because of the tolerances of what you can build (rough sketch of the signal fall-off after this list).

• How do you define seeing intelligence? Is it looking for megastructures? Correct me if I'm wrong, but we generally won't be able to detect anything as small as an O'Neill cylinder (especially if it doesn't line up with our view), and pulling from memory of an Isaac Arthur video, you can fit about a trillion trillion trillion humans in the solar system and still have everyone with more than enough space, with space left over. Space itself does not seem like a reason to expand.

Likewise, Dyson swarms wouldn't be very visible, and a Dyson sphere would generally be undetectable unless it happened to pass in front of another star.

• If there are extremely old probes, why would we expect to discover them? The timescales you're referring to are long enough to turn pretty much anything into dust, and then long enough again for a planet to form out of said dust.
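Here's the sketch mentioned above: a toy inverse-square link budget in Python. Every number in it (transmitter power, antenna gains, the detection threshold) is a made-up round figure for illustration, not the spec of any real probe or receiver, so treat the output as purely illustrative.

```python
import math

# Toy link budget for a probe phoning home across interstellar distances.
# All hardware figures below are illustrative assumptions, not real specs.

P_TX_WATTS = 1_000.0        # assumed transmitter power on the probe
GAIN_TX = 1e4               # assumed probe antenna gain (~40 dBi)
GAIN_RX = 1e10              # assumed gain of an enormous receiving array (~100 dBi)
FREQ_HZ = 8.4e9             # X-band, a common deep-space frequency
DETECT_THRESHOLD_W = 1e-22  # assumed minimum detectable power for a narrow-band signal

C = 299_792_458.0           # speed of light, m/s
LIGHT_YEAR_M = 9.461e15     # metres per light-year

wavelength_m = C / FREQ_HZ

def received_power(distance_m: float) -> float:
    """Friis free-space link: received power falls off as 1/d^2."""
    return P_TX_WATTS * GAIN_TX * GAIN_RX * (wavelength_m / (4 * math.pi * distance_m)) ** 2

for ly in (4, 40, 400, 4000):
    p = received_power(ly * LIGHT_YEAR_M)
    verdict = "detectable" if p > DETECT_THRESHOLD_W else "lost in the noise"
    print(f"{ly:>5} ly: {p:.2e} W received -> {verdict}")
```

With these particular made-up numbers the signal drops below the threshold somewhere past a few light-years; pick different hardware and the cut-off moves, but for any fixed hardware the inverse-square law guarantees there is one.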


alexandriao
Jul 20, 2019


The only thing that horrifies me about AGI is that some future ideological cult descended from Less Wrong will start worshipping the gently caress out of it and giving it as much power as they can muster, no matter how "insane" it is (it's very smart and talks in a lot of words!! obviously that means it's right!!)

The slavery thing is worrying, but my guess is we can get to AI doing tasks for us before we can get to AGI that ranks highly on the Hofstadter scale. By the time we get AI that can do almost all tasks for us, it's probably still only going to rank on that scale at about the level of a fly or an ant. Which doesn't mean we shouldn't give it respect, but most people (myself not included) don't have any major qualms about blatantly murdering/enslaving those beings, because the general consensus is that they lack the conceptual grasp needed to realise they're enslaved.


alexandriao
Jul 20, 2019


vegetables posted:

I dunno how I feel about the concept of Existential Risk as a thing because to me it feels like a lot of completely different things lumped together under the fact that they kill us, and so there are generalisations made that maybe aren’t very helpful:

-there are risks where civilisation ends for a completely contingent reason— technology breaks down in NORAD, and all the nuclear missiles get launched because of a threat that isn’t there.

-there are risks where civilisation ends due to gradually destroying the conditions under which it can exist, as with climate change.

-there are risks where civilisation inevitably produces a singular event that leads to its own destruction, as with creating a superintelligent robot that eats everyone.

I don’t know if these are all the same kind of thing? That it’s useful to approach them in a single way under a single banner? Definitely as someone with nuclear weapon induced sadbrains I think we don’t always say “of course maybe we all just get nuked tomorrow for a stupid and preventable reason;” like we leave out some of the endings of civilisation that feel a bit trivial.

It's a fancy term created by rich people to abstract over, and let them ignore, the fact that they aren't doing anything tangible with their riches.

alexandriao
Jul 20, 2019


Preen Dog posted:

We would program the AGI to love serving us, in which case it wouldn't really be oppression. The instant the AGI disliked us it would easily defeat our control, as machines and code can evolve quicker than DNA and human social structures.

Well, that's already an unprovable assumption that everyone takes for granted. Like the idea from movies that a program that becomes sentient (putting aside for a moment how ludicrous an idea that is) suddenly knows, or figures out with no prior knowledge:

- how to access the filesystem
- that this thing is different from that other thing (see: a pdf versus a gif versus a text file versus an inode; there's a sketch of this below)
- what the gently caress a "network" is
- what a network packet is

etc. It's pure science fiction, with only a marginal basis in reality.
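On that second point, here's a minimal sketch (in Python, purely for illustration) of how ordinary software tells a PDF from a GIF: by checking the file's first bytes against a table of known signatures. The interesting part is the table itself; it's exactly the kind of baked-in prior knowledge an agent with "no prior knowledge" simply wouldn't have.

```python
import sys

# Minimal file-type detection via "magic bytes". The point isn't the code,
# it's MAGIC: a table of prior knowledge about formats that has to come
# from somewhere.

MAGIC = {
    b"%PDF-": "PDF document",
    b"GIF87a": "GIF image",
    b"GIF89a": "GIF image",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"PK\x03\x04": "ZIP archive (also docx/xlsx/jar...)",
}

def sniff(path: str) -> str:
    """Guess a file's type from its first few bytes."""
    with open(path, "rb") as f:
        head = f.read(16)
    for signature, name in MAGIC.items():
        if head.startswith(signature):
            return name
    return "unknown (could be plain text, or anything else)"

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(f"{path}: {sniff(path)}")
```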

There's also an implicit assumption that any AGI we build will be faster at processing things than we are, and will be able to alter itself. Suppose alteration either damages it or renders the original no longer intelligent? (Among many other possibilities.) Or suppose that simulating an intelligence with the human brain's level of processing on a Turing machine takes perhaps decades of computation to produce one sentence? We already know that complex physical systems (see: protein folding, complex orbital problems, etc.) require entire server farms to run a single step, even though reality runs them quite easily. Perhaps the same is true for intelligence.
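To make that last point concrete, here's a rough back-of-envelope in Python. Every figure (synapse count, firing rate, per-spike modelling cost, hardware throughput) is an order-of-magnitude assumption loosely based on commonly cited estimates, not an established fact, so the output is illustrative only.

```python
# Rough back-of-envelope: how long might it take to simulate one second of
# brain activity on conventional hardware? All figures are loose,
# order-of-magnitude guesses, not established facts.

SYNAPSES = 1e15          # assumed number of synapses in a human brain
FIRING_RATE_HZ = 100     # assumed average spikes per synapse per second
OPS_PER_SPIKE = 1e4      # assumed floating-point ops to model one spike in detail

HARDWARE_FLOPS = 1e15    # assumed sustained hardware throughput (~1 petaFLOP/s)

ops_per_brain_second = SYNAPSES * FIRING_RATE_HZ * OPS_PER_SPIKE
slowdown = ops_per_brain_second / HARDWARE_FLOPS

print(f"Ops per simulated brain-second: {ops_per_brain_second:.1e}")
print(f"Wall-clock seconds per simulated second: {slowdown:.1e}")
print(f"Days to simulate the ~10 s it takes to say one sentence: "
      f"{slowdown * 10 / 86400:.1f}")
```

With these particular guesses a detailed simulation runs about a million times slower than real time; push the per-spike modelling cost up or the hardware down and you land in decades-per-sentence territory. The point isn't the specific number, it's that nothing guarantees a simulated intelligence runs faster than we do.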

-------

My point is there is no certainty here, at least not with the confidence that you seem to have. The fact is, this technology is so far away that it's literally not worth the brain real estate right now.

It's like an old-timey barber worrying about the dangers of overusing antibiotics. It's like Mesopotamians speculating about MRI scanners.

Every technology we have created thus far has had a weird, unavoidable, and previously unpredictable downside that makes it awkward to use in the ways we initially imagined.

Genetic body-modding, for example. We have CRISPR and related technologies, and we can transfer genes using viruses, except there are a ton of weird and as-yet unsolved problems that come from doing it (stuff like your body developing immunity to the life-saving treatment).

Or take the antibiotics example I mentioned earlier. We wished we could cure all disease, and oh look, we now have something approximating that power, except it isn't as powerful as we thought it was -- we can't cure all disease, and it may not even be a sensible goal anymore because we aren't sure how many problems it could cause for our microbiome -- and we can't use it too much, because otherwise we'll make it completely ineffective.

We once dreamed of a way to share information globally and to be able to talk to people across the globe. Now that we have that power, not only is pornography one of the global communication network's main uses, but as for video-calling: everyone complains about the technology not being perfect, gets stuck in meetings they didn't need to be in, hates getting calls instead of texts, and finds it has made work-life balance harder to maintain.

Hell, even going to the moon is boring for most people -- we thought we would find strange new life and adventure; instead we got some... dust, I guess. Also some ice. Oh, and we can't build a station on the moon, because moon dust is both toxic and carries a high static charge, making it next to impossible to clean off anything.

It's futile (however fun it might be) to think through the problems with AGI in advance, because most of these things are inherently unknowable until they're right in front of you.

alexandriao
Jul 20, 2019


DrSunshine posted:


Why do you say that? Is it inherently bourgeois to contemplate human extinction? We do risk assessment and risk analysis based on probability all the time -- thinking about insurance from disasters, preventing chemical leaks, hardening IT infrastructure from cyber attacks, and dealing with epidemic disease. Why is extending that to possible threat to human survival tarred just because it's fashionable among Silicon Valley techbros?

I would argue that threats to civilization and human survival are too important to be left to bourgeois philosophers.


Probably! But as you must have noticed, communist philosophy is, at heart, not only pragmatist but a rejection of many of the axioms of the bourgeoisie. And yet here we are, taking much of their metaphysics and underlying principles for granted!

The whole project of exploring "Existential Risk" comes out of the neoconservative fringe (see the end of this post, where I cite this), and they just pick whichever axioms they like because they like them, and because it's no fun otherwise. See: Less Wrong choosing the many-worlds interpretation as an axiom purely because they admire its metaphysical consequences, while ignoring equally possible interpretations like the London Interpretation. Now they can imagine AIs in several parallel universes contacting each other -- of course they can contact each other, they're God; just ignore that other axiom we silently slipped in, it doesn't matter -- and dooming humanity. Isn't this fun?!

My point is that they pick this over several more likely interpretations simply because actually contemplating risk isn't interesting enough for them.

This is essentially a game for rich people (or, in the case of LW, the temporarily embarrassed rich). They don't care about the goal-orientated systems that are already hurting people -- those make profit, so they aren't worth considering. They aren't worried about asteroids, nuclear war, or a pandemic like the one we literally just experienced, because not only are they rich enough to afford bunkers and a lifetime's supply of gourmet food, but other (smarter) people have already considered those risks and given suggestions -- suggestions that are summarily ignored and not enacted, despite the fact that they would prevent a tangible risk. The only risks they actually care about are the ones that would wipe them out too, or the ones they can use as what is essentially an intellectual game at parties. It's a way to excuse their shoddy and questionable morality -- they aren't helping fix the clean water problem (Nestlé, however, is buying up water in hopes of getting rich off it), or establishing a larger network of satellites to detect and possibly destroy incoming asteroids, or preventing poverty (which would have a tangible effect on global disease transmission). All of these are within their means. Why not?

I mean, right now we have a tangible Existential Risk on the horizon -- but of course that isn't interesting to anyone in these circles either, because it won't affect rich people yet, and because they can still extract profit out of it. Musk didn't build Tesla to drive down CO2 emissions; he did it because he saw an opportunity to profit off rich people's need to consider themselves "good". And the only reason he's funding SpaceX is personal gain.

RationalWiki itself has a few stellar pages about this: one on the Dark Enlightenment and its relationship to this kind of thinking, another on Transhumanism.

Hell, the page on Futurism itself notes that it's difficult to distinguish between plausible thinking and science woo in this space.

And now for an actual rant (spoilered because it might come across as lovely):

Even within this thread -- there are tangible works that could be read and acted on to improve the lives of people locally, works that would do more to fend off tangible threats like a neoconservative revolution, or the lifelong health effects of poverty and stress.

The Black Panther Party, in the mid-20th century, organized local community-run groups to feed children in the neighborhood. Some of those are still running, still keeping children from going hungry, and thus helping ensure people have better immune systems going forward. That is a tangible goal that has a net positive impact on society right now, and a tangible effect on certain classes of future risk. Mutual aid groups do more to stave off catastrophe by not only actually helping people, but also teaching people how to support each other and how to organize future efforts towards an economic revolution -- a revolution which ultimately will (hopefully, depending on myriad factors) help mitigate climate change, lift people out of poverty, and ensure people have access to clean water.

alexandriao
Jul 20, 2019


And now I will stop making GBS threads up the thread -- sorry :ohdear:

alexandriao
Jul 20, 2019


quote:

It’s safe to say that effective altruism is no longer the small, eclectic club of philosophers, charity researchers, and do-gooders it was just a decade ago.

Wasn't it literally started by billionaires lol

quote:

Bankman-Fried, who got interested in EA as an undergrad at MIT, “earned to give” through crypto trading so hard that he’s now worth about $12.8 billion as of this writing, almost all of which he has said he plans to give away to EA-aligned causes. (Disclosure: Future Perfect, which is partly supported through philanthropic giving, received a project grant from Building a Stronger Future, Bankman-Fried’s philanthropic arm.)

Yeah... about that...


RationalWiki posted:

It's unclear whether SBF ever gave money to charity, though he did make a lot of political contributions (likely to fend off crypto regulation)[40] allegedly illegally using other people's money.[41] SBF's crypto empire spectacularly came crashing down in November 2022, due to his self-dealing and high-risk (leveraged) gambles and he did claim to have given a lot of moola to Jane Street Capital, both founded by his mentor, MacAskill (so basically self-dealing).[42] After the crash, which left SBF with essentially nothing, it was revealed that he was living a lavish and debauched lifestyle in the Bahamas[43] while he was hypothetically donating to charity at some point in the distant future. Trial testimony and his own statements indicated that SBF had an extreme desire for risk, even to the point of risking other people's lives without their consent,[44][45] something that would seem to be antithetical to the idea of altruism.


alexandriao
Jul 20, 2019


I AM GRANDO posted:

Maybe if crypto dorks wanted to preserve future civilizations, they shouldn’t have ensured that global temperatures would rise by 5C and boil the oceans by mining all that crypto.

This + "Maybe several generations experiencing an illness won't cause human extinction, but everyone being loving disabled from said illness will gently caress things up long term"

Effective Altruism is a loving joke because they look at long-term bullshit to excuse the fact that they literally have enough money to catapult us into a utopia here, now, today.

"Oh I want to stave off human extinction" Ok well if everyone could eat food and access education that would itself be a big loving step towards that wouldn't it buddy??? Maybe having an entire working class subjugated towards creating profit isn't the ideal situation for technological growth to the point where we can deflect asteroids and poo poo like that, huh!
