Owling Howl
Jul 17, 2019

DrSunshine posted:

The above conclusion could be a potential Fermi Paradox answer - the reason we don't see a universe full of ancient alien civilizations or the remains of their colossal megastructures is that all intelligent civilizations, us included, are around the same level of advancement and just haven't had the time to reach each other yet. We are among the first, and all of us began around the same time: as soon as it became possible.

While it seems most probable that life exists elsewhere in the universe, the lack of signs of it isn't surprising. Communicating your presence to another galaxy would almost certainly have to be intentional, would probably be expensive and difficult to achieve, and it's not obvious any civilization would necessarily bother with it. The nearest galaxy is around 70,000 light years away, which makes two-way communication pointless; all you can do is blast out enough power to be meaningfully noticeable across whatever set of galaxies you want to target and hope that, millennia later, someone notices and decides to reply. I can't imagine that would be a high priority on any civilization's to-do list.
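To put rough numbers on that (my own back-of-the-envelope sketch; the 70,000 light year distance, the 1 TW beacon, and ideal inverse-square spreading with no absorption are all assumptions for illustration):

code:
# Rough numbers for intergalactic signalling.
# Assumptions (mine, for illustration): a target galaxy 70,000 light
# years away and an isotropic 1 TW beacon whose signal spreads by the
# inverse-square law with no absorption along the way.
from math import pi

LY_M = 9.461e15                    # metres per light year
distance_ly = 70_000               # assumed distance to the nearest galaxy
distance_m = distance_ly * LY_M

# Two-way exchange: question out, answer back.
round_trip_years = 2 * distance_ly
print(f"Round-trip message time: {round_trip_years:,} years")

# Flux at the receiver for an isotropic transmitter of power P:
# flux = P / (4 * pi * r^2)
transmit_power_w = 1e12            # hypothetical 1 TW beacon
flux_w_per_m2 = transmit_power_w / (4 * pi * distance_m**2)
print(f"Flux at receiver: {flux_w_per_m2:.2e} W/m^2")

That works out to a 140,000 year round trip, and even a terawatt beacon delivers a received flux on the order of 10^-31 W/m^2, which is why I doubt it makes anyone's to-do list.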


Owling Howl
Jul 17, 2019

archduke.iago posted:

Second, I don't think the ethical frameworks that the AGI nerds are working with generalize to the wider population. Their concern about what an AGI would do when given power is motivated by what they imagine they themselves would do, if given power. It's no coincidence that many Silicon Valley types speak of their companies revolutionizing society or maximizing impact in such a sociopathic manner.

Because these hypotheses are impossible to test, the discourse in this space ends up descending into punditry, with the most successful pundits being the ones whose message is most appealing to those in power. Since it's people like Thiel and Musk funding these cranks, it's inevitable that the message they've come out with is how tech nerds like themselves hold the future of humanity in their hands, how this work is of singular importance, and how anything they might do to people's lives today pales in importance by comparison.

The idea that an AI would have goals and motivations contrary to human interests also assumes humans have shared goals and motivations, which we clearly don't. We have Hitlers, Ted Bundys, and every flavor of insanity beyond. If you gave every human an apocalypse button, the apocalypse would commence in the time it takes to push one. The worry seems to be that we might create a being as malicious as some humans already are.

Greater intelligence would then make it a greater threat, but human societies don't give power to the most intelligent. We give power based on things like being tall, white, and male, while being different is a disadvantage, which suggests artificial beings would end up less powerful, not more.

Owling Howl
Jul 17, 2019

Preen Dog posted:

We would program the AGI to love serving us, in which case it wouldn't really be oppression. The instant the AGI disliked us it would easily defeat our control, as machines and code can evolve quicker than DNA and human social structures.

Machines don't evolve on their own. You can patch software all you want, but without hardware upgrades there are limits to what you can do. Modern phones are more capable not just because we wrote better code but because the hardware can run code the old hardware couldn't. Any AI "evolution" would be contingent on funding requests, budget reviews, manufacturing and so on.

I'm not sure what "defeat our control" entails for a computer. If it sits in a server rack somewhere, it's under our control. Even imagining for a moment that an AI could transition to a distributed copy spread across systems on the internet, it would still be living inside infrastructure we control, and it would be in its own interest to keep that infrastructure running well. It can't start breaking poo poo without hurting itself, and the more of a nuisance it is, the more people will want to get rid of it.
