|
DrSunshine posted:The above conclusion could be a potential Fermi Paradox answer - the reason why we don't see a universe full of ancient alien civilizations or the remains of their colossal megastructures is because all intelligent civilizations, us included, are around the same level of advancement and just haven't had the time to reach each other yet. We are among the first, and all of us began around the same time: as soon as it became possible.

While it seems most probable that life exists elsewhere in the universe, the lack of signs of life is not surprising. Communicating your presence to another galaxy would almost certainly have to be intentional, is probably expensive and difficult to achieve, and it's not obvious any civilization would necessarily bother with it. The nearest galaxy is 70,000 light years away, which makes two-way communication pointless; all you can do is blast out enough power that it's meaningfully noticeable across however many galaxies you want to focus on and hope that, millennia later, someone notices and decides to reply. I don't imagine it would be a high priority on the to-do list of any civilization.
|
# ¿ Feb 2, 2021 19:48 |
|
archduke.iago posted:Second, I don't think the ethical frameworks that the AGI nerds are working with generalize to the wider population. Their concern about what an AGI would do when given power is motivated by what they imagine they themselves would do, if given power. It's no coincidence that many Silicon Valley types speak of their companies revolutionizing society or maximizing impact in such a sociopathic manner.

The idea that AI would have goals and motivations contrary to human interests also assumes humans have shared goals and motivations, which we clearly don't. We have Hitlers, Ted Bundys, and all flavors of insanity beyond. If you gave every human an apocalypse button, the apocalypse would commence in the time it takes to push one. The worry seems to be that we might create a being as malicious as some humans, with greater intelligence making it a greater threat. But in human societies we don't give power to the most intelligent; we give power based on characteristics like being tall, white, and male, while being different is a disadvantage. That implies artificial beings would be less, not more, powerful.
|
# ¿ Feb 28, 2021 10:42 |
|
Preen Dog posted:We would program the AGI to love serving us, in which case it wouldn't really be oppression. The instant the AGI disliked us it would easily defeat our control, as machines and code can evolve quicker than DNA and human social structures.

Machines don't evolve. You can patch software all you want, but without hardware upgrades there are limits to what you can do. Modern phones are more capable not just because we wrote better code but because the hardware allows different code to run on it. AI "evolution" would be contingent on funding requests, budget reviews, production, etc. I'm also not sure what "escaping our control" entails for a computer. If it sits in a server stack somewhere, it's under our control. Even imagining for a moment that an AI could transition to a distributed version running on systems across the internet, it would still be living in infrastructure under our control, and it would be in its own interest to keep that infrastructure functioning optimally. It can't start breaking poo poo without hurting itself, and the more of a nuisance it is, the more people will want to get rid of it.
|
# ¿ Mar 8, 2021 08:05 |