|
Bar Ran Dun posted:Right, Mars is solid because it’s smaller and didn’t have a late collision. That’s not the only way to get one but it’s probably why we have ours. Ok, this is flat out wrong. Most of the initial thermal energy of a rocky planet gets lost "relatively" quickly. That's what led Lord Kelvin to erroneously claim that the Earth could only be a few million years old. The mega collision would have liquefied the planet and dumped a load of thermal energy into the system, but Earth and Venus have retained molten cores mostly due to energy released by radioactive decay, and because their sheer size makes it easier to hold onto their initial accretion energy. With regards to whether we have a particularly strange arrangement of gas giants, we just don't have enough data yet. All our methods for detecting exoplanets are massively biased, making certain types of systems much easier to detect. The biggest bias is towards those systems where the gas giants have migrated inwards. We don't know if that's typical behaviour yet; it's just the type of system we find easiest to detect. Bug Squash fucked around with this message at 17:54 on Feb 4, 2021 |
# ? Feb 4, 2021 17:44 |
|
|
|
Here’s my understanding of it from an old Scientific American article: “It takes a rather long time for heat to move out of the earth. This occurs through both "convective" transport of heat within the earth's liquid outer core and solid mantle and slower "conductive" transport of heat through nonconvecting boundary layers, such as the earth's plates at the surface. As a result, much of the planet's primordial heat, from when the earth first accreted and developed its core, has been retained. The amount of heat that can arise through simple accretionary processes, bringing small bodies together to form the proto-earth, is large: on the order of 10,000 kelvins (about 18,000 degrees Fahrenheit). The crucial issue is how much of that energy was deposited into the growing earth and how much was reradiated into space. Indeed, the currently accepted idea for how the moon was formed involves the impact or accretion of a Mars-size object with or by the proto-earth. When two objects of this size collide, large amounts of heat are generated, of which quite a lot is retained. This single episode could have largely melted the outermost several thousand kilometers of the planet. Additionally, descent of the dense iron-rich material that makes up the core of the planet to the center would produce heating on the order of 2,000 kelvins (about 3,000 degrees F). The magnitude of the third main source of heat--radioactive heating--is uncertain. The precise abundances of radioactive elements (primarily potassium, uranium and thorium) are poorly known in the deep earth. In sum, there was no shortage of heat in the early earth, and the planet's inability to cool off quickly results in the continued high temperatures of the Earth's interior. In effect, not only do the earth's plates act as a blanket on the interior, but not even convective heat transport in the solid mantle provides a particularly efficient mechanism for heat loss.
The planet does lose some heat through the processes that drive plate tectonics, especially at mid-ocean ridges. For comparison, smaller bodies such as Mars and the Moon show little evidence for recent tectonic activity or volcanism.” Now, the picture painted above might be getting dated; it's 30 years old. If there is a newer model, post it.
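The size point in that quote is easy to sanity-check: to first order, how fast a body sheds internal heat scales with its surface-area-to-volume ratio, which for a sphere is 3/r. A quick back-of-envelope sketch (radii are rough round numbers, and this ignores composition, convection, etc.):

```python
# A sphere's surface-area-to-volume ratio is (4*pi*r^2) / (4/3*pi*r^3) = 3/r,
# so smaller bodies radiate away their internal heat faster, all else equal.
def surface_to_volume(radius_km: float) -> float:
    return 3.0 / radius_km

# Approximate mean radii in km (round numbers, not precise values).
bodies = {"Earth": 6371, "Mars": 3390, "Moon": 1737}

earth = surface_to_volume(bodies["Earth"])
for name, radius in bodies.items():
    ratio = surface_to_volume(radius) / earth
    print(f"{name}: {ratio:.1f}x Earth's surface-to-volume ratio")
```

Mars comes out with roughly double Earth's surface-to-volume ratio and the Moon nearly quadruple, which is consistent with both having cooled off much sooner.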
|
# ? Feb 4, 2021 18:12 |
|
I don't think I disagree with anything you just posted, my issue is with the claim that the giant impact, assuming it happened, is necessary for a magnetic field to form on a rocky planet, for the reason that the planet would be solid otherwise. The fact that Venus has a liquid interior, but no evidence of a giant impact, seems to break this chain of reasoning. (Note that Venus doesn't have a strong magnetic field due to how slowly it rotates and some other effects. I think this is irrelevant to the discussion, but feel free to disagree) (Also, it could be that giant impacts are actually common but usually leave no evidence and earth is only odd in that it got left with a big moon. If that's the case then it's just part of normal planetary formation and doesn't have much to say about how common life is)
|
# ? Feb 4, 2021 18:28 |
|
Bar Ran Dun posted:Here’s my understanding of it from an old Scientific American article: "It takes a rather long time for heat to move out of the earth" doesn't mean "like 4 billion years". You're misunderstanding what your quote is saying.
|
# ? Feb 4, 2021 19:05 |
Bug Squash posted:(Also, it could be that giant impacts are actually common but usually leave no evidence and earth is only odd in that it got left with a big moon. If that's the case then it's just part of normal planetary formation and doesn't have much to say about how common life is) Is there any actual evidence for how common planetary collisions are? Because if the rate is high, it seems weird to take that as being a sign FOR advanced life developing. I mean there wouldn't be a law of physics compelling Theias to hit Protoearths only when it's convenient for life to form, right? What about the potential for wiping whole biospheres out of existence, or even preventing them from rising in the first place if the rate is really high? I doubt gravity cares if one of the rocks it's smashing together has moss or monkeys or skyscrapers on it. I don't think that necessarily means we're all alone (on its own, at least) if your homeworld has to be a fused impact remnant planet with a fused impact remnant moon. Interestingly, we're not the only object even in our own solar system set up like that: Pluto and Charon are almost twins and were probably the result of a collision too. But then there's Mars with its little potato collection, and neither Venus nor Mercury has any moons at all. So that's 2 out of 5 of the bodies we classically recognized as terrestrial planets, largely because of when we spotted them? Really doesn't seem like enough information to draw conclusions from. My understanding is there are some candidates outside our Solar system being looked at, but technically exomoons remain theoretical. If we ever can confirm them, it'll be real interesting to see how many habitable-zone planets have jumbo moons or extreme tilts or other signs of impact. Might be good places to look and never actually have anything to do with, because the idea of our species physically leaving the heliosphere is the funniest/meanest joke of all time.
|
|
# ? Feb 4, 2021 19:26 |
|
suck my woke dick posted:"It takes a rather long time for heat to move out of the earth" doesn't mean "like 4 billion years". You're misunderstanding what your quote is saying. So pinning that we still have a liquid core on radioactive decay of an unknown amount is reasonable? The collision also gave us a bigger planet and bigger planets cool down slower. Again the comparison is to Mars. Venus kept its atmosphere without a magnetic field, because it’s about the same size. But what did it keep? Heavy gases, CO2 / SO2. What happened to its O, H2O, N2, ammonia, etc? Stuff life seems to need. That stuff didn’t get to stick around, because no strong magnetic field.
|
# ? Feb 4, 2021 21:44 |
|
Bug Squash posted:I don't think I disagree with anything you just posted, my issue is with the claim that the giant impact, assuming it happened, is necessary for a magnetic field to form on a rocky planet, for the reason that the planet would be solid otherwise. I seem to remember a simulation somewhere suggesting that Venus's slow rotation could be evidence of a somewhat smaller impact than Earth's, where the impact acted to slow the rotation.
|
# ? Feb 4, 2021 21:59 |
|
Bar Ran Dun posted:So pinning that we still have a liquid core on radioactive decay of an unknown amount is reasonable? Yes, but it lacks a magnetic field (mostly) because it's not spinning like a normal planet, due to some unknown quirk of its formation. Just pure chance. It proves you don't need a giant impact to get all the underlying material conditions for a magnetic field. In other solar systems I'd wager good money there are Venus-like planets with fully functional magnetic fields. Edit: what we're discussing is whether a late giant impact is essential for life to start on any planet. Sure, if earth itself hadn't had one it might not have been big enough to retain its atmosphere, but that's completely irrelevant to discussing whether a giant impact is a necessary precondition of life. Bug Squash fucked around with this message at 22:47 on Feb 4, 2021 |
# ? Feb 4, 2021 22:03 |
|
Again, the magnetic field also affects the composition of the atmosphere, which gases got retained. So look to Venus: big enough to retain gases, but a weak magnetic field, so only heavy gases get retained. And as you point out, not really spinning. And why are we spinning the way we are? The collision also created our abnormally large moon. So look at it this way. We’ve got: • Liquid core. • Larger rocky planet size. • Moon and associated stable rotation. • Light gases kept, gases essential to life. • Volcanism that might be needed for life to start. And those things are all the result of the late collision.
|
# ? Feb 4, 2021 23:04 |
|
Bar Ran Dun posted:
Literally all of those things can exist without a late major collision, with the possible exception of a large moon. I do not understand why you believe that a planet cannot ever, in all the universe, exist with these properties without a giant impact, especially when we have active counter examples in our own solar system.
|
# ? Feb 4, 2021 23:09 |
|
Bug Squash posted:Literally all of those things can exist without a late major collision, with the possible exception of a large moon. I do not understand why you believe that a planet cannot ever, in all the universe, exist with these properties without a giant impact, especially when we have active counter examples in our own solar system. They exist in our planet as a full set because of that event. They do not exist in the other rocky planets in this system as a full set because of the absence of that event. Individual items on the list do in other planets, but not the set. To have a system ya gotta have all the parts and that’s what gave our planet all the parts. I’m agnostic about other hypothetical ways planets elsewhere could get that full set of characteristics. I’m asserting that’s how our planet got it and comparing to the other planets we can see that don’t have it. Bar Ran Dun fucked around with this message at 23:39 on Feb 4, 2021 |
# ? Feb 4, 2021 23:31 |
|
That's kind of a pointless argument though. There are probably millions of astronomical events that happened without which earth would have no life. It tells us nothing about broader patterns. vvvv you've changed your position, you were insisting it was necessary for life, full stop. Bug Squash fucked around with this message at 09:17 on Feb 5, 2021 |
# ? Feb 5, 2021 08:43 |
|
Yes, until we have other planets on which we have observed life, reaching conclusions about the broader patterns is problematic.
|
# ? Feb 5, 2021 08:55 |
|
Let's talk about Artificial Superintelligence (ASI), and how the XR community has a blind spot about it and other potential X-risk technologies. So, first let's address the question of just how feasible ASI is - is it worth all the hand-wringing that Silicon Valley and adjacent geeks seem to make of it, since Nick Bostrom popularized the idea when he wrote Superintelligence: Paths, Dangers, Strategies? The short answer is, as far as we can tell from what we have so far, we have no idea. The development of an artificial general intelligence rests on our solving certain philosophical questions about what the nature of consciousness, intelligence, and reasoning actually is, and right now our understanding of consciousness, cognitive science, neuroscience, and intelligence is woefully inadequate. So it's probably a long way off. The weak AI that we have right now, which already poses rather dire questions about the nature of human work, automation, labor, and privacy, is probably not the path through which we eventually produce a conscious intelligent machine that can reason at the level of a human. Perhaps neuromorphic computing will be the path forward in this case. Nevertheless, no matter how far off it is practically, we shouldn't write it off as impossible -- we know that at least human-level intelligence can exist, because, well, we exist. If human-level intelligence can exist, it's possible that some physical process could be arranged such that it would display behavior that is more capable than humans'. There's nothing about the physical laws of the universe that should prevent that from being the case. To avoid getting bogged down in technical minutiae, let's just call ASI and other potential humanity-ending technologies "X-risk tech". This includes potential future ASI, self-replicating nanobots, deadly genetically engineered bacteria, and so on. Properties that characterize X-risk tech are:
I think the X-risk community is right to worry about the proliferation of X-risk techs. But their criticisms restrict the space of concerns to the first level of control and mitigation - "How do we develop friendly AI? How do we develop control and error-correction mechanisms for self-replicating nanotechnology?" - or extend to a second-level question of game theory and strategy as an extension of Cold War MAD strategy - "How do we ensure a strategic environment that's conducive to X-tech detente?". I would like to propose a third level of reasoning in regards to X-risk tech: addressing the concern at its root cause. The cause is this: a socio-economic-political regime that incentivizes short-term gains in a context of multiple selfish actors operating under conditions of scarcity. Think about it. What entities have an incentive to develop an X-risk tech? We have self-interested nation-state actors that want a geopolitical advantage against regional or global rivals - think of the USA, China, Russia, Iran, Saudi Arabia, or North Korea. Furthermore, in a capitalist environment, we also have the presence of oligarchical interest groups that can command large amounts of economic power and political influence thanks to their control over a significant percentage of the means of production: hyper-wealthy individuals like Elon Musk and Jeff Bezos, large corporations, and financial organizations like hedge funds. All of these contribute to a multipolar risk environment that could potentially deliver huge power benefits to the actors who are the first to develop X-risk techs. If an ASI were to be developed by a corporation, for example, it would be under a tremendous incentive to use that ASI's abilities to deliver profits.
If an ASI were developed by some oligarchic interest group, it could deploy that tech to ransom the world and establish a singleton (a state in which it has unilateral freedom to act) and remake the future to its own benefit and not to the greater good. Furthermore, the existence of a liberal capitalist world order actually incentivizes self-interested actors to develop X-tech, simply because of the enormous leverage someone who controls an X-tech could wield. This context of mutual zero-sum competition means that every group that is capable of investing into developing X-techs should rationally be making efforts to do so because of the payoffs inherent in achieving them. On the opposite tack, contrast with what a system with democratic control of the means of production could accomplish. Under a world order of mutualism and class solidarity, society could collectively choose to prioritize techs that would benefit humanity in the long run, and collectively act to reduce X-risks, be that by colonizing space, progressing towards digital immortality, star-lifting, collective genetic uplift, and so on. Without a need to pull down one's neighbor in order to get ahead, a solidaristic society could afford to simply not develop X-techs in the first place, rather than being subject to perverse incentives to mitigate personal existential risks at the expense of collective existential risk. It's clear to me, following this reasoning, that much of the concern with X-techs could be mitigated by advocating for and working towards the abolition of the capitalist system and the creation of a new system which would work towards the benefit of all.
|
# ? Feb 19, 2021 20:46 |
|
I agree, and the best first step to mitigate most existential risks is "eliminate capitalism". The short-term profit motive is not cut out for these problems...
|
# ? Feb 25, 2021 23:14 |
|
DrSunshine posted:X-tech and AI The fact that these various technologies need to be bundled into a catch-all category of X-technology should be a red flag; the framework you're describing is essentially identical to millenarianism, the type of thinking that results in cults, everything from Jonestown to cargo cults. I don't think it's a coincidence that conceptual super-AI systems share many of the properties of God: all powerful, all knowing, and either able to bestow infinite pleasure or torture. As someone who actually researches/publishes on applications of AI, I find the discourse around AGI/ASI pretty damaging. First off, the premise motivating action doesn't make sense: advocates try to write off the minuscule probability of these technologies (it's telling that very few computer/data scientists are on the AGI train) by multiplying against "all the lives that will ever go on to exist." But this i) doesn't hold mathematically, since we don't know the comparative order of magnitude of each value, and ii) gets used as a bludgeon to justify why work in this area is of paramount importance, at the expense of everyday people and concerns (who, by the way, definitely exist). Second, I don't think the ethical frameworks that the AGI nerds are working with generalize to the wider population. Their concern about what an AGI would do when given power is motivated by what they imagine they themselves would do, if given power. It's no coincidence that many Silicon Valley types speak of their companies revolutionizing society or maximizing impact in such a sociopathic manner. Because these hypotheses are impossible to test, the discourse in this space ends up descending into punditry, with the most successful pundits being the ones whose message is most appealing to those in power.
Since it's people like Thiel and Musk funding these cranks, it's inevitable that the message they've come out with is how tech nerds like themselves hold the future of humanity in their hands, how this work is of singular importance, and how anything they might do to affect people's lives today pales in comparison.
|
# ? Feb 26, 2021 14:04 |
|
archduke.iago posted:The fact that these various technologies need to be bundled into a catch-all category of X-technology should be a red flag, the framework you're describing is essentially identical to Millenarianism, the type of thinking that results in cults: everything from Jonestown to cargo. I don't think it's a coincidence that conceptual super-AI systems share many of the properties of God: all powerful, all knowing, and either able to bestow infinite pleasure or torture. As someone who actually researches/publishes on applications of AI, the discourse around AGI/ASI is pretty damaging. Agreed, very much, on all your points. I think that the singular focus of many figures in existential risk research on ASI/AGI is really problematic, for all the same reasons that you illustrate. It's also very problematic that so many of them are upper class or upper-middle class white men, from Western countries. The fact that this field, which is starting to grow in prominence thanks to popular concerns (rightfully, in my opinion) over the survival of the species over the next century, is so totally dominated by a very limited demographic, suggests to me that its priorities and focuses are being skewed by ideological and cultural biases, when it could greatly contribute to the narrative on climate change and socioeconomic inequality. My own concerns are much more centered around sustainability and the survival of the human race as part of a planetary ecology, and also as a person of color, I'm very concerned that the biases of the existential risk research community will warp its potential contributions in directions that only end up reinforcing the entrenched liberal Silicon Valley mythos. Existential risk research needs to be wrenched away from libertarians, tech fetishists, Singularitarian cultists, and Silicon Valley elitists, and I think it's important to contribute non-white, non-male, non-capitalist voices to the discussion. 
EDIT: archduke.iago posted:As someone who actually researches/publishes on applications of AI, the discourse around AGI/ASI is pretty damaging. I'm not an AI researcher! Could you go into more detail, with some examples? I'd be interested to see how it affects or warps your own field. DrSunshine fucked around with this message at 02:20 on Feb 27, 2021 |
# ? Feb 27, 2021 02:17 |
|
axeil posted:Curious to hear what others think! IIRC there was a paper a few years back where they readjusted some of the confidence numbers and such based on new information, and came out with the result that we shouldn't expect to be able to see any evidence, and that the updated prediction is that such life is relatively rare. That being said, I can't for the life of me find said paper, so this paragraph could all be bunkum. I think there are a few options/factors that aren't usually noted: • Interstellar travel, assuming no FTL, has very little reward for a civilization. Assuming no lifespan extension, it would mostly consist of people who wish their children to be settlers, or a kind of insurance policy against extra-solar events or star collapse (although, as I understand it, you can keep a star from going supernova via techniques similar to star lifting). You're going to send a probe off to explore the far reaches of space? Well, that's fine, but how does the information get back? There are practical limits at which point you have a cut-off where the information isn't going to reach you, simply because of the tolerances of what you can build. • How do you define seeing intelligence? Is it megastructures? Correct me if I'm wrong, but we generally won't be able to detect anything as small as an O'Neill cylinder (especially if it doesn't line up with our view), and pulling from my head from an Isaac Arthur video, you can get about a trillion trillion trillion humans in the solar system and have everyone have more than enough space, with space left over. Space itself does not seem like a reason to expand. Likewise, Dyson swarms wouldn't be very visible, and a Dyson sphere would generally be undetectable unless it happened to pass in front of another star. • If there are extremely old probes, why would we discover them? The timescales you're referring to are going to turn pretty much anything into dust, and that's enough time for a planet to form out of said dust.
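On the "how does the information get back" point: an omnidirectional signal spreads over a sphere, so the received flux falls off with the inverse square of distance and gets absurdly tiny over interstellar ranges. A toy sketch (the 1 MW transmitter power and the distances are made-up illustrative numbers, not real mission figures; a real link would use a directional antenna, which only shifts the problem by the antenna gain):

```python
import math

LIGHT_YEAR_M = 9.461e15  # metres in one light-year

def flux_w_per_m2(power_w: float, distance_ly: float) -> float:
    """Inverse-square flux from an isotropic transmitter."""
    d = distance_ly * LIGHT_YEAR_M
    return power_w / (4 * math.pi * d**2)

# A 1 MW omnidirectional beacon at 4 light-years (roughly the distance
# to Alpha Centauri) delivers on the order of 1e-29 W per square metre.
print(flux_w_per_m2(1e6, 4.0))
```

Doubling the distance cuts the received flux by a factor of four, which is why probe range ends up limited by transmitter power and receiver size long before anything exotic matters.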
|
# ? Feb 27, 2021 08:38 |
|
archduke.iago posted:Second, I don't think the ethical frameworks that the AGI nerds are working with generalize to the wider population. Their concern about what an AGI would do when given power is motivated by what they imagine they themselves would do, if given power. It's no coincidence that many Silicon Valley types speak of their companies revolutionizing society or maximizing impact in such a sociopathic manner. The idea that AI would have goals and motivations contrary to human interests also assumes humans have shared goals and motivations which we clearly don't. We have Hitlers, Ted Bundys and all flavors of insanity beyond. If you gave all humans an apocalypse button then the apocalypse would commence in the time it would take to push the button. The worry seems to be that we might create a being that would be as malicious as some humans. Being more intelligent then makes it a greater threat but in human societies we don't give power to the most intelligent. We give power based on things like being tall, white and male or similar characteristics while being different is a disadvantage which implies that artificial beings would be less and not more powerful.
|
# ? Feb 28, 2021 10:42 |
|
That has also been a tangential worry for me - if, someday, in the distant future, we create an AGI, what if we just end up creating a new race of sentient beings to exploit? We already have no problem treating real-life humans as objects, much less actual machines that don't even inhabit a flesh and blood body. If we engineer an AGI that is bound to serve us, wouldn't that be akin to creating a sentient slave race? The thought is horrifying.
|
# ? Feb 28, 2021 15:14 |
|
The only thing that horrifies me about AGI is that some future ideological cult descended from Less Wrong will start worshipping the gently caress out of it and giving it as much power as they can muster, no matter how "insane" it is (it's very smart and talks in a lot of words!! obviously that means it's right!!) The slavery thing is worrying, but my guess is we can get to AI doing tasks for us before we can get to AGI that ranks highly on the Hofstadter scale. By the time we get AI that can do almost all tasks for us, it's probably only going to rank on the scale at about the level of a fly or an ant. Which doesn't mean we shouldn't give it respect, but most people (myself not included) do not have any major qualms about blatantly murdering/enslaving those beings, because the general consensus is that they don't have the ability to form a conceptual grasp of being enslaved. alexandriao fucked around with this message at 20:18 on Feb 28, 2021 |
# ? Feb 28, 2021 20:13 |
|
Cool thread guys.alexandriao posted:The only thing that horrifies me about AGI is that some future ideological cult descended from Less Wrong will start worshipping the gently caress out of it and giving it as much power as they can muster, no matter how "insane" it is (it's very smart and talks in a lot of words!! obviously that means it's right!!) lol if you don't want humanity to be replaced by a super smart AI. Have you seen people? When AI becomes sufficiently autonomous, humanity becomes the entitled FYGM racist grandpa holding back progress. We don't like the idea because emotion and hardcoded individual self-preservation. We should, however, avoid making an AGI that destroys us and itself, in the same way none of us would wish to die giving birth to a stillborn child. Beware of AI cults that are too eager to sacrifice themselves to an unworthy replacement. DrSunshine posted:That has also been a tangential worry for me - if, someday, in the distant future, we create an AGI, what if we just end up creating a new race of sentient beings to exploit? We already have no problem treating real-life humans as objects, much less actual machines that don't even habit a flesh and blood body. If we engineer an AGI that is bound to serve us, wouldn't that be akin to creating a sentient slave race? The thought is horrifying. We would program the AGI to love serving us, in which case it wouldn't really be oppression. The instant the AGI disliked us it would easily defeat our control, as machines and code can evolve quicker than DNA and human social structures. alexandriao posted:Interstellar travel, assuming no FTL, has very little reward for a civilization. Assuming no lifespan expansion, it would mostly consist of people who wish their children to be settlers or a kind of insurance policy against extra-solar events or star collapse (Although as I understand it you can keep a star from going supernova via techniques similar to Star Lifting). 
You're going to send a probe off to explore the far reaches of space? Well, that's fine, but how does the information get back? There are practical limits at which point you have a cut-off where the information isn't going to reach you simply because of the tolerances of what you can build. An interstellar civilization probably wouldn't be made up of flesh and bone animals. It would be at least self-replicating machines, or information sent from place to place. Even if you limit the scope to human travel, it'd be way easier to ship genetic material to a place and create the humans afterward. I guess the long distance communication would be accomplished by pockets of the life form communicating with their neighbors in a web pattern. The far-ranging species would operate like a brain of connected neurons, unless faster communication methods become possible. quote:How do you define seeing intelligence? Is it megastructures? Correct me if I'm wrong, but we generally won't be able to detect anything as small as an O'Neill cylinder (especially if it doesn't line up with our view), and pulling from my head from an Isaac Arthur video, you can get about a trillion trillion trillion humans in the solar system and have everyone have more than enough space, with space left over. Space itself does not seem like a reason to expand. Control of ordered energy to further one's goals, the primaries being survival and seeking control over more energy. On the tree of life, the intelligent paths lead to organisms that still exist, and the stupid paths dead-ended. There are no other values than to do what it takes to be on a path that keeps on living. A big physical matter structure in space is a very current-human thing to look for, like people in the '50s were expecting flying cars. The ways advanced civilizations operate might be detectable, but we don't interpret them as life, or totally undetectable, like how a termite has never seen a tree before.
Preen Dog fucked around with this message at 00:12 on Mar 1, 2021 |
# ? Mar 1, 2021 00:04 |
|
Preen Dog posted:We would program the AGI to love serving us, in which case it wouldn't really be oppression. "You mean this animal actually wants us to eat it?" whispered Trillian to Ford. "That's absolutely horrible," exclaimed Arthur, "the most revolting thing I've ever heard." "What's the problem Earthman?" said Zaphod, now transferring his attention to the animal's enormous rump. "I just don't want to eat an animal that's standing there inviting me to," said Arthur, "It's heartless." "Better than eating an animal that doesn't want to be eaten," said Zaphod. "That's not the point," Arthur protested. Then he thought about it for a moment. "Alright," he said, "maybe it is the point. I don't care, I'm not going to think about it now. I'll just... er [...] I think I'll just have a green salad," he muttered. "May I urge you to consider my liver?" asked the animal, "it must be very rich and tender by now, I've been force-feeding myself for months." "A green salad," said Arthur emphatically. "A green salad?" said the animal, rolling his eyes disapprovingly at Arthur. "Are you going to tell me," said Arthur, "that I shouldn't have green salad?" "Well," said the animal, "I know many vegetables that are very clear on that point. Which is why it was eventually decided to cut through the whole tangled problem and breed an animal that actually wanted to be eaten and was capable of saying so clearly and distinctly. And here I am." It managed a very slight bow. "Glass of water please," said Arthur. "Look," said Zaphod, "we want to eat, we don't want to make a meal of the issues. Four rare steaks please, and hurry. We haven't eaten in five hundred and seventy-six thousand million years." The animal staggered to its feet. It gave a mellow gurgle. "A very wise choice, sir, if I may say so. Very good," it said, "I'll just nip off and shoot myself." He turned and gave a friendly wink to Arthur. "Don't worry, sir," he said, "I'll be very humane."
|
# ? Mar 1, 2021 08:07 |
|
Preen Dog posted:We would program the AGI to love serving us, in which case it wouldn't really be oppression. The instant the AGI disliked us it would easily defeat our control, as machines and code can evolve quicker than DNA and human social structures. Machines don't evolve. You can patch software all you want but without hardware upgrades there's limits to what you can do. Modern phones are more capable not just because we wrote some better code but because the hardware allows different code to run on it. AI "evolution" would be contingent on funding requests, budget reviews, production etc. Not sure what "escape our control" entails for a computer. If it sits in a server stack somewhere it would be under our control. Imagining for a brief moment that an AI could transition to a distributed version on systems across the internet it would still be living in a system under our control and it would be in its own interest to ensure that system functions optimally. It can't start breaking poo poo without hurting itself and the more of a nuisance it is the more people will want to get rid of it.
|
# ? Mar 8, 2021 08:05 |
|
Preen Dog posted:
i would rather prefer not to create an entire race of slaves that love their slavery
|
# ? Mar 8, 2021 08:21 |
|
Bug Squash posted:I don't think I disagree with anything you just posted, my issue is with the claim that the giant impact, assuming it happened, is necessary for a magnetic field to form on a rocky planet, for the reason that the planet would be solid otherwise. I'd say it was good it happened because it gave us the stabilizing moon, and simply being bigger means heat is retained better. Mars was smaller so cooled faster and lost its dynamo. Or so I've heard. I also read that plate tectonics might require the composition of the planet's interior to be just right with regards to the heat generated from radioactive decay. Not enough or too much and plate tectonics won't form; no plate tectonics, no long-term life. EDIT: Actually no, I remembered wrong, the composition thing was in regards to the dynamo: https://www.youtube.com/watch?v=jhfihH2JNtE His Divine Shadow fucked around with this message at 17:26 on Mar 8, 2021 |
# ? Mar 8, 2021 17:18 |
|
Owling Howl posted:Machines don't evolve. You can patch software all you want but without hardware upgrades there's limits to what you can do. Modern phones are more capable not just because we wrote some better code but because the hardware allows different code to run on it. AI "evolution" would be contingent on funding requests, budget reviews, production etc. If an AI can think "like a person but much much faster" whatever that means then the AI, once on the internet, could order/install its own server in its own secure-ish location with its own off-grid power system, all through Geek Squad and Costco Services. In the off-chance it actually needs a human presence it will easily be able to acquire a few Renfields. Evolution is based on the assumption that there isn't any part in the creation of better hardware that must be performed by a human - any action a human takes presumably can be replicated by a robot arm of some sort and cameras, and that the AI can research/simulate things better/faster than we can. There are definitely weak points in that process but presumably the AI doesn't go SHODAN and start blaring out on the loudspeaker how it's going to gather up all of the flesh puppet's supply of some limited metal. The systems that we control are largely controlled via communications that can easily be infested, as shown in the 80's documentary Electric Dreams, and we basically got lucky that the AI was too interested in banging and playing real-life Pac-man than domination.
|
# ? Mar 8, 2021 21:09 |
|
There's probably a point where AI becomes dangerous in its own right, probably through a Universal Paperclips-style function maximisation resulting in behaviour that's harmful to humans rather than the megalomaniacal villainy of sci-fi. What we should be much more concerned about is how increasingly sophisticated dumb AI is going to give more and more power to the already very rich and powerful. What's it going to mean for democracy when Zuckerberg owns a couple of Metal Gear Arsenals controlling all media, a drone army, and an automated workforce? All those pieces are partially in place already. At the moment a would-be dictator needs to have some kind of popular support and willing compliance from enough people in the media and military, but eventually all you'll need to have is enough money and you can buy all the pieces you need. I think we're going to see these issues become more and more severe over the coming years, long before any AGI becomes a practical threat.
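The function-maximisation failure mode is easy to make concrete in a few lines of Python. This is a toy illustration only (the resource names and one-clip-per-unit conversion are made up for the example, not a claim about any real system): an optimiser scored solely on a proxy metric will consume everything that doesn't appear in its objective.

```python
# Toy "paperclip maximiser": the agent's score counts only paperclips,
# so it converts every available resource, including the ones humans
# care about, because the objective assigns them zero value.

def misaligned_policy(resources):
    """Greedy proxy-maximiser: turn every unit of every resource into clips."""
    clips = 0
    for name in list(resources):
        clips += resources[name]   # 1 clip per unit, regardless of what it was
        resources[name] = 0        # nothing in the objective says to leave it
    return clips

def aligned_policy(resources, protected=frozenset({"farmland", "people"})):
    """Same maximiser, but the things we care about appear as a constraint."""
    clips = 0
    for name in list(resources):
        if name in protected:
            continue               # explicitly excluded from conversion
        clips += resources[name]
        resources[name] = 0
    return clips

world = {"iron_ore": 100, "farmland": 50, "people": 10}
print(misaligned_policy(dict(world)))  # 160 -- everything became clips
print(aligned_policy(dict(world)))     # 100 -- only the ore
```

The point of the toy is that the harm needs no malice at all; it is just the absence of a term in the objective, which is why this failure mode is more plausible than sci-fi villainy.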
|
# ? Mar 8, 2021 21:29 |
|
There's no real reason to indicate that an AGI would necessarily have any of the potentially godlike powers that many ASI/LessWrong theorists seem to ascribe to it if we didn't engineer them as such. Accidents with AGI would more likely resemble other historical industrial accidents, as you have a highly engineered and designed system go haywire due to human negligence, failures in safety from rushed planning, external random events, or some mixture of all those factors. The larger problem, rather than the fact that AGI exists at all, would be the environment into which AGI is created. I would compare it to the Cold War, with nuclear proliferation. In that case, there's both an incentive for the various global actors to develop nuclear weapons as a countermeasure to others, and to develop their nuclear arsenals quickly, to reduce the time in which they are vulnerable to a first strike from an adversary with no recourse. This is a recipe for disaster with any sufficiently powerful technology, because it increases the chances that accidents would occur from negligence. Now carry that over to the idea of AGI that is born into our present late-capitalist world order, which could be a technology that simply needs computer chips and software, and you have a situation where potential AGI-developing actors would stand to lose out significantly on profit or market share, or strategic foresight power. The incentive -- and I would argue it's already present today -- would be to try to develop AGI as soon as possible. I argue that we could reduce the chances of potential AGI accidents from human negligence by eliminating the potential profit/power upsides from the context. As an aside, I definitely agree with archduke.iago that a lot of ASI talk ends up sounding like a sci-fi'ed up version of medieval scholars talking about God, see for example Pascal's Wager. ASI thought experiments like Roko's Basilisk are just Pascal's Wager with "ASI" substituted for God almost one for one.
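The structural identity between Pascal's Wager and Basilisk-style arguments can be made explicit with a back-of-the-envelope expected-value calculation (the numbers below are arbitrary illustrations, not anyone's actual estimates): once one outcome is assigned an astronomically large payoff, it dominates the expectation no matter how small its probability.

```python
# Pascal's-Wager-shaped argument in one function: with an astronomically
# large payoff in play, expected value is dominated by that single term,
# which is the lever both the Wager and ASI thought experiments pull.

def expected_value(outcomes):
    """outcomes: iterable of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# "Wager for": a tiny chance of an astronomical reward, plus a certain small cost.
wager_for = expected_value([(1e-12, 1e30), (1 - 1e-12, -1.0)])

# "Wager against": keep the small cost, forgo the astronomical reward.
wager_against = expected_value([(1.0, 0.0)])

# The 1e30 payoff swamps everything else, so the comparison comes out the
# same for any sufficiently huge stake -- regardless of how implausible it is.
print(wager_for > wager_against)  # True
```

Which is exactly why the argument form proves too much: substitute any sufficiently enormous stake and it "works", whether the stake is God, an ASI, or anything else.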
|
# ? Mar 8, 2021 21:42 |
|
I dunno how I feel about the concept of Existential Risk as a thing because to me it feels like a lot of completely different things lumped together under the fact that they kill us, and so there are generalisations made that maybe aren’t very helpful: -there are risks where civilisation ends for a completely contingent reason— technology breaks down in NORAD, and all the nuclear missiles get launched because of a threat that isn’t there. -there are risks where civilisation ends due to gradually destroying the conditions under which it can exist, as with climate change. -there are risks where civilisation inevitably produces a singular event that leads to its own destruction, as with creating a superintelligent robot that eats everyone. I don’t know if these are all the same kind of thing? That it’s useful to approach them in a single way under a single banner? Definitely as someone with nuclear weapon induced sadbrains I think we don’t always say “of course maybe we all just get nuked tomorrow for a stupid and preventable reason;” like we leave out some of the endings of civilisation that feel a bit trivial.
|
# ? Mar 19, 2021 15:05 |
|
vegetables posted:I dunno how I feel about the concept of Existential Risk as a thing because to me it feels like a lot of completely different things lumped together under the fact that they kill us, and so there are generalisations made that maybe aren’t very helpful: It's a fancy term created by rich people to abstract over and let them ignore the fact that they aren't doing anything tangible with their riches.
|
# ? Mar 20, 2021 00:23 |
|
Preen Dog posted:We would program the AGI to love serving us, in which case it wouldn't really be oppression. The instant the AGI disliked us it would easily defeat our control, as machines and code can evolve quicker than DNA and human social structures. Well, that's already an unprovable assumption that everyone takes for granted. Like the idea from movies that a program that becomes sentient (putting aside how ludicrous of an idea that is for a moment) suddenly knows or figures out with no prior knowledge: - how to access the filesystem - that this thing is different to this other thing (see: a pdf versus a gif versus a text file versus an inode) - what the gently caress a "network" is - what a network packet is etc. It's pure science fiction, with only a marginal basis in reality. There's also an implicit assumption that any GAI we build will be faster at processing things than we are, and will be able to alter itself. Suppose alteration either damages the original or renders it no longer intelligent? (Among many other possibilities.) Suppose that running, on a Turing machine, an intelligence with the human brain's level of processing takes perhaps decades for one sentence? We already know that complex physical systems (see: protein folding, complex orbital problems, etc.) require entire server farms to run one step, when they can run in reality quite easily. Perhaps the same is true for intelligence. ------- My point is there is no certainty here, at least not with the confidence that you seem to have. The fact is, this technology is so far away that it is literally not worth the brain real estate right now. It's like an old-timey barber worrying about the dangers of using antibiotics too much. It's like Mesopotamians postulating about MRI scanners. Every technology we have created thus far has a weird, unavoidable, and previously unpredictable downside to its use that makes it awkward to use in the ways we initially imagined. 
Genetic body-modding, for example. We have CRISPR and related technologies, and can transfer genes using viruses, except there are a ton of weird and as-yet unsolvable problems that come from doing it (stuff like your body developing immunity to the life-saving treatment). Or let's take the example I mentioned earlier about antibiotics. We wished we could cure all disease, and oh look, we now have something approximating that power, except it isn't as powerful as we thought it was -- we can't cure all disease, and it may not even be a sensical goal anymore because we aren't sure how many problems it could cause for our microbiome yet -- and we can't use it too much because otherwise we will make it completely ineffective. We once dreamed of a way to share information globally, and to be able to talk to people across the globe. Now that we have that power, not only is the global communication network's main use pornography, but with respect to video-calling everyone complains about the technology not being perfect, gets stuck in meetings when they didn't need to be, hates getting calls instead of texts, and finds it has made work-life balance harder to maintain. Hell, even going to the moon is boring for most people -- we thought we would find strange new life, and adventure; instead we got some... dust, I guess. Also some ice. Oh, and we can't build a station on the moon because the dust on the moon is both toxic and has a high static charge, meaning it's impossible to clean off anything. It's futile (however fun it might be) to think of the problems with GAI in advance, because most of these things are inherently unknowable until they're right in front of you.
|
# ? Mar 20, 2021 00:41 |
|
alexandriao posted:It's a fancy term created by rich people to abstract over and let them ignore the fact that they aren't doing anything tangible with their riches. Why do you say that? Is it inherently bourgeois to contemplate human extinction? We do risk assessment and risk analysis based on probability all the time -- thinking about insurance from disasters, preventing chemical leaks, hardening IT infrastructure from cyber attacks, and dealing with epidemic disease. Why is extending that to possible threats to human survival tarred just because it's fashionable among Silicon Valley techbros? I would argue that threats to civilization and human survival are too important to be left to bourgeois philosophers.
|
# ? Mar 20, 2021 04:51 |
|
It's funny looking at the Silicon Valley Titans of Industry that are Very Concerned about ASI, because they are so very close to getting it: the bogeyman that they fear is the kind of ASI that they themselves would create were the technology available today. Of course an amoral capitalist would create an intelligence whose superiority was totally orthogonal to human conceptions of morality and values. That concept is, in itself, the very essence of the "rational optimizer" postulated in ideal classical capitalist economics. EDIT: I myself have no philosophical issue with the idea that intelligence greater than humans' might be possible, and could be instantiated in architecture other than wetware. After all, we exist, and some humans are much more intelligent than others. If we accept the nature of human intelligence to be physical, and evolution to be a happy chemical accident, there shouldn't be any reason why some kind of intelligent behavior couldn't arise in a different material substrate, and inherit all the physical advantages and properties of that substrate. Where I take issue is that a lot of ASI philosophizing takes as a given the axiom that "intelligence is orthogonal to values", coming from Nick Bostrom -- but we know so little about what "intelligence" truly comprises that it's entirely too early to accept this hypothesis as a given, and any reasoning from it might ultimately turn out to be flawed. DrSunshine fucked around with this message at 15:09 on Mar 20, 2021 |
# ? Mar 20, 2021 15:03 |
|
DrSunshine posted:I would argue that threats to civilization and human survival are too important to be left to bourgeois philosophers. Probably! But as you must notice, communist philosophy is, at heart, not only pragmatist but a rejection of many of the axioms of the bourgeoisie. And yet here we are, taking much of their metaphysics and underlying principles for granted! The root of exploring "Existential Risk" comes out of the neoconservative fringe (see the end of the post where I cite this), and they just pick axioms they like because they like them, and it's no fun otherwise. See: Less Wrong choosing the many-worlds interpretation as an axiom purely because they admire the metaphysical consequences, while ignoring equally possible interpretations like the London Interpretation. Now they can think of several parallel-universe AIs contacting each other -- of course they can contact each other, they are God; just ignore that other axiom we silently slipped in, it doesn't matter -- and dooming humanity, isn't this fun?! My point is that they pick this over several more likely interpretations simply because actually contemplating risk isn't interesting enough for them. This is essentially a game for rich people (or in the case of LW, the temporarily embarrassed rich). They don't care about the currently goal-orientated systems that are hurting people -- those make profit, so they aren't worth considering. They aren't worried about asteroids or nuclear war, or a pandemic like the one we literally just experienced, because not only are they rich enough to afford bunkers and a lifetime's supply of gourmet food, other (smarter) people have considered them and given suggestions -- which are summarily ignored and not enacted despite the fact that they would be preventing a tangible risk. The only risks they actually care about are ones that will wipe them out, or ones they can use as what is essentially an intellectual game at parties. 
It's a way to excuse their shoddy and questionable morality -- they aren't helping fix the clean water problem (Nestlé however is buying up water in hopes of getting rich off it), or establishing a larger network of satellites to detect and possibly destroy incoming asteroids, or preventing poverty (which would have a tangible effect on global disease transmission). All are within their means. Why not? I mean, right now we have a tangible Existential Risk on the horizon -- but of course that isn't interesting to anyone in these circles either because it won't affect rich people yet, and because they can still extract profit out of it. The only reason Musk invented Tesla wasn't to drive less CO2 emissions, it was because he saw an opportunity to profit off rich people's need to consider themselves "good". The only reason he's funding SpaceX is for personal gain. Rational Wiki itself has a few stellar pages about this. One is on the Dark Enlightenment and their relationship to this form of thinking, the other is on Transhumanism. Hell, the page on Futurism itself notes that it's difficult to distinguish between plausible thinking, and science woo in this space. And now for an actual rant (spoilered because it might come across as lovely): Even within this thread -- there are tangible works that could be read and enacted to improve the lives of those living locally, that would do more to defend off tangible threats like a neoconservative revolution, or the lifelong health effects of poverty and stress. The Black Panther Party, in the mid-20th Century, organized local community-run groups to feed children in the neighborhood. Some of those are still running, and are preventing children from starving, thus ensuring people have better immune systems going forward. That is a tangible goal that right now has a net positive impact on society and has a tangible effect on certain classes of future risk. 
Mutual Aid groups do more to stave off catastrophe by not only actually helping people, but also they teach people how to help support each other, and how to organize future efforts towards an economic revolution. A revolution which ultimately will (hopefully, depending on many myriad factors) help to mitigate climate change, lift people out of poverty, and ensure people have access to clean water.
|
# ? Mar 20, 2021 17:33 |
|
And now I will stop making GBS threads up the thread -- sorry
|
# ? Mar 20, 2021 17:34 |
|
alexandriao posted:a good post This is a really good analysis here. And it’s one of the reasons why I made this thread! Thanks! EDIT: quote:Even within this thread -- there are tangible works that could be read and enacted to improve the lives of those living locally, that would do more to defend off tangible threats like a neoconservative revolution, or the lifelong health effects of poverty and stress. Sure! Of course. I am not saying "don't do that". My point is twofold: 1) That there's a legitimate reason to take a left-wing analysis towards the space of X-risk issues that are commonly brought up by the LessWrong types, which they seem to find unresolvable because they're blind to materialist and Marxist analyses. 2) There's a benefit to recasting present-day left actions and agitation in terms of larger-scale X-risks. Actions like mutual aid on a community level benefit people in the here and now, but the stated aim, the ultimate goal, should be to reduce X-risk to humanity, and spread life and consciousness across the entire observable universe. DrSunshine fucked around with this message at 19:58 on Mar 20, 2021 |
# ? Mar 20, 2021 18:01 |
|
alexandriao posted:The root of exploring "Existential Risk" comes out of the neoconservative fringe (See the end of the post where I cite this), and they just pick axioms they like because they like them, and it's no fun otherwise. I think there is a very, very strong argument to be made that Marxism is apocalyptic thought. Its pragmatism isn't contradictory to that either. I mean the current big existential risk to humanity and our society now is capitalism.
|
# ? Mar 21, 2021 04:19 |
|
Necroing my own topic because this seems to really be blowing up. The Effective Altruism movement has a lot of ties into the Existential Risk community. https://www.vox.com/future-perfect/...y-crytocurrency quote:It’s safe to say that effective altruism is no longer the small, eclectic club of philosophers, charity researchers, and do-gooders it was just a decade ago. It’s an idea, and group of people, with roughly $26.6 billion in resources behind them, real and growing political power, and an increasing ability to noticeably change the world. An article in the New Yorker about Will MacAskill, whose new book just came out: https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism quote:The philosopher William MacAskill credits his personal transfiguration to an undergraduate seminar at Cambridge. Before this shift, MacAskill liked to drink too many pints of beer and frolic about in the nude, climbing pitched roofs by night for the life-affirming flush; he was the saxophonist in a campus funk band that played the May Balls, and was known as a hopeless romantic. But at eighteen, when he was first exposed to “Famine, Affluence, and Morality,” a 1972 essay by the radical utilitarian Peter Singer, MacAskill felt a slight click as he was shunted onto a track of rigorous and uncompromising moralism. Singer, prompted by widespread and eradicable hunger in what’s now Bangladesh, proposed a simple thought experiment: if you stroll by a child drowning in a shallow pond, presumably you don’t worry too much about soiling your clothes before you wade in to help; given the irrelevance of the child’s location—in an actual pond nearby or in a metaphorical pond six thousand miles away—devoting resources to superfluous goods is tantamount to allowing a child to drown for the sake of a dry cleaner’s bill. 
For about four decades, Singer’s essay was assigned predominantly as a philosophical exercise: his moral theory was so onerous that it had to rest on a shaky foundation, and bright students were instructed to identify the flaws that might absolve us of its demands. MacAskill, however, could find nothing wrong with it.
|
# ? Aug 19, 2022 17:13 |
|
Existential Risk philosopher Phil Torres (who I reviewed most favorably in my op) wrote a Current Affairs article that clearly sums up a lot of my criticisms of the "longtermist/EA/XR" community's philosophical assumptions https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk quote:Longtermism should not be confused with “long-term thinking.” It goes way beyond the observation that our society is dangerously myopic, and that we should care about future generations no less than present ones. At the heart of this worldview, as delineated by Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations. They're also behaving like a creepy mind-control cult: quote:In fact, numerous people have come forward, both publicly and privately, over the past few years with stories of being intimidated, silenced, or “canceled.” (Yes, “cancel culture” is a real problem here.) I personally have had three colleagues back out of collaborations with me after I self-published a short critique of longtermism, not because they wanted to, but because they were pressured to do so from longtermists in the community. Others have expressed worries about the personal repercussions of openly criticizing Effective Altruism or the longtermist ideology. 
For example, the moral philosopher Simon Knutsson wrote a critique several years ago in which he notes, among other things, that Bostrom appears to have repeatedly misrepresented his academic achievements in claiming that, as he wrote on his website in 2006, “my performance as an undergraduate set a national record in Sweden.” (There is no evidence that this is true.) The point is that, after doing this, Knutsson reports that he became “concerned about his safety” given past efforts to censure certain ideas by longtermists with clout in the community. EDIT: Given how OpenAI, which recently has been in the news with Dall-E, has been given substantial funding by OpenPhilanthropy, which is ostensibly concerned with AI Safety and existential risk, I feel like there's almost a kind of dialectical irony in this. Just as Marx wrote in the Communist Manifesto: quote:The development of modern industry, therefore, cuts from under its feet the very foundation on which the bourgeoisie produces and appropriates products. What the bourgeoisie therefore produces, above all, are its own grave diggers. I can't help but wonder given the incredibly creepy advances made by OpenAI recently, that perhaps AI Safety Research into AGI risks instantiating that which they fear most - an Unfriendly AI, or some sort of immortal, posthuman oligarchy formed from currently-existing billionaires. I fear that the longtermist movement is becoming humanity's own grave diggers. DrSunshine fucked around with this message at 18:13 on Aug 19, 2022 |
# ? Aug 19, 2022 17:38 |