|
I'm intensely skeptical that AI poses any threat to human civilization or dominance. Does anyone have a convincing argument that spending time talking about it has any more relevance to the real world than planning for an invasion from comic book villain Darkseid? It's like saying cataract eye lasers are basically proto Omega Beams, so therefore let's start planning anti-parademon contingencies now (ps I just invested millions in parademon research btw they're a HUGE threat for real).
|
# ¿ Dec 20, 2014 12:18 |
|
EB Nulshit posted: Doesn't have to be scifi military AI with a scary "eradicate threats" "mission". If an AI developed by Google decided that the best way to eliminate email spam or car accidents was to eliminate people, that would also be bad.

Thankfully this is a literal fantasy that will never, ever, ever happen.
|
# ¿ Dec 21, 2014 05:45 |
|
Samuelthebold posted: Call me crazy, but I actually think there's a greater chance that we'll just be cruel to AI, not the other way around, and that that will be the way it goes forever with few exceptions. AI machines will be like dogs, but instead of telling them to go sit in the bad-dog chair, we'll punish them by pressing a red button to "kill" them, then basically recycle their bodies.

Whoa, your post just made me realize that 2001 was a retelling of Old Yeller. More to the point, the backstory of 2001 was that HAL was driven "crazy" by bad, self-contradictory instructions from the humans who programmed it, which is almost certainly how any real AI would end up being a threat, if such a thing were to happen (it won't; the worst-case scenario for the foreseeable future is a company's AI making a bad decision and selling off stock and losing a lot of money, or misidentifying a target and causing a Hellfire missile to be launched at an innocent truck, which would be bad but not paradigm-changing).

FRINGE posted: Aside from the current-day increasingly robotized manufacturing and processing plants, all our theoretical AI would have to do is have access to money and force. Then they would be the same as every other anti-human planet raping CEO. The difference would be that the AI would have literally no concern for things that humans do. (Food, air, water.) If the goal was to be some kind of solar Matrioshka brain, then the planet would not need to survive at all.

You're just eliding the problems WSN brought up, saying they would be resolved through "money and force" without explaining how the structural and logistical problems would be overcome, which is the whole crux of WSN's argument. Even a completely automated factory run by an AI could be overcome by shutting off the power or, worst case, dropping a bomb on it. To overcome these objections, you'd have to assume that the AI controlled not only the factory but the entire power grid, and not just the power grid but the entire energy production chain, plus the police, the military, etc. "Money and force" is too vague to be meaningfully discussed.

Also, Hawking is not an AI researcher, Musk has a vested financial interest in hyping AI, and AI researchers do not have a consensus that it is worth thinking about now, to put it mildly. In a chronological list of problems worth thinking about, malevolent, humanity-overthrowing AI falls somewhere between "naturally evolved octopus intelligence" and "proton decay."
|
# ¿ Dec 26, 2014 05:03 |