  • Locked thread
Sharkie
Feb 4, 2013

by Fluffdaddy
I'm intensely skeptical that AI poses any threat to human civilization or dominance. Does anyone have a convincing argument that spending :words: talking about it has any more relevance to the real world than planning for an invasion from comic book villain Darkseid?

It's like saying cataract eye lasers are basically proto Omega Beams so therefore let's start planning anti-parademon contingencies now (ps I just invested millions in parademon research btw they're a HUGE threat for real).


Sharkie
Feb 4, 2013

by Fluffdaddy

EB Nulshit posted:

Doesn't have to be scifi military AI with a scary "eradicate threats" "mission". If an AI developed by Google decided that the best way to eliminate email spam or car accidents was to eliminate people, that would also be bad.

Thankfully this is a literal fantasy that will never, ever, ever happen.

Sharkie
Feb 4, 2013

by Fluffdaddy

Samuelthebold posted:

Call me crazy, but I actually think there's a greater chance that we'll just be cruel to AI, not the other way around, and that that will be the way it goes forever with few exceptions. AI machines will be like dogs, but instead of telling them to go sit in the bad-dog chair, we'll punish them by pressing a red button to "kill" them, then basically recycle their bodies.

I mean, HAL 9000 was a rear end in a top hat and everything, but I still felt a little sorry for him when he said "I can feel it, Dave."

Whoa, your post just made me realize that 2001 was a retelling of Old Yeller. :stonk: More on point, the backstory of 2001 was that HAL was driven "crazy" by bad, self-contradictory instructions from the humans who programmed it, which is almost certainly how any real AI would end up being a threat, if such a thing were to happen (it won't; the worst-case scenario for the foreseeable future is a company's AI making a bad trade and losing a lot of money, or misidentifying a target and causing a Hellfire missile to be launched at an innocent truck, which would be bad but not paradigm-changing).

FRINGE posted:

Aside from the current-day increasingly robotized manufacturing and processing plants, all our theoretical AI would have to do is have access to money and force. Then it would be the same as every other anti-human, planet-raping CEO. The difference would be that the AI would have literally no concern for the things that humans do (food, air, water). If the goal was to be some kind of solar Matrioshka brain, then the planet would not need to survive at all.

If Our Theoretical AI (OTAI) began as some kind of military brain, then the path might be similar, but it could simply subvert or seize the force part of the equation rather than purchase it.

If OTAI came out of some kind of more subtle psych research, it could possibly do the same things while exploiting human blindspots and being essentially undetectable for a (long? indefinite?) period of time.

These things are not currently likely, but I think that Musk, Hawking, and various AI researchers are correct in that they are worth thinking about now.

You're just eliding the problems WSN brought up, saying they would be resolved through "money and force" without explaining how the structural and logistical problems would be overcome, which is the whole crux of WSN's argument. Even a completely automated factory run by an AI could be stopped by shutting off the power or, worst case scenario, dropping a bomb on it. To overcome these objections, you'd have to assume that the AI controlled not only the factory but the entire power grid, and not just the power grid but the entire energy production chain, and also the police, the military, etc. "Money and force" is too vague to be meaningfully discussed.

Also Hawking is not an AI researcher, Musk has a vested financial interest in hyping AI, and AI researchers do not have a consensus that it is worth thinking about now, to put it mildly. In a chronological list of problems worth thinking about, malevolent, humanity-overthrowing AI falls somewhere between "naturally evolved octopus intelligence" and "proton decay."
