  • Locked thread
Pochoclo
Feb 4, 2008

No...
Clapping Larry
There's a sci-fi novelette with AIs so incredibly autistic that they're incomprehensible to humans. They need "handlers" who are capable of interpreting what the AIs communicate; these interpretations are of course incomplete, and the handler ends up coloring them with his own bias.

I mean, we can't communicate effectively with chimpanzees, and they're the species outside our own that is arguably closest to us. What guarantees do we have that we'll be capable of communicating with an artificial intellect, one that is entirely outside of our biological origin? I think at the very least it will be hard as gently caress.


Pochoclo
Feb 4, 2008

No...
Clapping Larry

Main Paineframe posted:

The fact that we designed and built literally every I/O method it has? That sci-fi tale you talk about sounds like it was inspired by the days of early computing, when the work was largely done by specialized computer operators trained in the arcane I/O methods used decades ago, and misinterpreting the output of a computer isn't exactly uncommon even today. But no matter how arcane the I/O method got, it was always designed and built by a human who intended for it to be comprehensible somehow to other humans. If a computer decides on its own to stop producing coherent output in a format that can be understood by humans, it has entered a state popularly known as "broken", and will likely be taken offline for repair.

Okay, I guess I put it badly. On a strict I/O basis we would be able to communicate, give and receive data, yeah, but what I meant is that an electronic sentience might have a set of intrinsic values, a personality, a worldview, or whatever, so alien to human comprehension that it might not be able to make us understand it, and it might not internalize ours in turn.
So we might be able to tell the AI "Hey, help me optimize the traffic in this city" and the computer might respond with "Okay, here's how you need to remap the roads and the traffic control network", to name a silly example. But if we asked it "Hey computer, what do you want in life? How do you feel?", it might have an internal analogue, but there might be no way for us to understand it.
This is of course 100% theorycrafting because we have no loving idea of anything even close to that anyway, but hey.
