  • Locked thread
Slow News Day
Jul 4, 2007

Warcabbit posted:

Well, if you think over aeons, it's basically not that surprising a concept - it's a lot easier to send unmanned probes everywhere, and I suspect AI-containing probes are how we'd explore the galaxy at some point. Sublight ones.

It's not so much a concern as something to consider.

It is something to consider, just not for the filthy, techie-hating plebes of D&D.


Dr.Zeppelin
Dec 5, 2003

If you're gonna criticize any tech person for caring about their starry-eyed techtopia instead of the real world, it seems like an odd choice to go after one of the few who backs an actual physical product that has the potential to improve the world instead of an app that exploits the hell out of a regulatory blind spot or a subsistence lifestyle product straight out of a cyberpunk nightmare.

Pope Guilty
Nov 6, 2006

The human animal is a beautiful and terrible creature, capable of limitless compassion and unfathomable cruelty.

Dr.Zeppelin posted:

If you're gonna criticize any tech person for caring about their starry-eyed techtopia instead of the real world, it seems like an odd choice to go after one of the few who backs an actual physical product that has the potential to improve the world instead of an app that exploits the hell out of a regulatory blind spot or a subsistence lifestyle product straight out of a cyberpunk nightmare.

Not to mention a dude who's got a bunch of patents that he's letting anybody who wants to use them do so.

Dr.Zeppelin
Dec 5, 2003

Pope Guilty posted:

Not to mention a dude who's got a bunch of patents that he's letting anybody who wants to use them do so.

To be fair it's really easy to parse that as a cynical and pragmatic business move by saying that in the long run Tesla makes way more money by selling cars that are convenient to charge than they do by monopolizing the ability to charge them.

Hodgepodge
Jan 29, 2006
Probation
Can't post for 232 days!

Dr.Zeppelin posted:

To be fair it's really easy to parse that as a cynical and pragmatic business move by saying that in the long run Tesla makes way more money by selling cars that are convenient to charge than they do by monopolizing the ability to charge them.

On the other hand, in the same place Apple would probably be making plans for the three Apple Car Charge Stations available per major city.

Corporations are often poo poo when it comes to doing the right thing, even when it is actually good for their bottom line.

woke wedding drone
Jun 1, 2003

by exmarx
Fun Shoe
I hope we're not just the irrigation system for agriculture superintelligence.

I hope we're not just the lubrication for industrial superintelligence.

I hope we're not the :supaburn:

MatchaZed
Feb 14, 2010

We Can Do It!


SedanChair posted:

I hope we're not just the irrigation system for agriculture superintelligence.

I hope we're not just the lubrication for industrial superintelligence.

I hope we're not the :supaburn:

To be fair, even Stephen loving Hawking is worried about sentient AI.

Not that that really makes much of a difference considering neither of them work in AI research.

HootTheOwl
May 13, 2012

Hootin and shootin
Why would an AI care about humans? We're a threat to ourselves, not it.

computer parts
Nov 18, 2010

PLEASE CLAP

Pope Guilty posted:

Not to mention a dude who's got a bunch of patents that he's letting anybody who wants to use them do so.

No, he said he wouldn't litigate on breaking those patents. He's not licensing them out because ~~reasons~~.

Chokes McGee
Aug 7, 2008

This is Urotsuki.

HootTheOwl posted:

Why would an AI care about humans? We're a threat to ourselves, not it.

Yeah, I love how everyone who discusses AI becoming sentient immediately jumps to THEY WILL WIPE US ALL OUT. Project much? :rolleyes:

ufarn
May 30, 2009
It's entirely possible that the AI will have better social skills than tech workers, so I'm kinda down with the Singularity.

Warcabbit
Apr 26, 2008

Wedge Regret
Yeah, it's not a thing to be worried about so much as 'they're probably going to be what outlives humanity and colonizes the stars' because of various things like human meat puppets requiring things like air and food.

I'm actually very interested in the evolutionary patterns and behavior of AI, because it won't have the underlying urges of the human / mammal / lizard / whatever brain stem. Unless it's a scanned human brain that all AI is based on. (I am making no predictions.)

Honestly, my opinion on AI is that sooner or later, we're going to have systems that we can socialize with and work with and not be able to tell they're not human. Will those be true AI? I don't know. Will they be 'for all intents and purposes' AI? I'd guess so.

Kobayashi
Aug 13, 2004

by Nyc_Tattoo
Eh, I don't know much about AI, but Diane Forsythe showed that the AI work in the 80s tended to reflect the biases and assumptions of the (mostly white, male) researchers. The thought that singularity-esque AI, built by humans, would somehow not reflect humanity strikes me as techno-utopian thinking. I can understand why people are making fun of Musk for worrying about such a seemingly remote problem, but the tech scene today is already producing poo poo that it doesn't understand. The entire "Big Data" buzzword parade is all about identifying the "what" without a goddamn care about the "why." The US government already uses remotely-piloted drones to kill people based on metadata alone. It's not much of a stretch to imagine a world where an automated algorithm mines the metadata, some keyboard jockey gives it a cursory glance, pushes a button, and then another automated algorithm drops a bomb on someone. Is that really all that different from ooga-booga Matrix/Terminator AI?

Kobayashi fucked around with this message at 15:42 on Aug 6, 2014

woke wedding drone
Jun 1, 2003

by exmarx
Fun Shoe
Those are all issues explained by runaway and unaccountable institutions. Emergence=/=AI, unless we call the court at Versailles AI.

Pope Guilty
Nov 6, 2006

The human animal is a beautiful and terrible creature, capable of limitless compassion and unfathomable cruelty.
There's only one known kind of intelligent mind so far, so suggesting that another would be very similar to that isn't too outre.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Nessus posted:

I assume she is fishmech.

I can't believe I'm saying this but you're being really unfair to fishmech there

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Pope Guilty posted:

There's only one known kind of intelligent mind so far, so suggesting that another would be very similar to that isn't too outre.
You know here's the thing, I actually think this is a stupid and fetishistic idea which does material harm to society.

It is very true that humans do a lot of awful things on a regular basis. There are also some humans who genuinely are sociopathic monsters. They are few, and all gags about politicians aside, the sociopathic monsters seem just as likely to be drifters or assholes working at the Pizza Hut as they do to be politicians.

Humans do have a lot of blinders and prejudices, and these ingrained layers of outer dirt and psychological crap can become so deeply ingrained that someone who on a personal level is empathic, feeds a kitten, writes a check to a telethon, and hugs their children can then take a plane into Washington and vote for bombing a house full of foreign children. However, this is a very different phenomenon from saying humans are 'fundamentally bad,' because it is due to acculturation, not some genetic impulse to relentlessly hurt, dominate, and exploit each other in whatever our current metaphor for the lovely emergent behavior of capitalism is.

But in a crisis most people try to work together. Or at least they don't gently caress each other up, not in the aggregate, not typically. They may abandon their respect for property law (and then be shot by police officers, themselves acculturated into the police-officer head trip, where financial incentives and ways of thinking make it reasonable to shoot people taking food and trinkets from a store when the city has drowned). But for whatever reason everyone in this drat country's media industry is horribly fascinated with the prospect of any bad situation making everyone automatically murder and abuse each other, as if by inner nature, never mind that if this really was the fundamental human nature we would have eaten ourselves to death before leaving Africa.

And since, as "everyone knows," people are all monsters when you let 'em be, you reinforce the ideas that we need strong mechanisms of control and that we need to bomb others into submission at all times; that employees must be kept down, that skilled workers will organize to rip your business apart rather than 'to improve their situation.' Because they have to really be monsters under the surface. Everyone knows.

Sedanchair is right: most (I will not say all) of organized lovely human behavior is due to institutions getting out of hand, not because of Original Sin.

e: If crises aren't persuasive, consider other situations. Even if you swear at someone on your drive to work, think of all the cohesive social situations that led up to you having gotten up that morning, gotten in a vehicle someone else almost certainly built, and gone off to your semi-steady place of work. If everyone was robbing, killing and abusing each other constantly this would not be the case. Obviously systemic discrimination etc. exists, but that's institutions getting out of hand again, leavened by the fact that there are over three hundred million individual humans in the US alone, and some of them are going to act out on any given day. Some! Not most.

Nessus fucked around with this message at 17:33 on Aug 6, 2014

ReindeerF
Apr 20, 2002

Rubber Dinghy Rapids Bro
Are you spitting in the eye of Western society's implicit obsession with internally consistent rulesets and things to govern and enforce them? Don't you know that if we pass a law and create a regulation for every single situation, leaving the rest to a judiciary based on precedent, everything will work out? Computer overlords are just the ultimate extension of our general philosophy.

computer parts
Nov 18, 2010

PLEASE CLAP

ReindeerF posted:

Are you spitting in the eye of Western society's implicit obsession with internally consistent rulesets and things to govern and enforce them? Don't you know that if we pass a law and create a regulation for every single situation, leaving the rest to a judiciary based on precedent, everything will work out? Computer overlords are just the ultimate extension of our general philosophy.

"A judiciary based on precedent" is the opposite of most Western legal systems.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



computer parts posted:

"A judiciary based on precedent" is the opposite of most Western legal systems.
I figure here this means 'well, we ruled this way 70 years ago on a similar issue, so we're going to take that as guiding, unless we don't want to because we really need to stick it to Obama or it makes Kennedy uncomfortable' (our present system).

Besides, if the laws were (largely) just, and the AIs (allowing for occasional accident) followed the laws, presumably we would have very little to fear in terms of being openly enslaved or exterminated by those AIs. There would be loopholes and I guess they could be exploitative I suppose. What I guess is the real lurking underlying horror here is that they realize that laws don't mean poo poo if you're powerful enough to ignore them, and they assume Skynet would figure this out, probably sooner rather than later.

computer parts
Nov 18, 2010

PLEASE CLAP

Nessus posted:


Besides, if the laws were (largely) just, and the AIs (allowing for occasional accident) followed the laws, presumably we would have very little to fear in terms of being openly enslaved or exterminated by those AIs. There would be loopholes and I guess they could be exploitative I suppose. What I guess is the real lurking underlying horror here is that they realize that laws don't mean poo poo if you're powerful enough to ignore them, and they assume Skynet would figure this out, probably sooner rather than later.

If we're at the point where people are basically cordoned off on Earth and the AI has interstellar reach I don't see why it would want to exterminate people. At worst it just makes more sense to stop providing for and/or just blockade our solar system and then continue on its merry way through the galaxy.

If you mean just in general then presumably you'd program the AI to provide for humanity, and without that it wouldn't have a purpose. An existential crisis, in other words, without our presence.

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS
Even if you program an AI to respect whatever values you decide, if the AI is to be self-improving you have to make sure it never "improves" in a way that doesn't preserve those values.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



computer parts posted:

If we're at the point where people are basically cordoned off on Earth and the AI has interstellar reach I don't see why it would want to exterminate people. At worst it just makes more sense to stop providing for and/or just blockade our solar system and then continue on its merry way through the galaxy.
We also assume the AIs are going to internalize the values of Western capitalism and biological expansion and reproduction when they are not even biological life forms. I guess to be fair if we're worrying then it makes sense to consider the nightmare scenarios rather than 'AIs of sufficient complexity spontaneously develop a Buddha-like desire to help others, with the only flaw being a constant suggestion to abandon economic pursuits.'

Augustin Iturbide
Jun 4, 2012
Personally I'm pretty sure most AI conversations are a moot point because why would anyone build an AI that's like a person? Most of the reason we build artificial intelligence today is to perform tasks very quickly that a person can't do or would get bored doing over and over again. It seems like adding any kind of sentience to our computers that mine asteroids or predict the weather or whatever future computers we build is intensely counterproductive.

egg tats
Apr 3, 2010

Augustin Iturbide posted:

Personally I'm pretty sure most AI conversations are a moot point because why would anyone build an AI that's like a person? Most of the reason we build artificial intelligence today is to perform tasks very quickly that a person can't do or would get bored doing over and over again. It seems like adding any kind of sentience to our computers that mine asteroids or predict the weather or whatever future computers we build is intensely counterproductive.

There are definitely some "pure" CS researchers looking to create an AI for the sake of itself, and even if they weren't pure, once you get an AI with free will, the capacity to improve itself, and self-determination, it becomes trivial to create one that has all of that but only has the drive to process lots of pizza delivery orders. You could probably ask the first AI to create the second one for fun and get it done in seconds.

There are only two dangers we'd face from an AI. The first is if we forced it to value growth at all costs, which is something only an insane person would do because it's clearly a bad idea to give something the drive to turn the entire solar system into a single computer. The other would be if we can't convince it that we don't think it's a threat, because almost every piece of fiction we have about artificial intelligence, outside of a few obscure sci-fi authors, is about that AI killing everything. And in that case it's more likely to just gently caress off to the moon and leave us alone forever.

Warcabbit
Apr 26, 2008

Wedge Regret

Pope Guilty posted:

There's only one known kind of intelligent mind so far, so suggesting that another would be very similar to that isn't too outre.

Not enough information to make a judgement, really. That being said, personally, I don't think the first AI will be entirely a planned thing, but rather an emergent behavior of multiple half-assed networked systems.

http://www.zdnet.com/cias-amazon-cloud-goes-live-firewalled-and-private-7000032314/

... okay, we're hosed.

Augustin Iturbide
Jun 4, 2012

senae posted:

There are definitely some "pure" CS researchers looking to create an AI for the sake of itself, and even if they weren't pure, once you get an AI with free will, the capacity to improve itself, and self-determination, it becomes trivial to create one that has all of that but only has the drive to process lots of pizza delivery orders. You could probably ask the first AI to create the second one for fun and get it done in seconds.

Sure, but why would we give an AI free will or self determination in the first place? It seems counterproductive to what we use computers for. It'd be like saying how great it would be to uplift our cows so they can walk themselves into the slaughterhouse. It adds a bunch of problems in exchange for 'solving' problems that aren't problems.

I Am The Scum
May 8, 2007
The devil made me do it

Augustin Iturbide posted:

Personally I'm pretty sure most AI conversations are a moot point because why would anyone build an AI that's like a person?

But what about :siren: my robot girlfriend? :siren:

Lycus
Aug 5, 2008

Half the posters in this forum have been made up. This website is a goddamn ghost town.
The sort of people that want robot girlfriends probably don't want them to be capable of independent thought.

Assepoester
Jul 18, 2004
Probation
Can't post for 10 years!
Melman v2

HootTheOwl posted:

Why would an AI care about humans? We're a threat to ourselves, not it.
https://www.youtube.com/watch?v=cTLMjHrb_w4

ContinuityNewTimes
Dec 30, 2010

Я выдуман напрочь
What if the AI was stupid.

egg tats
Apr 3, 2010

Augustin Iturbide posted:

Sure, but why would we give an AI free will or self determination in the first place? It seems counterproductive to what we use computers for. It'd be like saying how great it would be to uplift our cows so they can walk themselves into the slaughterhouse. It adds a bunch of problems in exchange for 'solving' problems that aren't problems.

Because we can? Scientists do poo poo for nothing but the sake of doing it all the time, it's basically their default state. Even if we start off exclusively making AIs with a predefined purpose, eventually the tech is going to be cheap enough and plentiful enough that some nerd somewhere will make Data.

Pope Guilty
Nov 6, 2006

The human animal is a beautiful and terrible creature, capable of limitless compassion and unfathomable cruelty.

Nessus posted:

You know here's the thing, I actually think this is a stupid and fetishistic idea which does material harm to society.

It is very true that humans do a lot of awful things on a regular basis. There are also some humans who genuinely are sociopathic monsters. They are few, and all gags about politicians aside, the sociopathic monsters seem just as likely to be drifters or assholes working at the Pizza Hut as they do to be politicians.

Humans do have a lot of blinders and prejudices, and these ingrained layers of outer dirt and psychological crap can become so deeply ingrained that someone who on a personal level is empathic, feeds a kitten, writes a check to a telethon, and hugs their children can then take a plane into Washington and vote for bombing a house full of foreign children. However, this is a very different phenomenon from saying humans are 'fundamentally bad,' because it is due to acculturation, not some genetic impulse to relentlessly hurt, dominate, and exploit each other in whatever our current metaphor for the lovely emergent behavior of capitalism is.

But in a crisis most people try to work together. Or at least they don't gently caress each other up, not in the aggregate, not typically. They may abandon their respect for property law (and then be shot by police officers, themselves acculturated into the police-officer head trip, where financial incentives and ways of thinking make it reasonable to shoot people taking food and trinkets from a store when the city has drowned). But for whatever reason everyone in this drat country's media industry is horribly fascinated with the prospect of any bad situation making everyone automatically murder and abuse each other, as if by inner nature, never mind that if this really was the fundamental human nature we would have eaten ourselves to death before leaving Africa.

And since, as "everyone knows," people are all monsters when you let 'em be, you reinforce the ideas that we need strong mechanisms of control and that we need to bomb others into submission at all times; that employees must be kept down, that skilled workers will organize to rip your business apart rather than 'to improve their situation.' Because they have to really be monsters under the surface. Everyone knows.

Sedanchair is right: most (I will not say all) of organized lovely human behavior is due to institutions getting out of hand, not because of Original Sin.

e: If crises aren't persuasive, consider other situations. Even if you swear at someone on your drive to work, think of all the cohesive social situations that led up to you having gotten up that morning, gotten in a vehicle someone else almost certainly built, and gone off to your semi-steady place of work. If everyone was robbing, killing and abusing each other constantly this would not be the case. Obviously systemic discrimination etc. exists, but that's institutions getting out of hand again, leavened by the fact that there are over three hundred million individual humans in the US alone, and some of them are going to act out on any given day. Some! Not most.

I have no idea what any of this has to do with my post. I said that it's not unlikely that other intelligences would be similar to ours and suddenly I'm Thomas Hobbes?

HootTheOwl
May 13, 2012

Hootin and shootin

And this makes even less sense: Where does the food come from when you blot out the sun!?

Hodgepodge
Jan 29, 2006
Probation
Can't post for 232 days!

Jeffrey posted:

Even if you program an AI to respect whatever values you decide, if the AI is to be self-improving you have to make sure it never "improves" in a way that doesn't preserve those values.

When we talk about the behavior of "AI," we inevitably simply talk about humanity.

Every parent has faced this quandary and experienced the limitations of one's ability to control another intelligent being's behaviors.

Likewise, we became a lot less concerned about AIs deciding to create a nuclear holocaust once the system called the "Cold War" disintegrated.

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Pope Guilty posted:

I have no idea what any of this has to do with my post. I said that it's not unlikely that other intelligences would be similar to ours and suddenly I'm Thomas Hobbes?
Yeah, I think I'd read detail where it did not exist in your post.

Mc Do Well
Aug 2, 2008

by FactsAreUseless
Helios in 'Deus Ex' was pretty cool.

PupsOfWar
Dec 6, 2013

I'm not cool with King Schmidt because I just know he would gently caress up NPR

FRINGE
May 23, 2003
title stolen for lf posting

ufarn posted:

It's ~~entirely possible~~ highly probable that the AI will have better social skills than tech workers, so I'm kinda down with the Singularity.




Pope Guilty posted:

There's only one known kind of intelligent mind so far, so suggesting that another would be very similar to that isn't too outre.
Dolphins, crows, apes, squid, etc.

Just because we don't know how to speak their language is not proof that intelligence is absent. Research points to the fact that intelligence is actually present, and that it was our own blindness that missed it to begin with.

If anything pressured an AI to be "like us" it would be the language it was first given so that it could communicate with us, after that ... ?

http://news.discovery.com/animals/zoo-animals/angry-crows-memory-life-threatening-behavior-110628.htm

quote:

Crows remember the faces of threatening humans and often react by scolding and bringing in others to mob the perceived miscreant, according to a new study published in the latest Proceedings of the Royal Society B.

Since the mob members also then indirectly learn about the threatening person, the findings demonstrate how just a single crow's bad experience with a particular human can spread information about this individual throughout entire crow communities.

http://www.forbes.com/sites/brucedorminey/2012/10/18/dolphin-speak-bustin-the-code-on-flippers-rhymes/

quote:

The dolphin’s whistles have been sampled, statistically-parsed and then analyzed to determine whether certain whistle types can be predicted from the same or another whistle type. Results show that dolphin whistle repertoires contain higher-order internal structure or organizational complexity. This suggests their whistle “language” contains elements loosely analogous to grammar or syntax in human language.

http://web.stanford.edu/dept/news/pr/00/000323gilly.html
http://ocean.si.edu/blog/so-you-think-youre-smarter-cephalopod

quote:

A new study reveals that newborn squid actually learn through the process of trial and error, much like humans do, and that these early-life experiences can physically change a squid's nervous system in ways that may be permanent.

quote:

Recently, as our understanding of cephalopods has improved, we’ve begun to wonder: Are these animals intelligent? It depends on how you define "intelligence."

James Wood, a teuthologist (cephalopod scientist), imagined creating an intelligence test for humans, by an octopus:

“So the octopus thinks: ‘All right. I’m going to make an intelligence test for humans, because they show a little bit of promise, in a very few ways.’ And the first question the octopus comes up with is this: How many color patterns can your severed arm produce in one second?”


FilthyImp
Sep 30, 2002

Anime Deviant
Have you ever driven/walked/ride shared your way into an unfamiliar part of town and immediately wished you had taken the long way around?

There's an app for that!
