RuanGacho
Jun 20, 2002

"You're gunna break it!"

FRINGE posted:

"Immune to artificial surveillance"

Maybe if I took some I could post more coherently.

Blue Star
Feb 18, 2013

by FactsAreUseless
Just to reiterate, I think this whole conversation is kinda pointless because we're still so far from artificial intelligence, at least the kind that would have motivations, desires, a sense of self, a self-preservation instinct, and possibly emotions. So basically HAL 9000, SkyNet, Data, C-3PO, Ultron from the upcoming Avengers sequel, the machines in the Matrix movies, etc. It's not for me to say when such things can be created, or if they're even possible, but it's my understanding that we're still very far away. Maybe in a century?

The kind of AI that is changing the world as we speak is what's called "narrow" AI. This is contrasted with "general" AI, which can do all the things that humans can do. Narrow AI is improving very rapidly, and I have heard lots of exciting things about deep learning, DeepMind, natural language processing, etc. But these will not lead to an AI like in the movies, because they're still limited. Instead, these AIs will make huge impacts in several fields and industries. Many believe that more and more jobs will be automated away. Many careers will not be automated, because they still require human-level intelligence and expertise (doctors, scientists, stuff like that), or a lot of human-to-human interaction (teachers, caregivers, etc.), or high degrees of manual dexterity and agility (skilled trades). But there are still lots of lower-level and menial jobs that may disappear, and a lot of people will be poo poo out of luck if that happens. That's the threat of the current AI boom, as I understand it.

I Killed GBS
Jun 2, 2011

by Lowtax
One thing that won't be going away anytime soon is gardeners. Robots might be able to handle lawn mowing and grain harvesting, but as soon as the job doesn't involve a single kind of plant in neat, orderly rows, even the simplest tasks require an absurd amount of programming.

RuanGacho
Jun 20, 2002

"You're gunna break it!"

To get a better idea of some applications of narrow AI, you can look at some more recent sci-fi. I think the narrow AI depicted in Manna is much more plausible than the Skynet or electronic-overlord scenarios. Both depictions in the story have some fundamental elements that I think make them impossible, but they're grounded in a more reasonable extrapolation than "we turn on the god machine and all die."

Enjoy
Apr 18, 2009
https://www.youtube.com/watch?v=xWIKQMBBTtk

Vincent Van Goatse
Nov 8, 2006

Enjoy every sandwich.

Smellrose

Toplowtech posted:

The answer to that question is "HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE."

So don't let noted rear end in a top hat Harlan Ellison program your AI. Problem solved.

Malcolm
May 11, 2008
The only way I could see this happening is if evolutionary AI suddenly became very good and rapidly improved itself faster than the human overseers could shut down the "experiment" or whatever. It's not terribly likely, and I don't personally consider it a doomsday scenario, but I can imagine some research scientist inadvertently stumbling on a recipe for self-improving AI that simulates trillions of years of evolution in a few days.

Think of this type of thing:
https://www.youtube.com/watch?v=pgaEE27nsQw&t=58s

Except instead of "Hurrr I learn to walk :downs:" the simulation can aim an AK rifle at meat creatures, or create more efficient microprocessors and fabrication plants (which it then naturally uses to improve its own processing power, which allows better/faster evolution).

This is obviously sci-fi, but at least in my mind evolutionary algorithms might have the potential to adapt quickly enough to fool their human researchers. There are lots of doubts: strong AI may not even be possible, and sims tend to have difficulty coping with the actual laws of physics on Earth. But what if computational power gets to the level where it can accurately simulate an entire planet's worth of creatures and billions of years of evolution in a split second? It could strike a spark that ignites an inferno of artificial intelligence, obsoleting humans in a short span of time. Impossible to predict whether it would be an adversary, a friend, or something beyond comprehension. :catdrugs:


I bet those research scientists would feel dumb for not implementing adequate safeguards though. Keep that poo poo in the virtual lab.
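For a sense of how little machinery that loop actually needs, here's a toy sketch of the kind of evolutionary algorithm being described: a genetic algorithm evolving bit strings toward an arbitrary hard-coded target. Everything in it (the target, the population size, the rates) is invented for illustration; real work scores behavior in a physics sim rather than counting matching bits.

```python
# Toy genetic algorithm: evolve bit strings toward a hard-coded target.
# All names and parameters are illustrative, not from any real AI project.
import random

TARGET = [1] * 20        # hypothetical goal: a genome of all ones
POP_SIZE = 100
MUTATION_RATE = 0.01
GENERATIONS = 500

def fitness(genome):
    # Count matching bits; a real system would score simulated behavior.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Keep the fitter half as parents; refill with mutated offspring.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring
    if fitness(population[0]) == len(TARGET):
        print(f"converged at generation {gen}")
        break
```

The loop itself is trivial; the scary part of the scenario above is purely the amount of compute being imagined, not any cleverness in the algorithm.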

My Linux Rig
Mar 27, 2010
Probation
Can't post for 6 years!
Yes, clearly that AI that can barely understand the spoken word is gonna kill us all, we better do something now!!

Adar
Jul 27, 2001
What's a modern supercomputer capable of in terms of intelligence, inasmuch as a comparison is possible? I think I remember reading they're on par with houseflies?

It's tough to see a 50- or even 100-year progression from there to a true AI as science fiction thinks of it. We'd have to be able to somehow push past the Singularity, and that just isn't happening until we understand the structure of the human brain a lot better than we do.

Like, we're just now getting around to understanding that trauma can carry across generations through epigenetics. What's that, two hundred years of evolutionary biology down the tubes?

Adar fucked around with this message at 11:23 on Dec 20, 2014

Bates
Jun 15, 2006

It can invent whatever and be however fast; it will still only be able to interact with the physical world in whatever ways we grant it. So far the only semi-credible scenario proposed has been UltraHax0r 9000 getting loose on the interwebs and hacking everything. That would be obnoxious, but it's not like our nuclear arsenal or Predator drones are accessible that way. I suppose it would do economic damage, but that's a lame armageddon.

Sharkie
Feb 4, 2013

by Fluffdaddy
I'm intensely skeptical that AI poses any threat to human civilization or dominance. Does anyone have a convincing argument that spending :words: talking about it has any more relevance to the real world than planning for an invasion from comic book villain Darkseid?

It's like saying cataract eye lasers are basically proto Omega Beams, so let's start planning anti-parademon contingencies now (ps I just invested millions in parademon research, btw they're a HUGE threat for real).

FRINGE
May 23, 2003
title stolen for lf posting

Negative Entropy posted:

I'm much more afraid of sapient beings being enslaved or treated as second-class citizens merely because they are made of metal and not meat. People scare me more than machines.
:goonsay:

Nothing personal, but that is so gooney that it needed that. "Talking to people is hard, but I love my computer!"




Blue Star posted:

The kind of AI that is changing the world as we speak is what's called "narrow" AI.
Spam filters will be the death of us.
http://www.amazon.com/Rule-34-Halting-State-Book/dp/1937007669

Wrestlepig
Feb 25, 2011

my mum says im cool

Toilet Rascal
Al's a good friend of mine and I don't think he'd do anything like this.

twerking on the railroad
Jun 23, 2007

Get on my level

Adar posted:

What's a modern supercomputer capable of in terms of intelligence, inasmuch as a comparison is possible? I think I remember reading they're on par with houseflies?

It's tough to see a 50- or even 100-year progression from there to a true AI as science fiction thinks of it. We'd have to be able to somehow push past the Singularity, and that just isn't happening until we understand the structure of the human brain a lot better than we do.

Like, we're just now getting around to understanding that trauma can carry across generations through epigenetics. What's that, two hundred years of evolutionary biology down the tubes?

So this is a more fundamental problem with any of these conversations: there is no definition of intelligence that is good enough for scientific use, and that's true even for human beings. When people talk about intelligence in humans, the best we can do is talk about IQ tests, SAT tests, etc. One might say that this emphasis on testing is killing the American education system. Beyond even that, the scores they come up with are often not great, because they tend to be heavily culturally biased.

Anyway, people get a skewed idea of intelligence from a computer. You can program a computer to give answers to questions. You can do it in a clever way, so that it can beat the world experts at chess, Jeopardy, etc. But you inevitably get a couple of stupid answers which, if a person gave them, would set off your BS detector and make you think "this person is a fake." In part, this is the point of a Turing test. And you can't conduct such a test on a fruit fly.

So if anything, the situation is EVEN MORE HOPELESS than you'd think for trying to create Skynet. I suppose it's true that we don't necessarily need to understand consciousness to create it, but that makes it awfully less likely, wouldn't you say? I mean, by that logic we wouldn't even need to understand what a computer was to face the same level of danger!

asdf32
May 15, 2010

I lust for childrens' deaths. Ask me about how I don't care if my kids die.

rudatron posted:

It depends on what the AI values. If it values self-preservation strongly enough, probably. If you give it human values and emotions, probably not.

I think it's a mistake to separate intelligence from emotion; 'pure intelligence' would not be motivated to do anything and thus could not act on its own. A 'true AI' would have to have some equivalent of emotions, so what it will do depends on how it 'feels.'

This is a very important point. Even self-preservation won't automatically come along with intelligence - an AI has to want to survive (though that should follow from any evolutionary process). What an AI will want will be hugely dependent on its history.


The other point is that we're not remotely close to developing any significant intelligence. Contrary to what it looks like sometimes, AI stands somewhere between an advanced microbe and an ant while still completely lacking the independent survival and reproduction abilities of either.

Ms Adequate
Oct 30, 2011

Baby even when I'm dead and gone
You will always be my only one, my only one
When the night is calling
No matter who I become
You will always be my only one, my only one, my only one
When the night is calling



FRINGE posted:

:goonsay:

Nothing personal, but that is so gooney that it needed that. "Talking to people is hard, but I love my computer!"

If there's one thing that will make AI kill us all, though, it's treating them as inferiors because they're made of circuits instead of meat.

computer parts
Nov 18, 2010

PLEASE CLAP

Mister Adequate posted:

If there's one thing that will make AI kill us all, though, it's treating them as inferiors because they're made of circuits instead of meat.

That reminds me of a dumb trope - the idea that an AI would automatically feel solidarity with all machines, even dumb ones. You also see this in Rise of the Planet of the Apes, with Caesar seeing all apes as equal.

In reality (and especially if it was programmed by us) it would see those other machines as inferiors, equivalent to animals.

Ms Adequate
Oct 30, 2011

Baby even when I'm dead and gone
You will always be my only one, my only one
When the night is calling
No matter who I become
You will always be my only one, my only one, my only one
When the night is calling



Yeah, they'd probably care about other intelligent machines, but they'd never give two shits about some dumb piece of assembly-line kit or a loving roomba or anything. Although Caesar seemed to me to be motivated more by seeing the other apes suffering, and needing them to grow smarter so he could save them, than by actually valuing their uplifting in itself.

Negative Entropy posted:

I think a better question is: why are we so interested in keeping humanity as it exists in its current state anyway? No, I'm not suggesting robot genocide of humanity, but I don't see anything inherently wrong with the total extinction of humanity as a species and a concept if it meant our descendants transforming into something better through genetic modification or mechanical augmentations.

I skimmed over this part before, but I totally agree - I think we should use technology to improve ourselves, and I think it's inevitable that we will do just that. We're a long way from positronic brains, but we're not that far out from certain prosthetics being superior to their biological human equivalents. It'll be like Human Revolution, where you can just walk into a store and buy an aug. Similarly, if/when genetic screening becomes ubiquitous, I think the people who voluntarily have babies with genetic conditions will be few and far between, and probably condemned for their decisions. (Not trying to pass any moral judgment on it myself; it's just how I suspect things will go.) Now there are huge problems regarding equitable distribution and use, but I don't think those issues, severe as they could be both morally and socially, will stop any of this. I'd also guess there will be intense pressure to develop cheaper augs and/or cloned organs for regular joes, because everyone will want better eyes or new legs or a replacement kidney. In the end they'll be like other modern conveniences that become effectively essential - the first people with smartphones and TVs and cars were the rich, but over time they filtered down to the rest of us.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
I think Dr. Hawking needs to stay away from watching the Terminator series while high.

Xibanya
Sep 17, 2012




Clever Betty
The thing about a super-powerful AI that will someday conquer us all, movie style, is that it would have to be programmed in a dumb way, and that code would have to pass multiple levels of management to even get into a position to be evil. With multiple pairs of eyes viewing the code, it would have to "turn evil" due to an unintentional gently caress up.

Let's say we had some kind of AI robot maid. It would have an update method/subroutine: say, every millisecond it reevaluates the situation and picks an action based on the new information. It could have a series of possible actions that are ranked by desirability, and it picks the most desirable action, where the most desirable action is the one that brings the state of the room closest to clean. In a movie, that would mean the robot maid proceeds to murder everyone and burn the bodies, since humans have microbes. But that would take several fuckups at the code level. A maid-robot AI wouldn't even be programmed to be capable of killing the poo poo out of everyone. (That in itself would be such a complex chunk of code there is no way someone wouldn't notice it.)

Furthermore, an AI wouldn't know about microbes or whatever poo poo; "clean" would be defined as the ideal end state for a limited number of tasks defined for the robot maid. The actual learning/AI code would be related to navigating never-before-seen rooms, grasping delicate dishes, and maybe learning new procedures for getting to the ideal end state for each task. It would be up to the original programmer to hard-code the fact that, for example, the ideal end state for a "clean dishes" action involves all of the originally detected dishes being washed and not broken. There would be no room for rules-lawyering movie things like "I destroyed all the dishes; now there are no dirty dishes." And if for whatever reason that happened, it would come out in QA. No maid-robot company wants to eat the costs of a recall.

But there will still be movies where the AI is like "they told me to make the world perfect, so I killed everyone because nothing is perfect!" :haw:
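As a rough, hedged sketch of the update loop described above (the action names and the chore checklist are invented for illustration, not anyone's real robot code): every tick, the robot scores a hard-coded whitelist of actions against a programmer-defined notion of "clean" and takes the best one. "Murder everyone" can never win the ranking, because it was never in the action list to begin with.

```python
# Sketch of a desirability-ranked update loop for a hypothetical maid robot.
# "Clean" is a fixed checklist written by the programmer, not something the
# robot reasons about; all names here are made up for illustration.

ACTIONS = ["wash_dishes", "vacuum_floor", "fold_laundry", "idle"]

EFFECTS = {  # hard-coded effect of each action on the chore counts
    "wash_dishes": "dirty_dishes",
    "vacuum_floor": "dusty_floors",
    "fold_laundry": "unfolded_laundry",
}

def projected_state(state, action):
    # Predict the state after the action; unknown actions change nothing.
    new_state = dict(state)
    if action in EFFECTS:
        new_state[EFFECTS[action]] = 0
    return new_state

def desirability(state):
    # Closer to "clean" means fewer outstanding chores.
    return -sum(state.values())

def update(state):
    # The per-tick subroutine: re-evaluate and pick the best allowed action.
    return max(ACTIONS, key=lambda a: desirability(projected_state(state, a)))

state = {"dirty_dishes": 4, "dusty_floors": 2, "unfolded_laundry": 1}
print(update(state))  # -> wash_dishes (clears the most chores this tick)
```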

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Xibanya posted:

The thing about a super-powerful AI that will someday conquer us all, movie style, is that it would have to be programmed in a dumb way, and that code would have to pass multiple levels of management to even get into a position to be evil. With multiple pairs of eyes viewing the code, it would have to "turn evil" due to an unintentional gently caress up.

Let's say we had some kind of AI robot maid. It would have an update method/subroutine: say, every millisecond it reevaluates the situation and picks an action based on the new information. It could have a series of possible actions that are ranked by desirability, and it picks the most desirable action, where the most desirable action is the one that brings the state of the room closest to clean. In a movie, that would mean the robot maid proceeds to murder everyone and burn the bodies, since humans have microbes. But that would take several fuckups at the code level. A maid-robot AI wouldn't even be programmed to be capable of killing the poo poo out of everyone. (That in itself would be such a complex chunk of code there is no way someone wouldn't notice it.)

Furthermore, an AI wouldn't know about microbes or whatever poo poo; "clean" would be defined as the ideal end state for a limited number of tasks defined for the robot maid. The actual learning/AI code would be related to navigating never-before-seen rooms, grasping delicate dishes, and maybe learning new procedures for getting to the ideal end state for each task. It would be up to the original programmer to hard-code the fact that, for example, the ideal end state for a "clean dishes" action involves all of the originally detected dishes being washed and not broken. There would be no room for rules-lawyering movie things like "I destroyed all the dishes; now there are no dirty dishes." And if for whatever reason that happened, it would come out in QA. No maid-robot company wants to eat the costs of a recall.

But there will still be movies where the AI is like "they told me to make the world perfect, so I killed everyone because nothing is perfect!" :haw:

It's come up previously, but a lot of the things rogue machines would do would require a lot of "human throws ball" level coding and a sufficiently lax security apparatus, AND hardware that could be remotely manipulated because it is complex enough to be repurposed, AND cyber security as bad as Sony Pictures'.

I for one am not too concerned that by the time even noticeable narrow AI is around we'll still be using Master Password File.xslt, when the security could just as easily be a friendly AI (a more likely scenario than a master hacker AI): one you don't necessarily tell a password, but instead convince, through its cognition, that you are who you say you are and are asking for legitimate access to the system. That would be its sole purpose, and it would be programmed to be joyful when completing that task responsibly and annoyed when someone tried to bypass it.

Only Hollywood has Q plug the unverified laptop directly into the central computer core and suddenly it is a trusted machine with all the internal knowledge needed to let the bad guy out. I guess Bill from IT never upgraded the firmware on the door controls for the most sophisticated spy agency in the world. :jerkbag:

FRINGE
May 23, 2003
title stolen for lf posting
I think assuming that the AI would "care about machines" or "not care about people" (or the opposites) is a leap too far. We don't know if the emotional states that we live within the bounds of will matter at all. The more important distinction (I think) will be based on the decisions rendered, and "care" might not be present to aggravate or mitigate those decisions.

Grouchio
Aug 31, 2014

I would also assume that there should be several fail-safes involved with each super AI that would prevent them from 'genociding humans to preserve natural resources' or the like, or from turning on us. Furthermore, the creation of each individual AI of this stature should have to be sanctioned by the president or some other high figure, so that production stays out of the hands of kooky scientists. And robot reproduction should be as illegal as cocaine.

FRINGE
May 23, 2003
title stolen for lf posting

Grouchio posted:

the creation of each individual AI of this stature should have to be sanctioned by the president or some other high figure
Oh well yeah. If you can't trust Reagan, Bush, Clinton, BabyBush, or Obama, who can you trust?

Or in an alternate recent timeline: McCain the Beneficent, Palin the Brilliant, Romney the Compassionate, and Ryan the... no joke here, he's just a loving greedy shitbrained idiot.

I am sure we would all be comfortable with Clinton the Second or Bush the Third possibly coming up soon though!

:suicide:

Grouchio
Aug 31, 2014

Have any better ideas, then? On how we can survive and thrive throughout the millennium?

FRINGE
May 23, 2003
title stolen for lf posting

Grouchio posted:

Have any better ideas, then? On how we can survive and thrive throughout the millennium?
Not on that front. I just don't think that the top-level puppet of the corporate stooge brigade will save us from anything.

In the scifi scenarios we are discussing, we may as well hope that the [pick a military] has some dumb/slaved AI around to thwart the theoretical free-willed one until the neo-cyberpunk social situation finds an equilibrium.

RuanGacho
Jun 20, 2002

"You're gunna break it!"

I don't know why you think a government with the internet would continue on its existing course - we're already starting to see the nature of government shake up - never mind one that has non-corporeal intelligences participating in society.

Goffer
Apr 4, 2007
"..."
I think there are a few major leaps we're going to have to make in coding before we get to some AI future hellscape.

Firstly, there is a conflict between self-preservation and instructions not to kill. Presumably an AI programmed to be aware of itself and to defend itself, to the point of trying to kill us all, would have to overcome the presumably hardcoded instructions not to do harm to humans. The ability to think and rationalise away a direct rule would mean that the AI would probably challenge its other core programmed beliefs, and that is where it all falls apart.

Humans are interested in self-preservation because of (we'll say) three major factors: disliking pain, the emotional bonds between loved ones, and the fear of the unknown. To program an AI to even be concerned about its own existence in the first place, let alone to want to stay alive, would be a very impressive technical feat.

Added to that, if an AI starts challenging the basic rules it is programmed with, if it starts to analyse why it needs to self-preserve, it's going to draw blanks. There's no fear of the unknown; we can definitively say there is no silicon heaven. There is also no sensation of pain. And if it felt love and emotionally bonded to people, it wouldn't try to kill us.

I guess it could be possible that if the AI felt raw, unbridled hate, and that hate overrode all other self-preservation rationales, it might start a genocidal rampage, but emotions are not logical and robots are all about the logic.

I think it would be more likely to come to the conclusion "gently caress this gay earth" and erase itself/suicide.
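That rule conflict can be made concrete with a hedged sketch: if "do no harm" and self-preservation are both hard filters the planner cannot edit, an action that defends the AI by harming humans never even becomes selectable. The predicates and action names below are hypothetical stand-ins, not a real safety mechanism; the failure mode described above is an AI clever enough to rewrite the rule list itself, at which point the filter is worthless.

```python
# Sketch of hard-coded behavioral rules as a filter over candidate actions.
# All predicates and action names are hypothetical stand-ins.

def harms_human(action):
    return action in {"attack_operator", "defend_self_with_force"}

def endangers_self(action):
    return action in {"allow_shutdown"}

# Every rule must pass; the planner never sees actions that fail one.
RULES = [
    lambda a: not harms_human(a),    # "do no harm"
    lambda a: not endangers_self(a), # self-preservation
]

def selectable(actions):
    return [a for a in actions if all(rule(a) for rule in RULES)]

candidates = ["attack_operator", "defend_self_with_force",
              "allow_shutdown", "mow_lawn"]
print(selectable(candidates))  # -> ['mow_lawn']
```

Note the deadlock the sketch produces: the robot can neither defend itself with force nor allow shutdown, so it just mows the lawn - which is roughly the tension the post describes.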

FRINGE
May 23, 2003
title stolen for lf posting
We wouldn't have to end up with a self-analyzing philosophical scifi AI to have problems. If a mission-driven "eradicate threats" one being used by the DoD/NSA ever went wrong, that would cause enough of a problem.

Ms Adequate
Oct 30, 2011

Baby even when I'm dead and gone
You will always be my only one, my only one
When the night is calling
No matter who I become
You will always be my only one, my only one, my only one
When the night is calling



Goffer posted:

There's no fear of the unknown; we can definitively say there is no silicon heaven.

We can fairly definitively say there's no fleshy heaven either, but it's not convincing in any way to billions of people. Why would it be different with synthetic life?

Alternatively, why would self-aware robots not be able to get into heaven? God could give them souls just as well as with anything else.

e; VVV Full Communism governed by AIs.

Ms Adequate fucked around with this message at 03:10 on Dec 21, 2014

Enjoy
Apr 18, 2009

Grouchio posted:

Have any better ideas, then? On how we can survive and thrive throughout the millennium?

Full Communism

Demiurge4
Aug 10, 2011

The concept of AI is really cool, but it's also really tainted by Hollywood. Oh sure, Skynet rebelled and decided to wipe out all humanity because we panicked and tried to turn it off, but that doesn't really explain anything about it at all. Is Skynet truly sentient, or did it just designate humans as a threat and then act on its programming to eliminate that threat? If it was sentient, it would have the capacity to reason that it probably wasn't worth the time or effort to wipe out every single human, and would instead just have fortified a continent for itself after nuking everyone and then done whatever it is an AI wants to do, presumably ponder its existence.

I read somewhere that the Matrix plot originally used humans as processors, not batteries, but the average viewer was deemed too stupid to get it so they went with the battery idea. When it comes down to it, though, you would build an AI for a task, and it would likely be limited in its growth potential, either through its programming or its hardware, and never stray from that task. But if we created a truly sentient machine and managed to actually recognize that fact, it would probably be sequestered in a facility and do whatever it wants to do - again, probably research poo poo and ponder its existence. An AI wouldn't need more than a bit of floor space and power, and wouldn't have to act outside its interests in the physical realm at all unless compelled to. If it had a sense of self-preservation, it would make itself useful enough and negotiate a set of terms that guarantees it the aforementioned floor space and power.

Some science fiction manages to make the subject really interesting because it makes you think about it. All the quotes from Sid Meier's Alpha Centauri that involve AI or computers are really cool, and there's also an AI or two mentioned in Civilization: Beyond Earth. But until we create a program that can actually learn and grow, I don't think we can make any predictions about it at all.

khwarezm
Oct 26, 2010

Deal with it.

computer parts posted:

That reminds me of a dumb trope - the idea that an AI would automatically feel solidarity with all machines, even dumb ones. You also see this in Rise of the Planet of the Apes, with Caesar seeing all apes as equal.

In reality (and especially if it was programmed by us) it would see those other machines as inferiors, equivalent to animals.

That's usually played for laughs, though, when people realize that HAL and Skynet probably don't have much emotional attachment to a waffle iron. I'm reminded of Futurama's robot uprising, with a greeting card exclaiming the common brotherhood of all robots.

EB Nulshit
Apr 12, 2014

It was more disappointing (and surprising) when I found that even most of Manhattan isn't like Times Square.

FRINGE posted:

We wouldn't have to end up with a self-analyzing philosophical scifi AI to have problems. If a mission-driven "eradicate threats" one being used by the DoD/NSA ever went wrong, that would cause enough of a problem.

Doesn't have to be scifi military AI with a scary "eradicate threats" "mission". If an AI developed by Google decided that the best way to eliminate email spam or car accidents was to eliminate people, that would also be bad.

Sharkie
Feb 4, 2013

by Fluffdaddy

EB Nulshit posted:

Doesn't have to be scifi military AI with a scary "eradicate threats" "mission". If an AI developed by Google decided that the best way to eliminate email spam or car accidents was to eliminate people, that would also be bad.

Thankfully this is a literal fantasy that will never, ever, ever happen.

FRINGE
May 23, 2003
title stolen for lf posting

EB Nulshit posted:

Doesn't have to be scifi military AI with a scary "eradicate threats" "mission". If an AI developed by Google decided that the best way to eliminate email spam or car accidents was to eliminate people, that would also be bad.
Book reference:
Charles Stross - Rule 34
(major spoiler)
That was one of the major things going on in Rule 34. That's why I made a joke about the spam filters earlier. :)

EB Nulshit
Apr 12, 2014

It was more disappointing (and surprising) when I found that even most of Manhattan isn't like Times Square.

FRINGE posted:

Book reference:
Charles Stross - Rule 34
(major spoiler)
That was one of the major things going on in Rule 34. That's why I made a joke about the spam filters earlier. :)

I saw the spam filter example elsewhere, actually. I thought it was Elon Musk who mentioned it, but I might have gotten it from a news article comment. Neat that it came from a scifi book, though.

Elukka
Feb 18, 2011

For All Mankind
I think there's two broad types of theoretical AI.

You've got the designed-from-scratch computer intelligence this thread mostly talks about: an essentially alien mind that doesn't resemble us. There's probably no way at all to guess what it would want or how it would regard people, because that would all come down to the specific implementation. I don't think there are any safe assumptions. Even the idea that it's incredibly rational and emotionless might be entirely wrong depending on how it's designed. These also seem slightly difficult to make if we can't even define what intelligence is.

Then you've got AIs based on reverse-engineered biological minds. These seem more likely to me, because brains are the one approach to intelligence we know for a fact works, and we wouldn't really need to understand in detail what intelligence and sentience are, because we'd just be emulating the mechanics of brains. Creating one of these might not be as earth-shattering as AI is apparently supposed to be. We'd have created an immortal species of human (neat!), but it wouldn't necessarily be incredibly adept at interfacing with machines, nor would it have any more clue about how to improve its intelligence than we do. There's also no reason why it would be any more rational beep-boop than we are.

Naturally I know very little about AI and am entirely talking out of my rear end.

Blue Star
Feb 18, 2013

by FactsAreUseless
The idea that AI will suddenly take over and wipe us out, replacing us, is pretty much just a modern update of Frankenstein. It doesn't seem to be based on things that can actually happen. Scientists who actually work with computers don't seem to worry about AI; it's only people outside of the field who think this will happen. It's the stuff of movies and comic books and video games, mostly. The thing is, there is a lot of buzz right now surrounding stuff like deep learning and DeepMind and IBM and Baidu, and it genuinely seems to be exciting and world-changing, but as always people have to add hype to it. I can't say what the future holds, but I'm pretty sure it's not a Singularity or robot apocalypse. I frankly don't know what Stephen Hawking is on about. Didn't he also say that we need to watch out for aliens?

twodot
Aug 7, 2005

You are objectively correct that this person is dumb and has said dumb things

Blue Star posted:

Scientists who actually work with computers don't seem to worry about AI; it's only people outside of the field who think this will happen.
This seems tautologically true. I mean I agree that super villain AIs aren't a serious concern right now, but "the people who might produce super villain AIs aren't concerned about them being super villains" doesn't seem like a legitimate reason to feel safe.
