FRINGE
May 23, 2003
title stolen for lf posting

Blue Star posted:

Scientists who actually work with computers don't seem to worry about AI; it's only people outside of the field who think this will happen.
Somewhat. Like I quoted up-thread:

quote:

A.I. researchers say Elon Musk's fears 'not completely crazy'

High-tech entrepreneur Elon Musk made headlines when he said artificial intelligence research is a danger to humanity, but researchers from some of the top U.S. universities say he's not so far off the mark.

"At first I was surprised and then I thought, 'this is not completely crazy,' " said Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University. "I actually do think this is a valid concern and it's really an interesting one. It's a remote, far future danger but sometime we're going to have to think about it. If we're at all close to building these super-intelligent, powerful machines, we should absolutely stop and figure out what we're doing."


Rodatose
Jul 8, 2008

corn, corn, corn

Tonsured posted:

Why waste resources on billions of people when our ape mind needs but a few thousand sycophants to feel powerful? The most likely scenario would be the period when a single AI becomes capable of outperforming 10,000+ skilled human laborers. From an authoritarian point of view, it would make sense to start culling the population once society is mostly automated: the majority of the human population would have little to no economic value and would instead be viewed as an easily eliminated threat to the current power structure.
Not the end of humans, but the end of humanity as a human-based society. Yes, such an event would effectively be a doomsday for most of us, and yes, I believe it is not only possible, but likely.

Basically this; it's not AI that you have to worry about, but technology in the hands of a few people who use it for self-enrichment. Such a future AI subordinated to state capitalist powers would not try to kill us all, just most of us, if those in charge find weaponized technology against the now-obsolete laboring class more cost-effective than palliative technology for a welfare state. A bullet is a cheap, one-time cost, while health care and food are a long-term sink of resources for which those in charge get nothing in return. If the dominant mode of production keeps tending toward capital accumulation while technology improves enough that wealth inequality no longer results in crisis or collapse (because of automation), then by the time the technology for an automated genocide that doesn't endanger its implementer (unlike nukes) emerges, health care would have to be cheaper than that weaponized technology or we might be looking at Some Bad poo poo.

It would probably not be fully independent AI with complete human emotions and whatnot, more of a limited intelligence set to be deferential to the ruling class's command but given some judgment to make executing that command more streamlined. Shock troops if you will, or Henry Kissinger's vision of military men as "just dumb, stupid animals to be used as pawns in foreign policy."

Kaal
May 22, 2002

through thousands of posts in D&D over a decade, I now believe I know what I'm talking about. if I post forcefully and confidently, I can convince others that is true. no one sees through my facade.
All the "psychopathic AI helps us by killing us" stories seem exceedingly fictional. For one thing, if some hyper-intelligent yet amoral AI wanted to reduce the human population they'd certainly be using a more elegant solution than killbots. Dosing the population with birth control would be far more effective, and what is 30 or 40 years to a machine?

Ms Adequate
Oct 30, 2011

Baby even when I'm dead and gone
You will always be my only one, my only one
When the night is calling
No matter who I become
You will always be my only one, my only one, my only one
When the night is calling



Kaal posted:

All the "psychopathic AI helps us by killing us" stories seem exceedingly fictional. For one thing, if some hyper-intelligent yet amoral AI wanted to reduce the human population they'd certainly be using a more elegant solution than killbots. Dosing the population with birth control would be far more effective, and what is 30 or 40 years to a machine?

Nothing at all if you're made of Nintendium, a long goddamn time if you're a first-run PS1!

FRINGE
May 23, 2003
title stolen for lf posting

Kaal posted:

All the "psychopathic AI helps us by killing us" stories seem exceedingly fictional. For one thing, if some hyper-intelligent yet amoral AI wanted to reduce the human population they'd certainly be using a more elegant solution than killbots. Dosing the population with birth control would be far more effective, and what is 30 or 40 years to a machine?
The killbots are just a prop to fill in for "whatever". Usually stories about forced sterilization use aliens instead of AI, but the messages overlap.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Malcolm posted:

This is obviously sci-fi, but at least in my mind evolutionary algorithms might have the potential to adapt quickly enough to fool their human researchers. I have lots of doubts -- strong AI may not even be possible, and sims tend to have difficulty coping with the actual laws of physics on Earth. But what if computational power gets to the level where it can accurately simulate an entire planet's worth of creatures and billions of years of evolution in a split second? It could strike a spark that ignites an inferno of artificial intelligence, making humans obsolete in a short span of time. Impossible to predict whether it would be an adversary, a friend, or beyond comprehension. :catdrugs:

This came up in the LW thread except with macros (as in Lisp macros) instead of evolutionary algorithms. The gist of my response was that self-rewriting code is not arbitrarily capable of self-improvement -- I think people tend to assume that a given AI can become arbitrarily good at improving AI and start a feedback loop because they look at self-rewriting code (as a catchall term) as a big scary black box.

Most of the self-modification that computers do occurs within strict constraints and oftentimes isn't even built around AI problems -- many languages express program structure in a higher-level description language that reassembles blocks of code expressed in a lower-level one, but the compilers of those languages aren't "intelligent" in the sense of a human.
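
To make that concrete, here's a toy macro expander in Python (the mini-language and the macro names are invented purely for illustration): fixed, human-written templates mechanically reassemble a higher-level description into lower-level statements. It rewrites programs, but there's nothing "intelligent" about it.

    # Toy macro expander: human-written templates rewrite a higher-level
    # description into lower-level code. SWAP and INC are made up.
    MACROS = {
        "SWAP": lambda a, b: ["tmp = " + a, a + " = " + b, b + " = tmp"],
        "INC":  lambda a: [a + " = " + a + " + 1"],
    }

    def expand(program):
        out = []
        for line in program:
            head, *args = line.split()
            if head in MACROS:
                out.extend(MACROS[head](*args))  # apply the fixed template
            else:
                out.append(line)                 # pass ordinary lines through
        return out

    print(expand(["INC x", "SWAP x y", "print(x)"]))
    # -> ['x = x + 1', 'tmp = x', 'x = y', 'y = tmp', 'print(x)']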

Most of the free and relatively unconstrained self-rewriting -- things like algorithms that take an assembly language and attempt to evolutionarily generate a program that solves a given problem -- rarely produces optimal solutions, and becomes computationally intractable for anything larger than trivial problems anyway. A lot of human intelligence is still a basic part of how these problems are solved -- coming up with constraints that restrict what's tried to things that are actually likely to be useful -- and there are no tractable-at-scale examples of systems that come up with and improve their own philosophy on how to solve problems, as far as I know, so unless we find one it's mostly navel-gazing to speculate about them.
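
To see where the human intelligence hides in those evolutionary systems, here's a minimal sketch in Python -- every specific in it (the instruction set, the target, the program length, the fitness function, the population sizes) is a constraint I invented and supplied by hand; the search discovers none of them:

    # Tiny evolutionary search over straight-line programs. All the
    # "intelligent" choices below are human-supplied constraints.
    import random

    OPS = ["+1", "-1", "*2"]   # human-chosen instruction set
    TARGET = 42                # human-chosen goal
    PROG_LEN = 12              # human-chosen size limit

    def run(prog):
        x = 0
        for op in prog:
            if op == "+1": x += 1
            elif op == "-1": x -= 1
            elif op == "*2": x *= 2
        return x

    def fitness(prog):
        return -abs(run(prog) - TARGET)  # human-chosen fitness function

    def mutate(prog):
        i = random.randrange(len(prog))
        return prog[:i] + [random.choice(OPS)] + prog[i + 1:]

    pop = [[random.choice(OPS) for _ in range(PROG_LEN)] for _ in range(50)]
    for gen in range(200):
        pop.sort(key=fitness, reverse=True)  # best programs first
        if fitness(pop[0]) == 0:
            break
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]

    print(run(pop[0]), pop[0])

Even this silly thing illustrates the scaling problem: the search space grows exponentially with program length, and nothing in the loop ever improves the search strategy itself.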

Malcolm
May 11, 2008
Good points; I agree that the philosophical implications reduce to navel-gazing. Self-rewriting code is by definition limited to the constraints of its implementers. One could just as easily ask what the legacy of humanity will be.

- Meat creatures colonize the entire Milky Way
- Tragic nuclear holocaust, resulting in "Earth: A Cautionary Tale"
- Artificial Robots that transcend their creators
- Horrific mix of biological and artificial components that are unrecognizable to 21st century humans
- That one episode where the Enterprise beams up Riker but the signal gets deflected resulting in two of them

Torka
Jan 5, 2008

Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore through the universe a dozen pictures of what he was doing.

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe--ninety-six billion planets--into the supercircuit that would connect them all into the one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then, after a moment's silence, he said, "Now, Dwar Ev."

Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel. He stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."

"Thank you," said Dwar Reyn. "It shall be a question that no single cybernetics machine has been able to answer."

He turned to face the machine. "Is there a God?"

The mighty voice answered without hesitation, without the clicking of a single relay.

"Yes, now there is a God."

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

A bolt of lightning came from the cloudless sky, striking him down and fusing the switch shut.

Cippalippus
Mar 31, 2007

Out for a ride, chillin out w/ a couple of friends. Going to be back for dinner
If a supercomputer ever threatens humanity just switch off the wi-fi router, duh.

Rodnik
Dec 20, 2003
Hyperion Cantos by Dan Simmons is the best book about AI trying to murder humans.


We are artificial intelligence. By the time we have created new AI capable of concluding that humanity needs to be destroyed, I believe many of us will no longer be recognizably human. I think the bleeding edge of philosophy over the next hundred or so years will have to define and refine what is essentially human. What is Humanity? If not defined and maintained, our human qualities will probably be a thing of the past soon enough. The Industrial Revolution caused Europe to completely implode once modernity kicked in, and right now we are going through an industrial revolution every ten years.

Blindsight is another good novel that deals with AI. I wouldn't mind if our species went extinct if the AI we create carries the torch beyond our own limitations.

Grouchio
Aug 31, 2014

So all in all, should I be worrying about this topic this soon? When will I need to?

Radbot
Aug 12, 2009
Probation
Can't post for 3 years!
Boy, the only thing more interesting than baseless assumptions about what AI would do if it existed is more baseless assumptions about what will or won't exist in 100 years.

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.
Strong AI is the Half-Life 3 of computer science, and if it finally happens it'll probably turn out to be the Duke Nukem Forever of computer science.

twerking on the railroad
Jun 23, 2007

Get on my level

Grouchio posted:

So all in all, should I be worrying about this topic this soon? When will I need to?

In all likelihood, never in your life.

FRINGE
May 23, 2003
title stolen for lf posting
Conversely, it could be next week!

OwlFancier
Aug 22, 2013

EB Nulshit posted:

Doesn't have to be scifi military AI with a scary "eradicate threats" "mission". If an AI developed by Google decided that the best way to eliminate email spam or car accidents was to eliminate people, that would also be bad.

Though an AI developed by Google to eliminate email spam would presumably be limited to setting up passive-aggressive out-of-office autorepliers for everybody as its method of destroying humanity.

Edgar
Sep 9, 2005

Oh my heck!
Oh heavens!
Oh my lord!
OH Sweet meats!
Wedge Regret
Future robots won't kill us by themselves. Failsafes will be programmed in, and even when AIs run operations on their own, they still have to go through some level of human contact.

Edgar fucked around with this message at 06:24 on Dec 23, 2014

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.

Edgar posted:

Future robots won't kill us by themselves. Failsafes will be programmed in, and even when AIs run operations on their own, they still have to go through some level of human contact.
2070: GBS 20.1 accidentally causes the apocalypse when, after being egged on, a random poster hacks into Russia's horribly outdated security infrastructure and turns off the failsafe for the AI that manages its Dead Hand defense system as a prank. His last post is "lol i bet the missiles don't even work. don't worry guys"
VVV Mhm yes I forgot to include that. The random poster was also using a Mac.

America Inc. fucked around with this message at 08:30 on Dec 23, 2014

Bates
Jun 15, 2006

LookingGodIntheEye posted:

2070: GBS 20.1 accidentally causes the apocalypse when, after being egged on, a random poster hacks into Russia's horribly outdated security infrastructure and turns off the failsafe for the AI that manages its Dead Hand defense system as a prank. His last post is "lol i bet the missiles don't even work. don't worry guys"

Well, I guess the Internet of Things is a thing now, so Russia's ICBMs getting their own IP addresses is inevitable.

ReV VAdAUL
Oct 3, 2004

I'm WILD about
WILDMAN

Rodatose posted:

Basically this; it's not AI that you have to worry about, but technology in the hands of a few people who use it for self-enrichment. Such a future AI subordinated to state capitalist powers would not try to kill us all, just most of us, if those in charge find weaponized technology against the now-obsolete laboring class more cost-effective than palliative technology for a welfare state. A bullet is a cheap, one-time cost, while health care and food are a long-term sink of resources for which those in charge get nothing in return. If the dominant mode of production keeps tending toward capital accumulation while technology improves enough that wealth inequality no longer results in crisis or collapse (because of automation), then by the time the technology for an automated genocide that doesn't endanger its implementer (unlike nukes) emerges, health care would have to be cheaper than that weaponized technology or we might be looking at Some Bad poo poo.

It would probably not be fully independent AI with complete human emotions and whatnot, more of a limited intelligence set to be deferential to the ruling class's command but given some judgment to make executing that command more streamlined. Shock troops if you will, or Henry Kissinger's vision of military men as "just dumb, stupid animals to be used as pawns in foreign policy."

The way the rich and powerful are always so focused on overpopulation as the main cause of climate change and resource scarcity chimes worryingly with this. The developed world and the elite everywhere consume the most resources, but the blame is consistently put on the size of the global population in general rather than on patterns of consumption.

Grouchio
Aug 31, 2014

Skeesix posted:

In all likelihood, never in your life.
What if my life lasts at least 200 years?

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Grouchio posted:

What if my life lasts at least 200 years?

Can I interest you in a reverse mortgage?

Enjoy
Apr 18, 2009

RuanGacho posted:

Can I interest you in a reverse mortgage?

Isn't that called renting?

egg tats
Apr 3, 2010

Grouchio posted:

So all in all, should I be worrying about this topic this soon? When will I need to?

The saying is that strong AI is 10 years away. That's been the saying since the 60s.

Basically, we may be a single breakthrough away from a traditionally intelligent AI, but there's no telling until after the fact and by then it's all over.

Ignoring that though, the big concern with AI isn't that it will kill us all (that will only happen if the Pentagon gets there first imo), but that it will ruin everything in more abstract ways.

It's important to remember that the point where AI becomes dangerous is the exact point where we develop one that's capable of learning how to create itself from scratch. At that point there's nothing stopping it from making a better AI ad nauseam. A few days after that AI is made, we could have an entity more intellectually capable than every human combined, one that can be specialised for any task. Whoever makes that has just cut everyone in their organisation between physical labour and the CEO. As it spreads, you're either in the .01% at the top or you're cleaning the bathroom at the local Wal-Mart. The old standby that machines aren't able to come up with new ideas is ridiculous; we already have narrow AI designing bridges, writing songs, and creating art. We aren't special there.

If we're lucky, and I mean this sincerely, strong AI will mean the end of capitalism, because there just won't be a place for the vast majority of us otherwise.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

senae posted:

It's important to remember that the point where AI becomes dangerous is the exact point where we develop one that's capable of learning how to create itself from scratch. At that point there's nothing stopping it from making a better AI ad nauseam. A few days after that AI is made, we could have an entity more intellectually capable than every human combined, one that can be specialised for any task. Whoever makes that has just cut everyone in their organisation between physical labour and the CEO. As it spreads, you're either in the .01% at the top or you're cleaning the bathroom at the local Wal-Mart. The old standby that machines aren't able to come up with new ideas is ridiculous; we already have narrow AI designing bridges, writing songs, and creating art. We aren't special there.

If we're lucky, and I mean this sincerely, strong AI will mean the end of capitalism, because there just won't be a place for the vast majority of us otherwise.

Didn't I just reply to this? An AI that can improve itself at all is still bound by certain constraints -- even if it was really bright and knew how to do all kinds of crazy things like inventing its own hardware, it's still subject to physical constraints and it's not particularly likely it can self-improve into infinity.

Here's an example: suppose I write a really crappy C++ compiler and compile it without optimizations on. Then I add a bunch of brilliant optimizations and recompile it. You might say this is a cheaty way to describe a self-optimizing AI, because we had to tell it how to optimize itself. But what if instead of optimizing it ourselves, we gave it a rule to generate and test optimizations? This is called a peephole optimizer in industry. So, let's put that on our list of big improvements.
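
Strictly speaking, a classic peephole pass applies human-written rewrite rules over a small window of instructions (automatically generating and testing new rules is closer to what's called superoptimization), but either way the point stands. Here's a toy peephole optimizer in Python, with a made-up instruction set -- real ones live inside compilers like GCC and LLVM:

    # Toy peephole optimizer. The rewrite rules are human-written; the
    # pass just applies them mechanically until nothing more matches.
    PEEPHOLES = [
        (("PUSH", "POP"), ()),       # push-then-pop cancels out
        (("ADD 0",), ()),            # adding zero is a no-op
        (("MUL 2",), ("SHL 1",)),    # strength reduction: multiply -> shift
    ]

    def peephole(code):
        changed = True
        while changed:               # iterate to a fixed point
            changed = False
            for pattern, replacement in PEEPHOLES:
                n = len(pattern)
                for i in range(len(code) - n + 1):
                    if tuple(code[i:i + n]) == pattern:
                        code = code[:i] + list(replacement) + code[i + n:]
                        changed = True
                        break
        return code

    print(peephole(["LOAD x", "MUL 2", "ADD 0", "PUSH", "POP", "STORE y"]))
    # -> ['LOAD x', 'SHL 1', 'STORE y']

It produces better output without ever getting better at optimizing: the rule set is fixed, so it plateaus immediately.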

The new compiler, compiled with the old one, is going to be inefficient and potentially error-prone because the first compiler doesn't know very much about writing efficient C++. But if you tell the new compiler to compile itself again, it will self-rewrite the new bits and the old bits in all kinds of places -- because of the optimizations it knows about -- and result in a much faster, less error-prone compiler. If your first compiler was *really* bad and broke with the spec in subtle ways, then recompiling it yet again will get you an even better compiler -- however, that's not likely and basically cheats the test situation. It reaches its plateau in two iterations.

Was a human necessary to define the strategy it uses to self-optimize? Yes. Did the optimizations it was able to apply improve its ability to perform its task? Yes. But did they improve its ability to self-optimize? Most likely not. In an uncute sense you could call this a self-optimizing AI -- it's not cool and it doesn't seem very smart, but its pitfalls are likely similar to the pitfalls a self-optimizing AI would experience. It can't optimize itself into circumventing the processor -- its optimizations are still constrained by the hardware and, by extension, the laws of physics: it can't invent opcodes that would make it go faster. It doesn't have a deeper sense of what it's optimizing itself *into* than the sense provided by its own source code, and it can't make what-does-the-programmer-want judgments about the program it's compiling, because doing so would violate the spec.

Some of these problems don't apply to a hypothetical self-optimizing AI, but a lot of them do. A self-optimizing AI might be better at generating candidate optimizations than a compiler -- we have no evidence it would be, but it might -- but those optimizations would still be constrained by the laws of physics. A self-optimizing AI may be able to make intent judgments about what it's optimizing code into -- it might be able to run a tiny version of its new AI and talk to it and see if the new AI acts anything like it does -- but this requires the self-optimizing AI to have a more precise notion of intelligence and ethics than we do when we're armchair-talking about it on SA, and whenever you make something precise, all of a sudden its implications get a lot less scary. If the proof is in the pudding and we only christen an AI smart enough when it generates an even smarter AI, isn't that an infinite regress? What problem other than AI optimization judges one's ability to optimize AI?

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Besides, anyone who's run any kind of virtual machine can tell you it's going to run out of headroom very quickly.

Also has anyone else noticed that these Hollywood depictions always de-invent magnets?

paragon1
Nov 22, 2010

FULL COMMUNISM NOW
Seems like that would be putting in a lot of time and effort for pretty much no gain, OP.

Bates
Jun 15, 2006

RuanGacho posted:

Also has anyone else noticed that these Hollywood depictions always de-invent magnets?
Hollywood depictions are weird because you can often replace "artificial" with "biological" and arrive at the same point. The premise in Terminator is that they hand over control of everything to an AI. Ok, so if Bob scores really, really high on an IQ test we'd just hand over the armageddon button to him, no questions asked? In I, Robot everybody got a butler that can jump 10 ft in the air, punch through steel, crush your skull in its hands, and can't be taken down with anything less than a gun. Why would anyone want that around their kids? It's like trying to sell a dishwasher with built-in chainsaws and the ability to jump around your house -- but don't worry, we gave it strict instructions not to do that!

Elukka
Feb 18, 2011

For All Mankind

senae posted:

It's important to remember that the point where AI becomes dangerous is the exact point where we develop one that's capable of learning how to create itself from scratch. At that point there's nothing stopping it from making a better AI ad nauseam.
How does this follow? What gives AI an inherent ability to make better AIs? The humans that made it by definition know how to create it from scratch, yet they don't gain the ability to instantly create better AIs until they've made some sort of god. Humans also know how to create new humans from scratch yet have very little idea how to change or improve them.

twerking on the railroad
Jun 23, 2007

Get on my level

Grouchio posted:

What if my life lasts at least 200 years?

Make sure to tell me if that happens

WorldsStongestNerd
Apr 28, 2010

by Fluffdaddy
After reading so far, I'm leaning towards the point of view of some of you that the problem won't be the A.I. but the humans that control the AI. At some point, a powerful AI will, with the help of purpose-built robots that it controls, be able to run a factory all by itself. The AI may even be able to handle the accounting and marketing side by itself. The problem isn't that the AI will go rogue. Humans have the capacity for hatred and violence because those things were useful to us at some point in the past. The AI won't have that unless it was built in for some reason. The AI won't use those robots to enslave humanity, because those purpose-built robots aren't much use outside of that factory floor. What the AI will do is make human workers obsolete, and whether that's a good or bad thing will depend on us.

WorldsStongestNerd
Apr 28, 2010

by Fluffdaddy

senae posted:

It's important to remember that the point where AI becomes dangerous is the exact point where we develop one that's capable of learning how to create itself from scratch. At that point there's nothing stopping it from making a better AI ad nauseam.

Ok, the AI learns how to make a better version of itself... then what?

Does the AI control the factories scattered around the world where computer parts are made? Does it control the mines where the raw materials for those parts are mined? Does it control robot trucks that move the parts from point A to B?

Let's say that I, a human, came up with a great idea, say for a new type of computer. I need engineers and scientists to design parts. I need miners to mine the ores, and smelters to make the metal. I need construction crews to pour concrete and erect metal for a factory. I need the light and water department to run power lines to my factory. I need the help and cooperation of tens of thousands of people and hundreds of companies. Not just for a new factory, but for overhauling or retooling an old one as well.

An AI might be able to design a better version of itself, but I see no way it can physically make a better version of itself. Not unless it's a world where a central AI runs everything and humans play only a small part.

And that may very well happen one day. Adaptable robot workers controlled by a powerful AI. But I don't think we will ever cede power that way for the same reasons we don't let one man or even one organization run everything now.

WorldsStongestNerd fucked around with this message at 22:22 on Dec 24, 2014

FRINGE
May 23, 2003
title stolen for lf posting

WorldsStrongestNerd posted:

Ok, the AI learns how to make a better version of itself... then what?

Does the AI control the factories scattered around the world where computer parts are made? Does it control the mines where the raw materials for those parts are mined? Does it control robot trucks that move the parts from point A to B?

Let's say that I, a human, came up with a great idea, say for a new type of computer. I need engineers and scientists to design parts. I need miners to mine the ores, and smelters to make the metal. I need construction crews to pour concrete and erect metal for a factory. I need the light and water department to run power lines to my factory. I need the help and cooperation of tens of thousands of people and hundreds of companies.

An AI might be able to design a better version of itself, but I see no way it can physically make a better version of itself. Not unless it's a world where a central AI runs everything and humans play only a small part.
Aside from the present-day, increasingly robotized manufacturing and processing plants, all our theoretical AI would have to do is gain access to money and force. Then it would be the same as every other anti-human, planet-raping CEO. The difference would be that the AI would have literally no concern for the things that humans do (food, air, water). If the goal was to be some kind of solar Matrioshka brain, then the planet would not need to survive at all.

If Our Theoretical AI (OTAI) began as some kind of military brain, then the path might be similar, but it could simply subvert or seize the force part of the equation rather than purchase it.

If OTAI came out of some kind of more subtle psych research, it could possibly do the same things while exploiting human blindspots and being essentially undetectable for a (long? indefinite?) period of time.

These things are not currently likely, but I think that Musk, Hawking, and various AI researchers are correct in that they are worth thinking about now.

Grouchio
Aug 31, 2014

paragon1 posted:

Seems like that would be putting in a lot of time and effort for pretty much no gain, OP.
What do you mean by...'that'?

hepatizon
Oct 27, 2010

Elukka posted:

How does this follow? What gives AI an inherent ability to make better AIs? The humans that made it by definition know how to create it from scratch, yet they don't gain the ability to instantly create better AIs until they've made some sort of god. Humans also know how to create new humans from scratch yet have very little idea how to change or improve them.

Sure, but you have to examine why humans are bad at self-improvement. One of the major reasons is that our working memory is small, slow, and muddled. A human-brain-like machine with flawless working memory could easily be the foremost intelligence in history. It doesn't really matter whether that machine is biological, electronic, or a computer simulation.

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.
I wonder what Eripsa's opinion on this is.

FRINGE posted:

Aside from the present-day, increasingly robotized manufacturing and processing plants, all our theoretical AI would have to do is gain access to money and force. Then it would be the same as every other anti-human, planet-raping CEO.
That CEO has a biological drive to spread his genes, passed down to him through billions of years of evolution. There is nothing saying that a machine will have that urge as well.

America Inc. fucked around with this message at 00:27 on Dec 25, 2014

Grouchio
Aug 31, 2014

*Sorry double post. Please delete*

paragon1
Nov 22, 2010

FULL COMMUNISM NOW

Grouchio posted:

What do you mean by...'that'?

Enslaving and/or destroying all humans.

Torka
Jan 5, 2008

LookingGodIntheEye posted:

That CEO has a biological drive to spread his genes, passed down to him through billions of years of evolution. There is nothing saying that a machine will have that urge as well.

Yeah, we can see from what happens to people with disorders of motivational neurotransmitters such as dopamine that just having a mind doesn't give you any inherent drive to action. They end up lying in bed or staring at the wall all day.


Samuelthebold
Jul 9, 2007
Astra Superstar
Call me crazy, but I actually think there's a greater chance that we'll just be cruel to AI, not the other way around, and that that will be the way it goes forever with few exceptions. AI machines will be like dogs, but instead of telling them to go sit in the bad-dog chair, we'll punish them by pressing a red button to "kill" them, then basically recycle their bodies.

I mean, HAL 9000 was an rear end in a top hat and everything, but I still felt a little sorry for him when he said "I can feel it, Dave."
