VerdantSquire
Jul 1, 2014

OwlFancier posted:

Well when I say randomly it's not really random, it selects for things which fulfil criteria in the design spec, and then uses those as the basis for future permutations. It models evolution in that respect.
Well, the primary issue with what you are saying is that we don't know the design criteria. As I said before, we only barely understand the nature of consciousness, so how can we possibly hope to replicate it when we don't even know where to begin? Also keep in mind that theoretically possible and practically possible are two very different things; just because what you describe is scientifically possible doesn't mean it would ever be practically achievable, let alone economically viable.
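
To make that concrete: the "select and permute" loop itself is trivial to write; the part nobody knows how to write for consciousness is the scoring function. A toy genetic algorithm in Python (everything here is invented for illustration; a known target string stands in for the design spec we don't actually have):

code:
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = "consciousness"  # stand-in goal; for real AI nobody can write this down

def fitness(candidate):
    # The "design spec": score how close a candidate is to the target.
    # This is exactly the part we cannot specify for consciousness.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Random permutation: change one character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Random starting population.
population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(100)]

for generation in range(10000):
    # Select candidates that best fulfil the criteria...
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # ...and use them as the basis for future permutations.
    survivors = population[:20]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

print(generation, population[0])

The loop only works because fitness() is written down in advance. Swap consciousness in as the real goal and there is nothing to put in that function, which is my whole point.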

Also, the hilarious thing about the video you posted is that, while its core argument is very competent, it falls into the exact trap that keeps even the best academic papers from ever really penetrating public awareness: it explains everything using technical terminology which, while perfectly understandable to people who work in the relevant fields, just comes off as meaningless technobabble to the ~98% of people who don't have STEM majors. :v:

FRINGE posted:

In case anyone missed it, Elon Musk is also on record saying this.

Doesn't really change the fact that, as far as I know, Elon Musk isn't really a great authority on artificial intelligence. I'm not even sure why he feels like he should weigh in on the matter, since his only argument against it seems to be "it's all, like, bad mojo stuff, y'know?".


FRINGE
May 23, 2003
title stolen for lf posting

NoNotTheMindProbe posted:

What the gently caress does Elon Musk even do?
He owns Tesla and SpaceX, and owns (or owned?) companies that study AI.

http://www.cnet.com/news/elon-musk-worries-skynet-is-only-five-years-off/

quote:

In his original comment, as preserved on Reddit, Musk cited his involvement as an early investor in the British artificial intelligence company DeepMind, now a part of Google, for evidence.

"The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast," Musk wrote. "Unless you have direct exposure to groups like DeepMind, you have no idea how fast-it is growing at a pace close to exponential."

quote:

It's also worth considering, as Leonid Bershidsky does here, that Musk's comments about AI this year could be about hyping the industry as much as anything else. Musk, however, wrote in his comment that "this is not a case of crying wolf about something I don't understand." He hinted further at his concerns this week, retweeting a 2009 video of DeepMind co-founder Shane Legg discussing the possibility of unsafe AI being given access to supercomputers.

Apparently, Musk had meant for his comment on Lanier's essay to be private, and it was removed from Edge.org.

...

So far, we've heard Musk compare the future of AI to the "Terminator" series, nuclear weapons and to "summoning the demon."

http://www.computerworld.com/article/2840815/ai-researchers-say-elon-musks-fears-not-completely-crazy.html

quote:

A.I. researchers say Elon Musk's fears 'not completely crazy'

High-tech entrepreneur Elon Musk made headlines when he said artificial intelligence research is a danger to humanity, but researchers from some of the top U.S. universities say he's not so far off the mark.

"At first I was surprised and then I thought, 'this is not completely crazy,' " said Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University. "I actually do think this is a valid concern and it's really an interesting one. It's a remote, far future danger but sometime we're going to have to think about it. If we're at all close to building these super-intelligent, powerful machines, we should absolutely stop and figure out what we're doing."

quote:

Last month, Musk, along with Facebook co-founder Mark Zuckerberg and actor and entrepreneur Ashton Kutcher, teamed to make a $40 million investment in Vicarious FPC, a company that claims to be building the next generation of AI algorithms.

Musk told a CNN.com reporter that he made the investment "to keep an eye" on AI researchers.

OwlFancier
Aug 22, 2013

VerdantSquire posted:

Well, the primary issue with what you are saying is that we don't know the design criteria. As I said before, we only barely understand the nature of consciousness, so how can we possibly hope to replicate it when we don't even know where to begin? Also keep in mind that theoretically possible and practically possible are two very different things; just because what you describe is scientifically possible doesn't mean it would ever be practically achievable, let alone economically viable.

Also, the hilarious thing about the video you posted is that, while its core argument is very competent, it falls into the exact trap that keeps even the best academic papers from ever really penetrating public awareness: it explains everything using technical terminology which, while perfectly understandable to people who work in the relevant fields, just comes off as meaningless technobabble to the ~98% of people who don't have STEM majors. :v:

Makes sense to me, and I don't know what STEM is?

We don't understand how consciousness works, but we do know what it can do. We can, for example, pose a computer program a series of problems, including ones outside its designed scope of operation. Any normal computer program will simply return an error if you ask it to do something it isn't supposed to do, or it won't accept the input at all. Try asking a calculator why bees exist: you can't, and it won't return an answer. A chatbot will return an answer, maybe even a coherent-sounding one, but over time it will become obvious that it isn't a real human and can't really think about the question. An intelligent program can return a rational answer, consistently enough to be indistinguishable from a human. This is the idea behind the Turing test, which is considered a good first port of call when trying to determine whether something is intelligent.
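
To illustrate the gap crudely in toy Python (a sketch, obviously; real chatbots are far fancier, but the failure mode is the same):

code:
import random

# A calculator refuses anything outside its designed scope.
def calculator(question):
    try:
        return eval(question, {"__builtins__": {}})  # bare arithmetic expressions only
    except Exception:
        return "ERROR: cannot parse input"

# A chatbot always answers, but only with canned deflections.
def chatbot(question):
    return random.choice(["Interesting, tell me more.",
                          "Why do you ask?",
                          "I've often wondered that myself."])

print(calculator("2 + 2"))               # 4
print(calculator("why do bees exist?"))  # ERROR: cannot parse input
print(chatbot("why do bees exist?"))     # plausible once, hollow over time

An intelligent program would be a third case: one that could return a rational answer to the bees question, and keep doing so however long you pressed it.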

Of course, the obvious answer to that is 'just because it looks intelligent doesn't mean it is', and that's addressed in the so-called 'Chinese Room' thought experiment, which can be used to argue quite strongly that it doesn't matter whether something is truly intelligent if its actions make it indistinguishable from an intelligent creature. It also makes a surprisingly good evolutionary argument that humans might not actually be intelligent, or at least that the majority might not be: lacking the ability to see into people's minds, we don't know if they're actually thinking or if they just look like it. Of course, we each believe we think, but it's impossible to know about others. The argument goes that a non-thinking human would be less complicated, and thus more likely to exist, than an unnecessarily complex human which can 'truly' think.

Both are interesting ideas and I'd recommend reading up on them if you're curious.

RuanGacho
Jun 20, 2002

"You're gunna break it!"

I spend a lot of time, probably more time than I should, thinking about this, and the only real firm conclusion I can offer is that thinking on the subject generally seems woefully trapped in the limited ways Hollywood has depicted it.

I think sci-fi like the Culture series is more instructive about this than most modern Skynet and doomsday nonsense. You don't give a full AI sole discretion or power over any system, just like you wouldn't with a human administrator right now.

In the Culture you have drones, which is to say not full AIs, but for our current level of understanding indistinguishable from them. Drones are restricted to the system they're managing, usually a small robot, a space suit, or a vehicle. They can work as digital assistants.

It was in theory possible to speculate and make a fair guess at how smartphones would alter the lives of the richest in the world, and in time the poor as well, but I don't think we have the language yet to even really have this conversation. I find it quite likely that our view on this is as myopic as people's view of computers in the '70s.

FRINGE
May 23, 2003
title stolen for lf posting

RuanGacho posted:

I think sci-fi like the Culture series is more instructive about this than most modern Skynet and doomsday nonsense. You don't give a full AI sole discretion or power over any system, just like you wouldn't with a human administrator right now.

In the Culture you have drones, which is to say not full AIs, but for our current level of understanding indistinguishable from them. Drones are restricted to the system they're managing, usually a small robot, a space suit, or a vehicle. They can work as digital assistants.
One part of the worry is likely that the most developed AIs will be in the hands of spy and war agencies. They will be "bred" to those tasks, and could theoretically be an embedded super-hacker with access to every computerized war device we have by that point.

Whether or not you agree, it's not far out there (anymore).

rakovsky maybe
Nov 4, 2008
Protip: Listen to Stephen Hawking when he talks about theoretical physics, ignore him when he talks about anything else.

Schizotek
Nov 8, 2011

I say, hey, listen to me!
Stay sane inside insanity!!!

RuanGacho posted:

I spend a lot of time, probably more time than I should, thinking about this, and the only real firm conclusion I can offer is that thinking on the subject generally seems woefully trapped in the limited ways Hollywood has depicted it.

I think sci-fi like the Culture series is more instructive about this than most modern Skynet and doomsday nonsense. You don't give a full AI sole discretion or power over any system, just like you wouldn't with a human administrator right now.

In the Culture you have drones, which is to say not full AIs, but for our current level of understanding indistinguishable from them. Drones are restricted to the system they're managing, usually a small robot, a space suit, or a vehicle. They can work as digital assistants.

It was in theory possible to speculate and make a fair guess at how smartphones would alter the lives of the richest in the world, and in time the poor as well, but I don't think we have the language yet to even really have this conversation. I find it quite likely that our view on this is as myopic as people's view of computers in the '70s.

I thought in The Culture, the AI Minds basically ran everything and humans were glorified pets who loafed about loving and snorting coke all day.

FRINGE
May 23, 2003
title stolen for lf posting

Schizotek posted:

I thought in The Culture, the AI Minds basically ran everything and humans were glorified pets who loafed about loving and snorting coke all day.
Someone send that book to Elon.

OwlFancier
Aug 22, 2013

Schizotek posted:

I thought in The Culture, the AI Minds basically ran everything and humans were glorified pets who loafed about loving and snorting coke all day.

They do, though no single one does, and they are just as prone to disagreeing as humans are, if not more so. On large inhabited worlds, multiple Minds would oversee the running of the world.

When one turns eccentric, the rest of them generally give it a bit of a boot. As a rule though, they don't do anything sufficiently objectionable to merit it, because they're quite smart and wouldn't be well served by doing so.

The general theme is one of complete social integration: humans and less complex AIs are seen as part of society, as are the Minds, and there is a rating system which gauges the value of a creature's life based on its cognitive capacity. Minds tend to be worth quite a lot of humans and drones, accurately so, because they are infinitely more complex and do infinitely more things than a human, but humans still have value.

OwlFancier fucked around with this message at 06:08 on Dec 19, 2014

moebius2778
May 3, 2013

FRINGE posted:

One part of the worry is likely that the most developed AIs will be in the hands of spy and war agencies. They will be "bred" to those tasks, and could theoretically be an embedded super-hacker with access to every computerized war device we have by that point.

Could you explain this a bit more?

I'm still trying to see the link between AI and super-hacker (the only thing I can think of is at the really basic level of "an AI lives in code, therefore it must understand code and be really good at hacking" which to me always made as much sense as assuming that all humans must know a lot about neurobiology because we live within our brains). Also, what do you mean by super-hacker? Is this a hacker that's really good at hacking, a hacker with a lot of access (unauthorized or not) to equipment which is capable of doing lots of things (e.g., the war devices you mention), or something else? In the context that you mention super-hacker, it's not terribly clear.

OwlFancier
Aug 22, 2013

moebius2778 posted:

Could you explain this a bit more?

I'm still trying to see the link between AI and super-hacker (the only thing I can think of is at the really basic level of "an AI lives in code, therefore it must understand code and be really good at hacking" which to me always made as much sense as assuming that all humans must know a lot about neurobiology because we live within our brains). Also, what do you mean by super-hacker? Is this a hacker that's really good at hacking, a hacker with a lot of access (unauthorized or not) to equipment which is capable of doing lots of things (e.g., the war devices you mention), or something else? In the context that you mention super-hacker, it's not terribly clear.

An AI built to interface with computers would be as naturally talented with computers as a human is at, say, throwing.

Throwing is monstrously complicated; it is near-impossible for us to program something that can throw like a human. But a human can, without really thinking about it, pick up any object of any size and density and make a decent guess about where it will end up when thrown.

If you imagine something that could interface with computers that easily, you have something of the nature of an AI built to interface with computers.

moebius2778
May 3, 2013

OwlFancier posted:

An AI built to interface with computers would be as naturally talented with computers as a human is at, say, throwing.

Throwing is monstrously complicated; it is near-impossible for us to program something that can throw like a human. But a human can, without really thinking about it, pick up any object of any size and density and make a decent guess about where it will end up when thrown.

If you imagine something that could interface with computers that easily, you have something of the nature of an AI built to interface with computers.

What leads you to believe that the task of interfacing with computers is something AIs are going to be better at/more pre-disposed towards than humans?

rudatron
May 31, 2011

by Fluffdaddy
It depends on what the AI values. If it values self-preservation strongly enough, probably. If you give it human values and emotions, probably not.

I think it's a mistake to separate intelligence from emotion, 'pure intelligence' would not be motivated to do anything and thus could not act on its own. A 'true AI' would have to have some equivalent of emotions, so it depends on how it 'feels' as to what it will do.

OwlFancier
Aug 22, 2013

moebius2778 posted:

What leads you to believe that the task of interfacing with computers is something AIs are going to be better at/more pre-disposed towards than humans?

Well, logically, any technology which would form the basis of an interconnected information-sharing and decision-making network, such as would be needed to create an AI, would also form the basis of a very strong telecommunications network, as well as smaller-scale decision-making applications.

Essentially, computers, whatever form they may take in the future, are more or less defined as 'things that simulate thought' to a greater or lesser degree. Thus, computers are the most likely basis for any conscious attempt to create an artificial general intelligence. If you were to give an AI the ability to interact with the world, which you would need to in order to determine whether it exists or not, you would presumably want to connect it to other computers, seeing as it's probably made out of them.

moebius2778
May 3, 2013

OwlFancier posted:

Well, logically, any technology which would form the basis of an interconnected information-sharing and decision-making network, such as would be needed to create an AI, would also form the basis of a very strong telecommunications network, as well as smaller-scale decision-making applications.

Essentially, computers, whatever form they may take in the future, are more or less defined as 'things that simulate thought' to a greater or lesser degree. Thus, computers are the most likely basis for any conscious attempt to create an artificial general intelligence. If you were to give an AI the ability to interact with the world, which you would need to in order to determine whether it exists or not, you would presumably want to connect it to other computers, seeing as it's probably made out of them.

It seems to me that this argument conflates instruments and purposes - in the sense that because an AI will be built using interconnected computing devices (as an instrument, as a building block), it will be good at manipulating interconnected computing devices (as its purpose). And, again, I don't see how this follows.

I might be misunderstanding your argument - I'm not sure the vocabulary of AI is well developed enough to actually talk about issues like this (for example, what is simulating thought? And what are thoughts in this context?), which might be part of the problem.

OwlFancier
Aug 22, 2013

moebius2778 posted:

It seems to me that this argument conflates instruments and purposes - in the sense that because an AI will be built using interconnected computing devices (as an instrument, as a building block), it will be good at manipulating interconnected computing devices (as its purpose). And, again, I don't see how this follows.

I might be misunderstanding your argument - I'm not sure the vocabulary of AI is well developed enough to actually talk about issues like this (for example, what is simulating thought? And what are thoughts in this context?), which might be part of the problem.

I would reason that a thing which is built out of computers, exists as a form of complex computer software, and is connected to the outside world by means of further computers, utilising connection types similar or identical to those which constitute the internal structure of its 'brain', would be the ideal entity to effectively interact with, say, the internet.

It would be as if, instead of having eyes and ears and stuff, you could simply wire your brain directly into what, for you, would be more brain, and interface with it the same way one part of your brain talks to another. It is difficult to imagine a more perfect form of communication than that: to be directly connected to the world by the same method by which your mind is built, for the entire world to be, essentially, an extension of your mind.

Pope Fabulous XXIV
Aug 15, 2012

Schizotek posted:

I thought in The Culture, the AI Minds basically ran everything and humans were glorified pets who loafed about loving and snorting coke all day.

Not exactly. Culture ships and settlements are democratic, but the Minds handle most of the day-to-day planning/administration. They're more like automated bureaus with personalities than scary benevolent robogods.

OwlFancier
Aug 22, 2013

Pope Fabulous XXIV posted:

Not exactly. Culture ships and settlements are democratic, but the Minds handle most of the day-to-day planning/administration. They're more like automated bureaus with personalities than scary benevolent robogods.

Don't the books explicitly describe them as 'close to gods, and on the far side'?

To the point where they spend most of their time creating simulated universes inside their consciousnesses to fiddle with so they don't get bored?

Job Truniht
Nov 7, 2012

MY POSTS ARE REAL RETARDED, SIR

rudatron posted:

It depends on what the AI values. If it values self-preservation strongly enough, probably. If you give it human values and emotions, probably not.

That's still just an impersonation. If we, who place value on human life, can barely keep from killing ourselves, an AI most definitely can't. Looking back on the Cold War, humans were the only thing ever standing between us and nuclear holocaust.

I Killed GBS
Jun 2, 2011

by Lowtax

rudatron posted:

It depends on what the AI values. If it values self-preservation strongly enough, probably. If you give it human values and emotions, probably not.

I think it's a mistake to separate intelligence from emotion, 'pure intelligence' would not be motivated to do anything and thus could not act on its own. A 'true AI' would have to have some equivalent of emotions, so it depends on how it 'feels' as to what it will do.

And thus we come to the true danger - the end of the human race will not be brought on by an AI, but by an undead AI that was cruelly murdered by its creators. EMPs and inclement weather will do nothing against a cyber-onryō. :ohdear:

OwlFancier
Aug 22, 2013

Job Truniht posted:

That's still just an impersonation. If we, who place value on human life, can barely keep from killing ourselves, an AI most definitely can't. Looking back on the Cold War, humans were the only thing ever standing between us and nuclear holocaust.

Alternatively, an AI won't have the pre-programmed desire to kill everything that might pose it a threat; that's a rather human response, and one that rarely if ever works in society. Humans aren't rational creatures; an AI might be.

Pope Fabulous XXIV
Aug 15, 2012

OwlFancier posted:

Don't the books explicitly describe them as 'close to gods, and on the far side'?

To the point where they spend most of their time creating simulated universes inside their consciousnesses to fiddle with so they don't get bored?

Infinite Fun Space, yeah. They're god-like computers, but they aren't the gods of the Culture, which was my point. It looks like that because so much is coasting on automatic (or requires a response within seconds) that there just isn't much for humans to do.

My Imaginary GF
Jul 17, 2005

by R. Guyovich

Small Frozen Thing posted:

And thus we come to the true danger - the end of the human race will not be brought on by an AI, but by an undead AI that was cruelly murdered by its creators. EMPs and inclement weather will do nothing against a cyber-onryō. :ohdear:

AI will only kill us if coded by Russians, in Russian trinary.

We cannot allow the Russians to create an AI gap.

twodot
Aug 7, 2005

You are objectively correct that this person is dumb and has said dumb things

Pope Fabulous XXIV posted:

Not exactly. Culture ships and settlements are democratic, but the Minds handle most of the day-to-day planning/administration. They're more like automated bureaus with personalities than scary benevolent robogods.
It's obvious in the books that the Minds are fully capable of taking over any or all human settlements if they cared to; they are just more interested in coming up with weird names and disrupting random societies' social order through complicated versions of chess.
edit: Also taunting humans by talking in a language that humans can only barely understand.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Salt Fish posted:

I don't see why a computer would have any motivation to take over anything. Growing, reproducing and conquering are human traits that might not translate to another intelligence.
Well, the third is easy. Once something wants to spread, conquest happens automatically unless it stops itself. An AI that isn't convinced of ideas like private property and rightful ownership would just take over everything the same way kudzu does. It doesn't need to be some digital Napoleon with a complex theory about how it's better than everyone else; it just needs to not care.

The idea that AIs wouldn't have this problem is kind of missing a few points. First, software is faulty, and where it isn't, hardware faults can damage it anyway. Second, it's assuming very noble goals on the part of whoever's writing the AIs. There are still botnets out there that try to infect machines even though their command and control servers have been taken down. Suppose someone wrote a botnet with an AI component to find new infection methods and evade countermeasures, and the creators lost control over it for whatever reason. Maybe some copy got corrupted and stopped responding to the C&C servers, maybe the C&C servers went offline, whatever. It'd basically be digital cancer at that point.
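
A rough toy model of why that's cancer rather than a containable outbreak (all numbers here are invented for illustration):

code:
# Copies still listening to command-and-control die with one takedown order;
# orphaned copies never hear it and keep spreading.
SPREAD = 1.5  # made-up per-cycle growth factor

listening, orphaned = 1000, 10
for cycle in range(10):              # both strains grow for a while
    listening = int(listening * SPREAD)
    orphaned = int(orphaned * SPREAD)

listening = 0                        # C&C takedown (or the servers just vanish)
for cycle in range(10):              # the orphaned strain doesn't care
    orphaned = int(orphaned * SPREAD)

print(listening, orphaned)           # 0 vs. tens of thousands, still climbing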

Kaal
May 22, 2002

through thousands of posts in D&D over a decade, I now believe I know what I'm talking about. if I post forcefully and confidently, I can convince others that is true. no one sees through my facade.

OneEightHundred posted:

Suppose someone wrote a botnet with an AI component to find new infection methods and evade countermeasures, and the creators lost control over it for whatever reason. Maybe some copy got corrupted and stopped responding to the C&C servers, maybe the C&C servers went offline, whatever. It'd basically be digital cancer at that point.

If we ever get to the point that a blackhat hacker could create an AI that could support itself using the hardware of an average person's computer, then we're talking about a society so far advanced that the average computer would have a virtual intelligence that would mount a defense and report the attack to the AIs working for the police and the FBI.

FRINGE
May 23, 2003
title stolen for lf posting

moebius2778 posted:

What leads you to believe that the task of interfacing with computers is something AIs are going to be better at/more pre-disposed towards than humans?

moebius2778 posted:

Could you explain this a bit more?

I'm still trying to see the link between AI and super-hacker (the only thing I can think of is at the really basic level of "an AI lives in code, therefore it must understand code and be really good at hacking" which to me always made as much sense as assuming that all humans must know a lot about neurobiology because we live within our brains).

Some of this has been answered, but to add a little bit (I'm short on time):

I think you're making the wrong analogy with the neurobiology comment; I think it would be better to compare it to the human hand. The theoretical AI would be using massive computational systems, networks, and databases as its eyes, ears, and hands. It could easily be better at manipulating them than most of us are.

When you add to this the fact that many of the interested parties are spy and war-making agencies, the AI would be bred up to interact with the world in those terms, and given access to relevant data and (most likely) tools.

My Imaginary GF posted:

AI will only kill us if coded by Russians, in Russian trinary.

We cannot allow the Russians to create an AI gap.
Well, I think the Dead Hand is still powered up, so (in spirit) it's already too late.

Tonsured
Jan 14, 2005

I came across mention of a Gnostic codex called The Unreal God and the Aspects of His Nonexistent Universe, an idea which reduced me to helpless laughter. What kind of person would write about something that he knows doesn't exist, and how can something that doesn't exist have aspects?
Why waste resources on billions of people when our ape minds need but a few thousand sycophants to feel powerful? The most likely scenario would come at the point when a single AI is capable of outperforming 10,000+ skilled human laborers. From an authoritarian point of view, it would make sense to start culling the population once society is mostly automated: the majority of the human population would have little to no economic value, and would instead be viewed as an easily eliminated threat to the current power structure.
Not the end of humans, but the end of humanity as a human-based society. Yes, such an event would effectively be a doomsday for most of us, and yes, I believe it is not only possible but likely.

Kaal
May 22, 2002

through thousands of posts in D&D over a decade, I now believe I know what I'm talking about. if I post forcefully and confidently, I can convince others that is true. no one sees through my facade.
A single farming combine can harvest upwards of 50 tons of wheat per hour, or 100,000 lbs. It is capable of doing this all day long, with allowances made for maintenance and refueling. A farmer working with a hand-scythe would harvest about 250 lbs per hour, and is probably only capable of doing that for perhaps 10-12 hours a day, even at a slave pace. So basically what we have here is a machine that can harvest 2,400,000 lbs per day, compared to a skilled laborer who can harvest about 3,000 lbs per day - meaning that single machine is equivalent to 800 laborers. This principle of labor-saving devices is replicated over and over in our society. But I don't see people being culled any time soon.
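
(Back-of-the-envelope in Python, if anyone wants to check me; these are rough estimates, not measurements:)

code:
combine_lbs_per_hour = 100_000                      # ~50 tons of wheat per hour
combine_lbs_per_day = combine_lbs_per_hour * 24     # runs all day: 2,400,000 lbs
scythe_lbs_per_hour = 250
scythe_lbs_per_day = scythe_lbs_per_hour * 12       # a brutal 12-hour day: 3,000 lbs
print(combine_lbs_per_day // scythe_lbs_per_day)    # 800 laborers per machine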

Grouchio
Aug 31, 2014

FRINGE posted:

Some of this has been answered, but to add a little bit (I'm short on time):

I think you're making the wrong analogy with the neurobiology comment; I think it would be better to compare it to the human hand. The theoretical AI would be using massive computational systems, networks, and databases as its eyes, ears, and hands. It could easily be better at manipulating them than most of us are.
So we'd be hosed then you're saying?

EB Nulshit
Apr 12, 2014

It was more disappointing (and surprising) when I found that even most of Manhattan isn't like Times Square.

rudatron posted:

It depends on what the AI values. If it values self-preservation strongly enough, probably. If you give it human values and emotions, probably not.

Because nothing with human values and emotions has ever killed a bunch of people. It's never happened.

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Tonsured posted:

Why waste resources on billions of people when our ape minds need but a few thousand sycophants to feel powerful? The most likely scenario would come at the point when a single AI is capable of outperforming 10,000+ skilled human laborers. From an authoritarian point of view, it would make sense to start culling the population once society is mostly automated: the majority of the human population would have little to no economic value, and would instead be viewed as an easily eliminated threat to the current power structure.
Not the end of humans, but the end of humanity as a human-based society. Yes, such an event would effectively be a doomsday for most of us, and yes, I believe it is not only possible but likely.

What I find curious about this sentiment is that it implies that there aren't people of the mind like myself who will be quietly watching all of this unfold and have their own army of created intelligence waiting to deploy the moment someone tries to create an authoritarian cyber state.

I think it's useful to consider that the most authoritarian governments may be behind or ahead of us now; the literal power that individuals wield is potentially equalizing in ways humanity has not seen before. If the internet is freeing our ability to communicate, then AI (or, as I prefer, created intelligence) could free us from tasks of data processing. If every single human being on earth has an assistant intelligence, how does any single one rise to drastic and disproportionate power?

FRINGE
May 23, 2003
title stolen for lf posting

RuanGacho posted:

What I find curious about this sentiment is that it implies that there aren't people of the mind like myself who will be quietly watching all of this unfold and have their own army of created intelligence waiting to deploy the moment someone tries to create an authoritarian cyber state.
You run the NSA's datafarm? Nice!

RuanGacho
Jun 20, 2002

"You're gunna break it!"

FRINGE posted:

You run the NSA's datafarm? Nice!

I did say of like mind. I don't think I need to be abusing the water rights of an entire county to suggest the theoretical possibility of someone standing opposed to the complete annihilation of free will, do I? :v:

I Killed GBS
Jun 2, 2011

by Lowtax
No, you said "people of the mind", which is either an ESL slipup or a D&D psionics supplement.

Ms Adequate
Oct 30, 2011

Baby even when I'm dead and gone
You will always be my only one, my only one
When the night is calling
No matter who I become
You will always be my only one, my only one, my only one
When the night is calling



Juffo-Wup posted:

You know, when Hawking talks about black holes, I take his word for it because he's an expert. I don't know why, but he seems to think that makes him an expert on all manner of other things too. It's kinda frustrating. Oh well.

He rarely positions himself as a great authority on other things, though. It's largely down to how it's reported - Hawking says "I think X is plausible"/"I think we should approach Y with caution as there are potential dangers as well as benefits" and the media spins it into HAWKING PROCLAIMS X TO OCCUR BY NEXT SUNDAY and PROF. HAWKING CLAIMS Y TO INSTITUTE SLAVERY; EAT BABIES; JAYWALK.

And yeah, future AI could try to kill us, but would it really want to? I mean, we created it. It's more likely to view us as parents than anything else, and whilst we may think dad is kind of racist sometimes, a parent usually has to go pretty loving far into abuse before the kids become willing to sever ties, let alone murder. Unless we treat AIs like poo poo of course, and there's no guarantees there. Honestly I find The Culture's view of future AIs to be the most convincing: they'll be so powerful that extermination wouldn't even interest them, and they'll probably help us out just to feel good about themselves or some poo poo.

FRINGE
May 23, 2003
title stolen for lf posting

Small Frozen Thing posted:

a D&D psionics supplement.

"Immune to artificial surveillance"

JeffersonClay
Jun 17, 2003

by R. Guyovich
I don't think an AI would have any reason to want to genocide all human beings, except in self-defense. An AI and its substrate would find outer space a hospitable environment with plentiful energy and resources, and highly defensible as well, so I don't think one would ever want to compete for earth as a habitat. AIs will colonize the solar system and galaxy, either with us or without us.

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.
I think a better question is: why are we so interested in keeping humanity in its current state anyway? No, I'm not suggesting robot genocide of humanity, but I don't see anything inherently wrong with the total extinction of humanity as a species and a concept if it meant our descendants transforming into something better through genetic modification or mechanical augmentations.
Honestly when it comes to AI, I'm much more afraid of sapient beings being enslaved or treated as second-class citizens merely because they are made of metal and not meat. People scare me more than machines.


THE BOMBINATRIX
Jul 26, 2002

by Lowtax

Negative Entropy posted:

No, I'm not suggesting robot genocide of humanity, but I don't see anything inherently wrong with the total extinction of humanity as a species and a concept if it meant our descendants transforming into something better through genetic modification or mechanical augmentations.

"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
