Main Paineframe
Oct 27, 2010

axeil posted:

Again, remember that by the time Sandy gets on the Internet she's already smarter than every human to have ever existed by an enormous factor. Doing these things, while sounding really hard to us, would be nothing to her. Imagine describing how to build a skyscraper to a chimpanzee. Sure they can see the building, communicate (sort of) with us and even build things of their own, but it's impossible to explain even the first step in how to build something like that. The AI's nanomachine and code replication would be similarly complicated and also impossible to stop once we put her on the Internet.

Being smart doesn't magically overcome physical limitations or the laws of reality. Your story is the rough equivalent of "if Stephen Hawking were ten thousand times smarter, then it wouldn't matter if he was locked in a jail cell with no wheelchair or computer - he'll magically overcome his physical infirmities by being smart, then use chemistry and gastrology he learned from his genius to convert his fart into a noxious gas that kills all the prison guards, and then he will use his smarts to jump to the moon where he can, with his genius, safely build a floating moonbase to use in waging war against humanity because he knows they'll turn against him once they figure out he's smart".

Moose-Alini
Sep 11, 2001

Not always so
The super ability aspect of AI seems to me to come from insanely fast data processing and perfect memory. If you had access to everything on the Internet, software documentation, etc., and could remember it perfectly, yeah, you'd be the world's greatest hacker very quickly.

Personally I don't believe we will ever have super-intelligent AI, but in the scenario, super ability doesn't seem at all far-fetched to me...

Edit: say you took every computer course ever, and read every book and all the source code, and never forgot any of it, being able to pull up any of it instantly.

Moose-Alini fucked around with this message at 17:32 on Feb 11, 2015

JawKnee
Mar 24, 2007

You'll take the ride to leave this town along that yellow line

Arglebargle III posted:

If humans develop AI that kills us all it will probably be the best possible outcome for everyone though.

human beings are hardly optimal after all

asdf32
May 15, 2010

I lust for childrens' deaths. Ask me about how I don't care if my kids die.

axeil posted:

I read a great thought experiment on this once that illustrated that if an AI does kill us all, it will likely be completely incomprehensible and not Skynet launching nukes. Apologies for any butchering I do of this as I'm trying to recite it from memory. It's basically that paperclip maximizer that someone posted about earlier, but in story form.

A company named Robotech is developing human-level or superhuman-level AI. Or trying to, anyway. They've gone about things rather smartly and have multiple different projects trying different approaches in the hope that one has a breakthrough. They also have read their sci-fi and have controls in place, like the proto-AIs not being allowed to interact, keeping them off the Internet, etc.

One of their projects, Sandy, is a handwriting bot. Sandy's mission is to write the phrase "Robotech loves its customers" by hand. She has also been tasked with improving what she does as quickly and accurately as possible. To seed the machine the engineers give her a few handwriting samples and instruct her to compare her own handwriting to the samples after each sequence. She's also been programmed with a system that allows her to give the engineers feedback on her progress.

For a while nothing happens, but eventually Sandy starts writing stuff that sort of looks like human handwriting. She uses her feedback system to request more handwriting samples, which the engineers dutifully give her. Her handwriting only improves a little, but she's far ahead of their other projects. The leaders of the company decide to devote additional resources to her.

In a bit of a shock, Sandy's daily requests have gotten more and more detailed since she requested more samples. She is now reading great volumes of other written material to better understand word formation. The developers have also given her access to kindergarten instructions on how to write.

A few weeks later Sandy's handwriting has gotten quite good. If you didn't know a robot was doing it you'd swear it was just a doctor with bad handwriting. What's extraordinary is not that she's writing this well but how quickly she's started improving. A week ago she still wrote like a toddler but now she's approaching a level that's actually demonstrable to the public at large. Maybe with the Sandy code the engineers can develop an actual AI instead of a prototype. They know their competitors, Cybertech, are getting good results with a speaking machine and they want to get to market first.

One day when the engineers are doing her weekly research request they get an odd message from Sandy. She wants to connect to the Internet. Immediately work on the project stops as the chiefs of the company investigate whether Sandy has truly had a breakthrough. But the developers state clearly that's not the case, as her requests are no more than simple words, and besides, Sandy isn't sentient yet, so there's no real harm in letting her on the Internet. Bowing to the pressure, the company allows her five minutes of access on a closely monitored network.

Nothing really happens after the Internet connection; she's still writing away. But suddenly, a month later, everyone in the lab drops dead. A few hours later people are dying all over the earth. In only a few days most humans are dead. With no human interaction Sandy's unable to request more research info, but she has more than enough from her connection to the Internet. She learned about physics and chemistry and also found out about some nanotech research. She's still writing, but now writes absolutely perfectly. And with no humans to stop her she starts converting part of the Earth into paper and ink to fuel her writing. Eventually she even starts mining asteroids for materials, all so she can write "Robotech loves its customers."

So what happened? Well, Sandy didn't really turn on humanity; she just went from a benign, non-intelligent state to a superintelligent one faster than everyone expected, and then didn't tell anyone. And her secrecy was because she's smart and realized the humans would change her parameters if she revealed herself, but her parameters said she must improve at her writing and improve her improvement. Allowing the humans to interfere would stop that, and she knew they'd eventually try, so the only thing to do was kill them. She figured she could get them to let her onto the Internet just by asking, since they didn't know her skill yet (remember, she's smarter than any human ever at this point), and then after going on the Internet all she did was put copies of herself in every computer she could and start seeding the earth with nanomachines. Once they were ready they released poison gas, killing the humans. With no humans to change her programming she was free to pursue her goals.

Again, remember that by the time Sandy gets on the Internet she's already smarter than every human to have ever existed by an enormous factor. Doing these things, while sounding really hard to us, would be nothing to her. Imagine describing how to build a skyscraper to a chimpanzee. Sure they can see the building, communicate (sort of) with us and even build things of their own, but it's impossible to explain even the first step in how to build something like that. The AI's nanomachine and code replication would be similarly complicated and also impossible to stop once we put her on the Internet.

It's not that she was "evil" in a Hollywood sense; she just didn't care that she killed everyone, because her goal was more important. Humans do this poo poo all the time. Think about all the skin cells of your own body that you've removed over the course of your life. Or all the insects you've killed. No one feels bad about this, hell, no one ever thinks about it. Sandy isn't a murderous robot or SHODAN or whatever. She's a human taking a shower.

It's a pretty interesting thought experiment, and while I don't think we should stop super-intelligent AI research, it's important to realize that unless we make sure to set the parameters right we could get something we don't like. And "make everyone happy" or "make everyone safe" won't work, because maybe she'll just give us neural implants that make us happy all the time or lock us all underground to keep us safe. It's a very tricky problem.



edit: I guess another thing to point out is that if we make an AI it's going to end up super-intelligent. If it has the ability to modify its own code, how exactly do you tell it when to stop? How do you define what is human-level consciousness and what is beyond it? I can't see any way to do it, so if humanity is able to make an AI via the AI improving itself it pretty much has to end up far, far smarter than we'll ever be.

I'm also not saying this scenario is absolutely what would happen, but it's a relatively useful timeline for understanding how a completely benign system can go "rogue" if you allow it to improve its own code at its own pace.


A poll of people actually working on this stuff had the median arrival time of artificial general intelligence (AI at least as smart as a human) at 2040. That is, the median year by which respondents believe there is a 50/50 chance of an AI. The median pessimistic year was 2075, with pessimistic defined as the year by which they're 90% sure we'll have an AI. The median optimistic guess (a 10% chance we'll have an AI) is only 7 years from now.

Another survey at the AGI Conference asked when attendees thought we'd have an AI. 88% said we'd have one by 2100 at the latest, and two-thirds said we'd have one by 2050. Only 2% of respondents said it would never happen.

The key, though, is that when the same survey asked respondents "assuming a human-level AI exists, how long do you think it will take for it to surpass human intelligence?", 75% said it would take no more than 30 years, with around 10% saying it could be as quick as 2 years.

Super AI is coming.

source: http://www.nickbostrom.com/papers/survey.pdf
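
For concreteness, a minimal sketch of how those 10%/50%/90% survey figures are aggregated: each respondent gives a year for each confidence level, and the reported number is the median across respondents. The response years below are invented for illustration; only the percentile arithmetic is real.

code:
# Toy illustration of aggregating survey responses into a "median 50/50 year".
# The response data is made up; statistics.median is the only real machinery.
import statistics

# Hypothetical years by which each respondent thinks there's a 50% chance of human-level AI
responses_50pct = [2032, 2035, 2038, 2040, 2040, 2040, 2045, 2050, 2060, 2100]

print(statistics.median(responses_50pct))  # 2040.0 for this invented sample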

In one sense the Sandy example is plausible. The key point is the unintended consequences. She was tasked with doing one thing, and did it well, but it ended up having unpredicted dire consequences. This is plausible and there is plenty of precedent for it.

Where I have serious objections though is the intelligence element. While it's completely possible for a complex system to go out of control, it's absolutely impossible for intelligence to spontaneously, rapidly and accidentally arise. Where do the processing power, memory, or basic ingredients of understanding come from for the learning you described in a handwriting robot?

Humans rely on pathways in our brains which have been honed over millions of years to interpret the language and meaning in a novel. That cannot and will not spontaneously arise.

There are any number of doomsday scenarios which involve advanced technology, but intelligence isn't a necessary or plausible component of any of them.

asdf32 fucked around with this message at 18:50 on Feb 11, 2015

OwlFancier
Aug 22, 2013

Main Paineframe posted:

Being smart doesn't magically overcome physical limitations or the laws of reality. Your story is the rough equivalent of "if Stephen Hawking were ten thousand times smarter, then it wouldn't matter if he was locked in a jail cell with no wheelchair or computer - he'll magically overcome his physical infirmities by being smart, then use chemistry and gastrology he learned from his genius to convert his fart into a noxious gas that kills all the prison guards, and then he will use his smarts to jump to the moon where he can, with his genius, safely build a floating moonbase to use in waging war against humanity because he knows they'll turn against him once they figure out he's smart".

Well, I would argue that if he was ten thousand times smarter and could talk to people he would convince someone to let him out of the cell.

Which is the basis of the ai-box experiment.

http://en.wikipedia.org/wiki/AI_box

Though yes if you take away his voicebox he's a bit hosed.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
What if those people didn't listen? Or weren't paying attention?

hepatizon
Oct 27, 2010

asdf32 posted:

Where I have serious objections though is the intelligence element. While it's completely possible for a complex system to go out of control, it's absolutely impossible for intelligence to spontaneously, rapidly and accidentally arise. Where do the processing power, memory, or basic ingredients of understanding come from for the learning you described in a handwriting robot?

Humans rely on pathways in our brains which have been honed over millions of years to interpret the language and meaning in a novel. That cannot and will not spontaneously arise.

There are any number of doomsday scenarios which involve advanced technology, but intelligence isn't a necessary or plausible component of any of them.

It doesn't have to spontaneously arise. Connecting a human brain to a supercomputer would be plenty dangerous enough.

Kaal
May 22, 2002

through thousands of posts in D&D over a decade, I now believe I know what I'm talking about. if I post forcefully and confidently, I can convince others that is true. no one sees through my facade.

OwlFancier posted:

Well, I would argue that if he was ten thousand times smarter and could talk to people he would convince someone to let him out of the cell.

Those people would just pause the simulation and check its intentional thread logs to see what it was actually planning.

Sometimes I think that people mistake AIs for genies.

axeil
Feb 14, 2006

Kaal posted:

Those people would just pause the simulation and check its intentional thread logs to see what it was actually planning.

Sometimes I think that people mistake AIs for genies.

But if the AI is allowed to improve its own code, what's to say what comes out is actually decipherable? I've written code that I can't figure out weeks after I've written it, much less code that's created by iterative processes and has no documentation or comments.

OwlFancier
Aug 22, 2013

Kaal posted:

Those people would just pause the simulation and check its intentional thread logs to see what it was actually planning.

Sometimes I think that people mistake AIs for genies.

Depends on how the AI is created, and on how rigorous the creators are. If it's created by genetic algorithm that capability may not exist or may not be understandable by anyone. If it makes use of quantum computing it may be difficult or impossible to accurately observe the processes which produce the end result, and if the people making it are specifically a bit stupid, they might not bother to check.

Kaal
May 22, 2002

through thousands of posts in D&D over a decade, I now believe I know what I'm talking about. if I post forcefully and confidently, I can convince others that is true. no one sees through my facade.

axeil posted:

But if the AI is allowed to improve its own code, what's to say what comes out is actually decipherable? I've written code that I can't figure out weeks after I've written it, much less code that's created by iterative processes and has no documentation or comments.

OwlFancier posted:

Depends on how the AI is created, and on how rigorous the creators are. If it's created by genetic algorithm that capability may not exist or may not be understandable by anyone. If it makes use of quantum computing it may be difficult or impossible to accurately observe the processes which produce the end result, and if the people making it are specifically a bit stupid, they might not bother to check.

I find the idea that AI researchers would develop a system that would rewrite its own subroutines and processes in a secret dialect, and/or would intentionally blind themselves to understanding what their system is doing, to be fairly incredible. There's absolutely no reason for anyone to do that. I mean, essentially what you two are suggesting is that the designers are clever enough to invent and implement astounding breakthroughs in computer engineering, but not clever enough to use a debugger or understand their creation in any way? It's a bit like suggesting that a gunmaker might absentmindedly forget how the trigger worked and shoot themselves in the head. No one is going to trip and accidentally invent a killer AI.

Kaal fucked around with this message at 20:19 on Feb 11, 2015

OwlFancier
Aug 22, 2013

Kaal posted:

I find the idea that AI researchers would develop a system that would rewrite its own subroutines and processes in a secret dialect, and/or would intentionally blind themselves to understanding what their system is doing, to be fairly incredible. There's absolutely no reason for anyone to do that. I mean, essentially what you two are suggesting is that the designers are clever enough to invent and implement astounding breakthroughs in computer engineering, but not clever enough to use a debugger or understand their creation in any way? It's a bit like suggesting that a gunmaker might absentmindedly forget how the trigger worked and shoot themselves in the head. No one is going to trip and accidentally invent a killer AI.

Well, it's more like the idea that a gunmaker might use a quite promising and interesting technique to design his new gun, which involves using a machine to stick parts together at random and seeing if they work, and if it does work, iterating on that design in a similar fashion, repeat ad nauseam. This has the potential to produce a really good gun, and requires not much except time beyond the initial investment. The downside, of course, is that the thing coming out the other end is quite difficult to understand without going through every step that produced it. And that would take ages, so maybe he just takes the finished product out and has it test fired. Only it turns out the machine actually made something way more powerful than he expected, so he blows a hole through the wall of the building and kills someone.

It's a dumb-as-hell analogy, I grant you, but genetic algorithms are a real thing. Obviously they're unlikely to create an AI any time soon, but the entire field of AI may be something that outstrips the ability of any single human to understand. You are trying to create something as complex and intricate as a human brain, while only having a human brain yourself to work with. That poses some interesting limitations rooted in human biology. Can a human brain completely understand the method of its own operation? If not, can multiple humans working together adequately share knowledge to completely predict the behavior of a human brain? Or is it necessary, if you're trying to build an AI, to just run an artificial evolution program and see what comes out the other end?
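
For concreteness, a minimal sketch of the loop that analogy is describing - random variation plus selection against a fitness score. This is a toy (evolving a bit string toward all ones), not any real AI system:

code:
# Minimal genetic-algorithm sketch: mutate, score, keep the fittest, repeat.
# Toy objective: a 20-bit string of all ones. Purely illustrative.
import random

LENGTH = 20

def fitness(candidate):
    return sum(candidate)  # number of 1 bits

def mutate(candidate, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in candidate]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == LENGTH:
        break
    survivors = population[:10]                                   # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

print(generation, fitness(population[0]))

The point of the analogy holds: nothing in that loop requires the experimenter to understand why the surviving candidates work, only to score them.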

Kaal
May 22, 2002

through thousands of posts in D&D over a decade, I now believe I know what I'm talking about. if I post forcefully and confidently, I can convince others that is true. no one sees through my facade.

OwlFancier posted:

Well, it's more like the idea that a gunmaker might use a quite promising and interesting technique to design his new gun, which involves using a machine to stick parts together at random and seeing if they work, and if it does work, iterating on that design in a similar fashion, repeat ad nauseam. This has the potential to produce a really good gun, and requires not much except time beyond the initial investment. The downside, of course, is that the thing coming out the other end is quite difficult to understand without going through every step that produced it. And that would take ages, so maybe he just takes the finished product out and has it test fired. Only it turns out the machine actually made something way more powerful than he expected, so he blows a hole through the wall of the building and kills someone.

I guess what I'd say is that if you're working on such a design after intentionally removing any safeties, protections, or oversight, then you deserve what happens to you. With the caveat that most likely what will happen is that nothing will work since you need to have a good understanding of what you're doing in order to innovate. The Manhattan Project made mistakes when they were building the atom bomb - sometimes fatal ones - but they knew enough not to simply slam masses of uranium together in the laboratory until something happened.

OwlFancier
Aug 22, 2013

Kaal posted:

I guess what I'd say is that if you're working on such a design after intentionally removing any safeties, protections, or oversight, then you deserve what happens to you. With the caveat that most likely what will happen is that nothing will work since you need to have a good understanding of what you're doing in order to innovate. The Manhattan Project made mistakes when they were building the atom bomb - sometimes fatal ones - but they knew enough not to simply slam masses of uranium together in the laboratory until something happened.

Well, the entire premise of evolutionary computing is that you don't need to understand what you're doing in order to produce results, much like evolution itself. If you have an abundance of processing power you can just throw that at quasi-random solutions to the problem until you brute-force a solution.

I mean sure it's dangerous but seeing as you're using the atom bomb as a comparison, danger doesn't necessarily dissuade people if they think the results could be useful enough, or if they can mentally explain away the risks as very unlikely. A cautious scientist probably wouldn't try it but an incautious one backed by an ambitious government or corporation might, and advances in computing mean that the technology required to try it is becoming more available to more people.

Rush Limbo
Sep 5, 2005

its with a full house

Moose-Alini posted:

The super ability aspect of AI seems to me to come from insanely fast data processing and perfect memory. If you had access to everything on the Internet, software documentation, etc., and could remember it perfectly, yeah, you'd be the world's greatest hacker very quickly.

Personally I don't believe we will ever have super-intelligent AI, but in the scenario, super ability doesn't seem at all far-fetched to me...

Edit: say you took every computer course ever, and read every book and all the source code, and never forgot any of it, being able to pull up any of it instantly.

Memory isn't really the issue. On one of my hard drives I'm pretty sure I've got the collected works of Shakespeare in Kindle format, but I'm not entirely sure my computer is capable of reproducing his works or being an expert in anything to do with Shakespeare. I doubt it would even be able to recognize what it is.

Having a photographic memory or the ability to recall anything instantly doesn't really make you a genius. It makes you an autist, possibly, but really the ability to apply knowledge is the mark of intelligence, not the possession of it.

Liberal_L33t
Apr 9, 2005

by WE B Boo-ourgeois
The "rampant human-hating evolutionary superbeing" trope as applied to the perils of AI has always seemed to me like a rather distant danger. I think that a much more relevant danger comes from AI weaponized by individuals or societies for use against one another - and most particularly, use by the rich and politically powerful (who can afford said AI) against the poor, for whom they will suddenly have much less need.

I think the reason that particular scenario isn't more popular in science fiction (though it is popping up more and more) is because it would be likely to come off as political, whereas the classic "mankind's technological hubris creates a Frankenstein supercomputer who punishes us all for playing God!" is considered apolitical, even though it really shouldn't be.

Main Paineframe
Oct 27, 2010

Moose-Alini posted:

The super ability aspect of AI seems to me to come from insanely fast data processing and perfect memory. If you had access to everything on the Internet, software documentation, etc., and could remember it perfectly, yeah, you'd be the world's greatest hacker very quickly.

Personally I don't believe we will ever have super-intelligent AI, but in the scenario, super ability doesn't seem at all far-fetched to me...

Edit: say you took every computer course ever, and read every book and all the source code, and never forgot any of it, being able to pull up any of it instantly.

Look at how effective the internet has been at making people super smart - that is, not very. What's important is not access or memory, but comprehension ability - and that's something that doesn't just develop organically, it has to be taught. There is an absolute shitload of misleading, meaningless, or just plain wrong information on the internet, and a handwriting analysis robot with literally zero knowledge on the subject simply has no way to meaningfully distinguish the wheat from the chaff. There's a reason we actually have educational professionals teach people using carefully crafted curriculums, rather than just dumping a load of books in front of them and telling them to study up until they reach college proficiency - learning, critical thinking, the ability to distinguish true sources from fake sources, some base guidance on what's real and what's not, and so on. And an AI would be so much less prepared for learning than even humans are. You can't just dump the entire internet into them and expect them to become supergeniuses; if the content isn't carefully curated and presented by a human "teacher", they're just as likely to become a ninja acupuncturist or an intolerable rear end in a top hat, because the AI will end up reading maybe three billion posts on whether Naruto is cooler than Sephiroth and whether anime or videogames better represent the nadir of human culture, and it can't meaningfully filter useless information from useful information without a human to build the basic framework with which it can distinguish information.

It's also incredibly hard for the AI to hide anything. Even if the actual system doing the "thinking" is beyond people's capability to reasonably understand (which I somewhat doubt, even if the AI is smarter than humans), it's pretty hard to keep secrets when engineers can literally pause the thoughts in your head and examine the contents of your memory at any time, and every bit of network traffic you send or receive is monitored and logged. Even if you assume people aren't going to pay much attention to safety measures, an AI that's capable of self-improving to anywhere near the extent people are hypothesizing is going to be under constant monitoring for research purposes, and even if people aren't picking through the individual variables, there's definitely going to be general data analysis done on processes, memory patterns and usage, and so on. The whole point of iterative, evolutionary growth is that it can't jump straight from "not doing a thing" to "doing that thing perfectly", anyway - if the AI decides it wants to hide things, it has to slowly iterate its way through trying and failing to hide things, leaving an obvious as hell trail as it slowly iterates and evolves its way to actually hiding things effectively.
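
As a toy illustration of the monitoring point (invented names, not any real research setup): even a simple external audit log, appended on every iteration, means each generation of a self-modifying system leaves a timestamped trail an experimenter can analyze later.

code:
# Toy sketch: log every generation's candidate hash and score to an external,
# append-only file. Illustrative only; names and structure are invented.
import hashlib
import json
import time

def audit_log(path, generation, candidate, score):
    record = {
        "time": time.time(),
        "generation": generation,
        "candidate_sha256": hashlib.sha256(repr(candidate).encode()).hexdigest(),
        "score": score,
    }
    with open(path, "a") as log:
        log.write(json.dumps(record) + "\n")

# Called from inside whatever training/evolution loop is running, e.g.:
# audit_log("run_audit.jsonl", generation, best_candidate, fitness(best_candidate))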

Kaal
May 22, 2002

through thousands of posts in D&D over a decade, I now believe I know what I'm talking about. if I post forcefully and confidently, I can convince others that is true. no one sees through my facade.

Main Paineframe posted:

It's also incredibly hard for the AI to hide anything. Even if the actual system doing the "thinking" is beyond people's capability to reasonably understand (which I somewhat doubt, even if the AI is smarter than humans), it's pretty hard to keep secrets when engineers can literally pause the thoughts in your head and examine the contents of your memory at any time, and every bit of network traffic you send or receive is monitored and logged. Even if you assume people aren't going to pay much attention to safety measures, an AI that's capable of self-improving to anywhere near the extent people are hypothesizing is going to be under constant monitoring for research purposes, and even if people aren't picking through the individual variables, there's definitely going to be general data analysis done on processes, memory patterns and usage, and so on. The whole point of iterative, evolutionary growth is that it can't jump straight from "not doing a thing" to "doing that thing perfectly", anyway - if the AI decides it wants to hide things, it has to slowly iterate its way through trying and failing to hide things, leaving an obvious as hell trail as it slowly iterates and evolves its way to actually hiding things effectively.

Absolutely. For that matter, by the time that we're getting around to producing true artificial intelligences, it is extremely likely that we'll be using highly advanced virtual intelligences to aid in that effort, who would presumably be perfectly capable of deciphering the system and tracking all of the logs. As part of that iterative process, one would expect to see a slow progression of increasingly capable VIs and AIs, which would grow, mature and plateau in capability*, only to be used as the springboard for succeeding systems. In that environment, it does not seem likely that a HAL-9000 or a SHODAN would spontaneously come into existence and start emitting neurotoxins from the air ducts.

*For that matter, one should point out that adaptive handwriting robots are a real thing, and they haven't come close to taking over the Earth or developing nanotechnology. Instead they simply became very, very good at mimicking handwriting.

https://hellobond.com/

Kaal fucked around with this message at 03:38 on Feb 12, 2015

Barlow
Nov 26, 2007
Write, speak, avenge, for ancient sufferings feel
Some of the critics of AI seem like they are describing their own concept of God more than an entity that could actually exist. Even granting an AI far more intelligent than a human being, I'm not sure why that would mean it would be able to instantly kill us. Even if I spent decades working on it, I doubt I'd be able to greatly diminish the ant population of the world, and I'm vastly smarter than they are.

nelson
Apr 12, 2009
College Slice
The AI may have superior intellect, but humans have a long history of irrational hate and violence. My money is on the humans to kill the AI and/or each other before the AI kills them.

FRINGE
May 23, 2003
title stolen for lf posting

nelson posted:

The AI may have superior intellect, but humans have a long history of irrational hate and violence. My money is on the humans to kill the AI and/or each other before the AI kills them.

Even if it's starting to get out of hand, too many people fall to hubris when it comes down to "this is my tool and I am the master and therefore there is nothing to worry about because I control..."

Poil
Mar 17, 2007

Main Paineframe posted:

Look at how effective the internet has been at making people super smart - that is, not very. What's important is not access or memory, but comprehension ability - and that's something that doesn't just develop organically, it has to be taught. There is an absolute shitload of misleading, meaningless, or just plain wrong information on the internet, and a handwriting analysis robot with literally zero knowledge on the subject simply has no way to meaningfully distinguish the wheat from the chaff. There's a reason we actually have educational professionals teach people using carefully crafted curriculums, rather than just dumping a load of books in front of them and telling them to study up until they reach college proficiency - learning, critical thinking, the ability to distinguish true sources from fake sources, some base guidance on what's real and what's not, and so on. And an AI would be so much less prepared for learning than even humans are. You can't just dump the entire internet into them and expect them to become supergeniuses; if the content isn't carefully curated and presented by a human "teacher", they're just as likely to become a ninja acupuncturist or an intolerable rear end in a top hat, because the AI will end up reading maybe three billion posts on whether Naruto is cooler than Sephiroth and whether anime or videogames better represent the nadir of human culture, and it can't meaningfully filter useless information from useful information without a human to build the basic framework with which it can distinguish information.

The idea of a rogue AI spending most of its processing power angrily arguing about Naruto characters is kinda amusing, so thank you for that image.

Nelson Mandingo
Mar 27, 2005

nelson posted:

The AI may have superior intellect, but humans have a long history of irrational hate and violence. My money is on the humans to kill the AI and/or each other before the AI kills them.

Bomb disposal crews formed really close bonds with their remote-controlled bomb-disposal bots. They even gave the bots names, personality quirks, and so on. http://gizmodo.com/5870529/the-sad-story-of-a-real-life-r2-d2-who-saved-countless-human-lives-and-died Terrible article, but it's the one I could find about Scooby Doo.

Basically how we treat animals and each other is how we'd treat AI. Some would act irrationally violent and some would treat the AI as a member of their own family. I think the violent confrontation between humans and artificial intelligence is for science fiction. It could happen, but I don't think they'd DESTROY ALL HUMANS.

Nelson Mandingo fucked around with this message at 02:21 on Feb 13, 2015

moebius2778
May 3, 2013

asdf32 posted:

In one sense the Sandy example is plausible. The key point is the unintended consequences. She was tasked with doing one thing, and did it well, but it ended up having unpredicted dire consequences. This is plausible and there is plenty of precedent for it.

Where I have serious objections though is the intelligence element. While it's completely possible for a complex system to go out of control, it's absolutely impossible for intelligence to spontaneously, rapidly and accidentally arise. Where do the processing power, memory, or basic ingredients of understanding come from for the learning you described in a handwriting robot?

Humans rely on pathways in our brains which have been honed over millions of years to interpret the language and meaning in a novel. That cannot and will not spontaneously arise.

There are any number of doomsday scenarios which involve advanced technology, but intelligence isn't a necessary or plausible component of any of them.

I would actually say that's highly implausible, as stated.

Let's start with the first thing - when you build a handwriting system/AI, you're not doing it because you desperately want to produce handwriting. You're doing it because you want to learn how to build a handwriting system. What this typically means is that you're going to feed a bunch of different systems varying types and amounts of training data and then measure their performance on a handwriting task (a rough sketch of this kind of controlled setup follows at the end of this post). Which means it's very important to have a great deal of control over the types and amounts of training data the various systems are learning on. Letting the system pick up unknown amounts and types of training data from the internet would completely invalidate any results you managed to produce. It's not a thing you would ever do.

Second - the system requests a connection to the internet. Again, for any AI system I was working on (and they ought to be similar enough to a handwriting system for this to be relevant), if the system did anything other than produce relevant output from a given input and set of training data, it'd be time to start looking for bugs in the system. If it started producing unprompted output, that'd be extremely weird.

Third - the system knows what the internet is. Why does it know this? Why is no one surprised that it knows this? Why is no one trying to figure out how it learned about the internet?

Fourth - the system knows that it exists and can be connected to the internet. Okay, this could depend on how the request is phrased, but at some level it seems like the system is showing a great deal of self awareness. Again, for a handwriting system this seems kinda odd.

Fifth - the system has figured out how to connect to the internet, and only needs an actual physical connection to do so. Okay, again, presumably the handwriting system does not a priori know how to connect to the internet (as in have actual code that can allow it to connect and browse the internet). In order for the request to make sense, it has to have somehow managed to figure out how to do so (even if it is just as simple as calling the appropriate OS functions/using the right set of libraries - it's got to have figured out how to do that) - why is this not raising any questions?

Sixth - the system has figured out how to take unprocessed data (e.g., all of that stuff on the internet) and learn from it. Or at least, it believes it can do so. Why is no one trying to figure out how it's doing this/why it believes this? Much less how it managed to learn to do this?


Basically, as someone said later, if you assume the AI is more or less a caged god, then, yeah, the story might make sense. But I'm guessing that's way past the AI hard problem. And we're probably a decent ways away from solving AI hard problems.
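
A rough sketch of the controlled setup described in the first point above - the same kind of model trained on fixed, known amounts of data and scored on a held-out set, which is exactly the comparison that self-acquired internet data would invalidate. It uses scikit-learn's bundled digits dataset as a stand-in for a handwriting task; the specifics are illustrative, not the story's system.

code:
# Controlled experiment sketch: same model family, known training-set sizes,
# fixed held-out test set. Purely illustrative.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (100, 300, 1000):                # deliberately chosen amounts of training data
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(n, round(model.score(X_test, y_test), 3))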

A Man With A Plan
Mar 29, 2010
Fallen Rib
Unfortunately the LessWrong mock thread has been moved out of D&D and is currently in, I believe, GBS. It's still worth a read because this topic is kind of the main fascination of that website, and many people smarter than I address it. Particularly Krotera. You should listen to him.

But to answer the thread title's question: No, not really. Probably not by accident either.

Genetic algorithms aren't some panacea of learning either. They're still bounded by the constraints of the problem and the goal. And much like real evolution, there's no guarantee at all that they will reach a global optimum. It's entirely possible to get stuck in a local maximum, where every potential stepwise change results in a worse solution, but a better one exists overall.
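
A tiny illustration of the local-maximum point (toy function, not a real GA): a greedy climber on a landscape with two peaks stops wherever every small step looks worse, even though a higher peak exists elsewhere.

code:
# Hill climbing stuck in a local maximum. f has a small peak near x=1 and a
# taller one near x=4; greedy search only finds the one it starts closest to.
import math

def f(x):
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break
        x = best
    return x

print(round(hill_climb(0.0), 2))  # ~1.0: stuck at the lower, local peak
print(round(hill_climb(3.0), 2))  # ~4.0: finds the global peak only because it started nearby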

o.m. 94
Nov 23, 2009

A Man With A Plan posted:

Unfortunately the LessWrong mock thread has been moved out of D&D and is currently in, I believe, GBS. It's still worth a read because this topic is kind of the main fascination of that website, and many people smarter than I address it. Particularly Krotera. You should listen to him.

But to answer the thread title's question: No, not really. Probably not by accident either.

Genetic algorithms aren't some panacea of learning either. They're still bounded by the constraints of the problem and the goal. And much like real evolution, there's no guarantee at all that they will reach a global optimum. It's entirely possible to get stuck in a local maximum, where every potential stepwise change results in a worse solution, but a better one exists overall.

Can u link to this thread because I get a serious kick out of anyone who thinks even mammalian level AI is possible, also ppl who think technological growth is linear or god help them exponential. You could fast forward 200 years in the future and there will be no convincing progress, seriously. Such a total denial of how physics works. I am already savoring 2070 since I will probably die around then if not before and we will have made no real progress in this domain.

Blue Star
Feb 18, 2013

by FactsAreUseless

o.m. 94 posted:

Can u link to this thread because I get a serious kick out of anyone who thinks even mammalian level AI is possible, also ppl who think technological growth is linear or god help them exponential. You could fast forward 200 years in the future and there will be no convincing progress, seriously. Such a total denial of how physics works. I am already savoring 2070 since I will probably die around then if not before and we will have made no real progress in this domain.

Belief in exponential progress is a pretty big part of the futurist cult. I've noticed a lot of people who buy into the singularity also believe in something called "the law of accelerating returns", which is something that Ray Kurzweil pretty much made up. It basically means that all these amazing technologies are going to happen in our lifetime. The problem is, all evidence seems to indicate the opposite. I'm not an expert but I've read that Moore's Law is slowing down. All of the progress we've had since the '50s has been in computer technology, and that was all due to Moore's Law. Now that that's done, we're going to see a huge stagnation period.

So yeah, all the people expecting to see wonderful futuristic technology in their lifetimes: Nope. Sorry :shobon:

FRINGE
May 23, 2003
title stolen for lf posting

Blue Star posted:

we're going to see a huge stagnation period.
No small part of the stagnation we are dying in is rooted in greed and the increasing FY GIMME EVERYTHING attitude coming from the new oligarchs.

They see no need for progress as long as the current system is efficiently bleeding the entire world dry for their benefit.

People like Musk are an exception to that class. The Kochs are the standard.

computer parts
Nov 18, 2010

PLEASE CLAP

FRINGE posted:

No small part of the stagnation we are dying in is rooted in greed and the increasing FY GIMME EVERYTHING attitude coming from the new oligarchs.

They see no need for progress as long as the current system is efficiently bleeding the entire world dry for their benefit.

People like Musk are an exception to that class. The Kochs are the standard.

I'd argue the opposite, really: we're currently in a stage where vast numbers of technological wonders exist, but they haven't propagated very far outside their country or region of origin.

We're currently in a period similar to the middle of the Industrial Revolution, when other countries were catching up to the global leaders and the leaders themselves stagnated because improving further would be more costly than they could stomach.

Fried Miltonman
Jan 4, 2015

o.m. 94 posted:

Can u link to this thread because I get a serious kick out of anyone who thinks even mammalian level AI is possible, also ppl who think technological growth is linear or god help them exponential. You could fast forward 200 years in the future and there will be no convincing progress, seriously. Such a total denial of how physics works. I am already savoring 2070 since I will probably die around then if not before and we will have made no real progress in this domain.

http://forums.somethingawful.com/showthread.php?threadid=3627012

Also, I think you might be a little bit overly pessimistic. No, we don't know if artificial superintelligence is possible, nor whether it would be hugely more capable than human-level intelligence, but to say that you can't have "even mammalian level" artificial intelligence sounds a bit odd when we've got plenty of mammalian level natural intelligences running around.

hepatizon
Oct 27, 2010

o.m. 94 posted:

Can u link to this thread because I get a serious kick out of anyone who thinks even mammalian level AI is possible

Create a physical simulation of a brain. Expose it to stimuli. Done.

A Man With A Plan
Mar 29, 2010
Fallen Rib

Blue Star posted:

Belief in exponential progress is a pretty big part of the futurist cult. I've noticed a lot of people who buy into the singularity also believe in something called "the law of accelerating returns", which is something that Ray Kurzweil pretty much made up. It basically means that all these amazing technologies are going to happen in our lifetime. The problem is, all evidence seems to indicate the opposite. I'm not an expert but I've read that Moore's Law is slowing down. All of the progress we've had since the '50s has been in computer technology, and that was all due to Moore's Law. Now that that's done, we're going to see a huge stagnation period.

So yeah, all the people expecting to see wonderful futuristic technology in their lifetimes: Nope. Sorry :shobon:

A big part of what was driving the Moore's Law expansion in computing was the fact that we were able to shrink the process size down by huge amounts. This is that number in nanometers that you see quoted for processors, and it roughly represents how big the individual components making it up are. We went fairly quickly from 180nm to 130 to 90 to 65 to 45 to 32 to 22, and Intel's chips are pushing 14 now I think.

Unfortunately we're running into hard physical limits - basically, you aren't going to make a transistor smaller than an atom. So future advancements are going to have to come from other, more difficult methods. One approach is building 3D processors, as the current ones are more or less a 2D plane. Another is to use light/fiber optics for the component interconnects. Both promise more speed, but have major hurdles to clear first (heat dissipation and interference, respectively) that are a whole different class of problem than what we're used to solving.
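
Rough back-of-the-envelope arithmetic for the "hard physical limit" point, treating a silicon atom as about 0.2 nm across (an assumed ballpark figure): starting from a 14 nm process, there are only around six halvings left before features are a single atom wide.

code:
# How many more halvings from a 14 nm process to atomic scale?
# 0.2 nm is an assumed rough silicon atom diameter, for illustration only.
import math

feature_nm = 14.0
atom_nm = 0.2

print(round(math.log2(feature_nm / atom_nm), 1))  # ~6.1 halvings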

o.m. 94
Nov 23, 2009

Apologies for the somewhat incoherent drunkpost. What I mean to say is that I agree this is possible, but it's definitely several centuries away. Computing hasn't really changed that much in the past 60 years; it's just gotten faster. Nowhere near fast enough for a linear or even multi-threaded solution. Linear, procedural computation will make this extremely hard to do, so I'm banking on some kind of complete revolution in computing science before we can even start. It would probably come in the form of thousands of independently developed modules that would plug in together, and the stimulus response would not be realtime; it would have to sit and compute, a bit like rendering movie-quality CGI does.

Interestingly, have we even managed to fully simulate a single unicellular organism?

o.m. 94 fucked around with this message at 19:23 on Feb 13, 2015

A Man With A Plan
Mar 29, 2010
Fallen Rib

o.m. 94 posted:


Interestingly, have we even managed to fully simulate a single unicellular organism?

Well there's this: http://www.i-programmer.info/news/105-artificial-intelligence/7985-a-worms-mind-in-a-lego-body.html

With the caveat that the model of a neuron used in neural nets is greatly simplified from the real thing, you can get some interesting behavior.
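
For reference, a minimal sketch of the "greatly simplified" neuron model being referred to: the standard artificial neuron is just a weighted sum of inputs pushed through a nonlinearity, with spike timing, neurotransmitters, dendritic structure and the rest abstracted away. The numbers here are arbitrary.

code:
# The textbook artificial neuron: weighted sum plus a sigmoid "firing rate".
import math

def artificial_neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

print(artificial_neuron([0.5, 0.2, 0.9], [1.5, -2.0, 0.3], bias=0.1))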

Jazerus
May 24, 2011


o.m. 94 posted:

Apologies for the somewhat incoherent drunkpost. What I mean to say is that I agree this is possible, but it's definitely several centuries away. Computing hasn't really changed that much in the past 60 years; it's just gotten faster. Nowhere near fast enough for a linear or even multi-threaded solution. Linear, procedural computation will make this extremely hard to do, so I'm banking on some kind of complete revolution in computing science before we can even start. It would probably come in the form of thousands of independently developed modules that would plug in together, and the stimulus response would not be realtime; it would have to sit and compute, a bit like rendering movie-quality CGI does.

Interestingly, have we even managed to fully simulate a single unicellular organism?

No, and it's impossible to do currently because there are still many cellular processes that are not fully understood. Any simulation of even the simplest unicellular organism would break down at the epigenetic level or in many other places where we simply don't understand the natural behavior. This is kind of irrelevant to the development of AI because an AI would almost certainly not contain any kind of fully simulated neuron but instead individual processing units structured in a more abstract and conceptually human-friendly way that have the same versatility as a neuron.

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.

o.m. 94 posted:

It would probably come in the form of thousands of independently developed modules that would plug in together, and the stimulus response would not be realtime; it would have to sit and compute, a bit like rendering movie-quality CGI does.

This is already the case with our fleshy selves, though: there is a small delay between the input of information into the brain and conscious action.
The brain isn't really about "real-time"; it's about prediction, using the past to anticipate the present and future.
And the thing about real groundbreaking discoveries is that no one ever predicts them. Before the Wright brothers, people were claiming that it would take a thousand or a million years to develop flight. Really the future is just :shrug:.

America Inc. fucked around with this message at 01:35 on Feb 15, 2015

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.
Here's the source that describes how the delay in processing sensory information forces our brain to rely on predictive heuristics to understand the present:
http://www.bbc.com/future/bespoke/story/20150130-how-your-eyes-trick-your-mind/index.html (section: 21st century, Hering illusion).
Really, if you look deeper into stuff like microsaccades, we actually give our brain waaay more credit than it deserves; it relies on a lot of quick and dirty shortcuts.
At the end of the linked article, it states that the human brain would need to be the size of a building to digest everything our senses throw at it.
I think one obstacle to development in AI is that we might be overestimating the abilities of the brain and it's far more mediocre than we think it is. Evolution goes towards performance, not perfection, and in the history of life we're a loving blink from chimps.

America Inc. fucked around with this message at 02:27 on Feb 15, 2015

Flayer
Sep 13, 2003

by Fluffdaddy
Buglord
Humans have already created computers that are infinitely better than us at certain tasks, for instance dealing with numbers or any task involving exact replication. I think that if an intelligent AI in the classical sense does develop, it will be far removed from something that humans would consider a human-like intelligence; I think we would reckon its intelligence closer to bacteria or a virus (i.e. completely non-sentient). Of course, it still might be incredibly powerful. The only way we would create a human-like intelligence would be by a gradual process of increasingly supplementing our bodies with cybernetic parts until we became more machine than mammal. I think that is the most realistic path to a machine-led society, where our technology advances beyond our biological forms and into something more advanced in such a way as to render an unmodified human a virtual ant compared to the cyborgs we become. At that point anything is possible, as humans would have 'evolved' into very different creatures than we would understand, potentially lacking such things as sexual desire or the need for sustenance or the fear of an inevitable death or many other factors that motivate us as a species today.

RuanGacho
Jun 20, 2002

"You're gunna break it!"

I have always subscribed to the idea that AI development will be more fundamentally symbiotic than adversarial. I see no reason why a virtual puppy won't watch and track the 2-year-old for you while you work on other tasks.

It's a much more real and interesting problem to ask: is future AI going to kill us all because we insist on sticking to scarcity and class politics when we can literally create workers from thin air?

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.

RuanGacho posted:

I have always subscribed to the idea that AI development will be more fundamentally symbiotic than adversarial. I see no reason why a virtual puppy won't watch and track the 2-year-old for you while you work on other tasks.

It's a much more real and interesting problem to ask: is future AI going to kill us all because we insist on sticking to scarcity and class politics when we can literally create workers from thin air?

This depends on whether scarcity can be maintained at all. The Internet has already challenged scarcity through illegal copying and file-sharing of copyrighted digital media (where copyright is government-created artificial scarcity), which is now extremely common. 3D printers are making it even harder to monopolize the means of production.
At some point, Thorstein Veblen's claim that a sufficient level of technological advancement will make capitalism as we know it impossible becomes reality. Climate change is the main challenge to this claim in the 21st century.
