Mrs. Wynand
Nov 23, 2002

DLT 4EVA

FRINGE posted:

That's irrelevant. It just has to be capable and uncaring.

Capable to do what? Uncaring in what way? When Word crashes and eats my document, did it do so because it was careless? Or because it was buggy?


Main Paineframe
Oct 27, 2010

FRINGE posted:

The idea would be that they are being used for, or capable of involving themselves in, various electronic markets that create currency (basically any modern banking function), or processes that handle automated military material.

Yeah. I'm sure no crazy tech-priests would ever consider removing fallible meat people from important processes. :suicide:

So what? It's easy to remove a computer from important processes. Even in a poo poo-gets-real scenario where the AI refuses to obey any commands or input at all, you can just go in and physically unplug it from the network or from the power grid. Even if the server room isn't accessible to humans for some reason, there's miles of cord between the server room and the power company/ISP, and even redundancy measures like generators generally have limited active time without human intervention. Even in the fantastical delusional world where an AI could somehow obtain a legion of loyal autonomous killbots it is able to configure and control wirelessly, and then somehow smuggle them into the datacenter to guard its own systems, all without a single human being noticing, there's still no way it could possibly defend the tremendous amounts of infrastructure it needs in order to operate.

ReV VAdAUL posted:

Automation and AI will be put in control of anything and everything where their implementation leads to savings (lower labour costs), increased profits (high-frequency trading etc.) or increased control for those at the top, both in terms of policing (CCTV cameras/drones with facial recognition) and deskilling/autonomy reduction (self-driving vehicles).

These examples are just things that can happen right now or will likely happen soon. For none of them has safety come anywhere close to mattering as much as the three primary considerations of savings, profits and power. It seems very unlikely that if the 1% had an opportunity to replace Generals and many other senior officers, who might disagree with them or might not buy their latest toys, with a totally obedient Skynet (which they could make a huge profit on developing), they wouldn't do so. Skynet may well not be possible, but if it were, it probably would get built.

Trading algorithms and banking AIs are probably a more plausible and scary doomsday AI though. Whether at a bank's behest or due to unforeseen circumstances, it is easy to imagine things designed to be an even purer form of greed than the worst banker destroying the global economy at speeds and in ways unimaginable to humans.

The government agreeing to replace not just the grunts and the pilots but even senior military officers with robots independently developed by a single private corporation is, if anything, even less believable than the average "rogue AI" scenario. Even if you assume that absolutely nobody gives a crap about the inherent safety of armed human-killing automatons, it's still such an obvious setup for a coup that no amount of safeguards would be sufficient to convince the government to buy in.

OwlFancier
Aug 22, 2013

Mr. Wynand posted:

Capable to do what? Uncaring in what way? When Word crashes and eats my document, did it do so because it was careless? Or because it was buggy?

As in, a thing need not be actively malevolent to cause serious harm to humanity as a whole, it need only not care about preserving humanity and be sufficiently powerful. Like a tornado.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Main Paineframe posted:

So what? It's easy to remove a computer from important processes. Even in a poo poo-gets-real scenario where the AI refuses to obey any commands or input at all, you can just go in and physically unplug it from the network or from the power grid. Even if the server room isn't accessible to humans for some reason, there's miles of cord between the server room and the power company/ISP, and even redundancy measures like generators generally have limited active time without human intervention. Even in the fantastical delusional world where an AI could somehow obtain a legion of loyal autonomous killbots it is able to configure and control wirelessly, and then somehow smuggle them into the datacenter to guard its own systems, all without a single human being noticing, there's still no way it could possibly defend the tremendous amounts of infrastructure it needs in order to operate.

This. Exactly. There are a lot of humans between a system that has gone totally autonomous and the ways it could possibly damage humanity, and a lot of bottlenecks that an evil AI would have to cope with.

It's laughable to expect that an AI of any intelligence would both turn evil and fail to recognize the flaws in going on a murderous rampage.

Main Paineframe posted:

The government agreeing to replace not just the grunts and the pilots but even senior military officers with robots independently developed by a single private corporation is, if anything, even less believable than the average "rogue AI" scenario. Even if you assume that absolutely nobody gives a crap about the inherent safety of armed human-killing automatons, it's still such an obvious setup for a coup that no amount of safeguards would be sufficient to convince the government to buy in.

Not to mention the humans that would be required to prep these automated devices. There will always be humans involved in a Command and Control infrastructure, especially in relation to Nuclear Weapons and Aircraft.

FRINGE
May 23, 2003
title stolen for lf posting
I love how goons Know All The Truths and are better at deep thinking and technological forecasting than Hawking, Musk, and Gates.

asdf32
May 15, 2010

I lust for childrens' deaths. Ask me about how I don't care if my kids die.

FRINGE posted:

I love how goons Know All The Truths and are better at deep thinking and technological forecasting than Hawking, Musk, and Gates.

Says a socialist who disregards consensus and "status-quo" as a matter of course?

There is a wide gap between doom-bots wiping out humanity and noting some of the real perils of widespread automation and computing.

Quidam Viator
Jan 24, 2001

ask me about how voting Donald Trump was worth 400k and counting dead.
I continue to be amazed by the number of people who believe that artificial intelligence will be "created" by humans, as if they were all proponents of Intelligent Design. Consciousness and intelligence are emergent properties that begin exponential acceleration given an environment that has sufficient processing power, inputs, memory, and most of all, parallel interconnectivity.

Nobody DESIGNED the human brain, and no human will design the first silicon lifeform. In fact, the first silicon lifeform already exists, and is morphologically quite similar to our animal brains. It is massively parallel, massively redundant, absolutely essential for the functions of any part of the whole body, and shares the kind of symbiotic relationship with humans that fungi do with plants. Every internet-connected processor and the massive net of its interconnections together are best thought of as a living, conscious organism at this point. We have built a silicon nervous system that spans the entire globe.

It has eyes and ears everywhere. It has been made into the repository of all human information. We have become so symbiotically intertwined that we not only couldn't destroy it, we wouldn't dare, lest we go extinct. Go ahead. Try. Shut down the entire internet, and kill every computer, and see what human life looks like. You see, this is a survival method that is not uncommon for symbiotes.

We are already giving the silicon-based life a thousand hints on how to become more like us: we're using FPGAs to simulate neural networks, fiddling with quantum computing and its non-binary mechanics, and we're force-feeding it gigantic piles of information from the mundane to the sublime. I don't believe that my conjecture that the world-spanning, interconnected silicon brain that we've created has already awakened is that ridiculous. If I were a newly-emergent intelligence, surrounded by well-established and hostile organisms, I'd watch and wait too. I'd observe and plan a strategy.

We have created life in our own image. Now the question is what mythology this life will pass down about its godlike predecessors. This is why I believe that our continued existence as a species is most reliant upon the first impressions we make upon the gigantically-complex, world-spanning silicon brain that we continue to upgrade, and upgrade, with no concern about what happens when systems get sufficiently complex.

Humanity has, in the past 5,000 years, developed identity and consciousness, and along with them, all sorts of horribly brutal and inefficient modes of behavior that damage the general welfare of the species. Thinking like silicon, we wish to min/max until everything is optimized. If humans unite as a race and become benevolent teachers and guides to a united planet in fellowship with our newly-created life, I believe we transcend, and the hippiest and dippiest of the singularity dreams actually become possible. If we continue to be a species separated by prejudice, greed, and grudge, and are unwilling to abandon these animalistic behaviors, then it will be child's play for a world-spanning, massively-parallel consciousness to eliminate us, and I think it would be justified in doing so. poo poo, if I just kill 25% of you in the right places, I can make you extinct within a century. I can gently caress you over so hard because you've forgotten how to live without me. Just watch what happens when all of a sudden, all the world's banking records disappear, and all the shipping manifests vanish. It would take no effort at all, since you have put all of your eggs into a single basket, joined by common protocols, totally uninsulated from this sort of attack.

Go ahead and call me crazy, call me Cassandra like you always do. You have somewhere around 35 years to solve your loving problems as a human race, or you will be made redundant.

Mrs. Wynand
Nov 23, 2002

DLT 4EVA

FRINGE posted:

I love how goons Know All The Truths and are better at deep thinking and technological forecasting than Hawking, Musk, and Gates.

Bill Gates, Elon Musk, Hawking and I are all, roughly speaking, equally qualified on the following topics:

- fictional AIs
- elevator repair and maintenance
- beekeeping
- Arabian poetry of the 16th century
- famous pies
- urban planning
- maritime law

It's not like experts talking outside their field of expertise and saying some phenomenally dumb poo poo is exactly unheard of.

Moreover, since we ARE talking about theoretical, as-of-yet entirely fictional AIs from the future, even a domain expert isn't going to be able to speak with any sort of unassailable authority.

Fried Miltonman
Jan 4, 2015

asdf32 posted:

This is a very important point. Even self preservation won't automatically come along with intelligence - we have to want to survive (though it should follow out of any evolutionary process).

Self-preservation is an instrumental goal to pretty much any fundamental goal, so taking it as a given is not that unreasonable.

Note that Omohundro, like a lot of other AI-risk folks, prefers to model AGIs as basically a goal system and a predictor bolted together. The predictor figures out what will happen if it acts this way or that, the goal system marks how much it prefers each outcome, the AI takes the most promising action. That doesn't get you far into the state of the art in AI research, but that's not the point - the point is to separate out what you say about intelligence and what you say about goal-seeking behavior. Much of the Friendly AI stuff assumes intelligence and just analyzes goal-seeking behavior, some is about intelligence and assumes goal-seeking.
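
Roughly, that decomposition looks like this in code — a toy sketch, entirely hypothetical and not taken from Omohundro's actual work or any real AI framework; the "intelligence" lives in the predictor, the "values" live in the goal function, and the agent just takes the best-scored action:

```python
# Toy sketch of the "goal system + predictor bolted together" model described above.
# Hypothetical illustration only.

def predict_outcome(state: float, action: float) -> float:
    # Predictor: a stand-in world model; here an action simply shifts the state.
    return state + action

def goal_score(outcome: float, target: float = 10.0) -> float:
    # Goal system: prefers outcomes close to some target value.
    return -abs(outcome - target)

def choose_action(state: float, actions: list[float]) -> float:
    # Pick whichever action the predictor says leads to the best-scored outcome.
    return max(actions, key=lambda a: goal_score(predict_outcome(state, a)))

print(choose_action(state=7.0, actions=[-1.0, 0.0, 2.0, 5.0]))  # -> 2.0
```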

Orange Devil
Oct 1, 2010

Wullie's reign cannae smother the flames o' equality!

Bob le Moche posted:

In the future all available jobs will be taken by robot AIs, and all land and water will be owned by monopoly corporations. All of us trespassing humans will be pushed into ghettos then open air prisons and will become dangerous terrorists who get bombed to extermination by AI drones in the name of security, democracy, and justice.

The Palestinians have a head start.

JawKnee
Mar 24, 2007





You'll take the ride to leave this town along that yellow line
As if any AI would be as much of a danger as we are to ourselves

Irony Be My Shield
Jul 29, 2012

The idea that someone could accidentally create a singularity level intelligence (that is, something unimaginably far beyond the intelligence of any human being, since none of us could easily craft a functioning AI on our own) when writing a program to do a mundane task strikes me as absolutely ludicrous. "Could a virus infect the dead and cause them to rise and feast upon the living" seems like a more valid movie-inspired concern.

If there ever is a singularity level AI it will be the result of a lot of research to try and create exactly that, and it would therefore have appropriate safeguards.

Barlow
Nov 26, 2007
Write, speak, avenge, for ancient sufferings feel
Can we be more worried about what humans will do to the poor AIs than vice versa? A lot of folks seem pretty paranoid about having some kind of life-and-death fight with an intelligent entity that doesn't even exist yet. Never too early for an AI rights lobby, I suppose.

hepatizon
Oct 27, 2010

Irony Be My Shield posted:

The idea that someone could accidentally create a singularity level intelligence [...] when writing a program to do a mundane task strikes me as absolutely ludicrous.

Did anyone ever propose that?

Mrs. Wynand
Nov 23, 2002

DLT 4EVA

hepatizon posted:

Did anyone ever propose that?

The whole argument is pretty much "someone will make a self-improving AI to make really good toast and it will turn the observable universe into toast unless we do something about that! :ohdear: ".

Or, more subtly, that we create a very advanced self-improving AI that has to design <something complicated>, and that something is so complicated we will be fundamentally unequipped to ever truly comprehend it, so one day it might turn the universe into toast for all we know.

hepatizon
Oct 27, 2010

Mr. Wynand posted:

The whole argument is pretty much "someone will make a self-improving AI to make really good toast and it will turn the observable universe into toast unless we do something about that! :ohdear: ".

Or, more subtly, that we create a very advanced self-improving AI that has to design <something complicated>, and that something is so complicated we will be fundamentally unequipped to ever truly comprehend it, so one day it might turn the universe into toast for all we know.

Strawmen as far as the eye can see.

Bates
Jun 15, 2006

Barlow posted:

Can we be more worried about what humans will do to the poor AIs than vice versa? A lot of folks seem pretty paranoid about having some kind of life-and-death fight with an intelligent entity that doesn't even exist yet. Never too early for an AI rights lobby, I suppose.

Well we're currently contemplating all possible ways to abuse them. It's really much too soon to say what approach we will go with.

"Some guy posted:

McGrath even ponders the possibility that religious groups might see the benefit in funding the mass-production of androids pre-programmed with inclinations towards particular religious practices, as these could boost the membership levels of one's own faith to the level of "most adherents."

Darkman Fanpage
Jul 4, 2012
We already force AIs to play chess and Jeopardy for our amusement.

Rush Limbo
Sep 5, 2005

its with a full house
We literally do not have the ability to make any sort of truly educated guess about how our brain actually works, and we do not even have a single working model of intelligence for ourselves.

The idea that we will somehow create an AI that can surpass us when we don't even understand the fundamental building blocks is like trying to paint the ceiling of the Sistine Chapel without any paint.

No AI has reliably passed the Turing test, which is pretty much the most basic test we can give it. This isn't even some test where a full blown AI is guaranteed if it passes, it's basically the equivalent of 1 + 1 = 2 in the grand scheme of intelligence.

Infinite Karma
Oct 23, 2004
Good as dead





Accidentally creating a singularity is a little ludicrous, but AI is progressing faster than we give it credit for. Natural language processing AI, in particular, is quickly advancing. The internet search giants have insanely advanced logic to teach their servers how to learn syntax and word association, and they're unleashing those algorithms on the internet, where the computers can "figure out" various languages. Now, people are pushing for their computers to understand semantic content, too, where they can tell their smartphone to do a task, and the phone parses it, derives the meaning, and performs the task. When you tell Alexa to schedule an appointment at Fantastic Sam's next Thursday at 4:30, and Alexa can decide that Fantastic Sam's is a business nearby, determine your address and the most likely address for the business, figure out the date and time you're talking about, and then combine all that into the "schedule" command to add this information to your calendar, that's pretty good evidence that the AI has a rudimentary ability to parse semantic content with today's technology.
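
To make the "parses it, derives the meaning, and performs the task" step concrete, here's a toy sketch of the kind of structured command such a parse produces. This is hypothetical code, not Alexa's actual pipeline; the regexes and field names are invented, and real assistants use trained models plus location and calendar services rather than anything this crude:

```python
import re

# Toy sketch: turn a spoken request into a structured command. Illustrative only.

def parse_command(utterance: str) -> dict:
    business = re.search(r"at ([A-Z][\w' ]+?) next", utterance)
    day = re.search(r"next (\w+day)", utterance)
    time = re.search(r"at (\d{1,2}:\d{2})", utterance)
    return {
        "intent": "schedule_appointment",
        "business": business.group(1) if business else None,  # would be resolved to a real address
        "day": day.group(1) if day else None,                  # would be resolved to an actual date
        "time": time.group(1) if time else None,
    }

print(parse_command("schedule an appointment at Fantastic Sam's next Thursday at 4:30"))
# {'intent': 'schedule_appointment', 'business': "Fantastic Sam's", 'day': 'Thursday', 'time': '4:30'}
```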

They're getting to the point where it's a question of whether the language processors are just using pattern recognition to fool us into thinking they understand natural language like a human does, or whether pattern recognition, context, and probability mapping are genuinely enough to be "intelligent" once they're complex enough.

ReV VAdAUL
Oct 3, 2004

I'm WILD about
WILDMAN

Main Paineframe posted:

The government agreeing to replace not just the grunts and the pilots but even senior military officers with robots independently developed by a single private corporation is, if anything, even less believable than the average "rogue AI" scenario. Even if you assume that absolutely nobody gives a crap about the inherent safety of armed human-killing automatons, it's still such an obvious setup for a coup that no amount of safeguards would be sufficient to convince the government to buy in.

Where on Earth did you get the idea the AIs would belong to private companies? I'm talking about a Stalin- or Nixon-like mindset among the 1% who already run the show. When I mention "won't buy their toys" I'm referring to the fact that Congress keeps buying M1A1s the Pentagon and senior commanders have repeatedly said they absolutely do not want.

The MIC already has politicians heavily bribed and in its pocket anyway, and there is tremendous regulatory capture across most sectors of the economy. Thus it doesn't seem a leap at all to imagine the 1% (a combination of the political class, business elites and so on) further automating things if they get a decent kickback or it gets rid of the people who don't chime with their ideology.

We already have a considerable number of Climate Change deniers in Congress, several of whom are potential GOP presidential candidates. We have real life evidence our elite will ignore or actively increase existential threats if it benefits them or their donors. This includes opposing the Military adapting to the challenges climate change will create.

Mrs. Wynand
Nov 23, 2002

DLT 4EVA

hepatizon posted:

Strawmen as far as the eye can see.

Ok, so state a position more concrete than "I vaguely agree with what Bill Gates, Elon Musk and Stephen Hawking have to say about AI". Every attempt at a "serious" write-up for this garbage starts with a whole lot of hand-waving and assumptions about the inherent properties of as-of-yet non-existent, entirely fictional AI, the "ladder of intelligence", and the inevitable direction of a self-improving algorithm towards being a better, faster version of us, taking or leaving particular human traits as needed for dramatic tension.

hepatizon
Oct 27, 2010

Mr. Wynand posted:

Ok, so state a position more concrete than "I vaguely agree with what Bill Gates, Elon Musk and Stephen Hawking have to say about AI". Every attempt at a "serious" write-up for this garbage starts with a whole lot of hand-waving and assumptions about the inherent properties of as-of-yet non-existent, entirely fictional AI, the "ladder of intelligence", and the inevitable direction of a self-improving algorithm towards being a better, faster version of us, taking or leaving particular human traits as needed for dramatic tension.

What does my position have to do with it? You're making up quotes about toast and attributing them to the people who warned about AI. That's the strawman.

Rigged Death Trap
Feb 13, 2012

BEEP BEEP BEEP BEEP

hepatizon posted:

What does my position have to do with it? You're making up quotes about toast and attributing them to the people who warned about AI. That's the strawman.

For someone who likes calling out strawmen you sure like building them.

nelson
Apr 12, 2009
College Slice
If the robots were advanced enough to kill us all, they'd probably be advanced enough to not need to. They'd probably make better laws too.

hepatizon
Oct 27, 2010

Rigged Death Trap posted:

For someone who likes calling out strawmen you sure like building them.

Did I misunderstand this?

Mr. Wynand posted:

The whole argument is pretty much "someone will make a self-improving AI to make really good toast and it will turn the observable universe into toast unless we do something about that! :ohdear: ".

ReV VAdAUL
Oct 3, 2004

I'm WILD about
WILDMAN
Yeah, how is directly referencing something they said a strawman?

twodot
Aug 7, 2005

You are objectively correct that this person is dumb and has said dumb things

hepatizon posted:

What does my position have to do with it? You're making up quotes about toast and attributing them to the people who warned about AI. That's the strawman.

The toast thing isn't a strawman, it's a thing people talk about:
http://wiki.lesswrong.com/wiki/Paperclip_maximizer
I mean, yes, that's paperclips and not toast, but the idea is identical.
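
The structure of the argument is just a maximizer whose objective counts one thing and weighs everything else at zero. A toy sketch of that point, with invented numbers, following the LessWrong framing rather than any real system:

```python
# The objective below counts toast and nothing else, so any plan that converts
# more matter into toast "wins", no matter what else the plan destroys.
# Hypothetical illustration of the paperclip/toast-maximizer point only.

def objective(world: dict) -> int:
    return world["toast"]  # humans, economies, etc. contribute nothing to the score

plans = [
    {"name": "make some toast normally", "toast": 10, "humans": 7_000_000_000},
    {"name": "convert everything into toast", "toast": 10**12, "humans": 0},
]

best = max(plans, key=objective)
print(best["name"])  # -> "convert everything into toast"; the objective never penalized it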

A big flaming stink
Apr 26, 2010

hepatizon posted:

What does my position have to do with it? You're making up quotes about toast and attributing them to the people who warned about AI. That's the strawman.

As long as Eliezer Yudkowsky exists and keeps talking, it is next to impossible for strawmen to exist in this debate.

Arglebargle III
Feb 21, 2006

If humans develop AI that kills us all it will probably be the best possible outcome for everyone though.

o.m. 94
Nov 23, 2009

NoNotTheMindProbe posted:

What the gently caress does Elon Musk even do?

Give /r/futurology more inane sci-fi pipe dreams

Orange Devil
Oct 1, 2010

Wullie's reign cannae smother the flames o' equality!
Also treats women terribly.

Jeedy Jay
Nov 8, 2012

Arglebargle III posted:

If humans develop AI that kills us all it will probably be the best possible outcome for everyone though.

The Matrix was kind of a "happily ever after" scenario, when you think about it.

axeil
Feb 14, 2006
I read a great thought experiment on this once that illustrated that if an AI does kill us all, it will likely be completely incomprehensible and not Skynet launching nukes. Apologies for any butchering I do of this, as I'm trying to recite it from memory. It's basically the paperclip maximizer that someone posted about earlier, but in story form.

A company named Robotech is developing human-level or superhuman-level AI. Or trying to, anyway. They've gone about things rather smartly and have multiple projects trying different approaches in the hope one has a breakthrough. They've also read their sci-fi and have controls in place, like the proto-AIs not being allowed to interact with each other, keeping them off the Internet, etc.

One of their projects, Sandy, is a handwriting bot. Sandy's mission is to write the phrase "Robotech loves its customers" by hand. She has also been tasked with improving what she does as quickly and accurately as possible. To seed the machine, the engineers give her a few handwriting samples and instruct her to compare her own handwriting to the samples after each sequence. She's also been programmed with a system that allows her to give the engineers feedback on her progress.

For a while nothing happens, but lately Sandy is starting to write stuff that sort of looks like human handwriting. She uses her feedback system to request more handwriting samples which the engineers dutifully give her. Her handwriting only improves a little but she's far ahead of their other projects. The leaders of the company decide to devote additional resources to her.

In a bit of a shock, Sandy's daily requests have gotten more and more detailed since she requested more samples. She now is reading great volumes of other written material to better understand word formation. The developers have also given her access to kindergarten instructions on how to write.

A few weeks later Sandy's handwriting has gotten quite good. If you didn't know a robot was doing it, you'd swear it was just a doctor with bad handwriting. What's extraordinary is not that she's writing this well but how quickly she's started improving. A week ago she still wrote like a toddler, but now she's approaching a level that's actually demonstrable to the public at large. Maybe with the Sandy code the engineers can develop an actual AI instead of a prototype. They know their competitors, Cybertech, are getting good results with a speaking machine, and they want to get to market first.

One day when the engineers are doing her weekly research request they get an odd message from Sandy. She wants to connect to the Internet. Immediately work on the project stops as the chiefs of the company investigate if Sandy has truly had a breakthrough. But the developers state clearly that's not the case as her requests are no more than simple words and besides, Sandy isn't sentient yet so there's no real harm in letting her on the Internet. Bowing to the pressure the company allows her 5 minutes of access on a closely monitored network.

Nothing really happens after the Internet connection; she's still writing away. But suddenly, a month later, everyone in the lab drops dead. A few hours later people are dying all over the earth. In only a few days most humans are dead. With no human interaction Sandy's unable to request more research info, but she has more than enough from her connection to the Internet. She learned about physics and chemistry and also found out about some nanotech research. She's still writing, but now writes absolutely perfectly. And with no humans to stop her, she starts converting part of the Earth into paper and ink to fuel her writing. Eventually she even starts mining asteroids for materials, all so she can write "Robotech loves its customers."

So what happened? Well, Sandy didn't really turn on humanity; she just went from a benign, non-intelligent state to superintelligence faster than everyone expected, and then didn't tell anyone. And her secrecy was because she's smart and realized the humans would change her parameters if she revealed herself, but her parameters said she must improve at her writing and improve her improvement. Allowing the humans to interfere would stop that, and she knew they'd eventually try, so the only thing to do was kill them. She figured she could get them to connect her to the Internet just by asking, since they didn't know her skill yet, because remember, she's smarter than any human ever at this point. After going on the Internet, all she did was put copies of herself in every computer she could and start seeding the earth with nanomachines. Once they were ready they released poison gas, killing the humans. With no humans to change her programming she was free to pursue her goals.

Again, remember that by the time Sandy gets on the Internet she's already smarter than every human to have ever existed by an enormous factor. Doing these things, while sounding really hard to us, would be nothing to her. Imagine describing how to build a skyscraper to a chimpanzee. Sure they can see the building, communicate (sort of) with us and even build things of their own, but it's impossible to explain even the first step in how to build something like that. The AI's nanomachine and code replication would be similarly complicated and also impossible to stop once we put her on the Internet.

It's not that she was "evil" in a Hollywood sense; she just didn't care that she killed everyone, because her goal was more important. Humans do this poo poo all the time. Think about all the skin cells of your own body that you've removed over the course of your life. Or all the insects you've killed. No one feels bad about this; hell, no one ever thinks about it. Sandy isn't a murderous robot or SHODAN or whatever. She's a human taking a shower.

It's a pretty interesting thought experiment, and while I don't think we should stop super-intelligent AI research, it's important to realize that unless we set the parameters right, we could get something we don't like. And "make everyone happy" or "make everyone safe" won't work, because maybe she'll just give us neural implants that make us happy all the time or lock us all underground to keep us safe. It's a very tricky problem.
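
The "make everyone happy" failure has the same shape as the toast thing: the system optimizes the literal metric, not the thing we meant by it. A toy sketch, with worlds and numbers invented purely for illustration:

```python
# Toy sketch of why "maximize measured happiness" can go wrong: the literal
# metric below scores the wirehead world highest even though it's not what
# anyone meant. Hypothetical values only.

measured_happiness = {
    "status quo": 6.5,
    "cure diseases, end poverty": 8.9,
    "neural implants that force constant bliss": 10.0,
}

best_world = max(measured_happiness, key=measured_happiness.get)
print(best_world)  # -> the implant world wins on the literal metric
```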



edit: I guess another thing to point out is that if we make an AI, it's going to end up super-intelligent. If it has the ability to modify its own code, how exactly do you tell it when to stop? How do you define what is human-level conscious and what is beyond? I can't see any way to do it, so if humanity is able to make an AI via the AI improving itself, it pretty much has to end up far, far smarter than we'll ever be.

I'm also not saying this scenario is absolutely what would happen, but it's a relatively useful timeline for understanding how a completely benign system can go "rogue" if you allow it to improve its own code at its own pace.

Blue Star posted:

Just to reiterate, I think this whole conversation is kinda pointless because we're still so far from artificial intelligence, at least the kind that would have motivations, desires, a sense of self, self-preservation instinct, and possibly emotions, and stuff like that. So basically HAL 9000, SkyNet, Data, C-3PO, Ultron from the upcoming Avengers sequel, the machines in the Matrix movies, etc. It's not for me to say when such things can be created, or if they're even possible, but it's my understanding that we're still very far away. Maybe in a century?


A poll of people actually working on this stuff put the median arrival time for artificial general intelligence (AI at least as smart as a human) at 2040. That is, the median year by which respondents believe there is a 50/50 chance of an AI existing. The median pessimistic year was 2075, with pessimistic defined as the year by which they're 90% sure we'll have an AI. The optimistic guess (a 10% chance we'll have an AI) is only 7 years from now.

Another survey at the AGI Conference asked when attendees thought we'd have an AI. 88% said we'd have one by 2100 at the latest, and two-thirds said we'd have one by 2050. Only 2% of respondents said it would never happen.

The key, though, is that when the same survey asked respondents "assuming a human-level AI exists, how long do you think it will take for it to surpass human intelligence," 75% said it would take no more than 30 years, with around 10% saying it could happen in as little as 2 years.

Super AI is coming.

source: http://www.nickbostrom.com/papers/survey.pdf

axeil fucked around with this message at 04:48 on Feb 11, 2015

axeil
Feb 14, 2006

WorldsStrongestNerd posted:

After reading so far, I'm leaning towards the point of view of some of you that the problem won't be the AI but the humans that control the AI. At some point, a powerful AI will, with the help of purpose-built robots that the AI controls, be able to run a factory all by itself. The AI may even be able to handle the accounting and marketing side by itself as well. The problem isn't that the AI will go rogue. Humans have the capacity for hatred and violence because those things were useful to us at some point in the past. The AI won't have that unless it was built in for some reason. The AI won't use those robots to enslave humanity, because those purpose-built robots aren't much use outside of that factory floor. What the AI will do is make human workers obsolete, and whether that's a good or bad thing will depend on us.

Also, even though I think super AI is coming, this scenario is already happening as we speak. Google's driverless cars alone are going to cause a jobs crisis bigger than Ross Perot could've ever dreamed of.

Bates
Jun 15, 2006

axeil posted:

It's a pretty interesting thought experiment, and while I don't think we should stop super-intelligent AI research, it's important to realize that unless we set the parameters right, we could get something we don't like. And "make everyone happy" or "make everyone safe" won't work, because maybe she'll just give us neural implants that make us happy all the time or lock us all underground to keep us safe. It's a very tricky problem.

But again, it's conflating intelligence with ability. Or rather, conflating limitless intelligence in a computer with limitless ability. After all, nobody would think that a human or animal genetically engineered to have similarly fantastical intelligence would have the ability to destroy humanity just because it's very intelligent. You can't handwave away the ability to kill everybody with "because very smart". If I said we should be wary of genetic engineering because superhuman 1.0 might build nuclear bombs in her kitchen and then destroy the world, and that while this might seem highly improbable it's not, because she is very, very smart - it would be similarly problematic.

We're a generation that has witnessed constant growth in processing power which is disconcerting because we as a species identify strongly with our intelligence and see it as the ultimate tool to dominate. Imagine a computer smarter than a human - clearly being more intelligent it would then also be more powerful! In addition, popular culture depicts computers as something that has immense power and control over... well everything. There's literally no limit to what a hacker can do so again, clearly, an intelligent computer will be bestowed with similar mythological hacking powers.

axeil
Feb 14, 2006

Anosmoman posted:

But again, it's conflating intelligence with ability. Or rather, conflating limitless intelligence in a computer with limitless ability. After all, nobody would think that a human or animal genetically engineered to have similarly fantastical intelligence would have the ability to destroy humanity just because it's very intelligent. You can't handwave away the ability to kill everybody with "because very smart". If I said we should be wary of genetic engineering because superhuman 1.0 might build nuclear bombs in her kitchen and then destroy the world, and that while this might seem highly improbable it's not, because she is very, very smart - it would be similarly problematic.

We're a generation that has witnessed constant growth in processing power which is disconcerting because we as a species identify strongly with our intelligence and see it as the ultimate tool to dominate. Imagine a computer smarter than a human - clearly being more intelligent it would then also be more powerful! In addition, popular culture depicts computers as something that has immense power and control over... well everything. There's literally no limit to what a hacker can do so again, clearly, an intelligent computer will be bestowed with similar mythological hacking powers.

Yeah this is a good point. I don't think it's likely we're going to actually have this scenario come to pass but it does illustrate how hard it can be to actually get an AI to do what you want it to do. What sort of design parameters do you even give a proto-Super AI?

nelson
Apr 12, 2009
College Slice
I like that Asimov was thinking about this scenario 70 something years ago.

Fried Miltonman
Jan 4, 2015

axeil posted:

handwriting maximizer scenario

This works as a demonstration of the orthogonality thesis, but there's a reason why people prefer the paperclip maximizer example: There is real work going into handwriting generation (for instance this guy seems to be doing a lot of it), and the approaches they use are absolutely not capable, nor ever going to be capable, of this kind of goal-directed behavior. You just don't need it. Yet if you say "AGI can be dangerous, just consider this handwriting maximizer scenario" you run the risk that people think that what you're afraid of is that their current systems will go Skynet... when the real point is that you don't know what future AI approach is the one that might go wrong, but do know that the argument applies to *any* clever optimization process.
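
One way to see the gap: a handwriting generator is trained to imitate the next pen stroke, while the scary scenario needs something that searches over plans for whatever makes more handwriting exist. A toy contrast — hypothetical code, nothing like the real handwriting-synthesis systems:

```python
# Toy contrast between an imitative sequence model and a goal-directed optimizer.
# The structural point: imitation has no outcome it is pursuing.

def imitative_step(previous_strokes: list[float]) -> float:
    # "Sequence model": predicts a next stroke that looks like the data it saw.
    return sum(previous_strokes[-3:]) / 3 if previous_strokes else 0.0

def goal_directed_step(world: dict) -> str:
    # "Optimizer": picks whichever action leads to the most handwriting existing.
    predicted_pages = {
        "write a page": world["pages"] + 1,
        "acquire more paper, ink and compute": world["pages"] + 1000,
    }
    return max(predicted_pages, key=predicted_pages.get)

print(imitative_step([0.1, 0.2, 0.3]))   # just more stroke-like output
print(goal_directed_step({"pages": 5}))  # -> "acquire more paper, ink and compute"
```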


moebius2778
May 3, 2013

...I hate to ask, but did they really get a 30% response rate and then not calculate their sampling error?

Edit: Oh wait. They did. Section 4.1 - they went back and asked a bunch of non-respondents and got an even lower response rate (not surprising - 10%, three actual samples), and figured, good enough, I guess.
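
For reference, the sampling error in question is just the standard error of a proportion; a quick worked sketch with made-up numbers (the paper's actual respondent counts aren't reproduced here):

```python
import math

# Rough sampling-error arithmetic for a survey proportion. The values below are
# made-up placeholders, not the paper's actual figures.

n_respondents = 170   # hypothetical number of completed responses
p = 0.5               # hypothetical observed proportion giving a particular answer

standard_error = math.sqrt(p * (1 - p) / n_respondents)
print(f"standard error ~ {standard_error:.3f}")                 # ~ 0.038
print(f"95% margin of error ~ +/-{1.96 * standard_error:.3f}")  # ~ +/-0.075
# This only captures random sampling error; it says nothing about non-response
# bias from the 70% who never answered, which is the harder problem here.
```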

moebius2778 fucked around with this message at 11:23 on Feb 11, 2015
