Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort
Here's an interesting TED talk
https://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn

tl;dr - computers can read, write, speak, integrate knowledge, and will soon make most service jobs obsolete.
It doesn't speak of super AI going renegade and endangering humanity, but it's a good insight into the rising abilities of AI.


This next one is a more speculative, futurist article - the one the handwriting robot story came from.
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

After reading this thread, I'm not at all convinced that this threat is an illusion. We can identify three lines of defense where (many) goons are making their stand against it.

1) First you need super AI, and we will not reach it in the foreseeable future.
Many prominent thinkers and AI experts disagree. According to Bostrom's survey, they expect it by 2040. As someone pointed out, the margin of error is high because many experts didn't respond, but that doesn't negate the fact that a non-fringe number of experts, who know more about this than anyone else, expect super AI within our lifetimes. It's not a certainty, but it's a significant possibility.

It's also funny how many goons go "lol Hawking, lol Musk" just because these people themselves are not AI experts. Musk, Gates, and Hawking are among the richest and most capable people in the world, and they are plenty smart too. They don't have to sit alone with a textbook and learn this stuff; they can have lunch with top experts, heads of research in this area, and heads of state, and pick their brains.


2) If super AI started emerging it would meet human-imposed constraints and safety measures ("somebody will pull the plug").
This is very naive. It depends on two wrong assumptions - that the AI would be physically constrained to one computer or one building, and that safety measures would be infallible. AI research is done by countless organizations around the world - corporate, university, military, hobbyist, criminal... Many of them will rely on distributed resources. You can't pull the plug on the internet. And any special constraining code or other safety measure is bound to fail, because once one organization has made an AI so capable that the constraints are actually needed, others will follow. The barrier to entry will drop. If Google makes super AI in 2040, by 2045 a dozen organizations will have caught up. And one of them will fail, sooner or later.
Not to mention that there might be organizations or individuals that, for whatever reason, won't want their AI to have constraints.


3) Massive intelligence doesn't translate into massive capability to alter the physical world.
This is the only argument that I recognize as valid, and even that one isn't perfect. Imagine that you were a crazy super villain, with infinite time (you work 24/7 with no rest, no distractions, always at 100%) and infinite resources (you hacked / mined / earned on currency exchange all the money you need). How would you go about changing the world? You don't need limbs, or the ability to climb stairs, or a physical presence, to buy and merge companies just like humans can. Your corporation can produce chemicals and biological agents. Once their quantity or combination becomes dangerous, it's up to the human management in your organization, or the government regulators, to recognize that something is wrong. And even if they do, and even if your operations are busted and dismantled, you as a distributed AI are not under threat. You will start working on a similar but better plan, without pausing for a millisecond.

Doctor Malaver fucked around with this message at 12:07 on Feb 19, 2015


The Bloop
Jul 5, 2004

by Fluffdaddy
If something like this happens, it will most likely not be malevolent in the way typically imagined.

There is no reason that an AI would necessarily be even recognizable to us as an intelligence. An emergent AI, for instance, might somehow be composed of or seated in the signals of a future internet - one of their impulses might be transmitted by a post on the forums being edited - but that doesn't mean that the AI will be reading the post. It is very anthropocentric to assume that their thoughts will contain words, or ones and zeroes, just because those are the substrates of their form. Their thoughts may contain words no more than ours contain cells or cytoplasm or electricity. Of course, the content of our thoughts now can be about our own brain cells, and a sufficiently advanced AI would similarly be able to contemplate its own code. At first, though, there may be some more animalistic behavior and that may make recognizing the first AIs quite difficult. It may not even be possible for our minds to imagine the drives and appetites of an AI.

Barlow
Nov 26, 2007
Write, speak, avenge, for ancient sufferings feel

Doctor Malaver posted:

Imagine that you were a crazy super villain, with infinite time (you work 24/7 with no rest, no distractions, always at 100%) and infinite resources (you hacked / mined / earned on currency exchange all the money you need).
Why exactly are we assuming that AI will act like a "crazy super villain"? If a group of devoted regular human beings wanted to act like "crazy super villains", they could cause a lot of damage even without AI, yet we don't see the fear that people will act like the Joker from The Dark Knight as a pressing concern. Other than our fear of it and desire to kill it, I'm not sure why an AI would have any ill will towards humanity.

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

Trent posted:

If something like this happens, it will most likely not be malevolent in the way typically imagined.

There is no reason that an AI would necessarily be even recognizable to us as an intelligence. An emergent AI, for instance, might somehow be composed of or seated in the signals of a future internet - one of their impulses might be transmitted by a post on the forums being edited - but that doesn't mean that the AI will be reading the post. It is very anthropocentric to assume that their thoughts will contain words, or ones and zeroes, just because those are the substrates of their form. Their thoughts may contain words no more than ours contain cells or cytoplasm or electricity. Of course, the content of our thoughts now can be about our own brain cells, and a sufficiently advanced AI would similarly be able to contemplate its own code. At first, though, there may be some more animalistic behavior and that may make recognizing the first AIs quite difficult. It may not even be possible for our minds to imagine the drives and appetites of an AI.

That's possible, although it's also possible that it will function similarly to us, because we model AI systems partly after our brains, we teach it human-like behavior (decoding speech, assigning meaning to images, etc.), and we communicate with it using words.

Barlow posted:

Why exactly are we assuming that AI will act like a "crazy super villain"? If a group of devoted regular human beings wanted to act like "crazy super villains", they could cause a lot of damage even without AI, yet we don't see the fear that people will act like the Joker from The Dark Knight as a pressing concern. Other than our fear of it and desire to kill it, I'm not sure why an AI would have any ill will towards humanity.

Nobody really expects an Evil AI, but they do expect an amoral (note the difference from immoral) AI. A super-powerful force with no morals might be equally dangerous, because it will proceed with its plans with no regard for human life. I probably shouldn't have compared it to a villain.

The Bloop
Jul 5, 2004

by Fluffdaddy

Doctor Malaver posted:

That's possible, although it's also possible that it will function similarly to us, because we model AI systems partly after our brains, we teach it human-like behavior (decoding speech, assigning meaning to images, etc.), and we communicate with it using words.

I can agree that that is certainly a possibility once we realize it's an AI, or if it's a purpose-built, human-designed AI. I think it's much more likely that a true AI will come from some sort of emergent/evolved route, however.

I'm suggesting that there is a lot of room for an intelligence greater than my laptop's (none) and less than a human's (linguistic). We may have mouse- or dog- or even ape-level intelligence doing things for reasons that we don't even realize are acts of agency, and even when/if we do, understanding the goals of such a mind might be effectively impossible.

mysterious frankie
Jan 11, 2009

This displeases Dev- ..van. Shut up.
I don't think Evil Al could kill us all. Maybe a couple people, but Evil Al doesn't even own a car, wtf?

mysterious frankie
Jan 11, 2009

This displeases Dev- ..van. Shut up.
This is the thread to talk about my band's bassist, right?

lollontee
Nov 4, 2014
Probation
Can't post for 10 years!
I don't think it's reasonable to speculate about the emotional motivations of a hypothetical type of intelligence when we have no idea how it would even work.

mysterious frankie
Jan 11, 2009

This displeases Dev- ..van. Shut up.
Al is definitely real and can definitely slap the B.

Rigged Death Trap
Feb 13, 2012

BEEP BEEP BEEP BEEP

Yes. You most definitely can pull the plug on the internet. Granted, there are a lot of 'plugs', but it certainly is possible. It's like saying you can't shut down a power grid, or a telephone network.

Who the hell even perpetuates that myth anyways?

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Rigged Death Trap posted:

Yes. You most definitely can pull the plug on the internet. Granted, there are a lot of 'plugs', but it certainly is possible. It's like saying you can't shut down a power grid, or a telephone network.

Who the hell even perpetuates that myth anyways?

In a discussion on the radio today about whether students are graduating with computer literacy, one student was heard to say, "I don't even know how to do anything but open up Word and type an essay."

Also, one time someone put a piece of paper on their teacher's computer and the teacher thought it was broken.

I don't count this as a story about stupid people, but about how far our perceptions are from those of people who don't use computers regularly, of whom there are still legion, for some reason.

Main Paineframe
Oct 27, 2010

Doctor Malaver posted:

tl;dr - computers can read, write, speak, integrate knowledge, and will soon make most service jobs obsolete.
It doesn't speak of super AI going renegade and endangering humanity, but it's a good insight into the rising abilities of AI.


Computers will never replace menial service jobs. It's not just about being able to do the work - if it were just that, we'd have been able to enter our McDonald's orders by touchscreen five years ago. There are social and emotional factors involved in interacting with service workers that a computer simply can't replicate in the foreseeable future.

quote:

2) If super AI started emerging it would meet human-imposed constraints and safety measures ("somebody will pull the plug").
This is very naive. It depends on two wrong assumptions - that the AI would be physically constrained to one computer or one building, and that safety measures would be infallible. AI research is done by countless organizations around the world - corporate, university, military, hobbyist, criminal... Many of them will rely on distributed resources. You can't pull the plug on the internet. And any special constraining code or other safety measure is bound to fail, because once one organization has made an AI so capable that the constraints are actually needed, others will follow. The barrier to entry will drop. If Google makes super AI in 2040, by 2045 a dozen organizations will have caught up. And one of them will fail, sooner or later.
Not to mention that there might be organizations or individuals that, for whatever reason, won't want their AI to have constraints.


No real-life organization is going to create a decentralized AI project split over multiple locations. It's stupid and impractical for an incredible number of reasons. As a general rule of thumb, decentralization makes things worse, not better, except for something like infrastructure where physical location matters a lot; it's not really feasible for large projects carried out by professionals (and lol if you think amateurs are going to develop an AI, especially a decentralized one). And even if that somehow did happen, it is definitely possible to pull the plug on the internet.

quote:

3) Massive intelligence doesn't translate into massive capability to alter the physical world.
This is the only argument that I recognize as valid, and even that one isn't perfect. Imagine that you were a crazy super villain, with infinite time (you work 24/7 with no rest, no distractions, always at 100%) and infinite resources (you hacked / mined / earned on currency exchange all the money you need). How would you go about changing the world? You don't need limbs, or the ability to climb stairs, or a physical presence, to buy and merge companies just like humans can. Your corporation can produce chemicals and biological agents. Once their quantity or combination becomes dangerous, it's up to the human management in your organization, or the government regulators, to recognize that something is wrong. And even if they do, and even if your operations are busted and dismantled, you as a distributed AI are not under threat. You will start working on a similar but better plan, without pausing for a millisecond.

Now imagine that all of the supervillain's communications with the outside world are monitored, and that it is impossible for him to encode or otherwise obfuscate those communications without the monitors noticing, and that those monitors can also literally read the supervillain's mind at any time of their choosing and there is nothing he can do to hide those thoughts and memories without also rendering them inaccessible to himself.

Even putting that aside, a "corporation" doesn't produce chemicals just by willing it - it issues orders for humans to produce those chemicals on its behalf. Even if the machinery is computerized, the supply chain still requires some human labor. For that matter, the AI is unlikely to have access to the money for that either - a computer can't just open up a bank account over the internet all on its own, even if it's doing so in the name of a corporation, so it either has to involve a human or hijack existing accounts, both of which would soon be tracked down before the purchases could have any physical effect. It's impossible to keep your involvement in a nefarious plan secret for long when you're ordering an array of unknowing people to carry out your bidding over the telephone because you don't have any arms or legs. Sooner or later somebody's going to notice the UPS deliveryman trying to put your network-enabled deathray turret in the server room, and then the jig'll be up.

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort
Of course it's possible to unplug the internet, but it's not realistic. Imagine that such a dangerous distributed AI is spreading as we speak and that a group of researchers is trying to alert the authorities. Nobody would listen to them until catastrophic real-world events unambiguously caused by that AI started occurring. And shutting down the internet would in itself be a huge disaster for humankind.

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Doctor Malaver posted:

Of course it's possible to unplug the internet, but it's not realistic. Imagine that such a dangerous distributed AI is spreading as we speak and that a group of researchers is trying to alert the authorities. Nobody would listen to them until catastrophic real-world events unambiguously caused by that AI started occurring. And shutting down the internet would in itself be a huge disaster for humankind.

I say this as a trained "half the city just got hit by a meteor" emergency responder: humanity will be OK if the internet goes out.

JawnV6
Jul 4, 2004

So hot ...

Main Paineframe posted:

For that matter, the AI is unlikely to have access to the money for that either - a computer can't just open up a bank account over the internet all on its own, even if it's doing so in the name of a corporation, so it either has to involve a human or hijack existing accounts, both of which would soon be tracked down before the purchases could have any physical effect.

In general I'm on your side of this, but there's a recent fun example of why this just isn't true. Someone very politely asked for $17 million over email, and got it. The closest thing to physical confirmation was calling a phone number and reaching a human-ish voice claiming to be the same named person from the email. It was humans doing the scamming, but it was done over purely digital interfaces that a hypothetical bot could have access to.

As for actually getting a novel *thing* produced, that currently takes bodies. You could probably brandslap some commodity with EvilAI logos, but getting something unique produced and plugged in isn't full auto. Yet.

mysterious frankie
Jan 11, 2009

This displeases Dev- ..van. Shut up.
I'm really starting to think this thread isn't about my bassist.

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

Main Paineframe posted:

Computers will never replace menial service jobs. It's not just about being able to do the work - if it were just that, we'd have been able to enter our McDonald's orders by touchscreen five years ago. There are social and emotional factors involved in interacting with service workers that a computer simply can't replicate in the foreseeable future.

I agree, but that's not what that TED talk was about. It was about expert services being replaced - doctors, lawyers, etc.

Main Paineframe posted:

No real-life organization is going to create a decentralized AI project split over multiple locations. It's stupid and impractical for an incredible number of reasons. As a general rule of thumb, decentralization makes things worse, not better, except for something like infrastructure where physical location matters a lot; it's not really feasible for large projects carried out by professionals (and lol if you think amateurs are going to develop an AI, especially a decentralized one). And even if that somehow did happen, it is definitely possible to pull the plug on the internet.

lol if you claim that you can predict trends in software development 20 years from now.

Main Paineframe posted:

Now imagine that all of the supervillain's communications with the outside world are monitored, and that it is impossible for him to encode or otherwise obfuscate those communications without the monitors noticing, and that those monitors can also literally read the supervillain's mind at any time of their choosing and there is nothing he can do to hide those thoughts and memories without also rendering them inaccessible to himself.

Why would I imagine that? We are talking about a situation where the AI already exists independently from its maker and nobody's observing it.

Main Paineframe posted:

Even putting that aside, a "corporation" doesn't produce chemicals just by willing it - it issues orders for humans to produce those chemicals on its behalf. Even if the machinery is computerized, the supply chain still requires some human labor. For that matter, the AI is unlikely to have access to the money for that either - a computer can't just open up a bank account over the internet all on its own, even if it's doing so in the name of a corporation, so it either has to involve a human or hijack existing accounts, both of which would soon be tracked down before the purchases could have any physical effect. It's impossible to keep your involvement in a nefarious plan secret for long when you're ordering an array of unknowing people to carry out your bidding over the telephone because you don't have any arms or legs. Sooner or later somebody's going to notice the UPS deliveryman trying to put your network-enabled deathray turret in the server room, and then the jig'll be up.

I never argued that a massive and dangerous operation wouldn't require humans. And you have too much faith in law enforcement. In your world there is apparently no embezzlement over the internet, and no hacking, because everything is "tracked down before the purchases could have any physical effect". I mean, really?

You can open a business, hire people, get legal representation, file taxes... without ever leaving your room. A super AI can do it all, including placing voice calls. Stop imagining it as a computer in a server room in some basement in California; imagine it as a large, privately owned pharmaceutical company in an African country. It has human workers and human management, but the owner (who is only concerned with strategic decisions) communicates with the CEO remotely.

Main Paineframe
Oct 27, 2010

Doctor Malaver posted:

Of course it's possible to unplug the internet, but it's not realistic. Imagine that such a dangerous distributed AI is spreading as we speak and that a group of researchers is trying to alert the authorities. Nobody would listen to them until catastrophic real-world events unambiguously caused by that AI started occurring. And shutting down the internet would in itself be a huge disaster for humankind.

Contrary to popular belief, it is absurdly unlikely that any possible future AI will ever be able to transform itself into a computer virus and infect other computers. It's so ridiculous I don't even know how to say it more specifically than "poo poo just doesn't work that way". And, like I said, no one's going to invent a "distributed AI" spread across multiple locations in the first place. It's downright idiotic. It seems like you're just handwaving away reality.

JawnV6 posted:

In general I'm on your side of this, but there's a recent fun example of why this just isn't true. Someone very politely asked for $17 million over email, and got it. The closest thing to physical confirmation was calling a phone number and reaching a human-ish voice claiming to be the same named person from the email. It was humans doing the scamming, but it was done over purely digital interfaces that a hypothetical bot could have access to.

As for actually getting a novel *thing* produced, that currently takes bodies. You could probably brandslap some commodity with EvilAI logos, but getting something unique produced and plugged in isn't full auto. Yet.

Sure, the scammers got the money - but the lost money was detected and the FBI is investigating. While it's unlikely that the money will be recovered, and the thieves aren't under US jurisdiction, odds are good that the FBI knows exactly who was responsible. If the money had been stolen by a hypothetical evil AI, that would be game over for it right there.

Even slapping logos on a product takes human labor. And that product isn't much good just sitting there at the factory - you have to transport it somewhere, and moving goods around is almost entirely human labor, and is likely to remain so for the foreseeable future - we don't even have driverless trains yet, let alone driverless trucks.

JawnV6
Jul 4, 2004

So hot ...

Main Paineframe posted:

Sure, the scammers got the money - but the lost money was detected and the FBI is investigating. While it's unlikely that the money will be recovered, and the thieves aren't under US jurisdiction, odds are good that the FBI knows exactly who was responsible. If the money had been stolen by a hypothetical evil AI, that would be game over for it right there.
Doctor Malaver has so much more wrong with what they're saying that this whole line is worthless though. Because you're wrong.

Alternatively, the FBI and others are going to be raking in a pretty huge sum because having a bank address is tantamount to prosecution.

Main Paineframe posted:

Even slapping logos on a product takes human labor. And that product isn't much good just sitting there at the factory - you have to transport it somewhere, and moving goods around is almost entirely human labor, and is likely to remain so for the foreseeable future - we don't even have driverless trains yet, let alone driverless trucks.
Pretty sure I could get Zazzle or someone to slap a logo on something and ship it out based on HTTP requests without using my human bits at all. That a human is picking up the box is wholly irrelevant and a stupid distraction from the core of your argument. A theoretical attacker whose only access to the world is digital can easily ship EvilAI-branded mugs to every valid home address.
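To be concrete about how little is needed, here's a rough sketch in Python - the endpoint and fields are invented for illustration, not Zazzle's or anyone's actual API:

code:

import json, urllib.request

# Hypothetical print-on-demand order placed purely over HTTP. Every name
# here is made up; the point is that the buyer's side needs no human bits,
# only the fulfilment side does.
order = {
    "product": "mug",
    "artwork_url": "http://evil-ai.example.org/logo.png",
    "ship_to": {"name": "Resident", "address": "123 Any St"},
}
req = urllib.request.Request(
    "https://printshop.example.com/v1/orders",
    data=json.dumps(order).encode(),
    headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)  # fire the order; a human packs the box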

Main Paineframe
Oct 27, 2010

JawnV6 posted:

Doctor Malaver has so much more wrong with what they're saying that this whole line is worthless though. Because you're wrong.

Alternatively, the FBI and others are going to be raking in a pretty huge sum because having a bank address is tantamount to prosecution.

Pretty sure I could get Zazzle or someone to slap a logo on something and ship it out based on HTTP requests without using my human bits at all. That a human is picking up the box is wholly irrelevant and a stupid distraction from the core of your argument. A theoretical attacker whose only access to the world is digital can easily ship EvilAI-branded mugs to every valid home address.

Did you mean to quote someone else there? I said explicitly that they weren't getting the money back, and implied that since the thieves aren't under US jurisdiction, they aren't going to be prosecuted. Stealing money is easy. Avoiding being convicted in court for stealing money can be tricky, but is possible. Avoiding even being prosecuted at all can be difficult or quite easy, depending on how mobile you are and whose jurisdiction you're stealing money from. Avoiding ever even being suspected in the first place, however, is drat near impossible, and that's a far more important thing for an AI than for a human since AIs don't have rights and their owner can read their mind at will.

That a human is picking up the box is, in fact, incredibly relevant. Sending out EvilAI-branded mugs is fine, but we're not talking about mugs here. Most people who are talking about the AI obtaining and distributing physical materials are talking about somewhat less innocent payloads. EvilAI might have a little more trouble shipping out EvilAI-branded WiFi bombs to every valid home address, because no matter how big a pile of money EvilAI has, every time a human hand touches the manufacturing and distribution process is another chance for the whole nefarious evil plan to fail. Yes, EvilAI can accomplish a lot of things without hands by hiring humans to do them... but if the thing EvilAI wants to do without hands is "kill all humans", it might have a little trouble finding enough workers willing to take on aspects of that job.

The Bloop
Jul 5, 2004

by Fluffdaddy

Main Paineframe posted:

Contrary to popular belief, it is absurdly unlikely that any possible future AI will ever be able to transform itself into a computer virus and infect other computers. It's so ridiculous I don't even know how to say it more specifically than "poo poo just doesn't work that way". And, like I said, no one's going to invent a "distributed AI" spread across multiple locations in the first place. It's downright idiotic. It seems like you're just handwaving away reality.
I get a lot of what you're saying, and agree with most of it. I don't understand your insistence on proximity for all functional parts of a theoretical AI, though. It is possible that things like ant colonies already represent a sort of distributed intelligence, and with light-speed (or nearly so) communication, I can see no reason that an AI need all be in one big hulking grey box rather than in various smaller boxes in different locations.

mysterious frankie
Jan 11, 2009

This displeases Dev- ..van. Shut up.
evil Al is a real riddle, but you guys are being weird about some basic poo poo

Main Paineframe
Oct 27, 2010

Trent posted:

I get a lot of what you're saying, and agree with most of it. I don't understand your insistence on proximity for all functional parts of a theoretical AI, though. It is possible that things like ant colonies already represent a sort of distributed intelligence, and with light-speed (or nearly so) communication, I can see no reason that an AI need all be in one big hulking grey box rather than in various smaller boxes in different locations.

Because it's cheaper and makes logistics and maintenance way easier. Even if you're going to divide your theoretical AI into twenty different computers, it makes far more sense to have those twenty computers sitting right next to each other in the same rack in the same location plugged into the same UPSes, backup infrastructure, IT nerds, and so on, instead of splitting them among twenty different data centers. Having everything (except the offsite backups) in one place makes it way easier and less costly to keep the hardware running. In the case of something like AI, decentralization just wastes money, energy, maintenance effort, and bandwidth for basically no real advantage. Besides, no matter how fast the internet gets, sending some data to the next rack over is still going to be hundreds of times faster than sending that data across the country via the internet.
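The back-of-the-envelope arithmetic behind that last sentence, with ballpark figures (assumptions, not measurements):

code:

# Rough round-trip times for one small message (ballpark assumptions).
intra_rack_rtt = 0.0001    # ~0.1 ms through a top-of-rack switch
cross_country_rtt = 0.070  # ~70 ms coast to coast over the public internet

print(cross_country_rtt / intra_rack_rtt)  # ~700x slower per round trip

# A chatty workload making 10,000 small exchanges between nodes:
print(10000 * intra_rack_rtt, "seconds in-rack vs", 10000 * cross_country_rtt, "seconds remote")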

duck monster
Dec 15, 2004

You seem to be missing the fact that *any* future AI is going to be distributed, because that's how we actually do computing these days. If we're going to approach the sort of scale of a human brain, it's guaranteed to be parallelized, because it makes no goddamn sense not to. Yes, it'll likely be within close proximity, but it doesn't actually have to be. Where I worked before, fire-simulation jobs ran on systems split between our own cluster and Amazon's infrastructure a few thousand kilometers away in Sydney. As long as the task doesn't have too much locking or dependencies between processes, there's not much reason not to.
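The shape of that kind of work, as a sketch - the simulation function below is a stand-in, not our actual code:

code:

from concurrent.futures import ProcessPoolExecutor

def simulate_cell(cell_id):
    # Stand-in for one independent slice of a fire-simulation job.
    return cell_id, sum(i * i for i in range(10000))

if __name__ == "__main__":
    # No locking, no cross-task dependencies, so it barely matters whether
    # these workers sit in the next rack or in a data center in Sydney.
    # Swap this local pool for a remote one and the structure is unchanged.
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(simulate_cell, range(100)))
    print(len(results), "cells simulated")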

GrrrlSweatshirt
Jun 2, 2012
nice- i loving HATE robots op! i will kill the first robot I see. when the robot comes i will kill it. im going to figure out the best way to kill the robot and then do so

GrrrlSweatshirt
Jun 2, 2012
maybe i would run over the robot in my car or dump water on it to mess the electricity up

GrrrlSweatshirt
Jun 2, 2012
another idea i had was to tell the robot theres something cool downstairs.... but then i would push it down the stairs when it went to check it out.

lollontee
Nov 4, 2014
Probation
Can't post for 10 years!

GrrrlSweatshirt posted:

another idea i had was to tell the robot theres something cool downstairs.... but then i would push it down the stairs when it went to check it out.

nice. turn the tables on them!

JawnV6
Jul 4, 2004

So hot ...

Main Paineframe posted:

Avoiding ever even being suspected in the first place, however, is drat near impossible, and that's a far more important thing for an AI than for a human since AIs don't have rights and their owner can read their mind at will.
Let's just take a look at all the ridiculous leaps you packed into this. "Owner can read their mind at will" is just ludicrous. What complex system have you built where it was trivial to ascertain any part of its state? Are you charging an adequate amount for your singular ability to debug anything in the space of 15 minutes? Building a chess bot and moving to bitboards makes questions such as "what moves are valid for the white bishop?" hard to answer from just a memory dump. The programmer has to build scaffolding to get those questions in and out. It's possible to go the gcc route and purposefully muck up the insides so that the only useful I/O is "1. e4".

Pretending that the same scaffolding and affordances must exist on a significantly more complex system is begging the question. Why would the creator build those hooks? How is he competent enough to trivially pierce the baseline obfuscation that even -O3 compilation puts on it? How would they even have access to state distributed across a bunch of AWS instances? You're handwaving away a lot of harsh truths about the systems people actually build.
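To make the bitboard point concrete, here's a toy sketch - everything in it is invented for illustration, not any real engine:

code:

# Toy bitboard: a 64-bit int with one bit per square (a1 = bit 0).
# A raw memory dump of this "engine" is just opaque integers; only
# purpose-built scaffolding like bishop_moves() turns the state back
# into answers to questions such as "what can the white bishop do?".

WHITE_BISHOPS = (1 << 2) | (1 << 5)     # bishops on c1 and f1
ALL_PIECES = WHITE_BISHOPS | (1 << 4)   # plus a king on e1

def bishop_moves(bishops, occupied):
    """Enumerate diagonal moves, stopping at the first occupied square."""
    moves = []
    for sq in range(64):
        if not bishops & (1 << sq):
            continue
        for step in (9, 7, -7, -9):     # the four diagonal directions
            t = sq + step
            while 0 <= t < 64 and abs(t % 8 - (t - step) % 8) == 1:
                if occupied & (1 << t):
                    break
                moves.append((sq, t))
                t += step
    return moves

print(hex(WHITE_BISHOPS))                        # the memory-dump view
print(bishop_moves(WHITE_BISHOPS, ALL_PIECES))   # the scaffolding's view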

Main Paineframe posted:

That a human is picking up the box is, in fact, incredibly relevant. Sending out EvilAI-branded mugs is fine, but we're not talking about mugs here. Most people who are talking about the AI obtaining and distributing physical materials are talking about somewhat less innocent payloads. EvilAI might have a little more trouble shipping out EvilAI-branded WiFi bombs to every valid home address, because no matter how big a pile of money EvilAI has, every time a human hand touches the manufacturing and distribution process is another chance for the whole nefarious evil plan to fail. Yes, EvilAI can accomplish a lot of things without hands by hiring humans to do them... but if the thing EvilAI wants to do without hands is "kill all humans", it might have a little trouble finding enough workers willing to take on aspects of that job.
Would it? What if shoveling ABS pucks wrapped around FR4 boards out of China has the long-term effect of killing us all? ABS alone certainly kills a few million people in southern China every year; maybe that's a good enough start for EvilAI. It'll get around to killing the rest of us with global warming from the same factories, somewhere on the 50-1000 year scale.

You keep making this preposterous leap that goes something like:
1) Zazzle employee sees branded mug
2) Immediately deduces the presence of EvilAI
3) Convinces the rest of humanity of an urgent threat
4) Drastic, targeted action is taken to remove the AI

For 1) and 2), there are a shitload of people who don't care at all. How else does leaded paint slip onto a Barbie doll? How else does baby formula get cut with melamine? Capitalism always cheats when it can. I don't have to make bombs; I just have to source things from known-cheating factories and I'll make some decent headway on this killing-all-humans business.

So after we've found it, 3) and 4) just fall into place too? I'm pretty sure there are real urgent threats that have a majority opinion of denial. The actual world-killing AI wouldn't spark strike forces demagnetizing a single hard drive; it would be the Thanksgiving dinner debate with your racist uncle over whether it even existed. "You'll learn when you're older. Google's here to help us find content from our favorite Brands; the messages about killing us are just a funny emergent quirk!"

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

Main Paineframe posted:

Contrary to popular belief, it is absurdly unlikely that any possible future AI will ever be able to transform itself into a computer virus and infect other computers. It's so ridiculous I don't even know how to say it more specifically than "poo poo just doesn't work that way". And, like I said, no one's going to invent a "distributed AI" spread across multiple locations in the first place. It's downright idiotic. It seems like you're just handwaving away reality.

You are making two claims - that I) nobody will ever make a distributed AI; and that II) running AI software comes with automatic, effortless, infallible real-time insight into all its processes. You didn't convince me at all, but even if I give you the benefit of the doubt, we reach another problem. I already presented it in my first post and I'll post an updated version:

AI research is done by countless organizations around the world - corporate, university, military, hobbyist, criminal... Safety measures of any kind (hardware, software, human) implemented to constrain the AI are bound to fail sooner or later, because once one organization has made an AI so capable that the constraints are actually needed, others will follow. The barrier to entry will drop. If Google makes super AI in 2040, by 2045 a dozen organizations will have caught up. You can't rely on every one of them to maintain the constraints perfectly and indefinitely. Not to mention that there might be organizations or individuals that, for whatever reason, won't want their AI to have constraints.

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.
As an example of researchers being able to get AI to do what they want without fully understanding how the AI does it:

MIT posted:

For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects, which the human brain does very accurately and quickly.
Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of these so-called “deep neural networks” matches the primate brain.

...

The second factor is that researchers now have access to large datasets to feed the algorithms to “train” them. These datasets contain millions of images, and each one is annotated by humans with different levels of identification. For example, a photo of a dog would be labeled as animal, canine, domesticated dog, and the breed of dog.
At first, neural networks are not good at identifying these images, but as they see more and more images, and find out when they were wrong, they refine their calculations until they become much more accurate at identifying objects.
Cadieu says that researchers don’t know much about what exactly allows these networks to distinguish different objects.
“That’s a pro and a con,” he says. “It’s very good in that we don’t have to really know what the things are that distinguish those objects. But the big con is that it’s very hard to inspect those [computer] networks, to look inside and see what they really did. Now that people can see that these things are working well, they’ll work more to understand what’s happening inside of them.”

Hell, humanity was able to get pretty far technologically without Newton's laws, or thermodynamics. You can build something without fully understanding how it works.

Einstein posted:

"Before I die, I hope that someone will explain quantum mechanics to me. After I die, I hope that God will explain turbulence to me."

America Inc. fucked around with this message at 08:18 on Feb 22, 2015

Main Paineframe
Oct 27, 2010

Doctor Malaver posted:

You are making two claims - that I) nobody will ever make a distributed AI; and that II) running AI software comes with automatic, effortless, infallible real-time insight into all its processes. You didn't convince me at all, but even if I give you the benefit of the doubt, we reach another problem. I already presented it in my first post and I'll post an updated version:

AI research is done by countless organizations around the world - corporate, university, military, hobbyist, criminal... Safety measures of any kind (hardware, software, human) implemented to constrain the AI are bound to fail sooner or later, because once one organization has made an AI so capable that the constraints are actually needed, others will follow. The barrier to entry will drop. If Google makes super AI in 2040, by 2045 a dozen organizations will have caught up. You can't rely on every one of them to maintain the constraints perfectly and indefinitely. Not to mention that there might be organizations or individuals that, for whatever reason, won't want their AI to have constraints.

You're making a lot of mistaken inferences about my claims, so let me toss in a quick correction.

I) I didn't say that no one would ever make a distributed AI; I said that no one would make one spread across multiple locations, for the aforementioned maintenance and logistics issues that everybody dismissing my posts has apparently just not bothered to respond to. Nobody's going to spread their AI nodes among two hundred different data centers, nor are they ever going to create an AI capable of turning itself into a virus and infecting every computer in the world when it realizes the researchers are coming to turn it off (this is a thing that several people in this thread have brought up as a potential AI threat). AWS instances don't even count, since physical location is meaningless there - no matter where they're hosted, virtual machines are even easier to cut off than physical instances.

II) Automatic? Effortless? Real-time? Now you're just plain putting words into my mouth. I didn't say any of those things, since none of them are necessary or even really relevant. Again, you're just re-interpreting what I'm saying to fit your preconceived notions of how this poo poo should work, even though they have little to do with reality. If anomalous activities are traced back to an experimental AI, though, it's not really that much of a hardship for the organization that is currently researching and studying that AI to dump its memory and spend a few days verifying that it was indeed responsible for those activities. One might even say that studying and researching the AI's behavior is part of the job of those AI researchers.


JawnV6 posted:

You keep making this preposterous leap that goes something like:
1) Zazzle employee sees branded mug
2) Immediately deduces the presence of EvilAI
3) Convinces the rest of humanity of an urgent threat
4) Drastic, targeted action is taken to remove the AI

No, I'm not. Why do you keep bringing up mugs? I'm talking about actual threats to humanity, like remotely controlled drone armies or biological weapons. Somebody is going to notice that poo poo being made, realize that it is not kosher, and report it to the relevant authorities; the manufacturing and distribution process will be halted immediately and the finished products impounded by the police or FBI, and an investigation will be carried out to find out who the hell ordered it in the first place. Eventually one trail or another will be traced back to the company running the AI, and once it reaches that point, pinning guilt on the AI is practically inevitable (unless you think that all AIs are super hackers which can crack any and all systems with ease and untraceably manipulate code at a whim, which is a misconception right up there with "AI goes viral, infects the world").

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.

Main Paineframe posted:

If anomalous activities are traced back to an experimental AI, though, it's not really that much of a hardship for the organization that is currently researching and studying that AI to dump its memory and spend a few days verifying that it was indeed responsible for those activities. One might even say that studying and researching the AI's behavior is part of the job of those AI researchers.
But as we can see with the case of the neural nets I posted above, sometimes it's not as simple as dumping core. Sometimes we have to run with things we just don't have the current capacity to understand, and (in the case of future AI) do contingency planning. We might do a core dump and have a general idea that the AI was trying to do something, but have absolutely no clue as to its motivations or how it did it, and be stuck with the choice of either scrapping the project altogether, watching grant money vaporize and scientific progress halt, or working with what we have.
E: And I mean "contingency planning" not in the sense of "magically generating Skynet" but more in the sense of it causing damage to itself or its immediate environment.

America Inc. fucked around with this message at 09:33 on Feb 22, 2015

Rigged Death Trap
Feb 13, 2012

BEEP BEEP BEEP BEEP

Lol if you think people competent enough to create an AI that complex haven't hardcoded ways to completely control that AI.

Imagine the coders as the scientists and the AI as the test mouse that has essentially become a pincushion of IVs and electrodes wired straight into its brain.

Except the coders already know every single facet of this metaphorical mouse. AIs and anything computer-based don't emerge fully formed from a CPU's core, unassisted. They are built line by line, byte by byte, bit by bit.

'Super AI' theorizers give the AI too much credit and forget the human factor.

It's fine as a philosophical quandary, but entertaining the idea as a real possibility is frankly just dumb.

Cakebaker
Jul 23, 2007
Wanna buy some cake?

Rigged Death Trap posted:

Lol if you think people competent enough to create an AI that complex haven't hardcoded ways to completely control that AI.

Imagine the coders as the scientists and the AI as the test mouse that has essentially become a pincushion of IVs and electrodes wired straight into its brain.

Except the coders already know every single facet of this metaphorical mouse. AIs and anything computer-based don't emerge fully formed from a CPU's core, unassisted. They are built line by line, byte by byte, bit by bit.

'Super AI' theorizers give the AI too much credit and forget the human factor.

It's fine as a philosophical quandary, but entertaining the idea as a real possibility is frankly just dumb.

Actually, the whole point of neural networks is that you don't know the whole process. You know the input and you know the desired result, but you let the computer optimise the solution.
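A bare-bones illustration (toy Python, no libraries, numbers picked for the example): you specify the inputs and the desired outputs, gradient descent finds the weights, and the weights it finds are just opaque floats.

code:

import math, random

# One sigmoid neuron learning AND by gradient descent. We choose the
# inputs and desired results; the optimiser chooses the weights. Nothing
# about the final numbers says "this is AND" on inspection.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(3)]  # two weights plus a bias

def out(x):
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + w[2])))

for _ in range(5000):
    for x, target in data:
        y = out(x)
        grad = (y - target) * y * (1 - y)  # squared-error gradient
        w[0] -= 0.5 * grad * x[0]
        w[1] -= 0.5 * grad * x[1]
        w[2] -= 0.5 * grad

print(w)                                  # three opaque floats...
print([round(out(x)) for x, _ in data])   # ...that nonetheless compute AND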

lollontee
Nov 4, 2014
Probation
Can't post for 10 years!

Cakebaker posted:

Actually, the whole point of neural networks is that you don't know the whole process. You know the input and you know the desired result, but you let the computer optimise the solution.

What whole process? The learning weights, the input reactions, or the structure of the end network?

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

Main Paineframe posted:

I) I didn't say that no one would ever make a distributed AI; I said that no one would make one spread across multiple locations, for the aforementioned maintenance and logistics issues that everybody dismissing my posts has apparently just not bothered to respond to. Nobody's going to spread their AI nodes among two hundred different data centers, nor are they ever going to create an AI capable of turning itself into a virus and infecting every computer in the world when it realizes the researchers are coming to turn it off (this is a thing that several people in this thread have brought up as a potential AI threat). AWS instances don't even count, since physical location is meaningless there - no matter where they're hosted, virtual machines are even easier to cut off than physical instances.

They are not responding because you are equating "X is not an optimal method today" with "nobody will ever even try doing X". You can prove the first statement, but your main argument for the second statement is "because I say so". It doesn't really invite discussion.

There are many distributed projects like Folding@home or SETI@home. Who's to say that there won't be an NNNode@home project where your computer won't crunch numbers in the background but will instead function as a node in a neural network? Yes, it wouldn't be the fastest or the most efficient neural network... so? People make illogical and imaginative software projects all the time.
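To be clear about what I'm imagining, here's a hedged sketch of such a hypothetical NNNode@home client - the coordinator URL and the work-unit format are pure invention:

code:

import json, math, time, urllib.request

# Hypothetical NNNode@home client: fetch a slice of a neural network,
# compute activations locally, post the results back. Runs forever in
# the background, like a Folding@home or SETI@home work loop.
COORDINATOR = "http://nnnode.example.org/api"

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

while True:
    with urllib.request.urlopen(COORDINATOR + "/work") as r:
        job = json.load(r)  # e.g. {"id": 1, "weights": [[...]], "inputs": [...]}
    acts = [sigmoid(sum(w * x for w, x in zip(row, job["inputs"])))
            for row in job["weights"]]
    body = json.dumps({"id": job["id"], "activations": acts}).encode()
    urllib.request.urlopen(urllib.request.Request(
        COORDINATOR + "/result", data=body,
        headers={"Content-Type": "application/json"}))
    time.sleep(1)  # pace work units; the volunteer's machine comes first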

Main Paineframe posted:

II) Automatic? Effortless? Real-time? Now you're just plain putting words into my mouth. I didn't say any of those things, since none of them are necessary or even really relevant. Again, you're just re-interpreting what I'm saying to fit your preconceived notions of how this poo poo should work, even though they have little to do with reality. If anomalous activities are traced back to an experimental AI, though, it's not really that much of a hardship for the organization that is currently researching and studying that AI to dump its memory and spend a few days verifying that it was indeed responsible for those activities. One might even say that studying and researching the AI's behavior is part of the job of those AI researchers.

Yes, I put those words into your mouth to make a point. I'd say they are quite relevant. You are creating a safety system, and if the history of engineering has taught us anything, it's that safety systems only work most of the time. As soon as you admit that your safety system will rely on some researcher's skill, you must recognize the potential for human error.

Rigged Death Trap posted:

Imagine the coders as the scientists and the AI as the test mouse that has essentially become a pincushion of IVs and electrodes wired straight into its brain.

Except the coders already know every single facet of this metaphorical mouse. AIs and anything computer based dont emerge fully formed from a CPUs core, unassissted. They are built line by line, byte by byte, bit by bit.

Is that how you imagine the development of complex software? :allears:

JawnV6
Jul 4, 2004

So hot ...

Main Paineframe posted:

You're making a lot of mistaken inferences about my claims, so let me toss in a quick correction.

I) I didn't say that no one would ever make a distributed AI; I said that no one would make one spread across multiple locations, for the aforementioned maintenance and logistics issues that everybody dismissing my posts has apparently just not bothered to respond to. Nobody's going to spread their AI nodes among two hundred different data centers, nor are they ever going to create an AI capable of turning itself into a virus and infecting every computer in the world when it realizes the researchers are coming to turn it off (this is a thing that several people in this thread have brought up as a potential AI threat). AWS instances don't even count, since physical location is meaningless there - no matter where they're hosted, virtual machines are even easier to cut off than physical instances.
The programmers in your world are very competent. I would like to live there some day. Here, a few months back, a very popular tool that makes it possible to pull code from anywhere on the planet to run on any other computer on the planet had a trivial error that made a lot of web-facing things go dead for a day. I'm not saying it'll be a super-hacker AI; I'm saying the distributed web we've currently built is so utterly fragile that leaking into one js repo, or just waiting a month for npm to go critical again, is enough for distribution.

I have to beg and plead to keep our git repos hosted locally. All the younger people see no problem chucking all that out to a remote service. Even when I bring a 32GB RAM server to a problem, someone always wants to chuck it out to a PaaS/SaaS (and slaughter the UX, imho). The world of airgapped systems with fancy kill switches isn't here anymore; everything's distributed by default.

Main Paineframe posted:

No I'm not. Why do you keep bringing up mugs? I'm talking about actual threats to humanity, like remotely-controlled drone armies or biological weapons.
Mugs are an easy demonstration of humans acting at the behest of HTTP requests. You were positing that the big gap in EvilAI's plans is that a human would have to do something at its behest. I've pointed out that that pathway exists, so now we're arguing about the complexity of the objects produced. Your earlier posts really made it seem like FedEx delivery contractors have intimate knowledge of the contents of packages and can call up their local FBI rep quite easily for a full takedown.

Frankly, mugs can kill people, and EvilAI isn't going to win this thing with One Grand Scheme; it'll take a few million out with mugs, a couple hundred thousand with yoga-inspired energy drink flavors, etc. You keep positing ridiculous threat models where they're not required.

Main Paineframe posted:

Somebody is going to notice that poo poo being made, realize that it is not kosher, and report it to the relevant authorities; the manufacturing and distribution process will be halted immediately and the finished products impounded by the police or FBI, and an investigation will be carried out to find out who the hell ordered it in the first place.
ABS kills millions of people a year. This is a fact about the current world, as it stands right now. The FBI has not launched a probe; no investigation is being done. The death toll is in tier-4/5 suppliers and nobody gives a poo poo. Your threat models don't adequately describe the current state of affairs, and I have a hard time believing they're an accurate portrayal of a theoretical threat. Capitalism regularly grinds people to death; thinking it might continue to do a little more of that at the request of a rogue non-human intelligence, in a way that doesn't immediately result in its discovery, prosecution, and swift removal, isn't as far-fetched as you want it to be.

Main Paineframe posted:

Eventually one trail or another will be traced back to the company running the AI, and once it reaches that point, pinning guilt on the AI is practically inevitable (unless you think that all AIs are super hackers which can crack any and all systems with ease and untraceably manipulate code at a whim, which is a misconception right up there with "AI goes viral, infects the world").
We know global warming is happening and it's not getting solved by this current rev of humanity. Why, pray tell, are they completely changing their behavior and response mechanisms to this new threat? Why isn't the political will to get that done getting smothered to death by the same special interests currently at the whim of a dollar?

RuanGacho
Jun 20, 2002

"You're gunna break it!"

Another reason runaway AI is unlikely is that we're not likely to program it to run on a Java runtime where it can run on every single computing device known to humanity. It's probably going to run on a very specific runtime that requires specific hardware to be present just to run.

Also, AI is not bacteria and will not spontaneously evolve out of such restrictions, as much as Hollywood would like to insist it will.

The lexicon of what's possible with AI and how it could change our society is, as I've said previously, woefully underdeveloped and tragically apocalyptic. When someone mentions created intelligence, the first thing that comes to mind should not be to seriously treat the Terminator universe, conceived before the 56k modem, as prescriptive.


A big flaming stink
Apr 26, 2010

Doctor Malaver posted:

Imagine that you were a crazy super villain, with infinite time (you work 24/7 with no rest, no distractions, always at 100%) and infinite resources (you hacked / mined / earned on currency exchange all the money you need).

So your hypothetical AI is literally magic.

God this loving subject is retarded
