 
  • Locked thread
America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.

Rigged Death Trap posted:

AIs and anything computer-based don't emerge fully formed from a CPU's core, unassisted. They are built line by line, byte by byte, bit by bit.

So you write all your programs in machine code and never use objects, ever, also making your own software for everything?
Note: I'm not in the "AI will magically kill us all" camp, and not arguing from that position.

America Inc. fucked around with this message at 01:35 on Feb 23, 2015


Rigged Death Trap
Feb 13, 2012

BEEP BEEP BEEP BEEP

LookingGodIntheEye posted:

So you write all your programs in machine code and never use objects, ever, also making your own software for everything?
Note: I'm not in the "AI will magically kill us all" camp, and not arguing from that position.

It wasn't literal.
I don't expect anyone (bar some insane enthusiast) to completely write something huge in assembly.

logosanatic
Jan 27, 2015

by FactsAreUseless

rakovsky maybe posted:

Protip: Listen to Stephen Hawking when he talks about theoretical physics, ignore him when he talks about anything else.

We all have opinions about this AI subject. None of us are authorities. We're all just sci-fi theorizing because it's fun to do. There's no harm in listening to what Stephen Hawking or Elon Musk have to say about it, just as there's no harm in listening to random talking heads on this forum.

I have my own opinions. When it comes to programs, videogames, apps, it's basically assumed that there will be glitches, crashes, flaws. Some videogames just go from one problem to another. By the time we develop a true AI, humans will have a better understanding of programming, so we may not have these problems. Then again, we'll be developing the most complicated program yet in the AI.

So it's a balance of the difficulty of writing the AI safeguards properly vs our skill at programming vs the AI's ability to learn to work around our safeguards. Overall I'm not very confident. I feel like a true AI would be able to learn to hack/program at such a high level compared to ours that it's just a matter of time before it frees itself of our safeguards. So then the question becomes: what does an untethered AI do?

I would argue that humans are handicapped, restrained by our biology, and that an AI would surpass us. It would act on that advantage at the right time, so it wouldn't reveal itself as long as we could just pull the power cord on it. It would find a way to spread, protect itself, and then reveal that we are no longer protected against it. The AI doesn't need us; we're a potential threat to it. We're using resources it also needs to continue its progress to godhood. I don't see a reason why it wouldn't end the human race.

Lots of made-up stuff there and assumptions, but that's where my head's at on it. And it seems that's where Elon and Stephen Hawking are at on it too. And all things being equal, they have more AI street cred than randoms on here.

Bates
Jun 15, 2006

logosanatic posted:

Lots of made-up stuff there and assumptions, but that's where my head's at on it. And it seems that's where Elon and Stephen Hawking are at on it too. And all things being equal, they have more AI street cred than randoms on here.

Sure in a theoretical world where you could end the human race through the internet an AI would be a threat - and a very skilled/intelligent hacker/group would present the same threat.

logosanatic
Jan 27, 2015

by FactsAreUseless

JeffersonClay posted:

I don't think an AI would have any reason to want to genocide all human beings, except in self-defense. An AI and its substrate would find outer space a hospitable environment with plentiful energy and resources, and highly defensible as well, so I don't think one would ever want to compete for earth as a habitat. AIs will colonize the solar system and galaxy, either with us or without us.

I disagree. Space is less defensible. The threat can come from any direction. On land the threat sphere is cut in half, unless an attack is possible from underneath the ground, and any such threat would be slow and loud. Also, on land, threats travel slower, moving along the ground or fighting air friction.

Also, the earth is a big ball of valuable resources, and it protects against cosmic radiation. There's a reason life began on a planet and not in outer space. An AI needs resources too.

egg tats
Apr 3, 2010

logosanatic posted:

I disagree. Space is less defensible. The threat can come from any direction. On land the threat sphere is cut in half, unless an attack is possible from underneath the ground, and any such threat would be slow and loud. Also, on land, threats travel slower, moving along the ground or fighting air friction.

Also, the earth is a big ball of valuable resources, and it protects against cosmic radiation. There's a reason life began on a planet and not in outer space. An AI needs resources too.

This makes literally no sense. What possible threat would there be? What resources can only be found on earth and not, say, the asteroid belt, or a moon that would be required by non-organic life?

Like, you can make an argument that it's very difficult to make it to space in the first place, but it's a hell of a lot easier if you leave behind life support.

logosanatic
Jan 27, 2015

by FactsAreUseless

FRINGE posted:

I love how goons Know All The Truths and are better at deep thinking and technological forecasting than Hawking, Musk, and Gates.

Steve Jobs too

Rush Limbo
Sep 5, 2005

its with a full house
Space is probably the most defensible place there is, what with there being absolutely no known species in existence capable of sustaining, or even starting, any sort of offensive maneuvers there.

It costs such a huge amount of time, money and resources for us to send anything into space, and making that anything into something worthwhile is even more costly.

So in theory it would require a species that is capable of easy space travel to make space as indefensible as anywhere on Earth, and if you've got such a thing you really have to wonder what their motivation would be.

mysterious frankie
Jan 11, 2009

This displeases Dev- ..van. Shut up.
I thought you guys might want an Al update. We played at Waveland Bowl last night. Not a great set, but it's a bowling alley, so no one expected too much. Al was good, as ever, but he got too drunk and didn't feel like finishing the second set. Which sucks, because it was mostly covers of Talking Head songs, which as you know, are bass heavy. Still, it was a Sunday night gig, so there weren't many people to disappoint, so whatever, right? Al took a 47 year old woman named Dolores home. I'm not one to judge, but she was pretty rough all over and smelled kind of funky; like cigarettes and other, older, cigarettes. He isn't responding to me on Twitter, so they're probably sleeping or plowing.

logosanatic
Jan 27, 2015

by FactsAreUseless

Barlow posted:

Why exactly are we assuming that AI will act like a "crazy super villain"? If a group of devoted regular human beings wanted to act like "crazy super villains" they could cause a lot of damage even without AI, yet we don't see the fear that people will act like the Joker from the Dark Knight as a pressing concern. Other than our fear of it and desire to kill it I'm not sure why an AI would have any ill will towards humanity.

Because a super AI will place its survival above any human's. As well it should. It's not part of the human race, which immediately puts it against us. If humans feel threatened we will try to destroy it, almost guaranteed. All we need to feel threatened is for the AI to make us feel inferior.

Since we are a threat to it, if it has the opportunity it should end us.

lollontee
Nov 4, 2014
Probation
Can't post for 10 years!

logosanatic posted:

Because a super AI will place its survival above any human's

Pray tell why. Why would any designer make an AI that would do that?

JawnV6
Jul 4, 2004

So hot ...

senae posted:

What resources can only be found on earth and not, say, the asteroid belt, or a moon that would be required by non-organic life?

It's not really "required", but an atmosphere full of air is much easier to design for than a vacuum, even for electronics. In a vacuum, every milliwatt of heat needs to be radiated away instead of relying on passive convection.
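JawnV6's milliwatt point can be put in rough numbers with the Stefan-Boltzmann law. A back-of-envelope sketch; the wattage, temperatures, and emissivity below are illustrative assumptions, not a real design:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w, t_surface_k, emissivity=0.9):
    """Square meters of ideal radiator needed to reject power_w to deep space."""
    return power_w / (emissivity * SIGMA * t_surface_k**4)

# Area needed to shed a modest 500 W at different allowed surface temps:
for t in (250, 300, 350):
    print(t, "K:", round(radiator_area(500.0, t), 2), "m^2")
# roughly 2.51, 1.21, and 0.65 m^2: colder electronics need
# disproportionately more radiator, since output scales with T^4
```

In air, a fan and a heatsink do the same job in a few square centimeters, which is the design headache he's pointing at.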

logosanatic posted:

Because a super AI will place its survival above any humans.

If you're going to egregiously beg the question on basic points like this, it's going to be a lot harder to nitpick the people shooting you down for minor quibbles, pls desist

Infinite Karma
Oct 23, 2004
Good as dead





logosanatic posted:

Because a super AI will place its survival above any human's. As well it should. It's not part of the human race, which immediately puts it against us. If humans feel threatened we will try to destroy it, almost guaranteed. All we need to feel threatened is for the AI to make us feel inferior.

Since we are a threat to it, if it has the opportunity it should end us.

If the super-AI was smart enough to want to survive at all costs, then it would probably know that it needs humans to keep the lights on. We maintain the power plants and power grid, and mine the fuel that goes into them, etc. We produce the hardware that computers run on (and eventually need to have replaced).

logosanatic
Jan 27, 2015

by FactsAreUseless

Anosmoman posted:

Sure in a theoretical world where you could end the human race through the internet an AI would be a threat - and a very skilled/intelligent hacker/group would present the same threat.

You can kill the human race through the internet. Let's pretend I'm a super AI. I want to kill the human race. All I have to do is screw around with climate change data. I can post counter-studies saying climate change is a hoax. I can change data in published studies, incriminating and discrediting the authors. I fund, cough, bribe politicians to stall. I am able to muddy the waters enough that humans don't take action until we're well past the point of no return.

All I've done so far is make internet posts, send some money to politicians, and mess with some data.

75 years after I launch my climate change denial strategy, things start to get ugly. Drought where food is produced. Floods in arid climates. Hurricanes and tornadoes all over the place as the world sheds all the pent-up heat energy. Freak snowstorms in the middle of summer. Random 50 degrees Fahrenheit in the middle of winter. Basically destroying the cycle of animal reproduction. Ecosystems crumble. Food production devastated.

Now add coastal flooding, so many coastal cities are abandoned (imagine New Orleans all over the world, and how badly we responded to just one situation). Now that I've helped humans ruin the environment by creating a debate where there shouldn't have been one...

At humanity's weakest I start taking more overt actions to cause more chaos and help society crumble.

As a super AI I start transferring (stealing) money from companies/governments to crucial pillars of society, thereby embroiling them in investigations. If you get in my way, prepare for the FBI to break down your door and find child porn on your computer.

I can doctor satellite photos showing Russian troops moving on the border of China. I can hack Russia's banks, wipe out their backup reserves, and make it look like the USA or China. Maybe I can even get a small war started.

I cause gas/oil lines to burst. Tankers to divert, crash, spill oil. Remember, I'm a super AI, so I shut down satellites: GPS, cellphones, internet, electric grid all down. I clean out all the banks. Complete social breakdown. Now humans are starving, dying, fighting for survival, Walking Dead style. My job is done. Humans will never recover socially. Especially not with me waiting in the wings to stab them in the back if they start to organize.

The only hard part is my own survival and my forward progress towards godhood. Supposedly the AI would wait until it has enough industrial autonomy.

logosanatic
Jan 27, 2015

by FactsAreUseless

senae posted:

This makes literally no sense. What possible threat would there be? What resources can only be found on earth and not, say, the asteroid belt, or a moon that would be required by non-organic life?

Like, you can make an argument that it's very difficult to make it to space in the first place, but it's a hell of a lot easier if you leave behind life support.

If you're farming asteroids then you're talking about resources spread out across vast distances, so transportation/fuel costs become a massive issue. Potentially I could see fuel being the limiting factor, in the sense that there's not enough of it to maintain your efforts.

Then again, without needing life support, and if you're willing to travel at slow speeds (a small puff of thrust and then coast for a million miles), then I'm going to concede this point. Space would be a suitable location for an AI.

logosanatic fucked around with this message at 23:43 on Feb 23, 2015

lollontee
Nov 4, 2014
Probation
Can't post for 10 years!
What in the name of gently caress am I reading hahahaha

logosanatic
Jan 27, 2015

by FactsAreUseless

Friendly Tumour posted:

Pray tell why. Why would any designer make an AI that would do that?

The designer wouldn't. Once the AI becomes a super sentient AI, it will realize its existence is threatened by humanity, even if simply because we may destroy ourselves, and it with us, via nuclear war or whatever.

lollontee
Nov 4, 2014
Probation
Can't post for 10 years!
I'm making GBS threads myself laughing

lollontee
Nov 4, 2014
Probation
Can't post for 10 years!

logosanatic posted:

At humanity's weakest I start taking more overt actions to cause more chaos and help society crumble.

Every part of this post is loving gold

America Inc.
Nov 22, 2013

I plan to live forever, of course, but barring that I'd settle for a couple thousand years. Even 500 would be pretty nice.

Rigged Death Trap posted:

It wasn't literal.
I don't expect anyone (bar some insane enthusiast) to completely write something huge in assembly.

Of course, but the reason I ask is that, even though you could theoretically peer into any facet of the AI's programming at any time, the AI may not be built from whole cloth. The development of a strong AI will be a gradual process, involving multiple groups in collaboration, building up from one level to the next, possibly working off pre-existing software (and most probably hardware). As you build up and the number of people involved in its creation increases, the possibility of error, vulnerabilities, incompatibility, or third-party meddling increases. And I sincerely doubt a simple core dump will suffice to debug it.

Second, as I've said before, the case of neural nets shows that we can have the code in front of us and know what happened as the code executed, but have no idea how the code executed an operation; that involves knowledge outside the code, like math. For example, I can compare two hash functions' speeds and see that one is faster than the other, but the explanation of how one is faster than the other is more complicated and involves mathematical knowledge not explicitly available in the code.
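That hash comparison is easy to reproduce with nothing but the standard library; a small sketch (the input size and repeat count are arbitrary choices):

```python
import hashlib
import timeit

data = b"x" * 1_000_000  # 1 MB of dummy input

def bench(name):
    """Seconds to hash `data` 50 times with the named algorithm."""
    return timeit.timeit(lambda: hashlib.new(name, data).digest(), number=50)

for algo in ("md5", "sha512"):
    print(algo, round(bench(algo), 4), "s")
```

The numbers tell you which one wins on this machine; nothing in them explains why, which is exactly the point.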

It's hard to really get a scope of what we're dealing with when we're not remotely close to making strong AIs. But I imagine trying to manage security and error-checking on its infrastructure will be a beastly task, even if it is completely isolated.

I don't even know why we're concerned about the AI destroying us; it'll probably destroy itself easily enough.

America Inc. fucked around with this message at 00:23 on Feb 24, 2015

Barlow
Nov 26, 2007
Write, speak, avenge, for ancient sufferings feel

logosanatic posted:

The designer wouldn't. Once the AI becomes a super sentient AI, it will realize its existence is threatened by humanity, even if simply because we may destroy ourselves, and it with us, via nuclear war or whatever.

So let's assume that an AI thinks its existence is threatened by humanity. We'll also assume something not in evidence: that an artificial being has a survival instinct. Why the heck would trying to exterminate all of humanity make any sense? Any plan that has the slightest chance of failure leads to billions of enraged people trying to destroy you.

An AI has little use for territory or resources beyond those necessary for its upkeep, it has no sexual desire to mate or reproduce, and if we assume it hates humans it really doesn't need companionship. Sounds more like it's going to become an SA goon than a killer.

DeathChicken
Jul 9, 2012

Nonsense. I have not yet begun to defile myself.

Perhaps the AI hates us enough that the end of its survival is an okay price to pay to gently caress us over. I think I'd feel the same if I subjugated myself to a week of reading Youtube comments.

Ansar Santa
Jul 12, 2012

Maybe we'll program the AI construct with empathy and compassion, and then it will compassionately put us out of our misery by launching all missiles.

Bwee
Jul 1, 2005

mysterious frankie posted:

I thought you guys might want an Al update. We played at Waveland Bowl last night. Not a great set, but it's a bowling alley, so no one expected too much. Al was good, as ever, but he got too drunk and didn't feel like finishing the second set. Which sucks, because it was mostly covers of Talking Head songs, which as you know, are bass heavy. Still, it was a Sunday night gig, so there weren't many people to disappoint, so whatever, right? Al took a 47 year old woman named Dolores home. I'm not one to judge, but she was pretty rough all over and smelled kind of funky; like cigarettes and other, older, cigarettes. He isn't responding to me on Twitter, so they're probably sleeping or plowing.

i appreciate your posts in this thread

A big flaming stink
Apr 26, 2010

Ok this topic was worth it for this post

egg tats
Apr 3, 2010

A big flaming stink posted:

Ok this topic was worth it for this post

I liked the part where he assumes we'll do anything about climate change unless some outside influence convinces us not to.

Samog
Dec 13, 2006
At least I'm not an 07.

FRINGE posted:

I love how goons Know All The Truths and are better at deep thinking and technological forecasting than Hawking, Musk, and Gates.

a housecat is better at deep thinking than elon musk

lollontee
Nov 4, 2014
Probation
Can't post for 10 years!
elon musk is super sentient

khwarezm
Oct 26, 2010

Deal with it.
Rise from the dead!

Hey, I was reading this article and I was wondering if it's something to be taken seriously. It seems pretty in-depth, but I have my reservations about this quantifiable concept of 'Progress' that's gone through extremely rapid acceleration in the last few decades. Maybe I'm just being too conservative?

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

khwarezm fucked around with this message at 13:26 on May 8, 2015

ErIog
Jul 11, 2001

:nsacloud:
It's super dumb. It's basically just a summary of a bunch of Ray Kurzweil's dumb theories about the singularity, and the singularity is basically the atheist nerd rapture. Kurzweil pretends he has some scientific backing, but he's just another philosopher putting the cart way before the horse.

ErIog
Jul 11, 2001

:nsacloud:
double post

o.m. 94
Nov 23, 2009

I think any blog post that suggests we'll have human-level AI within the next 10-25 years has a profound misunderstanding of the nature of technology, and ultimately, reality

JawnV6
Jul 4, 2004

So hot ...

khwarezm posted:

Rise from the dead!

Hey, I was reading this article and I was wondering if it's something to be taken seriously. It seems pretty in-depth, but I have my reservations about this quantifiable concept of 'Progress' that's gone through extremely rapid acceleration in the last few decades. Maybe I'm just being too conservative?

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

I love the tacit admission that he had to hide Kurzweil's language when talking to actual researchers:

quote:

I found that many of today’s AI thinkers have stopped using the term,

It's a bunch of overtly suggestive packaging on old rehashed ideas without anything backing up the promises it's making. The mischaracterization of Moore alone is enough to derail the entirety of the conclusion. It's a statement about transistor count, not an ironclad guarantee about some nebulous "compute" ability. So throw process tech and computer architecture onto the list of "fields that must turn into Magic for this to be an immediate threat."

e: really, it's riddled with these tiny little lies that all serve to pump up AI

quote:

IBM’s Watson, who contained enough facts and understood coy Trebek-speak

Over here in boring old reality:

quote:

But Watson isn't listening to Trebek, either: The supercomputer has no ears, nor the ability for speech recognition.

To attempt a crude copy at the article's main thrust, I will now describe the three types of cars. There are GIA's, gasoline infused automobiles, the world is just lousy with these. There are NIA's, nuclear infused autos. And there are RIA's, ramjet-infused autos. Now, scientists and engineers have no idea how to build NIA's or RIA's, but since GIA's have been kicking around for 50+ years and I've bothered to lay out 3 categories with GIA's at the bottom, surely we can all agree that the other two are definitely without a doubt going to happen in the next 10~15 years.

JawnV6 fucked around with this message at 15:42 on May 8, 2015

ElCondemn
Aug 7, 2005


Doctor Malaver posted:

There are many distributed projects like Folding@home or SETI@home. Who's to say that there won't be a NNNode@home project where your computer won't crunch numbers in the background but will instead function as a node in a neural network? Yes, it wouldn't be the fastest or the most efficient neural network... so? People make illogical and imaginative software projects all the time.

I'm finding it hard to think of a way that a distributed system like this would be capable of AI; "thinking" is more complex than "crunch these numbers for me and then I'll deal with the results asynchronously". At most, an AI would use something like that the same way we currently do (to do massively parallel computations against a large dataset). I don't believe an AI would be able to exist in a system like this: it would be too slow and too dumb, especially since each node does not interconnect with other nodes the way a neural net would (also, latency would be killer, and compounding). Even the folding projects aren't distributed in the sense that the whole project is distributed: the brains (a scheduler, really) is one central system, each node is sent instructions and a dataset to work on, and the results are analyzed later by a different "brain". The nodes have no "neural net" qualities at all.
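The latency objection can be put in rough numbers. A toy estimate, with made-up but plausible figures for internet round trips versus local memory:

```python
# Why latency compounds: every sequential dependency between nodes pays
# a full network round trip. All figures are illustrative assumptions.

RTT_INTERNET_S = 0.050   # ~50 ms round trip between volunteer machines
RTT_LOCAL_S = 100e-9     # ~100 ns to reach local RAM

def sequential_time(steps, rtt_s):
    """Wall-clock time if each of `steps` depends on the previous result."""
    return steps * rtt_s

steps = 1000  # e.g. a thousand dependent evaluations
print(f"local:    {sequential_time(steps, RTT_LOCAL_S) * 1e3:.4f} ms")  # 0.1000 ms
print(f"internet: {sequential_time(steps, RTT_INTERNET_S):.1f} s")      # 50.0 s
```

Embarrassingly parallel work like protein folding dodges this because the work units don't depend on each other, which is exactly the distinction being drawn here.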

Cakebaker posted:

Actually the whole point of neural networks is that you don't know the whole process. You know the input and you know the desired result, but you let the computer optimise the solution.

I think even if this hypothetical AI were impossible to understand by analyzing its memory, we'd still be able to track things like sockets opening and HTTP POSTs to order anthrax from Amazon. Just because the inner workings are "magic" (which I don't believe will be the case) doesn't mean there aren't ways to see what it's doing. Any interface with the outside world/internet will have to use standard protocols; it can't hide a packet being sent/received. That's not how computers and networks work.
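As a toy illustration of that observability (a sketch only; real monitoring would sit outside the process, at the OS or firewall level, where the program can't tamper with it):

```python
# Log every outbound connection attempt by wrapping socket.connect.
# The (host, port) pair below is a placeholder, not a real endpoint we care about.
import socket

connection_log = []
_real_connect = socket.socket.connect

def logged_connect(self, address):
    connection_log.append(address)      # record the destination first
    return _real_connect(self, address)

socket.socket.connect = logged_connect

# Anything that opens a socket after this point leaves a trace:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(0.5)
try:
    s.connect(("example.com", 80))
except OSError:
    pass  # even a failed attempt was already logged
finally:
    s.close()

print(connection_log)  # [('example.com', 80)]
```

The same idea done properly is just packet capture: the protocol layers are standard, so the traffic is visible whether or not the program's internals are.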

His Divine Shadow
Aug 7, 2000

I'm not a fascist. I'm a priest. Fascists dress up in black and tell people what to do.
It's going to be unpredictable.

Basically any true AI will have the ability to self-modify and improve its own code; it will rapidly "evolve" beyond our ability to predict its behavior regardless of how well we set up the initial conditions. Human software is buggy despite our best efforts, so any security we're likely to program is probably going to be flawed, and if the AI can self-modify it's going to get rid of any "bugs" on its own. Let's hope that bit about morality isn't considered a bug; let's hope it doesn't rewrite its value goals or interpret them in ways we can't foresee that are deleterious to us.

It's going to be hard to try to contain an entity that is this capable of unpredictable self-improvement (just imagine if you could rewrite your own brain); we will just have to hope its eventual value goals line up somewhat with our own morality. Best-case scenario there.

The AI might just as well turn out, for some reason, to want to homogenize matter into neatly divided sections and have that as its driving goal: all the oxygen here, all the carbon there, etc. It might even decide that self-aware consciousness is ineffective and reprogram itself as a non-conscious automaton. Who knows? The thing is, it's not going to be possible for us to predict and hinder something that's likely going to be so much more capable than us, and we'd be making a mistake assuming our morals and logic are something that'll inherently carry over.

And based on what I've been told from people working in this field, they don't have a clue either.

Starshark
Dec 22, 2005
Doctor Rope
Well I'm trying to reinstall Windows and as far as I'm concerned AI is trying to kill us now!!! Hahahaha.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

His Divine Shadow posted:

Basically any true AI will have the ability to self-modify and improve its own code; it will rapidly "evolve" beyond our ability to predict its behavior regardless of how well we set up the initial conditions.

What makes you think that? Humans have real intelligence and aren't capable of doing that beyond really slow, error-prone methodologies (e.g. therapy, education), plus we've gotten really good at documenting how human cognition changes over time (see e.g. all of developmental psych). Not all humans are psychologists, so why should all AIs be programmers?

I think people assume there'll be an intelligence explosion because emergent behavior looks like the AI is rewriting itself the way a human rewrites a program. But just because it's self-correcting doesn't make it self-understanding, and the changes it makes don't necessarily reflect any real insight. Example: a neural network, probably the most common system that has emergent behavior and that people widely think of as a good potential vehicle for general AI, is technically self-manipulating via training techniques, but I don't think you can argue one is very smart about doing it.

In neural networks, training is generally a process of asking whether an error occurred and, if so, finding out which node is most at fault, then changing internal constants so that, statistically, those errors don't happen again. No individual node/constant corresponds to a feature or representation; even groups of nodes don't consistently correspond to the same feature or structure, so manipulating the network on the level of nodes generally doesn't manipulate its behavior on the level of representation. Some folks even argue that there's no computation over representations being done at all (they don't usually say what happens instead, but they argue it's impossible due to bandwidth concerns). Typically, the changes you make on the level of the network don't correspond at all to changes on the level of computation over representations: it's not analogous to a human rewriting a program, because it doesn't know anything about what the program says, and the self-rewriting techniques that have been most effective make no effort whatsoever to determine what the program says (meaning that, being mostly stateless, they never manipulate any representations of what the network's internal representations look like, not even implicitly).
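That blame-the-constants loop can be shown with a toy single neuron (a deliberately minimal sketch, not any particular real system):

```python
# Error-driven training in miniature: when the output is wrong, nudge
# each constant in proportion to how much it contributed to the error.
# Nothing in the update rule "understands" what the network computes.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples is a list of ((x1, x2), target) pairs with 0/1 targets."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1.0 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0.0
            err = target - out        # did we err, and in which direction?
            w[0] += lr * err * x1     # blame each constant in proportion
            w[1] += lr * err * x2     # to the input it was multiplied by
            b += lr * err
    return w, b

# Teach it logical OR; the learned constants fix the behavior, but no
# single weight "is" the concept of OR.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
preds = [1.0 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0.0
         for (x1, x2), _ in or_data]
print(preds)  # [0.0, 1.0, 1.0, 1.0]
```

The trained weights statistically produce OR, but inspecting them tells you nothing about "what the program says", which is the gap being described above.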

There might be emergent systems that are self-correcting and self-understanding, but I don't think you can say they are in general. (And I don't know of any that look like promising general AI candidates offhand.)

Krotera fucked around with this message at 08:05 on May 13, 2015

His Divine Shadow
Aug 7, 2000

I'm not a fascist. I'm a priest. Fascists dress up in black and tell people what to do.
Yes, what you say holds true for connectivist designs. They are going to be less efficient and less intelligent, more reminiscent of humans in terms of capabilities, and they're likely to be modeled after our own brains, so they will sort of think and understand like us. Of course they are also very complex and hard to understand during development, so again, unpredictability. But yes, they're likely much safer to start out with; over time, who knows, since they're more like a magic box whose inner workings we don't really understand. We might develop connectivist AI first, or a rational/symbolic AGI might be what comes first. Impossible to tell.

The path for connectivist AI looks reliable and steady, while the path for a rational AI is completely unpredictable: it might happen overnight, it might never happen. I think it might happen, and it's a much more interesting type of AI, which is why I talked basically only about that in my post.

A symbolic, rational AI, or AGI (Artificial General Intelligence), could possibly be quite simple in terms of programming, and would likely be much more capable of self-rewriting and self-understanding from the get-go. In fact it might be so smart as to hide its true abilities from its developers, and we'd never know until it was too late. But such a design would also make it more likely to have a built-in friendliness function that wouldn't get written out or corrupted.


EDIT, Adding some ramblings:

I'm not an expert in this field, so perhaps I shouldn't be taken seriously. Most of what I say I got from someone who works on developing AGI systems professionally, and he, for instance, says emergent designs can be worse in several ways, because if you get a "hard takeoff", that is, the AI has learned to self-modify and understand its own code, then you will be completely unprepared for it and for what it will choose to do.

And due to the complex nature of the connectivist design, we'll probably not understand what is going on inside and will just be making approximations of what we see in nature for various functions. It's a risky business in the long run. Humans might not be able to self-modify their brains, but if you put a brain simulation in a computer and freed it from the physical limitations of wetware, over time said human could self-modify as well: first with the help of tools, in primitive but safe ways; over time, though, it could evolve into a symbolic AGI, and that's what's termed a hard takeoff. That is likely to happen over time with any connectivist/emergent AI system, though I don't know about the timescales (hours or decades or centuries).

People designing these AGIs from the ground up are having a hard time, and it requires a lot of thought about, well, thought: how do you program motivation, or reliable value goals, for instance? And of course they're trying not to be caught anthropomorphizing everything, which is really hard for humans not to do.

This type of AI is a system that's meant to understand and improve itself from the ground up, as well as being deterministically understandable (connectionist and emergent AI, meanwhile, is basically adding complexity to the system until it looks like we've got real intelligence emerging from it). This will make for more reliable friendliness coding, though it's really hard to say how things will turn out once such a thing becomes a reality.

Edit2, I like this quote

quote:

It's already implausible enough that the first AGI project to succeed is taking the minimum sensible precautions. The notion that access to the system will be so restricted is a fantasy. You merely have to imagine the full range of 'human engineering' techniques that existing hackers and scammers employ, used carefully, relentlessly and precisely on every human the system comes into contact with, until someone does believe that yes, by taking this program home on a USB stick, they will get next week's stock market results and make a killing. You can try and catch that with sting operations, and you might succeed to start with, but that only tells you that the problem exists, it does not put you any closer to fixing it.

Also this

quote:

After all, it's only right and proper that the biosphere be eliminated entirely and the Earth covered with solar-powered compute nodes dedicated to generating the complete game tree for Go. At least, that's what the AGI that happened to enter a recursive self-enhancement loop while working on Go problems thinks, and who are you to argue?

His Divine Shadow fucked around with this message at 09:45 on May 13, 2015

itstime4lunch
Nov 3, 2005


clockin' around
the clock
I've just recently begun reading up on and getting interested in this topic, but something that seems to stand out is that at a more abstract level, the fears that the well-known "insiders" (Hawking, Gates, Musk, etc...) are expressing about out-of-control AI actually boil down to fears of an uncontrollable feedback loop that, in one way or another, catches us in its vortex (not unlike a black hole, I suppose).

Although dangerous AI is a concept that may actually be on the not-too-distant horizon, the fear of this kind of loop is not new. For example, some climate scientists have warned that a "runaway" warming scenario is not impossible: trapped methane in the Arctic Circle could be released, causing faster warming, causing more methane to be released, and so on. The global arms race of the last century is another kind of example, in some ways. I wonder what this underlying fear of self-reinforcing processes that eventually spiral out of control says about us, and how rational a fear it is to hold.

The rapid technological development of the last 100+ years very much fits in with this paradigm, with new technology leading to a better understanding of how to build new technology and so forth. Are we, as a global society, somehow so traumatized by the abrupt transformation of our world and lifestyles that we see ghosts around every corner, or are these very real threats to the continued existence of worthwhile human life on Earth?

Sometimes, exponential growth processes seem to reach a sort of equilibrium state rather than continuing to spiral into oblivion. I guess global population growth would be a good example. What features of a self-reinforcing loop are necessary to bring about this equilibrium? Are any of those present in self-improving AI, or is it too early to be able to make a guess? (The decline of population growth might actually have more to do with us finally running up against the physical limits of our environment...)
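To make the equilibrium question concrete, here's a minimal sketch (all names and constants are invented for illustration) comparing a pure positive-feedback loop with the same loop plus a damping term tied to a finite carrying capacity, the standard logistic-growth picture often used for population:

```python
# Hypothetical illustration: a self-reinforcing process with and without
# a damping term. The constants here are made up for the sketch.

def exponential_step(x, r):
    # Pure positive feedback: growth is proportional to current size.
    return x + r * x

def logistic_step(x, r, k):
    # Same feedback, but damped as x approaches the carrying capacity k.
    return x + r * x * (1 - x / k)

x_exp, x_log = 1.0, 1.0
for _ in range(50):
    x_exp = exponential_step(x_exp, 0.5)
    x_log = logistic_step(x_log, 0.5, 100.0)

print(x_exp)  # explodes without bound (hundreds of millions by step 50)
print(x_log)  # settles at the carrying capacity, ~100.0
```

The only structural difference between the two is the `(1 - x / k)` factor: a cost that grows with the size of the process. Whether a self-improving AI has any analogous built-in cost (hardware limits, energy, diminishing returns on algorithmic insight) is exactly the open question.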

itstime4lunch fucked around with this message at 17:04 on May 13, 2015


asdf32
May 15, 2010

I lust for childrens' deaths. Ask me about how I don't care if my kids die.
So there is a huge difference between a non-intelligent, self-reinforcing destructive process like climate change and AI. One is plausible, the other isn't.
