JawnV6
Jul 4, 2004

So hot ...

His Divine Shadow posted:

A symbolic, rational AI or AGI (which stands for Artificial General Intelligence) could possibly be quite simple in terms of programming, and would likely be much more capable of self-rewriting and self-understanding from the get-go; in fact, it might be so smart as to hide its true abilities from its developers, and we'd never know until it was too late. But such a design would also make it more likely to have a built-in friendliness function that wouldn't get written out or corrupted.
This strikes me as a lot like the ontological argument for God. "Oh, right, a neural net's inscrutable internals wouldn't be trivial to self-modify to suit arbitrary goals, but we can all imagine this Other kind of AI/AGI, which would offer easier introspection and self-modification, and since it's clearly so much better it will necessarily exist." It's as if every engineering constraint anyone tries to apply gets brushed off in favor of more magic suddenly cropping up.

You're right though, this is a fun quote:

quote:

It's already implausible enough that the first AGI project to succeed is taking the minimum sensible precautions. The notion that access to the system will be so restricted is a fantasy. You merely have to imagine the full range of 'human engineering' techniques that existing hackers and scammers employ, used carefully, relentlessly and precisely on every human the system comes into contact with, until someone does believe that yes, by taking this program home on a USB stick, they will get next week's stock market results and make a killing. You can try and catch that with sting operations, and you might succeed to start with, but that only tells you that the problem exists, it does not put you any closer to fixing it.
Halfway through a cheesy sci-fi opener, they have the sheer gall to accuse others of indulging in fantasy. Then they spend the rest of the paragraph imagining a table-stakes threat vector thoroughly solved at several facilities I've worked at. It's typical of singularitarian writing: bad sci-fi gimmicks stripped of characters and story, delivered with heaps of smug superiority.


Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

His Divine Shadow posted:

This type of AI is a system that's meant to understand and improve itself from the ground up, as well as being deterministically understandable (connectivist and emergent AI, meanwhile, is basically adding complexity to the system until it looks like we've got real intelligence emerging from it). This will make for more reliable friendliness coding, though it's really hard to say how things will turn out once such a thing becomes a reality.

Like I said, it's possible that an algorithm like this exists, but we don't know one, and a lot of people don't think it's possible. For an AI to be self-understanding would mean it does computation on representations and that we can determine what those representations are by examining it (or we already know). A lot of people don't think you can get to AI just by doing computations on representations, because in a humanlike brain features of any fact can be relevant to any other fact, and features of the whole body of knowledge can be relevant to any specific fact -- a system built around super-local representations, even if you use tricks like making each fact name other relevant facts, ultimately seems to fail, e.g. because there's no relevance-caching strategy that can't be broken by changes in the relevance rules. (Philosophy-of-AI dudes call this the 'frame problem.') On top of that, the actual timing constraints indicate that not a lot of neurons are actually hit when a real human responds rapidly to a question.

We really don't know any kind of computation over representations that's very good at dealing with this kind of problem -- especially not one that works under brainlike constraints (things like "thou shalt not perform more than 200 dereferences") -- and we've got a lot of evidence suggesting that we don't actually need to do computation over representations to solve the same problem, and that existing solutions (brains, say) don't.
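
To make the relevance-caching point concrete, here's a throwaway toy in Python -- purely hypothetical, nobody's actual architecture, borrowing Dennett's battery-and-bomb cart story for the facts. A cache built under one relevance rule silently misses what a changed rule makes relevant:

code:

# Facts as super-local representations: each one is just a tagged blob.
facts = {
    "f1": {"text": "the battery is on the cart", "tags": {"battery", "cart"}},
    "f2": {"text": "the bomb is on the cart", "tags": {"bomb", "cart"}},
    "f3": {"text": "the wall is blue", "tags": {"wall"}},
}

def relevant_v1(fact, goal_tags):
    # Relevance rule v1: a fact matters iff it shares a tag with the goal.
    return bool(fact["tags"] & goal_tags)

goal = {"battery"}  # goal: go fetch the battery

# Precompute ("cache") relevance so later queries only dereference a few facts.
cache = {name for name, fact in facts.items() if relevant_v1(fact, goal)}
print(cache)  # {'f1'} -- cheap lookups from here on

def relevant_v2(fact, goal_tags):
    # Rule v2: side effects matter now -- anything sharing a tag with an
    # already-relevant fact counts too (one hop of spreading relevance).
    if relevant_v1(fact, goal_tags):
        return True
    return any(fact["tags"] & facts[c]["tags"] for c in cache)

# The moment the rule changes, the cache is silently stale: the bomb (f2)
# is now relevant via the shared "cart" tag, but the cached set omits it.
# The only fix is re-scanning every fact -- exactly the cost the cache
# was built to avoid.
fresh = {name for name, fact in facts.items() if relevant_v2(fact, goal)}
print(fresh - cache)  # {'f2'}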

I'm kind of skeptical of any "computation over representations will totally do it" claims that don't come with an algorithm, even though I don't think "neural nets!" is the whole answer either, since I've never seen anyone solve the frame problem with them. I think neural nets are just a pretty good example of a system with the kind of properties that are probably necessary to subserve cognition.

("Connectionist" is the word I've usually heard fwiw but I don't know for sure..)

Doctor Malaver
May 23, 2007

What happened made you stronger

itstime4lunch posted:

(The decline of population growth might actually have more to do with us finally running up against the physical limits of our environment...)

Isn't population growth these days pretty much inversely proportional to the amount of resources a society has at its disposal?

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Making a go at an analogy and hoping the chemical/petro/mechanical engineers don't kill me: what you're (Shadow) saying reads to me kind of like appealing to "yes, but when they invent clean gas engines, we're going to turn the environment around!" While I guess it's hypothetically possible we can do something that makes burning gas cleaner, we don't really have a super good way of doing that AFAIK, and we already know that other power sources are cleaner per watt. It gets to the point where, if you appeal to "clean gasoline," you've already given yourself another thing to explain.

nelson
Apr 12, 2009
College Slice

Doctor Malaver posted:

Isn't population growth these days pretty much inversely proportional to the amount of resources a society has at its disposal?

Um, no? How is this even possible?

Berk Berkly
Apr 9, 2009

by zen death robot

nelson posted:

Um, no? How is this even possible?

http://en.wikipedia.org/wiki/Demographic-economic_paradox

computer parts
Nov 18, 2010

PLEASE CLAP

nelson posted:

Um, no? How is this even possible?

People don't actually want to have as many children as possible.

itstime4lunch
Nov 3, 2005


clockin' around
the clock

Doctor Malaver posted:

Isn't population growth these days pretty much inversely proportional to the amount of resources a society has at its disposal?

Good point.

A big flaming stink
Apr 26, 2010

nelson posted:

Um, no? How is this even possible?

The more education women have, the less they want to pop out kids.

nelson
Apr 12, 2009
College Slice
Oh I see. I read the statement literally, as in "If you have zero food you will have infinite population growth."
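
Spelled out, the literal reading treats growth as inversely proportional to resources -- growth = k / resources for some constant k (made up here) -- which blows up as resources go to zero:

code:

k = 1.0  # hypothetical proportionality constant
for resources in (100.0, 10.0, 1.0, 0.01):
    # growth diverges as resources approach zero
    print(resources, k / resources)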

Jeffrey of YOSPOS
Dec 22, 2005

GET LOSE, YOU CAN'T COMPARE WITH MY POWERS

nelson posted:

Oh I see. I read the statement literally, as in "If you have zero food you will have infinite population growth."

I imagine this is mostly true. Not with literally zero food, but even a subsistence farmer will benefit from specialization and economies of scale by having more children who grow more food.

Fried Miltonman
Jan 4, 2015
I felt this was worth linking: Slate Star Codex posted about some actual AI researchers' views on AI risk. It's obviously not an argument to believe everything you read on LW, but it should suffice to show that being unable to tell ELIZA apart from HAL 9000 is not a prerequisite for thinking it's worthwhile to spend some resources on getting artificial general intelligence right. And, more importantly, that the argument is not about banning AI research (especially as all that would accomplish is to let research proceed in places with less concern for safety!); it's about how many resources should be spent on the problem of getting an artificial general intelligence to have values that accord with those of humanity. There's plenty of room to disagree on how likely you think an intelligence explosion is in the first place, how fast it could be if it happens, and when it might happen, while still having friendly AI be a research topic worth some funding from society.

nelson
Apr 12, 2009
College Slice
Just a heads-up: Ex Machina is a pretty good movie that explores this subject.

Doctor Malaver
May 23, 2007

What happened made you stronger
http://svedic.org/

I think this is an interesting take on the subject.

Zel posted:

...
Superintelligence is unlikely to be a risk to humankind unless we try to control it.

Why? In nature, conflicts between species happen when:

- Resources are scarce.
- The value of resources is higher than the cost of conflict.

Examples of scarce resources: food, land, water, and the right to reproduce. Nobody fights for air on Earth because, although very valuable, it is abundant. Lions don’t attack elephants because the cost of fighting such a large animal is too high.

Conflicts can also happen because of irrational behaviour, but we can presume that a superintelligence would be more rational than we are. It would be a million times smarter than any human and would know everything that has ever been published online. If the superintelligence is a rational agent, it would only start a conflict to acquire scarce resources that are hard to get otherwise.

What would those resources be? The problem is that humans exhibit anthropocentric bias; something is valuable to us, so we presume it is also valuable to other forms of life. But, is that so?
...

It isn't, in the author's opinion.
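
The quoted argument boils down to a two-condition decision rule. A toy paraphrase in Python (mine, not code from svedic.org; the numbers are invented):

code:

def starts_conflict(resource_is_scarce, value_of_resource, cost_of_conflict):
    # Zel's two conditions, both required: the resource is scarce,
    # and its value exceeds the cost of fighting over it.
    return resource_is_scarce and value_of_resource > cost_of_conflict

# Air on Earth: enormously valuable but abundant -> no conflict.
print(starts_conflict(False, value_of_resource=1e9, cost_of_conflict=1.0))    # False
# Lion vs. elephant: prey is scarce and valuable, but the fight costs more.
print(starts_conflict(True, value_of_resource=10.0, cost_of_conflict=100.0))  # False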

Bel Shazar
Sep 14, 2012

We are human. Lots of people would try to control it. Syfy movies would be made about quests to control it.

ex post facho
Oct 25, 2007
I'm somewhat surprised nobody has brought up Roko's Basilisk yet (or maybe I just missed it).

http://rationalwiki.org/wiki/Roko%27s_basilisk

Ichabod Sexbeast
Dec 5, 2011

Giving 'em the old razzle-dazzle
I think that was in the LessWrong mock thread. There's a link in here somewhere...


Silver2195
Apr 4, 2012

a shameful boehner posted:

I'm somewhat surprised nobody has brought up Roko's Basilisk yet (or maybe I just missed it).

http://rationalwiki.org/wiki/Roko%27s_basilisk

That's because this thread is for semi-serious discussion, and Roko's Basilisk is almost universally agreed to be asinine.
