suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Condiv posted:

no, but my argument has never been that computers can't generate good or entertaining things. they just can't generate anything with meaning without a human behind them.

Define meaning
it's a meaningless concept :v: :bsdsnype:


Condiv
May 7, 2008

Sorry to undo the effort of paying a domestic abuser $10 to own this poster, but I am going to lose my dang mind if I keep seeing multiple posters who appear to be Baloogan.

With love,
a mod


blowfish posted:

Define meaning
it's a meaningless concept :v: :bsdsnype:

i already have. go read my post history if you want that instead of jumping into an argument that's already overstayed its welcome by quite a bit

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Condiv posted:

a strong AI could attach meaning to works too.

And how would you be able to tell if it was lying about having done that?

Condiv
May 7, 2008

Sorry to undo the effort of paying a domestic abuser $10 to own this poster, but I am going to lose my dang mind if I keep seeing multiple posters who appear to be Baloogan.

With love,
a mod


Owlofcreamcheese posted:

And how would you be able to tell if it was lying about having done that?

about what, its work having meaning? well, it would have purpose (to deceive foolish humans into thinking it created a meaningful work) and therefore meaning.

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Condiv posted:

about what, its work having meaning? well, it would have purpose (to deceive foolish humans into thinking it created a meaningful work) and therefore meaning.

Does a screwdriver have meaning?

suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!

Condiv posted:

i'm getting huffy because it's annoying to argue against people who are misrepresenting your argument. why again did you say i was pretending that meaning was some physical property?

If meaning depends solely on the creator and has no physical effect on the product, then it is meaningless.

Condiv
May 7, 2008

Sorry to undo the effort of paying a domestic abuser $10 to own this poster, but I am going to lose my dang mind if I keep seeing multiple posters who appear to be Baloogan.

With love,
a mod


blowfish posted:

Does a screwdriver have meaning?

is a screwdriver cognizant?

Condiv
May 7, 2008

Sorry to undo the effort of paying a domestic abuser $10 to own this poster, but I am going to lose my dang mind if I keep seeing multiple posters who appear to be Baloogan.

With love,
a mod


blowfish posted:

If meaning depends solely on the creator and has no physical effect on the product, then it is meaningless.

to you maybe, not to everyone

for example, people hounded mark twain about the meaning behind his works and he hated it to death

Condiv fucked around with this message at 00:38 on Nov 30, 2016

Rush Limbo
Sep 5, 2005

its with a full house

Owlofcreamcheese posted:

Okay but what use is there in saying humans can attach magical "meaning" to objects but nothing else could? What use is that property if it has zero effect on an object that has it and you can't even determine if it exists from the object?

I don't necessarily believe that humans are the only creatures capable of creating objects with meaning. Bowerbirds, when they create their house, are conveying meaning. That meaning may be 'Hey, let's gently caress' and is not intended to be interpreted in that way by other species, but it is still a meaning nonetheless.

AI isn't even capable of that level of meaning or intent.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Rush Limbo posted:

I don't necessarily believe that humans are the only creatures capable of creating objects with meaning. Bowerbirds, when they create their house, are conveying meaning. That meaning may be 'Hey, let's gently caress' and is not intended to be interpreted in that way by other species, but it is still a meaning nonetheless.

So if I created a machine that could build a little house and then you hosed it, it would magically become alive? How are you claiming to know the inner workings of a bird's mind? Maybe birds are just automatons with no inner life at all.

Mercrom
Jul 17, 2009
I think Condiv is being willfully dense.

Condiv posted:

it is an abstract concept yes. it's also a defined term so i'm not sure why you're having so much trouble with it


it's quite simple

meaning: the end, purpose, or significance of something.

a computer without strong AI cannot have intention or a purpose of its own, it can only express our intent. a work created by a randomized process has no meaning because the process behind it has no purpose of its own

i think you guys are thinking i'm making some quality argument that a work created by a neural net could never be considered as good as a human or that neural networks can't outclass humans in some areas. that's not the case. however, neural networks are currently outclassed wrt intelligence by all humans (and a poo poo ton of animals)
Haha, your dumb definition is a list of synonyms.

It's not an abstract concept or a "defined" term. It's a feeling. That's literally it.

Condiv posted:

are you being intentionally obtuse? do you not understand abstract concepts at all?
I think you conflated the term "abstract concepts" with "bullshit".

Condiv posted:

it can be checked. if the work has a creator, and the creator is cognizant, there's meaning behind it. there's the test. what is that meaning? you'd have to ask the creator. if you don't care, you don't have to care.
Hmm yes very scientific.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

Mercrom posted:

I think Condiv is being willfully dense.

To be fair, I really think he isn't. I think humans are hard-wired with a huge toolset of mental models for other people's consciousnesses, and we do just have a bunch of "I'll know it when I see it!" stuff that doesn't make real sense but works so well day to day that no one realistically questions it. Like I can tell very fast if something moves "like it's alive", and that is a really useful tool for me and my ancestors, but if I really tried to break it down it's not real, it's weak and heuristic and not absolute.

KOTEX GOD OF BLOOD
Jul 7, 2012

blowfish posted:

Does a screwdriver have meaning?
The right question to ask is whether a screwdriver can have meaning, which it can.

Cingulate
Oct 23, 2012

by Fluffdaddy
Can there be a meaningless screwdriver?

KOTEX GOD OF BLOOD
Jul 7, 2012

Sure?

VectorSigma
Jan 20, 2004

Transform
and
Freak Out



Owlofcreamcheese posted:

To be fair, I really think he isn't. I think humans are hard-wired with a huge toolset of mental models for other people's consciousnesses, and we do just have a bunch of "I'll know it when I see it!" stuff that doesn't make real sense but works so well day to day that no one realistically questions it. Like I can tell very fast if something moves "like it's alive", and that is a really useful tool for me and my ancestors, but if I really tried to break it down it's not real, it's weak and heuristic and not absolute.

One of these neat subconscious mechanisms is the ability to hear a smile in someone's voice without seeing them.

Human intelligence and thus our entire perspective is centered on our physical form and its needs and limitations. In order to get a truly human perspective from an AI one would have to give it the form and function of a human, with an artificial brain at least capable of low-level emulation of a real human neural network. In addition, it would have to "grow up" around humans, since our entire identity is derived from social interaction. Without the human experience, the intelligence would be alien in some way, and any attempt to program or hardwire behavior defeats the purpose of a true emergent intelligence. Barring some insane breakthrough, I don't see this happening in less than 150 years, closer to 200. The proper theory will exist, and for sure the first tests will be run on room-size computing clusters in this century, but this is a solid goal for the 2200s.

In 2216, you might walk past a normal-looking person whose brain is a softball-sized sphere of solid nano-doped diamond electro/optical computronium with enough quantum computing capacity to probabilistically model the neural activity of several biological brains, and whose body is composed of nanocomposite membranes modeled after actual human anatomy, down to the extraction of chemical energy from food and gas exchange through breathing. Even then, I suspect these artificial people will seem not quite human, despite all conscious cues telling you otherwise.

edit: To sum up, by the time machines are indistinguishable from humans, humans will be indistinguishable from machines.

VectorSigma fucked around with this message at 05:50 on Dec 2, 2016

Rush Limbo
Sep 5, 2005

its with a full house

Owlofcreamcheese posted:

So if I created a machine that could build a little house and then you hosed it, it would magically become alive? How are you claiming to know the inner workings of a bird's mind? Maybe birds are just automatons with no inner life at all.

I'm not claiming to know the inner workings of a bird's mind. There is clearly meaning in what it is doing, though. It may not be meaning that is meaningful to humans, but it is still meaningful.

And once again, a machine that builds a house doesn't have any intrinsic meaning beyond what the human who made it has imparted to it. It itself is not making any meaningful choices in its construction of the house.

I mean, ultimately there's no meaning at all because we're just a collection of matter and the arguments between free will and determinism have been raging ever since we could communicate the idea at all.

BabyFur Denny
Mar 18, 2003
In the age of computers, innovation and tech have grown exponentially.
Google Translate, to the surprise of its engineers, is developing its own language:
https://research.googleblog.com/2016/11/zero-shot-translation-with-googles.html

Remember when we all thought it would still take many years before a machine could beat a Go champion?
Oh, and computers are already better at recognising cat pictures from the internet than we are.

In the year of Brexit and Trump it's kind of ridiculous to assume that computers won't be able to reach the same level of intelligence as the average human person. Our own brain is nothing more than a machine programmed by evolution.

I think the actual hard part will be for us to accept a machine's consciousness. Even right now we can only assume that the people around us, that we are interacting with, have a consciousness and are not just machines with very complex rulesets.

Cingulate
Oct 23, 2012

by Fluffdaddy

BabyFur Denny posted:

In the age of computers, innovation and tech have grown exponentially.
Google Translate, to the surprise of its engineers, is developing its own language:
https://research.googleblog.com/2016/11/zero-shot-translation-with-googles.html

Remember when we all thought it would still take many years before a machine could beat a Go champion?
Oh, and computers are already better at recognising cat pictures from the internet than we are.

In the year of Brexit and Trump it's kind of ridiculous to assume that computers won't be able to reach the same level of intelligence as the average human person. Our own brain is nothing more than a machine programmed by evolution.

I think the actual hard part will be for us to accept a machine's consciousness. Even right now we can only assume that the people around us, that we are interacting with, have a consciousness and are not just machines with very complex rulesets.
Machines have been making exponential progress in some places, linear progress in others, and much slower progress in others. Language is a fascinating example. The best neural nets are very impressive: using extreme amounts of computational power, they can do very cool stuff. But you can do 90% of what they can do with 0.01% of the computational power invested in a very simple stochastic algorithm. How much will it take to get to 110% of what we currently have - another ~1,000-fold increase in power? And what will it take to get the things to actually be as good at language as a six-year-old? We don't know. Maybe we'll have a talking computer running on something on the order of today's supercomputers by 2030. Maybe not. What's your linear interpolation?

Do you have one? Do you have an idea of what's at stake?
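
A minimal sketch of the kind of "very simple stochastic algorithm" mentioned above, assuming something like a word-level bigram generator (toy code of my own; every name in it is made up for illustration):

code:
import random
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count word-to-next-word transitions in a list of tokens."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=10):
    """Sample a word sequence by following the bigram counts."""
    word, out = start, [start]
    for _ in range(length):
        if word not in counts:
            break
        nxt = counts[word]
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(nxt), weights=nxt.values())[0]
        out.append(word)
    return " ".join(out)

tokens = "the cat sat on the mat and the cat slept".split()
model = train_bigram(tokens)
print(generate(model, "the"))

Nothing about this captures long-range structure, but it shows how little machinery the cheap baseline in that comparison needs.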

BabyFur Denny
Mar 18, 2003

Cingulate posted:

Machines have been making exponential progress in some places, linear progress in others, and much slower progress in others. Language is a fascinating example. The best neural nets are very impressive: using extreme amounts of computational power, they can do very cool stuff. But you can do 90% of what they can do with 0.01% of the computational power invested in a very simple stochastic algorithm. How much will it take to get to 110% of what we currently have - another ~1,000-fold increase in power? And what will it take to get the things to actually be as good at language as a six-year-old? We don't know. Maybe we'll have a talking computer running on something on the order of today's supercomputers by 2030. Maybe not. What's your linear interpolation?

Do you have one? Do you have an idea of what's at stake?

There are still many promising developments in this area that are just getting started. Deep Learning has just been open sourced, running your algorithm on GPUs is happening, cloud computing, hey, even MapReduce as a concept is barely a decade old and already outdated. As soon as you can parallelise a problem, it's basically solved.
We increased the computing resources of our cluster by 10x over the past two years, could easily do another 10x (for a total 100-fold increase in performance) by just throwing a lot of money at it, and only another 10x beyond that would require us to put some real effort into making it run.

And yeah, it's still inefficient and expensive, but so is every new technology at the beginning. It's all just a matter of time, and probably a lot less time than a lot of people are expecting.
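
For anyone who hasn't run into MapReduce, the pattern being name-checked above boils down to a map step that emits key/value pairs, a shuffle that groups them by key, and a reduce step that combines each group. A toy single-machine word count (my own illustration; real frameworks distribute the same steps across a cluster) might look like this:

code:
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework would between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: combine the values for each key (here, sum the counts).
    return {key: sum(values) for key, values in grouped.items()}

documents = ["the quick brown fox", "the lazy dog", "the fox"]
# Each map_phase call is independent, which is what makes the job easy to parallelise.
pairs = chain.from_iterable(map_phase(d) for d in documents)
print(reduce_phase(shuffle(pairs)))  # {'the': 3, 'quick': 1, ...}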

Xae
Jan 19, 2005

BabyFur Denny posted:

There are still many promising developments in this area that are just getting started. Deep Learning has just been open sourced, running your algorithm on GPUs is happening, cloud computing, hey, even MapReduce as a concept is barely a decade old and already outdated. As soon as you can parallelise a problem, it's basically solved.
We increased the computing resources of our cluster by 10x over the past two years, could easily do another 10x (for a total 100-fold increase in performance) by just throwing a lot of money at it, and only another 10x beyond that would require us to put some real effort into making it run.

And yeah, it's still inefficient and expensive, but so is every new technology at the beginning. It's all just a matter of time, and probably a lot less time than a lot of people are expecting.

MR was old tech when Google rebranded DSNA as MapReduce.

You may have added 10x capacity in 5 years, but CPU hardware is seeing 10-20% gains a year right now. Moore's law is dead and counting on ever faster hardware is foolish.

Cingulate
Oct 23, 2012

by Fluffdaddy

BabyFur Denny posted:

There are still many promising developments in this area that are just getting started. Deep Learning has just been open sourced, running your algorithm on GPUs is happening
The code for the ILSVRC-2012 win was immediately open sourced. Deep Learning was never closed source. And Deep Learning basically never ran on anything but GPUs.

BabyFur Denny posted:

As soon as you can parallelise a problem, it's basically solved.
That's a very strong claim.

BabyFur Denny posted:

We increased the computing resources of our cluster by 10x over the past two years, could easily do another 10x (for a total 100-fold increase in performance) by just throwing a lot of money at it, and only another 10x beyond that would require us to put some real effort into making it run.
Ok, but adding 10% performance on top of a closed-form solution takes 100,000% more computational power (numbers I pulled out of my rear end, but that's basically the difference between an SVM and a deep net).


Xae posted:

MR was old tech when Google rebranded DSNA as MapReduce.

You may have added 10x capacity in 5 years, but CPU hardware is seeing 10-20% gains a year right now. Moore's law is dead and counting on ever faster hardware is foolish.
Neural nets/deep learning happens on the CPU*. Everyone is looking at NVIDIA, and so far they're delivering.

Edit: GPU!

Cingulate fucked around with this message at 14:28 on Dec 3, 2016
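
Returning to the SVM-versus-deep-net comparison above: one hedged way to see the trade-off is to run a cheap linear SVM next to a small neural net on a toy dataset. This sketch uses scikit-learn's bundled digits data; the exact numbers will vary from run to run, and it obviously proves nothing about the admittedly made-up percentages in the post.

code:
# Compare a cheap linear SVM with a small neural net on a toy dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svm = LinearSVC(max_iter=5000).fit(X_train, y_train)
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_train, y_train)

print("linear SVM accuracy:", svm.score(X_test, y_test))
print("small MLP accuracy: ", net.score(X_test, y_test))
# On toy data both do well; the point is that the cheap baseline already
# gets most of the way there, while the net buys the remaining accuracy
# with considerably more computation.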

Xae
Jan 19, 2005

Cingulate posted:

Neural nets/deep learning happens on the CPU. Everyone is looking at NVIDIA, and so far they're delivering.
They are still running into the same barriers Intel is. They just started further back so they are hitting them later.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Cingulate posted:

Neural nets/deep learning happens on the CPU. Everyone is looking at NVIDIA, and so far they're delivering.

Inference happens on CPUs, but the learning is almost all GPUs these days. There are various other specialization approaches as well.
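
That split is easy to picture in a modern framework. A rough PyTorch sketch (my own toy example, not anything from this thread) of training on a GPU when one is available and then serving inference on the CPU:

code:
import torch
import torch.nn as nn

# Train on the GPU if one is available (where almost all the heavy lifting
# happens); fall back to the CPU otherwise.
train_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1)).to(train_device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(256, 10, device=train_device)  # toy data
y = torch.randn(256, 1, device=train_device)
for _ in range(100):                            # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Serve inference on the CPU: move the weights off the GPU and disable autograd.
model = model.to("cpu").eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 10))
print(prediction)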

Cingulate
Oct 23, 2012

by Fluffdaddy

Xae posted:

They are still running into the same barriers Intel is. They just started further back so they are hitting them later.
Oh, I didn't want to imply GPGPU would scale into infinity. Just that there's still a lot of growth to be had.

Subjunctive posted:

Inference happens on CPUs, but the learning is almost all GPUs these days. There are various other specialization approaches as well.
Argh, spelling mistake. I meant to write "GPU".

Siggy
Jan 23, 2002

no leads, geddit?

khwarezm posted:

I'm trying to gauge how far along this technology is exactly. It's hard to know if the hype coming from technological-singularity futurist tech fanboys actually has much merit to it. Still, though, technology seems to be moving so fast these days.


In my experience, media interest in AI seems to peak and trough every 10 years.

I'd like to suggest that we're just in another peak, given that (and this is a strong, unsubstantiated claim) the mathematics behind AI approaches hasn't really changed much over the last 30 years, merely the applications, ease of application and technological improvements regarding deployment (I believe this includes the current success of "deep learning").

However, there is a difference now because the "media" showing interest are actually the parties investing in the area (Google, Facebook, Amazon etc.), so I expect this current peak to last longer regardless of fundamental improvements.

So if we believe that the basics of building intelligent systems were uncovered in the '50s-'80s and all we have to do is increase the technical power whilst applying them in ever smarter ways, allowing them to bounce off each other (and us) in order to get clever, then that singularity is just around the corner.

If you believe there's more to be uncovered about how natural systems adapt and learn (as I do), then we might still be quite far away.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Do you not think that, e.g., LSTMs or memnets represent meaningful advances?

Cingulate
Oct 23, 2012

by Fluffdaddy

Subjunctive posted:

Do you not think that, e.g., LSTMs or memnets represent meaningful advances?
The math behind LSTMs is 20 years old. It's correct that what's really changed is
- training data availability
- GPUs
- people actually putting it all together

That, however, is a massive practical change. It's not just hype. Sure, it's overhyped, but it's also powerful.

And Memnet is in a totally different category from LSTMs.
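
For a sense of what "the math behind LSTMs" amounts to, the classic cell update from Hochreiter and Schmidhuber's 1997 paper fits in a handful of lines. A bare sketch in PyTorch tensor code (my own toy illustration; the parameter names are made up):

code:
import torch

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One step of the classic LSTM cell.
    W, U, b hold the stacked input, forget, cell and output gate parameters."""
    gates = x @ W + h_prev @ U + b          # all four gates in one matmul
    i, f, g, o = gates.chunk(4, dim=-1)
    i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
    g = torch.tanh(g)
    c = f * c_prev + i * g                  # gated update of the cell state
    h = o * torch.tanh(c)                   # hidden state exposed to the rest of the net
    return h, c

# Toy dimensions: input size 8, hidden size 16.
n_in, n_hid = 8, 16
W = torch.randn(n_in, 4 * n_hid)
U = torch.randn(n_hid, 4 * n_hid)
b = torch.zeros(4 * n_hid)
h = c = torch.zeros(1, n_hid)
for t in range(5):                          # run the cell over a short random sequence
    h, c = lstm_cell(torch.randn(1, n_in), h, c, W, U, b)
print(h.shape)  # torch.Size([1, 16])

What changed in the intervening two decades was less this arithmetic than the data, the GPUs and the engineering around it, which is the point being made above.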

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Cingulate posted:

And Memnet is in a totally different category from LSTMs.

Yes, I know. They were two examples.

twodot
Aug 7, 2005

You are objectively correct that this person is dumb and has said dumb things

Subjunctive posted:

Yes, I know. They were two examples.
Without any sort of knowledge about "memnets", it looks pretty weak to offer up two examples, totally fail to defend one of them, and just say "but my other example was the good one". It reads like a lazy Gish gallop.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I was asking a question in earnest -- did he not consider them to be meaningful advances? I wasn't asserting anything about what he should believe. Sometimes a question is just a question, not a Socratic feint.

twodot
Aug 7, 2005

You are objectively correct that this person is dumb and has said dumb things

Subjunctive posted:

I was asking a question in earnest -- did he not consider them to be meaningful advances? I wasn't asserting anything about what he should believe. Sometimes a question is just a question, not a Socratic feint.
What do you care about a random Internet person's opinions, other than to make a point? Either they're aware of the stuff you mentioned, and they by definition don't consider them meaningful advances in mathematics, since they've already stated their position, or they're not aware, and you're trying to score points by having better jargon than them. (Seriously, "memnet" turned up nothing obviously related.) The way to phrase that question earnestly is "Why don't you consider memnets to be a meaningful advance in mathematics?"

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

They're two things that I think of as being modern advances, that didn't seem to fit his description. I apologize for imperfect wording of my question. Perhaps I shouldn't post quickly from my phone in threads discussing such sensitive matters. Your accusation of bad faith posting seems disproportionate, and in bad faith itself.

Memnets turn up more in the literature as "memory networks" it seems; Weston was primary on the original paper. I always heard them called memnets where I worked.

Cingulate
Oct 23, 2012

by Fluffdaddy
The interesting point is that the obvious mathematical innovation - LSTMs - predates the actual impact on the field of AI - LSTM networks revolutionizing applied ML - by two decades.

Can't really say anything about "memnets" because it's much too early to tell. Maybe they'll be a big thing in 10 years? Maybe not.

Siggy
Jan 23, 2002

no leads, geddit?

Subjunctive posted:

Do you not think that, e.g., LSTMs or memnets represent meaningful advances?

I appreciate my post was a bit blasé. There are no doubt incredible advances in many fields over time.

My two main points were that

1) every few years the leaders in technology tell us about a bright new future (usually because of a breakthrough they have invested in) that the media then picks up and sells until they tire of the lack of "progress", and I don't see the current hype as remarkably different.

2) I don't think the problems currently solved are dramatically different in flavour (although they are amazing!); they are "just" more sophisticated, and the question remains whether what we fail to describe accurately as human intelligence is just a more sophisticated application of what we know already or something a bit different altogether.

I appreciate some may not like the fence I'm sitting on, and may perhaps claim that the same approach would suggest landing on the moon was more or less the same as combining a cannonball with "a bit more calculus", but I do feel the capabilities of many technological breakthroughs, particularly in this area, are often over-egged.

Siggy fucked around with this message at 19:56 on Dec 23, 2016

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord
We are also in a cycle of people saying "X is coming in 10 years", then X coming out after ten years, then people saying "that isn't a big deal, they were already talking about that ten years ago".

A Wizard of Goatse
Dec 14, 2014

Oh, is that how that works? Well, in that case, in ten years I'll have a flying car that folds up into a briefcase like George Jetson's.

Owlofcreamcheese
May 22, 2005
Probation
Can't post for 9 years!
Buglord

A Wizard of Goatse posted:

Oh, is that how that works? Well, in that case, in ten years I'll have a flying car that folds up into a briefcase like George Jetson's.

Unless you grew up in like 1920 or something no one has ever promised you a flying car.

mdemone
Mar 14, 2001

VectorSigma posted:

One of these neat subconscious mechanisms is the ability to hear a smile in someone's voice without seeing them.

Human intelligence and thus our entire perspective is centered on our physical form and its needs and limitations. In order to get a truly human perspective from an AI one would have to give it the form and function of a human, with an artificial brain at least capable of low-level emulation of a real human neural network. In addition, it would have to "grow up" around humans, since our entire identity is derived from social interaction. Without the human experience, the intelligence would be alien in some way, and any attempt to program or hardwire behavior defeats the purpose of a true emergent intelligence. Barring some insane breakthrough, I don't see this happening in less than 150 years, closer to 200. The proper theory will exist, and for sure the first tests will be run on room-size computing clusters in this century, but this is a solid goal for the 2200s.

In 2216, you might walk past a normal-looking person whose brain is a softball-sized sphere of solid nano-doped diamond electro/optical computronium with enough quantum computing capacity to probabilistically model the neural activity of several biological brains, and whose body is composed of nanocomposite membranes modeled after actual human anatomy, down to the extraction of chemical energy from food and gas exchange through breathing. Even then, I suspect these artificial people will seem not quite human, despite all conscious cues telling you otherwise.

edit: To sum up, by the time machines are indistinguishable from humans, humans will be indistinguishable from machines.

I was going to type a bunch of poo poo but then you went and did it better. Basically I think the fact that Blue Brain is already showing emergent 40-60 Hz synchronization tips us off that complexity/interconnectivity will naturally produce the potential processing power necessary (and within this century), but the development of a true ego tunnel, a real self-model, depends on "growing up human", as you say.

I was once more optimistic about AI in the time-until-realization sense; now I'm more optimistic that whatever eventually exists will be truly awesome in a way we are presently incapable of appreciating.

Pochoclo
Feb 4, 2008

No...
Clapping Larry

Owlofcreamcheese posted:

Unless you grew up in like 1920 or something no one has ever promised you a flying car.

Plenty of popular science magazines promised flying cars early in the 1990s. I vividly remember an article about Michael Jackson preordering one, even.
