|
Condiv posted:no, but my argument has never been that computers can't generate good or entertaining things. they just can't generate anything with meaning without a human behind them. Define meaning. It's a meaningless concept.
|
# ? Nov 30, 2016 00:07 |
|
|
blowfish posted:Define meaning i already have. go read my post history if you want that instead of jumping into an argument that's already overstayed its welcome by quite a bit
|
# ? Nov 30, 2016 00:12 |
|
Condiv posted:a strong AI could attach meaning to works too. And how would you be able to tell if it was lying about having done that?
|
# ? Nov 30, 2016 00:14 |
|
Owlofcreamcheese posted:And how would you be able to tell if it was lying about having done that? about what, its work having meaning? well, it would have purpose (to deceive foolish humans into thinking it created a meaningful work) and therefore meaning.
|
# ? Nov 30, 2016 00:23 |
|
Condiv posted:about what, its work having meaning? well, it would have purpose (to deceive foolish humans into thinking it created a meaningful work) and therefore meaning. Does a screwdriver have meaning?
|
# ? Nov 30, 2016 00:24 |
|
Condiv posted:i'm getting huffy because it's annoying to argue against people who are misrepresenting your argument. why again did you say i was pretending that meaning was some physical property? If meaning depends solely on the creator and has no physical effect on the product, then it is meaningless.
|
# ? Nov 30, 2016 00:27 |
|
blowfish posted:Does a screwdriver have meaning? is a screwdriver cognizant?
|
# ? Nov 30, 2016 00:30 |
|
blowfish posted:If meaning depends solely on the creator and has no physical effect on the product, then it is meaningless. to you maybe, not to everyone for example, people hounded mark twain about the meaning behind his works and he hated it to death Condiv fucked around with this message at 00:38 on Nov 30, 2016 |
# ? Nov 30, 2016 00:32 |
|
Owlofcreamcheese posted:Okay but what use is there in saying humans can attach magical "meaning" to objects but nothing else could? What use is that property if it has zero effect on an object that has it and you can't even determine if it exists from the object? I don't necessarily believe that humans are the only creatures capable of creating objects with meaning. Bowerbirds, when they create their house, are conveying meaning. That meaning may be 'Hey, let's gently caress' and is not intended to be interpreted in that way by other species, but it is still a meaning nonetheless. AI isn't even capable of that level of meaning or intent.
|
# ? Nov 30, 2016 00:35 |
|
Rush Limbo posted:I don't necessarily believe that humans are the only creatures capable of creating objects with meaning. Bowerbirds, when they create their house, are conveying meaning. That meaning may be 'Hey, let's gently caress' and is not intended to be interpreted in that way by other species, but it is still a meaning nonetheless. So if I created a machine that could build a little house and then you hosed it it would magically become alive? How are you claiming to know the inner workings of a bird's mind? Maybe birds are just automatons with no inner life at all.
|
# ? Nov 30, 2016 00:44 |
|
I think Condiv is being willfully dense. Condiv posted:it is an abstract concept yes. it's also a defined term so i'm not sure why you're having so much trouble with it It's not an abstract concept or a "defined" term. It's a feeling. That's literally it. Condiv posted:are you being intentionally obtuse? do you not understand abstract concepts at all? Condiv posted:it can be checked. if the work has a creator, and the creator is cognizant, there's meaning behind it. there's the test. what is that meaning? you'd have to ask the creator. if you don't care, you don't have to care.
|
# ? Nov 30, 2016 17:24 |
|
Mercrom posted:I think Condiv is being willfully dense. To be fair, I really think he isn't. I think humans are hard wired with a huge toolset of mental models for other people's consciousnesses and we do just have a bunch of "I'll know it when I see it!" stuff that doesn't make real sense but works so well day to day no one realistically questions it. Like I can tell very fast if something moves "like it's alive" and that is a really useful tool for me and my ancestors, but if I really tried to break it down it's not real, it's weak and heuristic and not absolute.
|
# ? Nov 30, 2016 17:33 |
|
blowfish posted:Does a screwdriver have meaning?
|
# ? Nov 30, 2016 19:50 |
|
Can there be a meaningless screwdriver?
|
# ? Nov 30, 2016 20:34 |
|
Sure?
|
# ? Nov 30, 2016 21:10 |
|
Owlofcreamcheese posted:To be fair, I really think he isn't. I think humans are hard wired with a huge toolset of mental models for other people's consciousnesses and we do just have a bunch of "I'll know it when I see it!" stuff that doesn't make real sense but works so well day to day no one realistically questions it. Like I can tell very fast if something moves "like it's alive" and that is a really useful tool for me and my ancestors, but if I really tried to break it down it's not real, it's weak and heuristic and not absolute. One of these neat subconscious mechanisms is the ability to hear a smile in someone's voice without seeing them. Human intelligence and thus our entire perspective is centered on our physical form and its needs and limitations. In order to get a truly human perspective from an AI one would have to give it the form and function of a human, with an artificial brain at least capable of low-level emulation of a real human neural network. In addition, it would have to "grow up" around humans, since our entire identity is derived from social interaction. Without the human experience, the intelligence would be alien in some way, and any attempt to program or hardwire behavior defeats the purpose of a true emergent intelligence. Barring some insane breakthrough, I don't see this happening in less than 150 years, closer to 200. The proper theory will exist, and for sure the first tests will be run on room-size computing clusters in this century, but this is a solid goal for the 2200s. In 2216, you might walk past a normal-looking person whose brain is a softball-sized sphere of solid nano-doped diamond electro/optical computronium with enough quantum computing capacity to probabilistically model the neural activity of several biological brains, and whose body is composed of nanocomposite membranes modeled after actual human anatomy, down to the extraction of chemical energy from food and gas exchange through breathing. 
Even then, I suspect these artificial people will seem not quite human, despite all conscious cues telling you otherwise. edit: To sum up, by the time machines are indistinguishable from humans, humans will be indistinguishable from machines. VectorSigma fucked around with this message at 05:50 on Dec 2, 2016 |
# ? Dec 2, 2016 05:48 |
|
Owlofcreamcheese posted:So if I created a machine that could build a little house and then you hosed it it would magically become alive? How are you claiming to know the innerworkings of a bird's mind? Maybe birds are just automatons with no inner life at all. I'm not claiming to know the inner workings of a bird's mind. It clearly does have meaning doing what it is doing, though. It may not be meaning that is meaningful to humans, but it is still meaningful. And once again, a machine that builds a house doesn't have any intrinsic meaning beyond what the human who made it has imparted to it. It itself is not making any meaningful choices in its construction of the house. I mean, ultimately there's no meaning at all because we're just a collection of matter and the arguments between free will and determinism have been raging ever since we could communicate the idea at all.
|
# ? Dec 2, 2016 14:25 |
|
In the age of computers, innovation and tech have grown exponentially. Google Translate, to the surprise of its engineers, is developing its own language: https://research.googleblog.com/2016/11/zero-shot-translation-with-googles.html Remember when we all thought it's still going to take many years until a machine can beat a Go champion? Oh, and computers are already better at recognising cat pictures from the internet than we are. In the year of Brexit and Trump it's kind of ridiculous to assume that computers won't be able to reach the same level of intelligence as the average human person. Our own brain is nothing more than a machine programmed by evolution. I think the actual hard part will be for us to accept a machine's consciousness. Even right now we can only assume that the people around us, that we are interacting with, have a consciousness and are not just machines with very complex rulesets.
|
# ? Dec 2, 2016 16:16 |
|
BabyFur Denny posted:In the age of computers innovation and tech has grown exponentially. Do you have one? Do you have an idea of what's at stake?
|
# ? Dec 2, 2016 17:03 |
|
Cingulate posted:Machines have been making exponential progress in some places, linear progress in others, and much slower progress in others. Language is a fascinating example. The best neural nets are very impressive: using extreme amounts of computational power, they can do very cool stuff. But you can do 90% of what they can do with 0.01% of the computational power invested in a very simple stochastic algorithm. How much will it take to get to 110% of what we currently have - another ~1,000-fold increase in power? And what will it take to get the things to actually be as good at language as a 6-year-old? We don't know. Maybe we'll have a talking computer running on something on the order of today's supercomputers by 2030. Maybe not. What's your linear interpolation? There are still many developments in this area where we're just at the beginning that are very promising. Deep Learning has just been open sourced, running your algorithm on GPUs is happening, cloud computing, hey, even MapReduce as a concept is barely a decade old and already outdated. As soon as you can parallelise a problem, it's basically solved. We increased the computing resources of our cluster by 10x over the past two years, could easily do another 10x (for a total 100-fold increase in performance) by just throwing a lot of money at it, and only if we went for another 10x after that would we have to invest some effort into making it run. And yeah, it's still inefficient and expensive, but every new technology is at the beginning. It's all just a matter of time, and probably a lot less time than a lot of people are expecting.
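Cingulate's "very simple stochastic algorithm" isn't named, but a word-level Markov chain (n-gram model) is the classic cheap baseline for language generation. A minimal sketch, with a toy corpus standing in for real training data:

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Count which word follows which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Walk the chain, sampling each next word from the observed followers."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: word never seen mid-sentence
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = build_bigram_model(corpus)
print(generate(model, "the", length=5))
```

The whole thing trains in linear time on a single core, which is the point being made: a neural language model needs orders of magnitude more compute to beat this baseline, and the gap between "beats the baseline" and "as good as a 6-year-old" is unknown.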
|
# ? Dec 2, 2016 19:24 |
|
BabyFur Denny posted:There are still many developments in this area where we're just at the beginning that are very promising. Deep Learning has just been open sourced, running your algorithm on GPUs is happening, cloud computing, hey, even MapReduce as a concept is barely a decade old and already outdated. As soon as you can parallelise a problem, it's basically solved. MR was old tech when Google rebranded DSNA as MapReduce. You may have added 10x capacity in 5 years, but CPU hardware is seeing 10-20% gains a year right now. Moore's law is dead and counting on ever faster hardware is foolish.
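For readers who haven't met the MapReduce concept both posters are referring to: the canonical example is a word count, where independent "map" tasks produce partial counts and a "reduce" step merges them. A single-process toy sketch (illustrating only the map/reduce split, not a distributed runtime):

```python
from collections import Counter
from functools import reduce

def map_phase(chunk):
    # Map: turn one chunk of text into per-word counts.
    # In a real cluster, each chunk runs on a different machine.
    return Counter(chunk.split())

def reduce_phase(a, b):
    # Reduce: merge two partial count tables.
    return a + b

chunks = ["the quick fox", "the lazy dog", "the fox"]
counts = reduce(reduce_phase, map(map_phase, chunks))
print(counts["the"], counts["fox"])  # 3 2
```

Because each map call is independent and the reduce is associative, the work parallelises trivially across machines, which is exactly the "as soon as you can parallelise a problem" property BabyFur Denny is leaning on and Xae is pushing back against.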
|
# ? Dec 2, 2016 20:18 |
|
BabyFur Denny posted:There are still many developments in this area where we're just at the beginning that are very promising. Deep Learning has just been open sourced, running your algorithm on GPUs is happening BabyFur Denny posted:As soon as you can parallelise a problem, it's basically solved. BabyFur Denny posted:We increased the computing resources of our cluster by 10x over the past two years, could easily do another 10x (for a total 100fold increase inperformance) by just throwing a lot of money at it, and only if we go for another 10x after that would be where we had to invest some effort into making it run. Xae posted:MR was old tech when Google rebranded DSNA as MapReduce. Edit: GPU! Cingulate fucked around with this message at 14:28 on Dec 3, 2016 |
# ? Dec 3, 2016 00:21 |
|
Cingulate posted:Neural nets/deep learning happens on the CPU. Everyone is looking at NVIDIA, and so far they're delivering.
|
# ? Dec 3, 2016 01:13 |
|
Cingulate posted:Neural nets/deep learning happens on the CPU. Everyone is looking at NVIDIA, and so far they're delivering. Inference happens on CPUs, but the learning is almost all GPUs these days. There are various other specialization approaches as well.
|
# ? Dec 3, 2016 04:37 |
|
Xae posted:They are still running into the same barriers Intel is. They just started further back so they are hitting them later. Subjunctive posted:Inference happens on CPUs, but the learning is almost all GPUs these days. There are various other specialization approaches as well.
|
# ? Dec 3, 2016 14:28 |
|
khwarezm posted:I'm trying to garner how far along this technology is exactly. It's hard to know if the hype coming from Technological singularity futurist tech fanboys actually has much merit to it. Still though, technology seems to be moving so fast these days. In my experience, media interest in AI seems to peak and trough every 10 years. I'd like to suggest that we're just in another peak, given (and this is a strong unsubstantiated claim) the mathematics behind AI approaches hasn't really changed much over the last 30 years, merely the applications, ease of application and technological improvements regarding deployment (I believe this includes the current success of "deep learning"). However there is a difference now because the "media" showing interest are actually the parties investing in the area (google, facebook, amazon etc), so I expect this current peak to last longer regardless of fundamental improvements. So if we believe that the basics of building intelligent systems were uncovered in the '50s-'80s and all we have to do is increase the technical power whilst applying them in ever smarter ways, allowing them to bounce off each other (and us) in order to get clever, then that singularity is just around the corner. If you believe there's more to be uncovered about how natural systems adapt and learn (as I do) then we might still be quite far away.
|
# ? Dec 22, 2016 23:02 |
|
Do you not think that, f.e., LSTMs or memnets represent meaningful advances?
|
# ? Dec 22, 2016 23:14 |
|
Subjunctive posted:Do you not think, f.e., LSTMs or memnets represent meaningful advances?
- training data availability
- GPUs
- people actually putting it all together
That, however, is a massive practical change. It's not just hype. Sure, it's overhyped, but it's also powerful. And Memnet is in a totally different category from LSTMs.
|
# ? Dec 23, 2016 01:03 |
|
Cingulate posted:And Memnet is in a totally different category from LSTMs. Yes, I know. They were two examples.
|
# ? Dec 23, 2016 01:04 |
|
Subjunctive posted:Yes, I know. They were two examples.
|
# ? Dec 23, 2016 02:38 |
|
I was asking a question in earnest -- did he not consider them to be meaningful advances. I wasn't asserting anything about what he should believe. Sometimes a question is just a question, not a Socratic feint.
|
# ? Dec 23, 2016 02:41 |
|
Subjunctive posted:I was asking a question in earnest -- did he not consider them to be meaningful advances. I wasn't asserting anything about what he should believe. Sometimes a question is just a question, not a Socratic feint.
|
# ? Dec 23, 2016 02:53 |
|
They're two things that I think of as being modern advances, that didn't seem to fit his description. I apologize for imperfect wording of my question. Perhaps I shouldn't post quickly from my phone in threads discussing such sensitive matters. Your accusation of bad faith posting seems disproportionate, and in bad faith itself. Memnets turn up more in the literature as "memory networks" it seems; Weston was primary on the original paper. I always heard them called memnets where I worked.
|
# ? Dec 23, 2016 03:01 |
|
The interesting point is that the obvious mathematical innovation - LSTMs - predates the actual impact on the field of AI - LSTM networks revolutionizing applied ML - by two decades. Can't really say anything about "memnets" because it's much too early to tell. Maybe they'll be a big thing in 10 years? Maybe not.
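For context on how old the "obvious mathematical innovation" is: the LSTM cell published by Hochreiter and Schmidhuber in 1997 is just a handful of gated vector updates. A minimal single-step sketch in numpy (random weights, purely illustrative, no training loop):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell.

    W stacks the four gate weight matrices (input, forget,
    cell candidate, output); b stacks the four biases.
    """
    z = W @ np.concatenate([x, h_prev]) + b
    n = h_prev.size
    i = sigmoid(z[0*n:1*n])    # input gate
    f = sigmoid(z[1*n:2*n])    # forget gate
    g = np.tanh(z[2*n:3*n])    # candidate cell state
    o = sigmoid(z[3*n:4*n])    # output gate
    c = f * c_prev + i * g     # new cell state (the "memory")
    h = o * np.tanh(c)         # new hidden state (the output)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W = rng.normal(size=(4 * n_hidden, n_in + n_hidden))
b = np.zeros(4 * n_hidden)
h = np.zeros(n_hidden)
c = np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):  # run a short input sequence
    h, c = lstm_step(x, h, c, W, b)
```

The math fits on a napkin; what arrived two decades later was the data and GPU throughput to train stacks of these cells on real corpora, which supports the point that the impact lagged the innovation.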
|
# ? Dec 23, 2016 14:11 |
|
Subjunctive posted:Do you not think that, f.e., LSTMs or memnets represent meaningful advances? I appreciate my post was a bit blasé. There are no doubt incredible advances in many fields over time. My main 2 points were that 1) every few years the leaders in technology tell us about a bright new future (usually because of a breakthrough they have invested in) that the media then picks up and sells until they tire from lack of "progress", and I don't see current hype as remarkably different. 2) I don't think the problems currently solved are dramatically different in flavour (although they are amazing!); they are "just" more sophisticated, and the question remains whether what we fail to describe accurately as human intelligence is just more sophisticated applications of what we know already or something a bit different altogether. I appreciate some may not like the fence I'm sitting on, and may claim that the same approach would suggest landing on the moon was more-or-less the same as combining a cannonball with "a bit more calculus", but I do feel the capabilities of many technological breakthroughs, particularly in this area, are often over-egged. Siggy fucked around with this message at 19:56 on Dec 23, 2016 |
# ? Dec 23, 2016 16:20 |
|
We are also in a cycle of people saying "X is coming in 10 years", then X coming out after ten years, then people saying "that isn't a big deal, they were already talking about that ten years ago".
|
# ? Dec 23, 2016 16:40 |
|
Oh is that how that works, well in that case in ten years I'll have a flying car that folds up into a briefcase like George Jetson.
|
# ? Dec 23, 2016 16:50 |
|
A Wizard of Goatse posted:Oh is that how that works, well in that case in ten years I'll have a flying car that folds up into a briefcase like George Jetson. Unless you grew up in like 1920 or something no one has ever promised you a flying car.
|
# ? Dec 23, 2016 18:18 |
VectorSigma posted:One of these neat subconscious mechanisms is the ability to hear a smile in someone's voice without seeing them. I was going to type a bunch of poo poo but then you went and did it better. Basically I think the fact that Blue Brain is already showing emergent 40-60 Hz synchronization tips us off that complexity/interconnectivity will naturally produce the potential processing power necessary (and within this century), but the development of a true ego tunnel, a real self-model, depends on "growing up human", as you say. I was once more optimistic about AI in the time-until-realization sense; now I'm more optimistic that whatever eventually exists will be truly awesome in a way we are presently incapable of appreciating.
|
|
# ? Dec 23, 2016 18:28 |
|
|
Owlofcreamcheese posted:Unless you grew up in like 1920 or something no one has ever promised you a flying car. Plenty of popular science magazines promised flying cars in the early 1990s. I vividly remember an article about Michael Jackson preordering one, even.
|
# ? Dec 23, 2016 18:55 |