Cingulate
Oct 23, 2012

by Fluffdaddy
We've been making massive gains in AI recently - the most noteworthy developments being multilayer networks running on GPUs. If you extrapolate linearly (from the 2010-2015 trend), we're basically looking at superhuman AI within a decade or three. The question is, does linear hold? What we're observing so far is that the resources required for added problem-solving capability don't grow linearly, but more like exponentially. This actually leaves open the possibility that we can build a near-human-level AI running on a massive supercomputer in 2030, but still won't be able to build a 2x-human-level AI with all the world's resources in 2040, not to speak of Skynet-level, orders-of-magnitude-beyond-any-human AI.

"AI-go-foom" people are sold on the idea that once you have an AI that's around human levels, it will trivially be able to improve itself. But then, we already have 7 billion human level intelligences around, and they haven't really found a way to come up with anything smarter than humans. And we know with computers, it's not as simple as adding more Hz to make it faster; a quad core isn't 4x as fast.

On the other hand,

khwarezm posted:

I'm trying to gauge how far along this technology is exactly. It's hard to know if the hype coming from technological-singularity futurist tech fanboys actually has much merit to it. Still though, technology seems to be moving so fast these days.

I don't care about robots overthrowing humanity and plugging us all into the Matrix; I'm more curious about when we reach the point that an artificial intelligence could create meaningful artwork, or have a discussion on ethical philosophy that is just as sophisticated as, if not more than, anything we humans can do.
Being good at very specific tasks is certainly within the realm of possibility on a timescale of decades or even years. We already have plenty of art-creating AI, and I doubt an AI bot trained to speak like a continental philosopher is harder than training one to filter spam with 50% better accuracy than today's machines (a toy sketch of the spam case is below).

What's open for debate, or even very much doubtful, is something that improves itself at near-linear, or even superlinear, speed.
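For the spam case, the baseline really is that simple; a minimal sketch with scikit-learn, where the toy messages and labels are made up for illustration:

[code]
# A bare-bones spam filter: bag-of-words features + naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["win free money now", "cheap pills online",
         "meeting moved to 3pm", "draft attached, comments welcome"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["free money online"]))  # -> [1], i.e. spam
[/code]

Everything beyond that (n-grams, better features, more data) is refinement of the same very specific task.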

Cingulate
Oct 23, 2012

by Fluffdaddy

Raspberry Jam It In Me posted:

You are doing psychology research, right? I always wondered, is there actually a correlation between intelligence (in humans) and life satisfaction/mental health?
There is a positive correlation, but it's a bit complicated because you can't really separate how good your life is (which is already correlated with IQ) from how happy you are.

Raspberry Jam It In Me posted:

I mean, could you hypothetically increase a human's intelligence through something like doubling his working memory and analytical abilities?
Sadly, real human beings aren't RPG creatures, and working memory isn't as simple as RAM. There are people who measure as having more "slots" in their WM, and it's easy to develop skills to get better at memorizing specific things (programmers are better at memorizing code in languages they know), but in many ways, human memory is more of a process than a place.
A different way to look at it: the "shelf" you store memories on is infinitely large, but the reliability and speed with which you retrieve things, the likelihood of breaking a memory while retrieving it, and the chance of retrieving something completely irrelevant (possibly a false memory) are the real limits on human memory performance.

In contrast, computers have this awesome thing where they simply store everything under a specific handle and keep a perfect database of where everything is.
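In code terms, that's just a key-value store; a trivial sketch of the contrast:

[code]
# Computer memory as a key-value store: exact handle in, exact payload out.
# Retrieval never degrades, distorts, or invents the stored item.
memory = {}
memory["bus_ride_2016"] = "everyone staring at tiny supercomputers"
print(memory["bus_ride_2016"])        # perfect recall, every time
print(memory.get("bus_ride_1995"))    # a clean None, never a false memory
[/code]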

So what does this mean wrt. your question? Well, two things. First, humans have developed cultural tools to enlarge their memory, and you can see for yourself how completely everything changed once we had libraries and Google. If you could extend these cultural tools, you could again expect major changes.
Second, it's not quite clear what it would mean to make human memory larger. In principle, there should be room to at least improve the reliability, speed and precision of working memory processes, and within a certain range, we wouldn't necessarily expect major inherent drawbacks. It really depends on how you do it: humans already ARE able to improve the stability of their working memory, but it's a trade-off: you have to juggle 1. how precise your memory is, and 2. how well you respond to unexpected external input. With how our memory is set up right now, you can only push one axis so far before the other one suffers.
But nothing in principle should stand against re-engineering the whole thing to raise the limit. And even within the boundaries we currently have, it should be entirely feasible to place yourself at a different point on the trade-off, one that fits maybe less well with the jungle (with its tigers and stuff), but better with Stanford and tests and long, focused discussions and nights in the lab.
See: everyone taking Modafinil, which does basically that.

This is the vague kind of non-answer you'll always get from neuroscience people, I fear.

Cingulate
Oct 23, 2012

by Fluffdaddy

twodot posted:

How well does g predict the ability to render H264 video or the ability to store a trillion bytes of information in long term memory?
Really well, I would guess. It'd be rather nonlinear though.

The point about humans is that we have the same kind of brain as every other higher species when it comes to the things that would require a computer to do ridiculous number crunching (the highly specialized areas of perception and movement); and on top of that, we have the very unique ability to use one part of our minds for basically everything ever. It's super poo poo at that wherever you can compare it to a specialized system (i.e., your motor system is a lot better at solving complex nonlinear equations than your conscious mind is!), but it's so far the only thing in the world that can do all of these things.

All the amazing AI tools we have are super specialized, too. I could right now, given sufficient data, program on my MacBook a system that recognizes 8x8-pixel digits better than any human being. But nobody in the world can build a robot that reads a city map, takes the subway, walks up a flight of stairs AND explains trivial math problems to a 3rd grader. All of these in isolation, yes. Together, no.
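The 8x8 digits bit isn't hypothetical; scikit-learn ships exactly such a dataset, and a few lines get you a near-perfect classifier. A minimal sketch:

[code]
# Train a digit recognizer on scikit-learn's built-in 8x8 digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # 1797 images, 8x8 pixels, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(gamma=0.001)  # RBF-kernel SVM, as in the classic sklearn digits example
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")  # around 0.99
[/code]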

Cingulate
Oct 23, 2012

by Fluffdaddy

blowfish posted:

What do you mean by "massive gains"? How do you quantify how close an AI is to becoming superhuman (as you pointed out, giving the AI an arbitrary number of extra processors doesn't make it superhuman by itself)? How do you define superhuman?
I don't, really. What I mean by massive gains is, we've started improving our cognition with education and writing and organized science, and suddenly we're on the loving moon.

We are superhuman, if you will. AIs are, as has been repeatedly pointed out in here, in some areas already superhuman, but nobody can really see anything scalable that is as good at general, non-specialized cognition as humans are.

Edit: argh, I thought you were responding to a different post of mine.

By massive gains I basically mean that within a few years, beating humans at a bunch of well-defined tasks has gone from a pipe dream to a Google engineer's weekend job. I think a big symbol is the 2012 ILSVRC win by a deep conv net. Since then, everything has been deep learning everywhere, and now nobody is really surprised anymore by AlphaGo and self-driving cars.

And as I said, the interesting thing would be something that is as general and as capable as humans on a scalable architecture. If you have a massive supercomputer that is just as smart as a human being and whose performance grows logarithmically with added nodes, that doesn't really change much, but if you have something you can easily grow linearly, then things will very rapidly begin to change.

Now look at exponential growth and we have guaranteed Skynet by Tuesday, which is what I think a bunch of AI fanatics are talking about, but how realistic is superlinear growth given the limits of physics (which also matter for information processing)?
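To make the three regimes concrete, here's a toy comparison; all the constants are made-up illustrations, not measurements of anything:

[code]
# Toy comparison: "capability" as a function of node count under three
# scaling assumptions (all constants invented purely for illustration).
import math

for nodes in (1, 10, 100, 1000):
    log_cap = math.log10(nodes) + 1   # logarithmic: every 10x buys a constant bump
    lin_cap = nodes                   # linear: capability tracks hardware
    exp_cap = 2 ** (nodes / 100)      # superlinear: compounding self-improvement
    print(f"{nodes:>5} nodes -> log {log_cap:5.1f} | linear {lin_cap:5d} | exp {exp_cap:8.1f}")
# Logarithmic barely moves, linear changes the world on a schedule,
# exponential is "Skynet by Tuesday".
[/code]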

Cingulate fucked around with this message at 23:07 on Nov 27, 2016

Cingulate
Oct 23, 2012

by Fluffdaddy

Owlofcreamcheese posted:

That seems extremely reasonable. No one would expect there to be a physical device that can do every physical task. It seems perfectly fine to claim different designs for information processing are better or worse at different things. And no design is just usable for everything.
Human cognition is reasonably described as general. See Kahneman's Thinking, Fast and Slow, specifically the System 1 vs. System 2 distinction, for a fairly prominent discussion. The contrast is particularly striking when you compare it to AI, which is a bunch of systems, each of which excels at exactly one task, with most of those tasks falling under the umbrella of System 1.

Cingulate
Oct 23, 2012

by Fluffdaddy

Owlofcreamcheese posted:

Yeah, but what if I told you your brain was just a bunch of systems that excel at one task each and are terrible at anything else.
Well, then I would first tell you to actually read the book I told you to read before making grand claims about the brain. And then I'd say that this might be true on a local level, in the same sense that a pocket calculator is just a device for shuffling electrons around, but it's wrong as a non-trivial description of human cognition, where we find very clear differences between quasi-modular, highly specialized systems and a general, cross-modal global workspace.

Cingulate
Oct 23, 2012

by Fluffdaddy

Thug Lessons posted:

There are psychologists who believe that, primarily a subset of evolutionary psychologists, but it's certainly not an accepted paradigm. The psychology of intelligence in particular is great evidence against it.
The overlap between g-focused IQ researchers and evolutionary psychologists is pretty substantial though :D

Cingulate
Oct 23, 2012

by Fluffdaddy

Owlofcreamcheese posted:

To restate the actual point I'm making:

we are never going to hit "human level AI" because human isn't a level, it's a specific implementation.
We can imagine an AI that reaches or surpasses humans on every measurable dimension.

Cingulate
Oct 23, 2012

by Fluffdaddy

Willie Tomg posted:

The visual sensory organs of living creatures--of which humans are a middling sample--are extraordinarily acute, and to this day the only metric on which technology has reasonably approximated them is resolution, which is a function of display development, not computational development. Technology has actually actively regressed in its ability to display/capture color information (video has a latitude of roughly 3.5 f-stops in either direction, with black being uncorrectably black and white uncorrectably white), while silver halide records volumes of information through mechanical/chemical processes, of which only 20% is perceptible by the eye without further processing to bring it into the visible color range. If human visual processing is so deficient, why is it such a bastard of a hurdle when making robots that respond to an array of visual stimuli? Of the five senses you could literally not have chosen one in which humans have more of an advantage.


you've completely hosed up describing the capacities and practical implementation of any average human eyeball, i am so stoked to hear your opinions on the practical implementation of The Human Being, In General

Though of course, on hyperspecialized tasks (like rating pixelated images on a monitor), AIs match human beings. (That's not to dispute the actual point. No machine matches the occipital lobe on teraflops/watt, not even close.)

Reveilled posted:

If you're interested in this sort of stuff, I'd highly recommend the book Superintelligence by Nick Bostrom. It makes a pretty persuasive case that unless we're super careful about how we carry out AI research, we'll probably only ever make one human-level intelligence (before it kills us). It's really dry at the start, as it spends multiple chapters laying out the case for why human-level AI is likely to be developed at some point, but very interesting once it starts discussing the various scenarios that could play out depending on when, where, and how the first AIs are developed, and how different security precautions would or would not work.
Does Bostrom respond to what I brought up in here? Specifically, that his claims depend, from what I can tell, on linear or better scalability.

Cingulate fucked around with this message at 01:11 on Nov 28, 2016

Cingulate
Oct 23, 2012

by Fluffdaddy

Reveilled posted:

There's discussion of the process of developing a superintelligence, and that right now it's impossible to tell how close a human-level AI might be, and what obstacles might exist that could stall things indefinitely at a sub-human level. In terms of the AI improving itself, he has discussion of different kinds of "take-off" which depend on how easily a superintelligent AI can scale itself up, but makes the point that an AI does not necessarily need to be astronomically more intelligent than humans to pose a threat, depending on what other skills it possesses. Much of the book does deal with the fast takeoff scenario, but that's understandably because the book's central thesis is "prepare for the worst".
Does he ever show any indication of knowing there are hard physical limits on information processing?

Cingulate
Oct 23, 2012

by Fluffdaddy

Thug Lessons posted:

It's highly likely that the reason human brains which are good at facial recognition also tend to be good at speaking Spanish isn't an accident of evolution but something much more fundamental about intelligence.
AIs are, incidentally, also pretty good at Spanish and face recognition.

The interesting question is still about the stuff computers are really bad at.

Cingulate
Oct 23, 2012

by Fluffdaddy

Reveilled posted:

I'd say it's not really overly relevant to his point. Humans don't come anywhere near those limits, so you don't need to be able to process information faster than the speed of light to process information faster than the speed of human.
That's not the problem. The problem is that his fears rely on linear or even superlinear scaling, and that's simply not what we're currently seeing.


Owlofcreamcheese posted:

Your response is a good example. You do not like my 'flawed' reasoning, so you respond to it by describing your emotional state and by making some vague threat that I need to stop having that reasoning because it should affect my emotional state in that negative way. That is not a thing that a computer is going to level up and then just download from somewhere. That is a wicked human response that a computer could not have without a bunch of really, really weird programming that is unlikely to be feasible and probably not even desirable.
Not sure what you're trying to say, but Willie pointed out that you don't even know what human brains are capable of, so what business do you have talking about human-level AI until you do?

Cingulate
Oct 23, 2012

by Fluffdaddy

Thug Lessons posted:

Well yeah, we're right on the cusp of the end of development for integrated circuits. You'll have an answer to your question in 3-5 years.
No. Even while Moore's law holds, performance doesn't scale superlinearly. Of course, it's only gonna get worse, but Bostrom's fears depend on scalability.

Cingulate
Oct 23, 2012

by Fluffdaddy

Dolash posted:

The answer to the Chinese Room Puzzle and the Turing Test stuff, to me, is that it's irrelevant if what's going on is "true" intelligence or if it has the "spark" of life or is "self-aware" or "human-level" or any of the other terms we try to use for the ineffable difference between us and the toaster.

We'll have intelligence when we build something that we're not comfortable treating as not intelligent. If you want to get Cartesian you don't know for sure that anyone's intelligent except maybe yourself, you extend recognition to others because you have to and because it's hard not to when they demand it. If machines are made that are sufficiently autonomous and convincing then we might as well recognize them once it becomes socially awkward not to and leave the fine detail work to the philosophers. To that end, we'll make bigger gains in public perception with things that aren't even part of the core intelligence question, like improved natural language skills and better-engineered body-language.
Qualia aren't the same thing as information processing.

Cingulate
Oct 23, 2012

by Fluffdaddy

Dolash posted:

Qualia isn't real / isn't important / isn't provable in anyone outside yourself / can be done without for the sake of social graces if the inert thing in front of you insists on its autonomy. Take your pick.
None of the picks, and the main point was that your post was confused about the difference between qualia and information processing.

KOTEX GOD OF BLOOD posted:

You are making several of the bad assumptions here which Dreyfus describes, or at minimum the biological and psychological assumptions.
Alternatively,
http://faculty.ucr.edu/~eschwitz/SchwitzAbs/USAconscious.htm

Cingulate
Oct 23, 2012

by Fluffdaddy

Dolash posted:

If the question is "how do we get human-level qualia for an artificial intelligence?"
It's not.

Cingulate
Oct 23, 2012

by Fluffdaddy

Blue Star posted:

I don't think that's true. I think it's obvious that technological progress is slowing down and will probably stagnate in our lifetimes. Compare the first half of the 20th century to the second half: the first half saw way more progress. Cars, airplanes, electric power, nuclear energy, radio, telephones, television, x-rays, and much more all came out in the period between 1900 and 1950, give or take. But now look at the period from 1950 to 2000: there's way less progress. Yeah, computers got smaller and faster, we got video games and cell phones and internet stuff. Visual effects in movies got better. And... that's about it.

The only really significant progress has been in computer chips. But now even that is ending, since Moore's Law will stop soon if it hasn't already. All the heady progress in computers that has been made over the past few decades will now stop. Computers in 30 years will probably be barely any better than computers today. Video game graphics probably aren't going to get any better. We're probably never going to be able to emulate a mammalian brain, even that of a mouse, let alone a human. And other fields, such as medicine, will be even slower. Drug development has slowed down dramatically.

2016 is basically 1986 except we got tablets and cellphones and social media. I think 2046 will be like 2016 almost exactly, at least technology-wise.
Literally everything changed within the decade after the introduction of the iPhone, much like literally everything changed in the decade after the internet went mainstream in the 90s.
Next: an election run via twitter, self-driving cars, computers that talk to you, the end of coal in the West, ...

Cingulate
Oct 23, 2012

by Fluffdaddy

TheNakedFantastic posted:

In general the most profound changes the last couple of decades revolve around the internet and less tangible material shifts. We're living through one of the largest social and economic upheavals in human history but these changes are more subtle than a new electronic media player you can hold in your hand.
Go take a bus. It will be a completely different world from what the same ride would have been 20 years ago. Everyone is looking at a tiny supercomputer in their hands, communicating with somebody either a few miles away or possibly halfway around the globe. Everyone. This is extremely different from a bus ride in 1995.

Condiv posted:

a neural net which can only reflect the biases of its creators.
What

Cingulate
Oct 23, 2012

by Fluffdaddy

TheNakedFantastic posted:

Well that's true, but people are using those computers because of the internet.
Yes, and the internet depends on electricity and globalization, and it's revolutions all the way down.

Cingulate
Oct 23, 2012

by Fluffdaddy
The art discussion is really unproductive because it will have to be about what art, particularly meaningful art, is. Andy Warhol made soup cans art, and some people have a really hard time keeping the difference between craftsmanship and art clear. This isn't gonna lead anywhere.

Cingulate
Oct 23, 2012

by Fluffdaddy

Owlofcreamcheese posted:

I can neither translate nor be programmed in a natural language
Absolutely, man. Much better than any computer, in fact. I can control your brain and your behavior using certain words over a much wider spectrum than any computer.

Cingulate
Oct 23, 2012

by Fluffdaddy

A Wizard of Goatse posted:

A million monkeys banging on a million typewriters for a million years might eventually produce the complete works of Shakespeare, but it's gonna take a lot more man-hours to find the Shakespeare in all the gibberish than it did for Shakespeare to just write it.

There's no reason machines have to take the million-monkeys approach, but that's where they're at right now.
This is stupid. This is not how current AIs work.


Oh, but it's so easy. For example: two and two are five. Hey, I just controlled you into thinking something akin to "that's wrong". It's pretty hard to get an AI to be controlled by words like that!


RedFlag posted:

Ladies and Gentlemen, I give you the 2016 Presidential election.
Or any election, ever; elections are examples of people's brains being controlled by words.

Cingulate
Oct 23, 2012

by Fluffdaddy
Can there be a meaningless screwdriver?

Cingulate
Oct 23, 2012

by Fluffdaddy

BabyFur Denny posted:

In the age of computers innovation and tech has grown exponentially.
Google Translate, to the surprise of its engineers, is developing its own language:
https://research.googleblog.com/2016/11/zero-shot-translation-with-googles.html

Remember when we all thought it would still take many years until a machine could beat a Go champion?
Oh, and computers are already better at recognising cat pictures from the internet than we are.

In the year of Brexit and Trump it's kind of ridiculous to assume that computers won't be able to reach the same level of intelligence as the average human person. Our own brain is nothing more than a machine programmed by evolution.

I think the actual hard part will be for us to accept a machine's consciousness. Even right now we can only assume that the people around us, that we are interacting with, have a consciousness and are not just machines with very complex rulesets.
Machines have been making exponential progress in some areas, linear progress in others, and much slower progress in yet others. Language is a fascinating example. The best neural nets are very impressive: using extreme amounts of computational power, they can do very cool stuff. But you can do 90% of what they can do with 0.01% of the computational power, using a very simple stochastic algorithm (a sketch is below). How much will it take to get to 110% of what we currently have - another ~1000-fold increase in power? And what will it take to get the things to actually be as good at language as a 6-year-old? We don't know. Maybe we'll have a talking computer running on something on the order of today's supercomputers by 2030. Maybe not. What's your linear interpolation?

Do you have one? Do you have an idea of what's at stake?
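For a concrete sense of "very simple stochastic algorithm", here's a word-bigram Markov chain; the corpus is a made-up toy, but this family of models is the cheap baseline I mean:

[code]
# A word-bigram Markov chain: a dirt-cheap stochastic language model.
import random
from collections import defaultdict

def train(text):
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)  # record every observed successor of each word
    return chain

def generate(chain, start, length=10):
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # sample the next word at random
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train(corpus), "the"))
[/code]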

Cingulate
Oct 23, 2012

by Fluffdaddy

BabyFur Denny posted:

There are still many developments in this area where we're just at the beginning that are very promising. Deep Learning has just been open sourced, running your algorithm on GPUs is happening
The code for the ILSVRC-2012 win was immediately open sourced. Deep Learning was never closed source. And Deep Learning basically never ran on anything but GPUs.

BabyFur Denny posted:

As soon as you can parallelise a problem, it's basically solved.
That's a very strong claim.

BabyFur Denny posted:

We increased the computing resources of our cluster by 10x over the past two years, could easily do another 10x (for a total 100fold increase inperformance) by just throwing a lot of money at it, and only if we go for another 10x after that would be where we had to invest some effort into making it run.
Ok, but adding 10% performance on top of a closed-form solution takes 100,000% more computational power (numbers I pulled out of my rear end, but that's basically the difference between an SVM and a deep net).


Xae posted:

MR was old tech when Google rebranded DSNA as MapReduce.

You may have added 10x capacity in 5 years, but CPU hardware is seeing 10-20% gains a year right now. Moore's law is dead and counting on ever faster hardware is foolish.
Neural nets/deep learning happens on the CPU*. Everyone is looking at NVIDIA, and so far they're delivering.

Edit: GPU!

Cingulate fucked around with this message at 14:28 on Dec 3, 2016

Cingulate
Oct 23, 2012

by Fluffdaddy

Xae posted:

They are still running into the same barriers Intel is. They just started further back so they are hitting them later.
Oh, I didn't want to imply GPGPU would scale into infinity. Just that there's still a lot of growth to be had.

Subjunctive posted:

Inference happens on CPUs, but the learning is almost all GPUs these days. There are various other specialization approaches as well.
Argh, spelling mistake. I meant to write "GPU".

Cingulate
Oct 23, 2012

by Fluffdaddy

Subjunctive posted:

Do you not think that, e.g., LSTMs or memnets represent meaningful advances?
The math behind LSTMs is 20 years old (the update equations are below). It's correct that what's really changed is
- training data availability
- GPUs
- people actually putting it all together

That, however, is a massive practical change. It's not just hype. Sure, it's overhyped, but it's also powerful.

And Memnet is in a totally different category from LSTMs.
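For reference, the LSTM cell update in its now-standard form (the 1997 Hochreiter & Schmidhuber design; the forget gate was a slightly later addition by Gers et al.), written as LaTeX:

[code]
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)          % forget gate
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)          % input gate
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)          % output gate
\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)   % candidate cell state
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t    % cell state update
h_t = o_t \odot \tanh(c_t)                         % hidden state
[/code]

Nothing in there needs 2016 hardware; what it needed was data and GPUs.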

Cingulate
Oct 23, 2012

by Fluffdaddy
The interesting point is that the obvious mathematical innovation - LSTMs - predates the actual impact on the field of AI - LSTM networks revolutionizing applied ML - by two decades.

Can't really say anything about "memnets" because it's much too early to tell. Maybe they'll be a big thing in 10 years? Maybe not.

Cingulate
Oct 23, 2012

by Fluffdaddy

mdemone posted:

Blue Brain is already having emergent 40-60 Hz synchronization
How is it important that Blue Brain Project simulations show something vaguely around Singer's gamma range?

Pochoclo posted:

Plenty of popular-science magazines promised flying cars in the early 1990s. I vividly remember an article about Michael Jackson preordering one, even.
If Michael Jackson preordered one, that means totally legitimate scientists must have made totally serious promises.

Cingulate
Oct 23, 2012

by Fluffdaddy
I'm a bit confused about what the significance of flying car stories from the 80s is right now.

Cingulate
Oct 23, 2012

by Fluffdaddy

Elias_Maluco posted:

A whole lot of movies, cartoons, sci-fi books and comic books did picture flying cars during the 80s and 90s
Ok what does this mean?

Cingulate
Oct 23, 2012

by Fluffdaddy

A Wizard of Goatse posted:

there is exactly the same basis for the guarantee we will have flying cars any day now as the guarantee we'll have sapient computers any day now, they come from the same place, and the exact same brainless handwaving about "well things were different in 1800 than they are now therefore who knows what the future will bring??? probably robot girlfriends" works for both equally. They're both asinine, OOCC just wants a robot girlfriend enough more than he wants a sweet hovercar to ignore how asinine the first one is.
I have a friend who's a climate change denier. He once made the following argument:
- in the 90s, scientists said there'd be no ice left in the Arctic and Antarctic by 2015
- that's clearly not happened
- global warming is a hoax invented by the chinese to destroy US manufacturing

See my point?

Cingulate
Oct 23, 2012

by Fluffdaddy

A Wizard of Goatse posted:

it's almost like the rightness of what "people" or "scientists" or "they" predicted X years ago about the modern day says nothing at all to inform what's going to happen in the future, and is meaningless noise for idiots.
Hm. But surely some people's predictions are more relevant and promising than others'?

Cingulate
Oct 23, 2012

by Fluffdaddy

KOTEX GOD OF BLOOD posted:

On the other hand, Hubert Dreyfus, a philosopher with no training in technology whatsoever who probably has trouble setting up a projector for a class session, said that all the "guys who build the robot people" promising strong AI in the near term were full of poo poo back in the 70s and turned out to be right. So if anything it's more prudent to take things AI researchers say with a heaping mound of salt given their utter inability to deliver on any of the huge promises they have been making for decades.
And on the other other hand, that same Hubert Dreyfus was mostly skeptical about GOFAI, and was much more optimistic about neural networks (he wrote a book about how learning works with his brother, who later made important contributions to neural nets).

And what is today's AI like? Well, it's not Good Old Fashioned AI. It's neural nets.
So the Dreyfus argument can actually cut both ways.


Pochoclo posted:

Going by your posts, I say we close the thread, it's painfully clear that sentient AI is a reality
I don't get what people's confusion about sentience is in this context, but more importantly - what?


KOTEX GOD OF BLOOD posted:

Or useful. I mean, that's the main thing with flying cars - based on movies people expect a regular car that can take off and hover thanks to some magic inertialess drive. The "flying cars" we have now are more like planes with folding wings that can be driven up to 60mph on little taxiing wheels or w/e.

Except for the Moller Skycar, which is fake bullshit.
Future AI will be to what we imagine AI to be right now as Tesla and Uber are to flying cars.

Cingulate
Oct 23, 2012

by Fluffdaddy

KOTEX GOD OF BLOOD posted:

OK, but neural nets are not a good enough reason to be any less skeptical about AI given its past
But you do understand that this is just the opposite of the point Dreyfus (the guy you brought up) made in the 70s?

KOTEX GOD OF BLOOD posted:

without any basis in anything other than masturbatory sci-fi dreams.
It's an ignoramus et ignorabimus actually (or at least an ignoramus).

Cingulate
Oct 23, 2012

by Fluffdaddy

CommieGIR posted:

You figure the Israelis could come up with something better than Microsoft Tay
If they do, they won't show it; they'll only insinuate that they're playing in the big leagues.

"If you force us yet again to descend from the face of the Earth to the depths of the Earth — let the Earth roll toward the Nothingness." :hint: :hint:

Cingulate
Oct 23, 2012

by Fluffdaddy

Thalantos posted:

When it comes to AI, won't we reach a point where it all becomes philosophical anyway?

People still argue whether humans have free will or not, so once we reach a point that AIs start passing the turing test, won't whether they're "really" intelligent become rather a moot point?

My smart phone would be considered a mind blowing AI just a few decades ago.
I am perfectly sure there will come a point in the future where two perfectly intelligent and rational groups of humans will scoff at each other and consider the other party obviously, ridiculously wrong: one will believe that superhuman, truly conscious AIs walk amongst us, and the other will believe that contemporary AI only further shows machines will never think/feel/express themselves creatively.

Cingulate
Oct 23, 2012

by Fluffdaddy

Raspberry Jam It In Me posted:

Also, you can now slow down or even reverse your Alzheimer's in parts of your brain at home, with a strobe light
Let me make a prediction here, one that will only be corroborated by a few massively expensive studies in the future: no, you can't.

Cingulate
Oct 23, 2012

by Fluffdaddy

Owlofcreamcheese posted:

Yeah, if you have genetically engineered brain cells that are designed to fire when exposed to light. Which you do not.
You just have plenty of completely normal brain cells that are "designed" to fire when exposed to light.

Also, optogenetics will come to a brain near you very soon, or at least so everyone is hoping ...

Cingulate
Oct 23, 2012

by Fluffdaddy

Thalantos posted:

It seems to me we're really kinda close to AIs passing the Turing test
Depending on the specification, extremely stupid programs were already beating the test in the 60s (a stripped-down sketch of the trick is below).
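The canonical example is ELIZA from 1966; the whole trick is pattern matching plus canned response templates. A heavily stripped-down sketch:

[code]
# A heavily stripped-down ELIZA: regex patterns + canned response templates.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r".*", "Please go on."),  # catch-all keeps the "conversation" moving
]

def respond(line):
    line = line.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, line)
        if match:
            return template.format(*match.groups())
    return "I see."

print(respond("I feel ignored by machines."))  # Why do you feel ignored by machines?
[/code]

No understanding anywhere in there, and yet people in the 60s attributed intelligence to it.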
