GABA ghoul
Oct 29, 2011

Cingulate posted:

We've been making massive gains in AI recently - the most noteworthy development being multilayer networks running on GPUs. Extrapolate the 2010-2015 progress linearly and we're basically looking at superhuman AI within a decade or three. The question is, does linear hold? So far, problem-solving capability does not scale linearly with resources; the resources required seem to grow more like exponentially with capability. That leaves open the possibility that we build a near-human-level AI running on a massive supercomputer in 2030, yet still can't build a 2x-human-level AI with all the world's resources in 2040, never mind a Skynet-level, orders-of-magnitude-beyond-any-human one.
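(A toy calculation, with completely made-up numbers, of how that works out: if the compute required grows exponentially with capability, each step up in "human units" multiplies the cost.)

```python
# Toy model, made-up numbers: compute cost grows exponentially with
# capability level c (measured in "human units"), cost(c) = base**c.
def cost(c, base=1e6):
    return base ** c

ratio = cost(2.0) / cost(1.0)
print(f"2x-human costs {ratio:.0e} times as much compute as human-level")
# With base=1e6, going from 1x to 2x human costs a million times more,
# so "buildable in 2030" and "infeasible in 2040" can both be true.
```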

"AI-go-foom" people are sold on the idea that once you have an AI that's around human levels, it will trivially be able to improve itself. But then, we already have 7 billion human level intelligences around, and they haven't really found a way to come up with anything smarter than humans. And we know with computers, it's not as simple as adding more Hz to make it faster; a quad core isn't 4x as fast.

On the other hand,

Being good at very specific tasks is certainly within the realm of possibility within decades or even years. We already have plenty of art-generating AI, and I doubt that training a bot to speak like a continental philosopher is harder than training one to filter spam with 50% better accuracy than today's filters.
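(For a sense of what "filter spam" means mechanically, here's a minimal naive Bayes classifier sketch in Python with scikit-learn; the corpus is obviously made up:)

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up corpus; a real filter trains on thousands of labeled emails.
texts = ["cheap pills buy now", "win free money now",
         "meeting moved to friday", "draft of the paper attached"]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["free pills now", "see you at the meeting"]))
# -> ['spam' 'ham']
```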

What's open to debate, and in fact very much doubtful, is a system that improves itself at near-linear, let alone superlinear, speed.

You're doing psychology research, right? I've always wondered: is there actually a correlation between intelligence (in humans) and life satisfaction/mental health?

I mean, could you hypothetically increase a human's intelligence by, say, doubling their working memory and analytical abilities, and still get a functioning, stable individual? Or would you get some depressed weirdo obsessively writing surrealist short stories about turning into a huge insect?


GABA ghoul
Oct 29, 2011

Hey guys, have you talked about the Blue Brain Project itt yet?

https://en.m.wikipedia.org/wiki/Blue_Brain_Project

It involves modelling hundreds of millions of somewhat biologically accurate neurons, arranged in configurations and interconnections similar to those in the rat/human brain, and just seeing what kind of cool emergent behaviour arises, without having to cut open some poor rat or human.
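(The Blue Brain models themselves are far more detailed, multi-compartment affairs, but for a flavour of what simulating a neuron even means, here's a minimal leaky integrate-and-fire neuron with generic textbook parameters:)

```python
# Minimal leaky integrate-and-fire neuron, plain Python. Parameters are
# generic textbook values, not Blue Brain's (their models are detailed
# multi-compartment ones).
dt, t_max = 0.1, 100.0                    # timestep and duration, ms
tau, v_rest = 10.0, -65.0                 # membrane time constant (ms), rest (mV)
v_thresh, v_reset = -50.0, -65.0          # spike threshold and reset (mV)
drive = 20.0                              # constant input, mV of steady-state push

v = v_rest
spikes = []
for step in range(int(t_max / dt)):
    # dv/dt = (-(v - v_rest) + drive) / tau  (leak toward rest + input)
    v += dt * (-(v - v_rest) + drive) / tau
    if v >= v_thresh:                     # threshold crossed: spike and reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in {t_max:.0f} ms")  # ~7 with these numbers
```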

Also, apparently you can now slow down or even reverse Alzheimer's in parts of your brain at home, with a strobe light:

https://www.theatlantic.com/science/archive/2016/12/beating-alzheimers-with-brain-waves/509846/
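(The study behind that article entrained brain activity with light flickering at 40 Hz, the gamma band, in mice. A toy sketch of what that stimulus signal looks like:)

```python
import numpy as np

# Toy version of the stimulus: a 40 Hz on/off light signal, sampled at 1 kHz.
freq_hz = 40.0
sample_rate = 1000.0
t = np.arange(0.0, 1.0, 1.0 / sample_rate)                   # one second
flicker = (np.sin(2 * np.pi * freq_hz * t) > 0).astype(int)  # 1 = light on

print(f"{flicker.sum()} 'on' samples out of {len(t)}")
# Each cycle lasts 1/40 s = 25 ms: 12.5 ms on, 12.5 ms off.
```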

GABA ghoul
Oct 29, 2011

Cingulate posted:

Just define "truly intelligent", and I'll gladly take care of the rest.

True intelligence is a quality of human intelligence. So just test for human intelligence.

Like, if a machine is statistically indistinguishable from a human being in all possible conversations, it has human intelligence and therefore true intelligence. :chord:
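(Half joking, but "statistically indistinguishable" can be made operational: train a discriminator on human vs. machine transcripts, and if its held-out accuracy stays near chance, i.e. 0.50, the machine passes. A toy sketch with placeholder transcripts:)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder transcripts; a real test needs many independent conversations.
human = ["what did you think of the match", "honestly no idea, you?"] * 10
machine = ["as a language model I found the match", "I do not have opinions"] * 10

texts = human + machine
labels = [0] * len(human) + [1] * len(machine)

X = TfidfVectorizer().fit_transform(texts)
acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
print(f"discriminator accuracy: {acc:.2f}  (0.50 = chance = indistinguishable)")
```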
