withak
Jan 15, 2003


Fun Shoe
It probably knows that images similar to this are usually labeled “chair” on the internet. Test it on a 3d model of a chair or a shop fabrication drawing of one.


SCheeseman
Apr 23, 2003

TGLT posted:

Yes, but it does not know that a chair is for sitting, and if you rotate that chair it's not likely to identify it correctly - and god forbid you show it a completely new chair, because it'll have to start all over again. It does not understand the qualities of a chair. If you train it on chairs that have four legs, what happens when it gets hit with a Bauhaus chair or a stool? It's a blind association between pixels and a meaningless-to-it string value.

These models aren't terrible at coming to correct conclusions from new information. The AI model that my robot vacuum uses to scan a room and identify objects gets things right most of the time, despite it being unlikely that it was trained on any of my furniture specifically, or on the unique (very low) angle it photographs its environment from. Can they gently caress up? Of course, but so can humans.

I'm not sure if AI models could be considered intelligent, but that's mostly because intelligence is extremely hard to define, and even then I'm not sure where the line is drawn.

TGLT
Aug 14, 2009
The core issue is you're conflating identification in the sense of "an algorithm can associate a set of pixels with a string value" with meaningful understanding of a thing. Your roomba lacks a theory of chairs, and will never discover how crucial chairs are to subjugating the human race. e: I am also going to bet it gets that identification process wrong a lot more often than a human being does, in large part because it does not understand what things are.
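
To make the "pixels to string value" association concrete, here's a minimal sketch of the kind of label lookup both posters are describing, using an off-the-shelf torchvision classifier. The model choice and the chair.jpg input are illustrative assumptions, not anything from the thread:

code:

import torch
from torchvision import models
from PIL import Image

# Off-the-shelf ImageNet classifier; any pretrained model would do here.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop, normalize

img = Image.open("chair.jpg")          # hypothetical input photo
batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)

# The "meaningless-to-it string value": an index into the class list
# that shipped with the weights. No theory of sitting anywhere in sight.
label = weights.meta["categories"][logits.argmax().item()]
print(label)  # e.g. "rocking chair"

Everything the model "knows" about chairs lives in the learned weights that map pixels to that index.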

SCheeseman
Apr 23, 2003

TGLT posted:

The core issue is you're conflating identification in the sense of "an algorithm can associate a set of pixels with a string value" with meaningful understanding of a thing. Your roomba lacks a theory of chairs, and will never discover how crucial chairs are to subjugating the human race.
Given a more complex multimodal AI system built into a bipedal robot, one capable of identifying chairs, building a 3D mesh of one on the fly, simulating iterations of itself sitting on that chair, and then actually sitting on it - what more understanding does it need? Humans accept things at face value all the time; we're capable of higher understanding, but we usually don't bother.

Kyte
Nov 19, 2013

Never quacked for this
"Intelligence" is fundamentally the capacity to reason about things' thing-ness and connect those concepts to derive new information and solutions. This is a higher form of learning that most animals do not have. Many animals can learn behaviors they're shown, but that's training. AIs can do training.

TGLT
Aug 14, 2009
The higher understanding is the intelligence, is the thing. It does not matter what humans do most of the time - a human's or another animal's capacity to hold a concept in mind is a useful floor for defining intelligence. That's like trying to reduce language to just being able to string words together. Even if humans usually put zero thought into syntax, we are still capable of it in a way not otherwise demonstrated in other species. And similarly, I do not know of a single AI model that has demonstrated it can actually understand a concept.

evilweasel
Aug 24, 2002

Kyte posted:

"Intelligence" is fundamentally the capacity to reason about things' thing-ness and connect those concepts to derive new information and solutions. This is a higher form of learning that most animals do not have. Many animals can learn behaviors they're shown, but that's training. AIs can do training.

intelligence is so fundamentally undefinable with our current knowledge that the best suggested test for it is “can you tell the difference between this and a human”

SCheeseman
Apr 23, 2003

I guess what it comes down to is that AIs can't currently conceptualize? Given that's a necessary component of intelligence (I would agree with this), it's a pretty good argument against calling current generative AI truly intelligent.

SCheeseman fucked around with this message at 04:12 on Jan 28, 2024

Main Paineframe
Oct 27, 2010

Clarste posted:

I think the problem is more that intelligence has been rather poorly defined. The way it's commonly used basically means "thinks like a human" which isn't very helpful.

It's not very helpful, but that doesn't mean it's poorly defined. Sometimes stuff just isn't useful. "Intelligent" is a wholly arbitrary concept that we created mostly for philosophical reasons, not because it's actually useful for anything.

Kwyndig
Sep 23, 2006

Heeeeeey


LLMs can be described using the Chinese room model, which proves they're not intelligent, nor can they ever be. I'm getting tired of AI companies promising, or even threatening, that general AI is just around the corner when their technical progress is easily twenty years or more away from it, since they're currently funneling resources into a dead-end technology.

OctaMurk
Jun 21, 2013
LLMs might be the best they will ever be. As the internet fills up with LLM-generated stuff, eventually LLMs will literally just be regurgitating themselves.
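
A toy numerical illustration of that feedback loop (a sketch of my own, with a Gaussian standing in for an LLM's output distribution): repeatedly refit a model to samples drawn from the previous generation's fit and watch the distribution drift.

code:

import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0   # generation 0: the "human data" distribution
n = 1000               # finite sample per generation

for gen in range(10):
    data = rng.normal(mu, sigma, n)      # sample from the current model
    mu, sigma = data.mean(), data.std()  # refit on the generated data
    print(f"gen {gen}: mu={mu:+.3f} sigma={sigma:.3f}")

# Each refit loses a little tail mass to sampling noise, so sigma tends
# to drift downward across generations: the distribution narrows instead
# of staying put, which is the "regurgitating themselves" worry in miniature.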

feedmyleg
Dec 25, 2004
I think LLMs are as good at language generation as they need to be. Things like the GPTs feel like the start of the next stage, where it's about how to give LLMs guardrails and "intelligence" through nested, linked, and specialized LLMs. All the GPTs I've played with are still dumb as rocks, but it feels like that's where things are headed.

nachos
Jun 27, 2004

Wario Chalmers! WAAAAAAAAAAAAA!
Intelligence doesn't have to be explicitly defined to have a meaningful discussion about AI in the two primary contexts in which it's discussed most: replacing labor en masse and destroying civilization.

If we’re talking about replacing human labor, then it’s really a question of accuracy and how much accuracy actually matters in the jobs where humans may eventually be replaced. Maybe it’ll get better or maybe LLMs will regurgitate each other’s hallucinations endlessly, but neither of those have to do with intelligence. Intelligence is just a red herring in these discussions.

If we're talking about destroying civilization, is any LLM going to start turning into Skynet at any point? No, because it's not intelligent - it can't think about concepts, and therefore it can't do things like take your last 10 questions, deduce your motivations behind them, and start willfully misleading you down a path of its own choosing with deliberately constructed false information.

evilweasel
Aug 24, 2002

Kwyndig posted:

LLMs can be described using the Chinese room model, which proves they're not intelligent, nor can they ever be. I'm getting tired of AI companies promising, or even threatening, that general AI is just around the corner when their technical progress is easily twenty years or more away from it, since they're currently funneling resources into a dead-end technology.

llms can be trivially differentiated from a human, which is why they’re not intelligent

there is nothing at all that tells me you cannot be described by the Chinese room model, which is why that’s not why they’re not intelligent

BabyFur Denny
Mar 18, 2003

evilweasel posted:

llms can be trivially differentiated from a human, which is why they’re not intelligent

How, assuming all you have is a text interface?

Xand_Man
Mar 2, 2004

If what you say is true
Wutang might be dangerous


evilweasel posted:

there is nothing at all that tells me you cannot be described by the Chinese room model, which is why that’s not why they’re not intelligent

Huh?

Jose Valasquez
Apr 8, 2005

BabyFur Denny posted:

How, assuming all you have is a text interface?

I’m sorry, but I can’t provide the requested response as it violates OpenAI’s use case policy.

Tree Reformat
Apr 2, 2022

by Fluffdaddy

evilweasel posted:

llms can be trivially differentiated from a human, which is why they’re not intelligent

there is nothing at all that tells me you cannot be described by the Chinese room model, which is why that’s not why they’re not intelligent

Mm, this whole "is machine intelligence possible" debate runs right into the heart of the Problem of Other Minds: normally we just take it on faith that other humans (and maybe some animals) have minds, and don't worry about it further. There's no way around it here, though: to know whether machines can ever "think", we need to finally decode how we do, which I consider the truly exciting possibility.

mllaneza
Apr 28, 2007

Veteran, Bermuda Triangle Expeditionary Force, 1993-1952




feedmyleg posted:

I think LLMs are as good at language generation as they need to be. Things like the GPTs feel like the start of the next stage, where it's about how to give LLMs guardrails and "intelligence" through nested, linked, and specialized LLMs. All the GPTs I've played with are still dumb as rocks, but it feels like that's where things are headed.

I had Google's Bard argue with me and double down on its sources and the numbers it was using.

tl;dr I'm a WW2 navy nerd and this gets into the weeds. I couldn't convince it its numbers were off; it stuck with its sources and gave reasons why.

I was watching Drachinifel's video on naval history myths, and during the segment on why Japan went with the kamikaze strategy I decided to run the numbers. I gave Bard this prompt:

quote:

Analyze the effectiveness of Japanese air attacks on the US Navy in World War 2 in terms of aircraft lost per aircraft carrier sunk, 1000 tons of warship sunk, and 1000 USN lives lost. Break the results into periods covering the start of the war, up to the introduction of the F6F Hellcat, up to the Battle of the Philippine Sea, and then the kamikaze attacks through the end of the war. Display the results in tabular form. Show your sources.

I got a table with realistic numbers in it. The first three time periods looked good, but the period where kamikazes were the dominant strategy seemed way off. They were way more effective than conventional air attacks, but 5.5 kamikaze sorties per US aircraft carrier sunk is way off. So I challenged it, but it came back saying the key point wasn't aircraft carriers: the dozens of landing craft sunk were a major impact on the Allied war effort. And the numbers did support that; sorties per 1000 tons sunk does match up with reality.

tl;dr again: Bard schooled me in an area where I consider myself knowledgeable.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
The models right now are quite funny with how they will stick to their guns on some topics and then just fold on others.

It doesn't seem to matter if the answer they are defending is correct or not.

Electric Wrigglies
Feb 6, 2015

And outside the interesting theoretical discussion around reinventing the Turing test, the focus on the intelligence part of AI seems to be a defense of people's jobs: my job requires intelligence, AI doesn't have intelligence, therefore my job can't be done by AI.

Of course, most jobs have large portions that don't require any particularly high level of intelligence and are quite susceptible to automation.

A good example extends the "AI can't identify a chair if you give it a bad enough picture" point. Identifying good fruit in a bunch of bad fruit is a bit like that (it should be more difficult, really), and sorting fruit has been automated to a large degree. You can YouTube tomatoes being conveyed up over an edge while little bursts of air or a paddle whack the bad/unripe fruit out at hundreds per minute. The good fruit is then taken further and sorted into bins by individual weight.

Give an untrained human that task and they will gently caress it up or work exceedingly slowly. Once a human is trained up and well experienced, it is possible to outperform the machine on accuracy (unless the AI is doing some sort of internal look by X-ray for bugs, but that is uncommon; most are just camera-based, from what I understand).
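
For flavor, the per-fruit control loop such a sorter runs is about this simple. A sketch of my own: the classifier and air valve here are simulated stand-ins, not any real machine's API.

code:

import random

REJECT = {"unripe", "rotten"}

def classify(frame):
    # Stand-in for the camera-based model; real rigs run a small
    # classifier (or plain colour/shape thresholds) at hundreds of fps.
    return random.choice(["ripe", "ripe", "ripe", "unripe", "rotten"])

def fire_air_jet():
    # Stand-in for the valve pulse that knocks the fruit off the belt.
    print("pfft - rejected")

for frame in range(10):  # pretend camera frames, one fruit each
    label = classify(frame)
    if label in REJECT:
        fire_air_jet()
    else:
        print(f"frame {frame}: {label}, pass through")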

AI isn't limited to deterministic tasks, either. Playing chess is not like tic-tac-toe, where every path is well understood; play against opponents can't be purely based on percentage plays (because the opponent will pick up on the intent and play a counter designed to overcome it). It requires planning without certainty of the result, and after much development, humans can't be trained well enough to beat a good automated chess player these days.

Where AI is likely to find things really challenging is subjective tasks: "Chip me out an aesthetically pleasing human form from this hunk of marble - and it better not be a copy-paste of David!" But that is a matter of the beholder's fashion more than anything else (which is often just as much driven by the source of the art as the art itself - tell people AI carved this rock and all of a sudden their opinion of the piece will drop several points).

Ruffian Price
Sep 17, 2016

Electric Wrigglies posted:

which is often just as much driven by the source of the art as the art itself - tell people AI carved this rock and all of a sudden their opinion of the piece will drop several points
2009's From Darkness, Light, a solo piano album whose score was generated by latent semantic analysis of existing pieces and performed by real pianists, had a similar reception, with critics focusing on different aspects of the music depending on whether they understood the author to be human or machine. Sci-fi primed us well: the obvious parallel here is aleatoric music, with the score written based on random patterns (making generative art technically centuries old), but bring a computer into it and people start imagining a tin-man Mozart. The only enemy here is capitalism.
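
For anyone curious what latent semantic analysis actually is, here's a minimal sketch of the technique on text rather than music; the tiny corpus is made up for the demo.

code:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "sonata for solo piano in c minor",
    "nocturne for solo piano, quiet and slow",
    "string quartet, allegro first movement",
    "symphony for full orchestra, allegro finale",
]

# LSA: factor a term-document matrix with a truncated SVD and read off
# low-dimensional "concept" coordinates for each document.
tfidf = TfidfVectorizer().fit_transform(docs)
svd = TruncatedSVD(n_components=2, random_state=0)
coords = svd.fit_transform(tfidf)

# Documents with similar instrumentation land near each other in the
# latent space even when they share few exact words.
print(coords.round(2))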

Mercury_Storm
Jun 12, 2003

*chomp chomp chomp*
I sometimes try to get ChatGPT to do busywork, like organizing data or making tables of values that I don't want to type in manually, and it's not even really good at that because it's so loving lazy. It gives the absolute minimum output, which is often wrong, and then is like "ok here's something like 25% of what you asked for, as an example!"

dr_rat
Jun 4, 2001

Mercury_Storm posted:

I sometimes try to get ChatGPT to do busywork, like organizing data or making tables of values that I don't want to type in manually, and it's not even really good at that because it's so loving lazy. It gives the absolute minimum output, which is often wrong, and then is like "ok here's something like 25% of what you asked for, as an example!"

Stop trying to make ChatGPT sound so relatable!

The Artificial Kid
Feb 22, 2002
Plibble

Kwyndig posted:

LLMs can be described using the Chinese room model, which proves they're not intelligent, nor can they ever be. I'm getting tired of AI companies promising, or even threatening, that general AI is just around the corner when their technical progress is easily twenty years or more away from it, since they're currently funneling resources into a dead-end technology.

The Chinese Room thought experiment is a bullshit appeal to incredulity. The sleight of hand lies in declaring that we’ve laid out rules for the person in the room that produce responses that represent an understanding of Chinese. We are supposed to believe that those rules can do the task perfectly but at the same time are so rote and mechanical as to prevent any room-scale “understanding” of Chinese. And yet nothing provably happens in our brains that can’t be reduced to a rule being followed in a room (even if you needed enough of a room to simulate every atom in our brains).

The problem isn’t your concept of how limited AI is, it’s the idea that we are somehow doing something different and special in our heads. I would argue it’s more likely we are just doing something more complex and cross-domain integrated.

Baronash
Feb 29, 2012

So what do you want to be called?
How tech companies are using/selling ML (or whatever the current industry branding campaign is calling it) is probably relevant in this thread, but we have a whole thread to talk about AI, how it works, and how "intelligent" it is right here.

VikingofRock
Aug 24, 2008




Literally all of the ads on my kindle (already a tech nightmare IMO) are now for obviously AI-generated, search-engine-optimized books, which are all subtitled "cute fairy tale bedtime story for kids" and which all feature obvious AI-generated cover art. I don't even have kids. The Algorithm was never great for kindle recommendations, but it used to at least have the decency to recommend human-written fantasy erotica.

Professor Beetus
Apr 12, 2007

They can fight us
But they'll never Beetus

VikingofRock posted:

Literally all of the ads on my kindle (already a tech nightmare IMO) are now for obviously AI-generated, search-engine-optimized books, which are all subtitled "cute fairy tale bedtime story for kids" and which all feature obvious AI-generated cover art. I don't even have kids. The Algorithm was never great for kindle recommendations, but it used to at least have the decency to recommend human-written fantasy erotica.

Lmao, I just started getting this as well, despite reading nothing but dry political non-fiction for the past several years. I also have no children.

Mercury_Storm
Jun 12, 2003

*chomp chomp chomp*
Have you guys tried torrenting some books from your local library?

feedmyleg
Dec 25, 2004
Dry political non-fiction erotica?

Arivia
Mar 17, 2011

feedmyleg posted:

Dry political non-fiction erotica?

Ron Chernow, right this way.

VikingofRock
Aug 24, 2008




Mercury_Storm posted:

Have you guys tried torrenting some books from your local library?

This is actually how I get most of my books nowadays (i.e., through Libby), and then I read them on my Kindle. Kindles have ads on the lock screen unless you pay $20 to get ad-free reading.

withak
Jan 15, 2003


Fun Shoe
Best $20 I ever spent.

Nervous
Jan 25, 2005

Why, hello, my little slice of pecan pie.

VikingofRock posted:

This is actually how I get most of my books nowadays (i.e., through Libby), and then I read them on my Kindle. Kindles have ads on the lock screen unless you pay $20 to get ad-free reading.

I have a Kindle Touch from like 2010 that I really need to figure out how to get talking to stuff again. None of that poo poo on it.

Antigravitas
Dec 8, 2019

The salvation for the farmers:
I went with a Kobo instead specifically because Kindles jumped the shark years ago.

Mine displays the cover art of the book I am reading.

My next reader will have physical buttons again, though. gently caress touch interfaces while lying in bed holding the thing in one hand.

Kestral
Nov 24, 2000

Forum Veteran

Antigravitas posted:

I went with a Kobo instead specifically because Kindles jumped the shark years ago.

Mine displays the cover art of the book I am reading.

My next reader will have physical buttons again, though. gently caress touch interfaces while lying in bed holding the thing in one hand.

The Kobo Libra II has nice buttons and a touch interface, and weighs basically nothing. I love it dearly.

Osmosisch
Sep 9, 2007

I shall make everyone look like me! Then when they trick each other, they will say "oh that Coyote, he is the smartest one, he can even trick the great Coyote."



Grimey Drawer
I got a reMarkable 2 and am super happy with it. Turns out what I was missing from PDFs was the ability to scribble in the margins to keep me occupied.

SaTaMaS
Apr 18, 2003

The Artificial Kid posted:

The Chinese Room thought experiment is a bullshit appeal to incredulity. The sleight of hand lies in declaring that we’ve laid out rules for the person in the room that produce responses that represent an understanding of Chinese. We are supposed to believe that those rules can do the task perfectly but at the same time are so rote and mechanical as to prevent any room-scale “understanding” of Chinese. And yet nothing provably happens in our brains that can’t be reduced to a rule being followed in a room (even if you needed enough of a room to simulate every atom in our brains).

The problem isn’t your concept of how limited AI is, it’s the idea that we are somehow doing something different and special in our heads. I would argue it’s more likely we are just doing something more complex and cross-domain integrated.

Also, the thought experiment was meant to apply to expert systems from the 80s, which worked by rules of logic and symbol processing - completely different from how LLMs work. The relevant metaphor is the "China brain" thought experiment (https://en.wikipedia.org/wiki/China_brain), which is actually older than the Chinese Room thought experiment.

busalover
Sep 12, 2020

VikingofRock posted:

This is actually how I get most of my books nowadays (i.e., through Libby), and then I read them on my Kindle. Kindles have ads on the lock screen unless you pay $20 to get ad-free reading.

Or never enable Wi-Fi and you get a generic image on the lock screen. That's what I do.


withoutclass
Nov 6, 2007

Resist the siren call of rhinocerosness

College Slice

busalover posted:

Or never enable Wi-Fi and you get a generic image on the lock screen. That's what I do.

This move also allows you to keep a Libby book until you're done because it won't return the book until it can phone home.
