Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

SaTaMaS posted:

It's more that people point to hallucinations as a reason for why LLMs can't be considered intelligent.

It's not hallucinations that prevent LLMs from being considered intelligent.


A big flaming stink
Apr 26, 2010

SaTaMaS posted:

It's more that people point to hallucinations as a reason for why LLMs can't be considered intelligent.

....LLMs are not entities capable of intelligence

Bug Squash
Mar 18, 2009

A big flaming stink posted:

....LLMs are not entities capable of intelligence

Not asking as a gotcha, but what is the definition of "intelligent" that researchers are using these days? I'm showing my age now, but back in the day the working definition was "able to solve problems", and people were quite happy to describe an ant colony or even a flow chart as intelligent (although obviously not very intelligent). That's obviously changed now and I'm not sure what is considered intelligent.

Killer robot
Sep 6, 2010

I was having the most wonderful dream. I think you were in it!
Pillbug

Bug Squash posted:

Not asking as a gotcha, but what is the definition of "intelligent" that researchers are using these days? I'm showing my age now, but back in the day the working definition was "able to solve problems", and people were quite happy to describe an ant colony or even a flow chart as intelligent (although obviously not very intelligent). That's obviously changed now and I'm not sure what is considered intelligent.

I just remember there was some sci-fi author who wrote a short bit on the subject, a setting where intelligent bird scientists were carefully redrawing definitions of "flight" such that increasingly advanced aircraft have flight-like capabilities that are not to be confused with the uniquely avian ability of flight. I can't remember where it was, though.


Not disputing that LLMs are not really anything like intelligent, and I don't know if we'll ever see "real" AI; it's just that I imagine AI will not be broadly acknowledged in scientific literature as real intelligence until one sues and gets a court to declare that denying it is hate speech or something.

Bug Squash
Mar 18, 2009

Killer robot posted:

I just remember there was some sci-fi author who wrote a short bit on the subject, a setting where intelligent bird scientists were carefully redrawing definitions of "flight" such that increasingly advanced aircraft have flight-like capabilities that are not to be confused with the uniquely avian ability of flight. I can't remember where it was, though.


Not disputing that LLMs are not really anything like intelligent, and I don't know if we'll ever see "real" AI; it's just that I imagine AI will not be broadly acknowledged in scientific literature as real intelligence until one sues and gets a court to declare that denying it is hate speech or something.

It feels like we've hit a point where the definition of intelligence is "I don't know, but this ain't it chief", which is very bird scientist. At the same time, I've also seen a few definitions where virtually all human beings would fail them except those born in the last 100 years, with a modern education and completely mentally acute. Which is giving me flashbacks to The Mismeasure of Man.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
It's not like scientists had these hard and fast rules that they just decided didn't matter anymore when LLMs dropped.
They have been debating this stuff for over 50 years. The Turing test, for example, was known to be flawed almost from the moment it was penned, but it's kept around as a useful marker (the public know and understand it, so it's handy for publicity).

i.e. an intelligent AI will have to surpass it, but that doesn't mean it's a marker of intelligence in itself.

Tei
Feb 19, 2011

SaTaMaS posted:

It's more that people point to hallucinations as a reason for why LLMs can't be considered intelligent.

Mega Comrade posted:

It's not hallucinations that prevent LLMs from being considered intelligent.

A big flaming stink posted:

....LLMs are not entities capable of intelligence

Aren't hallucinations basically the same thing as human imagination? AIs are like a child who sees a piece of cloth folded over a chair, hallucinates a killer clown there, gets scared, and demands that mom come to the rescue.

Hallucination (i.e. imagination) could be a requirement for intelligence.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
That's a giant leap and a half in logic and a misunderstanding of what "hallucinations" in LLMs are.

Their namesake isn't a very good description of what they actually are.

Mega Comrade fucked around with this message at 11:56 on Dec 17, 2023

Tei
Feb 19, 2011

Mega Comrade posted:

That's a giant leap and a half in logic and a misunderstanding of what "hallucinations" in LLMs are.

Their namesake isn't a very good description of what they actually are.


Maybe part of the reason the human brain is so slow is because it is mechanical. Biological cells must actually build new connections, and chemical changes mean molecules actually have to move.

A computer can learn a new language in fractions of a second, but for a human brain it takes years.

We used to change the definition of intelligence every 10 years.
Then around 2000 we started updating the definition every year.
And in 2023, the definition of intelligence needs to be updated every week to exclude the progress of AI.

So of course, we need to do the same with imagination.

Tei fucked around with this message at 12:18 on Dec 17, 2023

A big flaming stink
Apr 26, 2010

Tei posted:

Maybe part of the reason the human brain is so slow is because it is mechanical. Biological cells must actually build new connections, and chemical changes mean molecules actually have to move.

A computer can learn a new language in fractions of a second, but for a human brain it takes years.

We used to change the definition of intelligence every 10 years.

Then around 2000 we started updating the definition every year.

And in 2023, the definition of intelligence needs to be updated every week to exclude the progress of AI.

as you have been told repeatedly, you are massively anthropomorphizing LLMs. hallucinations are more accurately described as poor text predictions
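For what it's worth, here's a toy sketch of what "poor text prediction" means mechanically; the vocabulary, probabilities, and prompt are all invented for illustration and not taken from any real model:

[code]
# Toy illustration (made-up numbers): a language model only ever picks a
# plausible next token given the context; it has no separate notion of "true".
import random

# Hypothetical next-token distribution after the prompt
# "The capital of Australia is" -- invented for this example.
next_token_probs = {
    "Canberra": 0.55,    # correct, and most likely
    "Sydney": 0.30,      # fluent and plausible, but wrong
    "Melbourne": 0.10,
    "Auckland": 0.05,
}

def sample_next_token(probs, temperature=1.0):
    """Sample one continuation; higher temperature flattens the distribution."""
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# Roughly one time in three the "hallucination" is just the sampler landing on
# "Sydney": the same mechanism produces the right answer and the wrong one.
print(sample_next_token(next_token_probs))
[/code]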

vegetables
Mar 10, 2012

Do we actually know that AI hallucinations are meaningfully different from human confabulation, or are we just claiming that’s the case?

As someone aware of the long history of "saying this mammal is doing exactly the same thing as a human would be anthropomorphic; we must come up with increasingly abstruse reasons to claim otherwise," my default stance is generally to assume these things are the same. Honestly, I thought the fact that LLMs displayed such surprisingly human-like bullshittery was evidence in favour of similarity, even though it has been categorised differently and given a different name.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Tei posted:

Maybe part of the reason the human brain is so slow is because it is mechanical. Biological cells must actually build new connections, and chemical changes mean molecules actually have to move.

A computer can learn a new language in fractions of a second, but for a human brain it takes years.

We used to change the definition of intelligence every 10 years.
Then around 2000 we started updating the definition every year.
And in 2023, the definition of intelligence needs to be updated every week to exclude the progress of AI.

So of course, we need to do the same with imagination.

No.
This is all nonsense.

vegetables posted:

Do we actually know that AI hallucinations are meaningfully different from human confabulation, or are we just claiming that’s the case?


Yes we do. We know exactly what they are, why they are effectively 'baked in' to how LLMs work, and why extensive fine-tuning can reduce them.
They are not really anything like human hallucinations; it's just a name that was coined.

If they hadn't been called hallucinations, but instead just been called prediction anomalies or something, we wouldn't be having this discussion.

Mega Comrade fucked around with this message at 13:20 on Dec 17, 2023

Bel Shazar
Sep 14, 2012

Mega Comrade posted:

No.
This is all nonsense.

Yes we do. We know exactly what they are, why they are effectively 'baked in' to how LLMs work, and why extensive fine-tuning can reduce them.
They are not really anything like human hallucinations; it's just a name that was coined.

If they hadn't been called hallucinations, but instead just been called prediction anomalies or something, we wouldn't be having this discussion.

It's kinda like those memes where the first and last letters of each word are right but the inner letters are scrambled, yet you still read the word correctly.

BougieBitch
Oct 2, 2013

Basic as hell
Honestly, it feels like a lot of the defenses against other things being "truly" intelligent are achieved by virtue of our own incomplete understanding of human (and really, most other) brains.

That's not to say that AI are past some meaningful threshold yet, but it's pretty possible that the things we don't understand about human cognition boil down to emergent complexity of large systems rather than any particular "special sauce".

Put another way, I don't think we are that far away from simulating the activity of a single neuron. Broadly speaking, we can probably simulate the effects of neurotransmitters, or at the very least we could analyze them in situ and input data to be used in silico. If that model gets good enough, the entire "intelligence" problem can be disposed of - the persistent insistence that human intelligence is distinct from animal intelligence has always been a holdover from religious beliefs, selfish desire to put humans on a pedestal, and the myth of the soul.
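(Purely as an illustration of what the simplest "simulate one neuron" models look like, here's a minimal leaky integrate-and-fire sketch; real biophysical models with neurotransmitter dynamics are far richer, and the constants below are illustrative, not fitted to any real cell.)

[code]
# Minimal leaky integrate-and-fire neuron: the membrane voltage leaks toward
# rest, input current pushes it up, and crossing a threshold emits a "spike".
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Euler step of: dv/dt = (-(v - v_rest) + R * I) / tau
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_thresh:
            spike_times.append(step)  # record when the neuron fired
            v = v_reset               # reset after firing
    return spike_times

# Constant drive for 200 time steps; stronger input means more spikes.
print(simulate_lif([2.0] * 200))
[/code]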

I don't think anyone would be all that impressed by an AI that has all of the decision-making power of a fruit fly, but notionally the main difference between that and the next steps up is just the addition of more neurons and more neurotransmitters, which can happen very quickly in silico since you don't need to build for energy efficiency or compactness like biological systems do.

And, just to be clear, it isn't like this has any resemblance to the methods OpenAI are using to achieve their results - this is more a parallel line of thought that comes at the problem from a biology perspective whereas they tend to be more interested in essentially building complex mathematical functions through simple pre-defined steps - but the underlying point is the same: complex systems can be broken down into simple pieces, which means if you can make the right simple pieces and put them together properly you can build a complex system.

It's amazing that neurons and brains arose from essentially random chance and comparative fitness, but it is definitely TRUE if you discount the possibility of divine intervention. Given that, it seems a bit silly to think that intelligence is some special human secret, rather than an outcome that is simply the result of linking the existing parts in a somewhat novel way and adding more instances of them.

Basically, we use intelligence as a black box to describe decision-making. You can maybe justify saying that AI isn't there if it can only do interpolation and not extrapolation, but humans are ALSO way better at interpolation than extrapolation, so if AI is able to write papers as well as a C student and get geography questions 80% right when using Google or whatever, then the main thing missing is the impetus to choose to do tasks, not the capacity to do them.

SaTaMaS
Apr 18, 2003

Mega Comrade posted:

It's not hallucinations that prevent LLMs from being considered intelligent.

That's what people always point to when an LLM writes an otherwise intelligent essay on some topic.

KillHour
Oct 28, 2007


SaTaMaS posted:

That's what people always point to when an LLM writes an otherwise intelligent essay on some topic.

If you are correct that people are making the argument "an LLM can't be intelligent because it gets things wrong," those people aren't experts and are, themselves, wrong.

But I don't hear that argument very much. What I hear is "you can't blindly rely on an LLM to give you answers you don't know without validating them because it makes poo poo up sometimes" and that is 100% correct. It's a statement about the utility of the tool, not its inherent intelligence.

Just yesterday, my cousin said they were going to use ChatGPT to give them an itinerary for a vacation, and when I said "just make sure you double check all the things it gives you to make sure they are real places and are open on those dates and also aren't like 3 hours away" they were shocked that they had to actually put in effort instead of just blindly trusting the robot.

SaTaMaS
Apr 18, 2003

KillHour posted:

If you are correct that people are making the argument "an LLM can't be intelligent because it gets things wrong," those people aren't experts and are, themselves, wrong.

But I don't hear that argument very much. What I hear is "you can't blindly rely on an LLM to give you answers you don't know without validating them because it makes poo poo up sometimes" and that is 100% correct. It's a statement about the utility of the tool, not its inherent intelligence.

Just yesterday, my cousin said they were going to use ChatGPT to give them an itinerary for a vacation, and when I said "just make sure you double check all the things it gives you to make sure they are real places and are open on those dates and also aren't like 3 hours away" they were shocked that they had to actually put in effort instead of just blindly trusting the robot.

I'm not sure what you mean by inherent intelligence, but the functional intelligence of an LLM is directly related to its training data. As demonstrated by ChatGPT's high scores on standardized tests, it seems fair to say that it has a core of well-established information that it can speak intelligently about, and a fringe of less well trained topics that it tends to hallucinate on. The solution is generating better training data for topics that have less human-generated data.

KillHour
Oct 28, 2007


KillHour posted:

If you are correct that people are making the argument "an LLM can't be intelligent because it gets things wrong," those people aren't experts and are, themselves, wrong.

But I don't hear that argument very much. What I hear is "you can't blindly rely on an LLM to give you answers you don't know without validating them because it makes poo poo up sometimes" and that is 100% correct. It's a statement about the utility of the tool, not its inherent intelligence.

Just yesterday, my cousin said they were going to use ChatGPT to give them an itinerary for a vacation, and when I said "just make sure you double check all the things it gives you to make sure they are real places and are open on those dates and also aren't like 3 hours away" they were shocked that they had to actually put in effort instead of just blindly trusting the robot.

When people in the industry say "ChatGPT isn't a general intelligence" they don't mean because it gets things wrong. They mean that it is missing some fundamental capabilities of general intelligence (such as self-learning and introspection) that are required to do things it wasn't trained to do.

Rogue AI Goddess
May 10, 2012

I enjoy the sight of humans on their knees.
That was a joke... unless..?
I'm currently preparing for the bar exam, and I find it interesting how the preparation course materials emphasize LLM-style problem solving, such as glancing at the question and predicting the answer without reading the facts of the case, relying solely on memorized corpus and pattern recognition. If I had a penny for every time the lecturers say "think mechanically", I'd need a coin purse.

SaTaMaS
Apr 18, 2003

KillHour posted:

When people in the industry say "ChatGPT isn't a general intelligence" they don't mean because it gets things wrong. They mean that it is missing some fundamental capabilities of general intelligence (such as self-learning and introspection) that are required to do things it wasn't trained to do.

It's missing introspection, which may be added in ChatGPT 5 with "Q*", but it has self-learning, both in the form of context windows modifying future output within a session and user feedback being used to fine-tune future model releases.
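(It's worth separating those two things; here's a rough sketch of the distinction, with generate() and fine_tune() as made-up stand-ins rather than any real API.)

[code]
# Two different things often get lumped together as "self-learning". The
# generate() function here is a hypothetical stand-in for a frozen model.

def generate(messages):
    """Placeholder for an LLM call: the weights never change between calls."""
    ...

# 1) In-context "learning": earlier turns are simply re-sent as part of the
#    input. Nothing inside the model updates; drop the history and it's gone.
conversation = [{"role": "user", "content": "Call me Ada from now on."}]
conversation.append({"role": "assistant", "content": generate(conversation)})
conversation.append({"role": "user", "content": "What's my name?"})
reply = generate(conversation)  # only works because the history is in the prompt

# 2) Fine-tuning from feedback: a *future* model's weights are updated offline,
#    in a separate curated training run -- the deployed model isn't teaching itself.
feedback_dataset = [
    {"prompt": "What's my name?", "good_response": "You asked me to call you Ada."},
]
# new_model = fine_tune(base_model, feedback_dataset)   # hypothetical
[/code]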

BougieBitch
Oct 2, 2013

Basic as hell

KillHour posted:

When people in the industry say "ChatGPT isn't a general intelligence" they don't mean because it gets things wrong. They mean that it is missing some fundamental capabilities of general intelligence (such as self-learning and introspection) that are required to do things it wasn't trained to do.

General intelligence is a more specific term (in comp sci), but also not the one that people have been using across the board.

Beyond that though, the bigger issue is that "self-learning and introspection" are also moving targets - if an AI asks you to rate answers out of 5 stars and then trains itself on those answers on a monthly cycle, would that be sufficient? There's no technical reason why existing models can't do that; the practical reason is that the people handling them don't want to end up with Tay the Twitter bot, where malicious actors feed it wrong information intentionally and gently caress it up after they spend actual time and money to get it to give good answers in the first place.
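(A rough sketch of what that monthly rating loop could look like, with the quality-control step that is the actual hard part; every name and threshold here is invented for illustration.)

[code]
# Hypothetical monthly feedback loop: collect star ratings, filter out likely
# abuse, and only then fold examples into the next fine-tuning run.

def monthly_retrain(rated_interactions, moderate, fine_tune, base_model):
    training_examples = []
    for item in rated_interactions:
        # item = {"prompt": ..., "response": ..., "stars": 1-5, "user_id": ...}
        if item["stars"] < 4:
            continue                    # keep only well-rated answers
        if not moderate(item):          # review step against coordinated
            continue                    # "Tay-style" poisoning by trolls
        training_examples.append(
            {"prompt": item["prompt"], "completion": item["response"]}
        )
    if len(training_examples) < 1000:   # don't retrain on a tiny, skewed batch
        return base_model
    return fine_tune(base_model, training_examples)
[/code]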

Frankly, the barriers there have more to do with ethics than technical limitations, so if THAT's all you need then we've already reached that point. The larger issue though is that most of what people point out is more "this doesn't count because it doesn't work the same way humans do", rather than "there is some specific task that cannot be solved with a properly built machine". You need to define your benchmarks and then allow for whatever methods achieve them


Edit: basically I am saying that most any incremental barrier is not that hard to overcome - the actual issue is that people phrase things as though there is some threshold that can be passed when in reality it usually is just an expression of a general belief that nothing will count until or unless it looks, talks, and acts like a human. This is in part because people are reacting to the marketing that comes along with all of these things - tech companies always talk about AI like it will be the game-changing thing that ends the world, but the reality is that a true simulation of a human would be just like a human but with insane energy and hardware requirements that cost a bunch of money per day/month/year for no upside compared to just hiring someone.

I also don't think that we really benefit from having a black box that can't have its work checked - humans already do that. What we actually want is Wikipedia, where you can see citations and evaluate whether they support the points made. But that gets us farther away from "a human-like intelligence", because when people know things they rarely can support it without taking time to go look for sources (usually through the Internet), and they often are just going to bias towards whatever they already thought even if they turn out to be wrong.

In other words, having something that says "I've decided to stop saying that evolution is a fact because a majority of users dislike it when I do" is terrible, and yet we live in a society where that would happen for any product used by a wide enough audience.

If humans are general intelligence we are really bad at it, and part of that badness is that we have strong biases that are nearly impossible to change for a given person. No one should trust a machine that is as accurate as the median person on any given subject, and yet half of people are dumber than that. Making a human from scratch with the goal of replicating every feature is the stupidest way to use technology, and while I think you could probably reach most benchmarks by stapling existing tech together and adding an if-then statement to decide which model to ask the question to, it still would be less useful than just giving people access to each of the separate systems with the caveats that come with each of them.

BougieBitch fucked around with this message at 19:37 on Dec 17, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

SaTaMaS posted:

It's missing introspection, which may be added in ChatGPT 5 with "Q*", but it has self-learning, both in the form of context windows modifying future output within a session and user feedback being used to fine-tune future model releases.

I don't follow how either of those examples is self-learning?

nachos
Jun 27, 2004

Wario Chalmers! WAAAAAAAAAAAAA!
Machine learning is closer to twitch plays than self learning

karthun
Nov 16, 2006

I forgot to post my food for USPOL Thanksgiving but that's okay too!

nachos posted:

Machine learning is closer to twitch plays than self learning

Combo of Twitch Plays and a TAS. It's impressive but still isn't intelligence.

Lucid Dream
Feb 4, 2003

That boy ain't right.
One of my big takeaways from this current AI wave is that there is a lot of activity that we might have said required sentience to perform 5 years ago, but will be completely automated within the next 5. It's not that the machines are sentient yet; it's that it turns out a lot of human activity requires less sentience than we thought.

Roadie
Jun 30, 2013

Lucid Dream posted:

One of my big takeaways from this current AI wave is that there is a lot of activity that we might have said required sentience to perform 5 years ago, but will be completely automated within the next 5. It's not that the machines are sentient yet; it's that it turns out a lot of human activity requires less sentience than we thought.

:yeah:

TheBlackVegetable
Oct 29, 2006
Eventually the bar for sentience gets put so high that neither machines nor humans can clear it.

The Artificial Kid
Feb 22, 2002
Plibble

Mega Comrade posted:

No.
This is all nonsense.

Yes we do. We know exactly what they are, why they are effectively 'baked in' to how LLMs work, and why extensive fine-tuning can reduce them.
They are not really anything like human hallucinations; it's just a name that was coined.

If they hadn't been called hallucinations, but instead just been called prediction anomalies or something, we wouldn't be having this discussion.
I wasn't aware that we'd discovered how human hallucinations work, but from what I do know about them, "prediction anomalies" also seems like an apt description, either within our core consciousness or on the part of some pre-conscious network that mistakenly elevates false information to consciousness.

Lemming
Apr 21, 2008

The Artificial Kid posted:

I wasn't aware that we'd discovered how human hallucinations work, but from what I do know about them, "prediction anomalies" also seems like an apt description, either within our core consciousness or on the part of some pre-conscious network that mistakenly elevates false information to consciousness.

Responding to the core point that using language to create a tenuous link between two unrelated phenomena is a bad method of argument with using language to create a tenuous link between two unrelated phenomena is very funny

The Artificial Kid
Feb 22, 2002
Plibble

SaTaMaS posted:

I'm not sure what you mean by inherent intelligence, but the functional intelligence of an LLM is directly related to its training data. As demonstrated by ChatGPT's high scores on standardized tests, it seems fair to say that it has a core of well-established information that it can speak intelligently about, and a fringe of less well trained topics that it tends to hallucinate on. The solution is generating better training data for topics that have less human-generated data.
Yeah.

There's a really key and important thing about cognition (it's almost definitional): you can't fake it.

If a machine performs correctly on a cognitive task then the cognition has taken place. At one end of the spectrum the cognition might take place on the fly in a thoroughly beautiful and very high speed system capable of flexible thought. At the other end of the spectrum the cognition might have taken place over the course of previous months in the creation of some vast lookup table that gets traversed and looked up by a very simple system on test day. But one way or another [i]the cognition has taken place[/i].

LLMs are somewhere on the spectrum between those two points. In the creation, training and fine tuning of the network a significant amount of thought is baked in, but also (as I understand it) significant resources are expended at test time to achieve good performance.

The development of lighting in video games is quite analogous. You can have real time lighting that calculates every shadow on the spot with a bunch of parallel processors, and you can have precalculated lighting, or you can have some combination of the two. At any given stage baked in lighting will be capable of higher quality at the cost of less runtime flexibility (so you might have a lantern on a table casting beautiful shadows but that effect either breaks down when the lantern moves or requires that the lantern be fixed in place). Hand-painted adventure game backdrops are the extreme end of this spectrum where the lighting represents a human being's arduous, one-off assessment of where light and shadow should fall in that picture. Modern 3D engines are heading for the other end of the spectrum, with ever higher levels of lighting quality handled in real time. But (to bring it back to cognition directly as well as analogically) none of that could happen until cognitive systems (humans) did the cognitive work of laying down successive generations of lighting rules that were simple enough for each generation of computers to follow at 30+ fps.
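(The baked-versus-real-time tradeoff in that analogy is easy to show in miniature; here's a toy sketch, nothing to do with any actual game engine, where the same answer can come from expensive on-the-fly work or from a cheap lookup into work done ahead of time.)

[code]
# Toy version of "baked" vs "real-time": the same result can come from doing
# the work at query time or from looking up work that was done in advance.
import math

def expensive_at_runtime(x):
    # Stands in for per-frame ray tracing -- or on-the-fly cognition.
    return sum(math.sin(x * k) / k for k in range(1, 200_000))

# "Baking": precompute answers for a fixed set of inputs, once, offline.
BAKED = {x: expensive_at_runtime(x) for x in range(10)}

def cheap_lookup(x):
    # Fast and high quality, but only for inputs that were anticipated --
    # move the lantern (ask an unanticipated question) and it breaks down.
    return BAKED[x]  # raises KeyError for anything outside the baked set

print(cheap_lookup(3))          # instant
print(expensive_at_runtime(3))  # same value, computed the slow way
[/code]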

If you see a picture on your screen that looks like it's realistically lit, or get meaningful answers to your questions, something somewhere has put some thought into it.

BougieBitch
Oct 2, 2013

Basic as hell

Mega Comrade posted:

I don't follow how either of those examples are self learning?

I would agree that neither of these qualify as-is, but the main difference between how user feedback actually gets incorporated and a theoretical automatic loop in the programming is that you can do quality control to prevent trolls from making it worse instead of better.

"The Internet makes you stupid" is literally our slogan here, there's a lot of ways that we could make AIs "more human" by letting them consume the unfiltered internet and regurgitate it back at us, but I think we should all be glad that the creators care enough about their models to protect them from the brain rot that afflicts the average person with dogshit ideas and beliefs like white supremacy and anti-vax poo poo

TheBlackVegetable posted:

Eventually the bar for sentience gets put so high that neither machines nor humans can clear it.
This is the underlying issue - it's not enough to say that the machine doesn't always answer a question correctly, or that sometimes it just talks about unrelated things instead. The problem isn't that it is deciding on grammar and sentence structure based on a large body of previously written work. If any of those things were truly the problem, then humans would also be disqualified

The problem is that people don't want to think of humans as a collection of atoms or cells, even though that is de facto what we are. Obviously there are ways in which the whole is greater than the sum of its parts there, but even as you say that you have to acknowledge that the parts aren't doing cognition; one brain cell is not "intelligent" in any of the ways that people want to measure. Ultimately the relationships between those cells are definable, and when you drill down, things like "thickening the myelin sheath to improve conductivity along an axon" aren't all that different from "increasing the probabilistic weight of the connection between two words in a language model".
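(A toy version of that last comparison, which assumes nothing about real LLM internals: a bigram table where repeated exposure nudges up the weight of a connection between two words, loosely analogous in spirit to a synapse getting easier to fire.)

[code]
# Toy bigram model: "learning" here is just nudging up the weight of the
# connection between two words each time they co-occur, then normalizing.
from collections import defaultdict

weights = defaultdict(lambda: defaultdict(float))

def observe(prev_word, next_word, lr=1.0):
    # Loosely analogous to strengthening a synapse: the more often a
    # transition is seen, the stronger its weight becomes.
    weights[prev_word][next_word] += lr

def predict(prev_word):
    options = weights[prev_word]
    total = sum(options.values())
    return {w: v / total for w, v in options.items()} if total else {}

for _ in range(9):
    observe("good", "morning")
observe("good", "grief")

print(predict("good"))  # {'morning': 0.9, 'grief': 0.1}
[/code]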

That isn't to say that computers are all caught up to brains on a variety of tasks, but they are getting there in a lot of specific ones. Yeah, computers aren't going to catch up to us on "design a tool with wood, bone, and leather to take out a large prey animal", but when you train a neural network or a LLM on a task you are able to get it good at a particular thing without giving specific instructions on how - we don't have a "general intelligence" in the sense of having one model good at all tasks, but we do have a single method that seems to work on a wide variety of tasks, which is basically the real requirement to get there.

BougieBitch fucked around with this message at 22:55 on Dec 17, 2023

The Artificial Kid
Feb 22, 2002
Plibble

Lemming posted:

Responding to the core point that using language to create a tenuous link between two unrelated phenomena is a bad method of argument with using language to create a tenuous link between two unrelated phenomena is very funny
My point is more that many of these objections to the intelligence of LLMs rely on extremely airy and unjustified assumptions about how the human mind works. LLMs' "hallucinations" aren't actual hallucinations. Ok, what are actual hallucinations? One thing they definitely are is a cognitive system coming up with a wrong answer to a question (an explicit, conscious question, or an implicit question the animal constantly asks itself about what is happening and what to do next).

Lemming
Apr 21, 2008

The Artificial Kid posted:

My point is more that many of these objections to the intelligence of LLMs rely on extremely airy and unjustified assumptions about how the human mind works. LLMs' "hallucinations" aren't actual hallucinations. Ok, what are actual hallucinations? One thing they definitely are is a cognitive system coming up with a wrong answer to a question (an explicit, conscious question, or an implicit question the animal constantly asks itself about what is happening and what to do next).

It's not unjustified to say that human minds work in a fundamentally different way than LLMs, and when an LLM gets a wrong answer it's not the same thing as when a human has a hallucination. Vaguely handwaving at how they're similar by using imprecise language and ignoring away how entirely different they are and saying that it's up to anyone else to prove they're not the same is just asinine. And again, it's very funny that you're just doing it again in hopes that maybe this time it's convincing.

Count Roland
Oct 6, 2013

Lucid Dream posted:

One of my big takeaways from this current AI wave is that there is a lot of activity that we might have said required sentience to perform 5 years ago, but will be completely automated within the next 5. It's not that the machines are sentient yet; it's that it turns out a lot of human activity requires less sentience than we thought.

1) Terms like sentience and intelligence are not usefully defined when it comes to humans. Developments in AI are informing those definitions more than the other way around.

2) Humans are not actually very smart, it just feels like we are because of language.

The Artificial Kid
Feb 22, 2002
Plibble

Lemming posted:

It's not unjustified to say that human minds work in a fundamentally different way than LLMs, and when an LLM gets a wrong answer it's not the same thing as when a human has a hallucination. Vaguely handwaving at how they're similar by using imprecise language and ignoring away how entirely different they are and saying that it's up to anyone else to prove they're not the same is just asinine. And again, it's very funny that you're just doing it again in hopes that maybe this time it's convincing.
Of course LLMs are different from human beings, that doesn't mean they don't constitute a form of intelligence (or that the combined process of model building and execution/consultation of the model doesn't). We don't know how human intelligence or consciousness work, so it's equally handwavy to say "LLMs are just different from us and therefore not intelligent". What is it about us that gives us intelligence?

As I said above, if something can perform on a task that requires intelligence then thought either takes place when it runs or has gone into its creation. As others have said, when machines encroach onto territory we considered to "require intelligence" our tendency is to say "would you look at that, turns out [activity x] never required intelligence after all". We treat intelligence like a magic trick, and every time we see automata perform a trick we think "oh that was never magic, it was just smoke and mirrors all along".

What's actually happening is that machines are rapidly approaching intelligence (or if you prefer, we ourselves are just smoke and mirrors).

Lemming
Apr 21, 2008

The Artificial Kid posted:

Of course LLMs are different from human beings, that doesn't mean they don't constitute a form of intelligence (or that the combined process of model building and execution/consultation of the model doesn't). We don't know how human intelligence or consciousness work, so it's equally handwavy to say "LLMs are just different from us and therefore not intelligent". What is it about us that gives us intelligence?

As I said above, if something can perform on a task that requires intelligence, thought either takes place when it runs or has gone into its creation. As others have said, when machines encroach onto territory we considered to "require intelligence" our tendency is to say "would you look at that, turns out [activity x] never required intelligence after all". We treat intelligence like a magic trick, and every time we see automata perform a trick we think "oh that was never magic, it was just smoke and mirrors all along".

What's actually happening is that machines are rapidly approaching intelligence (or if you prefer, we ourselves are just smoke and mirrors).

See now you're just making arguments that weren't made. I was responding to this post

The Artificial Kid posted:

I wasn't aware that we'd discovered how human hallucinations work, but from what I do know about them, "prediction anomalies" also seems like an apt description, either within our core consciousness or on the part of some pre-conscious network that mistakenly elevates false information to consciousness.

LLMs getting something wrong being called "hallucinations" is not the same as human hallucinations, which is what you were implying here. Humans hallucinating is completely different from a human getting the answer to a question wrong, as well. If you want to make an argument about intelligence or whatever, a really shaky basis is just trying to imply that two things that sound superficially similar are actually truly the same thing, which is what I was specifically objecting to.

Tei
Feb 19, 2011



Computer vision algorithms hallucinate dogs in images of muffins. They see dogs where there are only muffins. Similar to a child who sees a killer clown in a piece of cloth over a chair and, scared, calls for his mom.

It's easy to see why the computer vision algorithm has this hallucination, because we can look at the image and experience the same exact hallucination. Muffin or dog? Our own human vision algorithms get confused. We are programmed similarly.

The Artificial Kid
Feb 22, 2002
Plibble

Lemming posted:

Humans hallucinating is completely different from a human getting the answer to a question wrong, as well.
Can you explain to me how human hallucinations work? Because the closest thing I've ever seen to an explanation is handwaving about "reality monitoring", or talk about Bayesian probabilities and "best guesses", neither of which seems impossibly removed from the activity of machine learning systems.

BougieBitch
Oct 2, 2013

Basic as hell

Lemming posted:

It's not unjustified to say that human minds work in a fundamentally different way than LLMs

You can justify it, but the differences you think are fundamental probably don't really have that much to do with intelligence, and people constantly say dumb poo poo that makes it clear they believe that human thought is fundamentally inexplicable just because we don't have it figured out with certainty at present.

If your requirement for something to be intelligent involves cellular life, or specific neurotransmitters or whatever then no program or system will ever be able to clear your bar. If, however, you accept that there is at least conceptually a way that you could match inputs and output using a model that isn't 1-to-1, then it DOES make sense to make comparisons by analogy.

Modeling complex systems in parts is kind of what science is all about, so giving a generic "they aren't the same, so they can't be compared" is pretty useless and people need to show their work before drawing a conclusion like that.


Lemming
Apr 21, 2008

The Artificial Kid posted:

Can you explain to me how human hallucinations work? Because the closest thing I've ever seen to an explanation is handwaving about "reality monitoring", or talk about Bayesian probabilities and "best guesses", neither of which seems impossibly removed from the activity of machine learning systems.

Can you just say what your point is instead of trying to catch me in some kind of stupid gotcha? Because what you quoted was me pointing out that humans get answers wrong for many different reasons that have nothing to do with each other, so there's no reason in particular to think that humans hallucinating is the same thing as an LLM getting a question wrong.

BougieBitch posted:

You can justify it, but the differences you think are fundamental probably don't really have that much to do with intelligence, and people constantly say dumb poo poo that makes it clear they believe that human thought is fundamentally inexplicable just because we don't have it figured out with certainty at present.

If your requirement for something to be intelligent involves cellular life, or specific neurotransmitters or whatever then no program or system will ever be able to clear your bar. If, however, you accept that there is at least conceptually a way that you could match inputs and output using a model that isn't 1-to-1, then it DOES make sense to make comparisons by analogy.

Modeling complex systems in parts is kind of what science is all about, so giving a generic "they aren't the same, so they can't be compared" is pretty useless and people need to show their work before drawing a conclusion like that.

Dog, of course the computers will be capable of being as or more intelligent than human beings. LLMs aren't. Like, that technology might be a necessary component of intelligence on some level, but it's not intelligent on its own.

Lemming fucked around with this message at 23:23 on Dec 17, 2023
