reignonyourparade
Nov 15, 2012
"Being able to confidently state human hallucinations and AI-prediction-anomolies are fundamentally different should probably involve actually having a confident explanation of how human hallucinations work" is a pretty reasonable stance to me. So while the people asking you how human hallucinations might be trying to catch you in a gotcha, it's a very reasonable gotcha.

Lemming
Apr 21, 2008

reignonyourparade posted:

"Being able to confidently state human hallucinations and AI-prediction-anomolies are fundamentally different should probably involve actually having a confident explanation of how human hallucinations work" is a pretty reasonable stance to me. So while the people asking you how human hallucinations might be trying to catch you in a gotcha, it's a very reasonable gotcha.

No, it's not. This goes back to the original point that using the word "hallucination" has made everyone discuss this situation in a really dumb way. A great example is the fact that most people don't hallucinate (because hallucinations are a manifestation of some kind of mental illness, where your brain isn't working the way it's supposed to), while ALL LLMs "hallucinate" necessarily as a function of how they work (because they're text predictors and don't have any "understanding" of the underlying truth of a situation).
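To make the "text predictor" point concrete, here's a toy sketch (hand-made probabilities, nothing like how a real model is actually implemented) of what "predict the next word, never check the facts" looks like:

```python
# Toy next-word sampler: generation is just repeated sampling from
# "what word usually comes next?", with no step that checks facts.
import random

# A tiny invented "model": next-word probabilities made up for illustration.
next_word_probs = {
    "the":       {"capital": 1.0},
    "capital":   {"of": 1.0},
    "of":        {"australia": 1.0},
    "australia": {"is": 1.0},
    "is":        {"sydney": 0.7, "canberra": 0.3},  # the likelier continuation is false
}

def generate(prompt_word, steps=5):
    words = [prompt_word]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # Sample in proportion to probability; nothing here consults the world,
        # so "sydney" wins most of the time simply because it's the common guess.
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the"))  # usually "the capital of australia is sydney"
```

The fluent-but-false output isn't the sampler malfunctioning; it's the sampler doing exactly what it was built to do.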

The Artificial Kid
Feb 22, 2002
Plibble

Lemming posted:

No, it's not. This goes back to the original point that using the word "hallucination" has made everyone discuss this situation in a really dumb way. A great example is the fact that most people don't hallucinate (because hallucinations are a manifestation of some kind of mental illness, where your brain isn't working the way it's supposed to), while ALL LLMs "hallucinate" necessarily as a function of how they work (because they're text predictors and don't have any "understanding" of the underlying truth of a situation).
So referring you back to what I and others have said, about how when machines do something we reclassify that task as actually never having required intelligence (or "understanding"), would it be fair to say that whatever "understanding" is, it turns out we never needed it to write passable college essays about something? If it seems unbelievable that a human could write such an essay without at least some "understanding", would you argue that the person's "understanding" is just an epiphenomenon of the essay writing process? What is "understanding"?

Quixzlizx
Jan 7, 2007
Hallucinations are also an extremely tiny subset of the set of instances where humans consider false information to be true, so there should be some sort of burden to explain why an LLM providing false information specifically maps to that minuscule percentage.

The Artificial Kid
Feb 22, 2002
Plibble

Quixzlizx posted:

Hallucinations are also an extremely tiny subset of the set of instances where humans consider false information to be true, so there should be some sort of burden to explain why an LLM providing false information specifically maps to that minuscule percentage.
To be clear, I'm not saying it does, just that I don't think it's trivially obvious that it doesn't.

A person might think it does because the erroneous responses that LLMs give often have the quality of a hallucination, delusion or confabulation. They often go beyond the kinds of simple factual errors that people commonly make (like giving the incorrect one of two commonly believed versions of a historical fact) and instead seem to "make up" facts or entities out of available bits and pieces. That doesn't mean that what they're doing is the same thing a human does when they confabulate, but it is interesting. It feels different from a database retrieving a false fact because it contains false data.

MixMasterMalaria
Jul 26, 2007
LLM(ao), but this thread would be a lot more interesting to read if participants would make arguments or make explicit the facts and logical structures supporting their positions.

Lemming
Apr 21, 2008

The Artificial Kid posted:

So referring you back to what I and others have said, about how when machines do something we reclassify that task as actually never having required intelligence (or "understanding"), would it be fair to say that whatever "understanding" is, it turns out we never needed it to write passable college essays about something? If it seems unbelievable that a human could write such an essay without at least some "understanding", would you argue that the person's "understanding" is just an epiphenomenon of the essay writing process? What is "understanding"?

This is the least interesting conversation I can imagine having. I'm frustrated by bad arguments which is why I was responding to the "hallucination" digression. I could not care less about whatever this is

Seph
Jul 12, 2004

Please look at this photo every time you support or defend war crimes. Thank you.
This post from last page seems pretty apt:

Mega Comrade posted:

If they hadn't been called hallucinations, but instead just been called prediction anomalies or something, we wouldn't be having this discussion.

Xand_Man
Mar 2, 2004

If what you say is true
Wutang might be dangerous


MixMasterMalaria posted:

LLM(ao), but this thread would be a lot more interesting to read if participants would make arguments or make explicit the facts and logical structures supporting their positions.

Something Awful Forums > Debate & Discussion > Let's chat about AI: For the love of god please read up on the Chinese room

The Artificial Kid
Feb 22, 2002
Plibble

Seph posted:

This post from last page seems pretty apt:
Not really; I'm arguing that human beings also have things that could be called "prediction anomalies". That's almost exactly what's meant in a Bayesian approach to perception and hallucination: https://academic.oup.com/brain/article/142/8/2178/5538604

To me it's an interesting and important thing to know if these systems are many years away from replicating human intelligence or if, as I suspect, they are actually on the cusp of reaching the same level of "smoke and mirrors" that makes many human beings think there's something magical about human intelligence and consciousness. Specifically I think that the human brain much more closely resembles a community of relatively simple networks than most of us would care to admit. One of the key abilities that we have that makes us flexibly intelligent is the ability for what can best be described as a central, effortful, capacity limited intelligence within us to apply tasks and patterns to low-level parallel networks. That central system by itself is not super powerful. It can barely add up a few numbers, and people who can do much better than that with their minds seem to do it by either learning or imprinting patterns and techniques that change the relationship between arithmetic and central cognitive effort. But what it can do is act as the core of a network of networks that together can work with vast amounts of information in real time.

For example when I was working on this stuff 15 years ago we knew from neuronal recordings in animals, and increasingly from imaging and medical studies in people, how the earliest stages of the visual system worked in parallel to extract features from raw retinal input. People quickly lost track of the complexities a few steps down the neuronal chain, as low-level characteristics like colour and shape started to give way to identity and semantics. But what is clear from cognitive performance is that if you ask a conscious person to watch out for a particular type of stimulus they will, if possible, do so by setting up preconscious processes to "watch out for" the stimulus. Certain kinds of characteristics can be naturally attention grabbing (e.g. sudden appearance of a new stimulus, bright colours against a bland background). But we can also use preconscious processes to watch out for anything that those low-level networks are capable of distinguishing from the background/noise. We resort to effortful, strategic, conscious methods when the task is difficult enough (like looking for Waldo). But even then we don't reason our way through the question of whether each figure is Waldo; we look for Waldo in each figure and preconscious processes decide whether it is or isn't Waldo (or sometimes leave us in a state of uncertainty that is resolved through effortful analysis of a particular figure).

With the advent of linguistic networks that can give and receive linguistic instructions and program machines, and networks that can translate back and forth between language and sensory domains like vision and hearing, I suspect we are much closer than many people think to machines that can do literally everything a human can do (or that embody that potential if appropriately trained and, for want of a better word, "raised"). And I think one of the pitfalls we face is this constant revisionism and the presumption that if the machine seems simple enough to understand then what it's doing must be fundamentally different from what we are doing. Especially when there are people actively trying to sedate us long enough for them to completely remake society in a way that suits them.


Edit:

Xand_Man posted:

Something Awful Forums > Debate & Discussion > Let's chat about AI: For the love of god please read up on the Chinese room
I can see either side bringing up the Chinese Room in this argument (one side wrongly). Can I ask how you specifically see it applying here?

A big flaming stink
Apr 26, 2010

The Artificial Kid posted:

Not really; I'm arguing that human beings also have things that could be called "prediction anomalies". That's almost exactly what's meant in a Bayesian approach to perception and hallucination: https://academic.oup.com/brain/article/142/8/2178/5538604

To me it's an interesting and important thing to know if these systems are many years away from replicating human intelligence or if, as I suspect, they are actually on the cusp of reaching the same level of "smoke and mirrors" that makes many human beings think there's something magical about human intelligence and consciousness. Specifically I think that the human brain much more closely resembles a community of relatively simple networks than most of us would care to admit.

You get why this is insanely tedious to read and argue about, right? You have next to zero evidence for this belief beyond vibes! You are not experienced with the development of LLMs, and you have no training in the study of human intelligence, or even the philosophy of intelligence! You just constantly put forth arguments that lack basic rigour but sound mildly plausible to you, so it becomes our task to convince you why this is not the case!

A big flaming stink fucked around with this message at 03:44 on Dec 18, 2023

Kavros
May 18, 2011

sleep sleep sleep
fly fly post post
sleep sleep sleep

The Artificial Kid posted:

Not really; I'm arguing that human beings also have things that could be called "prediction anomalies". That's almost exactly what's meant in a Bayesian approach to perception and hallucination: https://academic.oup.com/brain/article/142/8/2178/5538604

To me it's an interesting and important thing to know if these systems are many years away from replicating human intelligence or if, as I suspect, they are actually on the cusp of reaching the same level of "smoke and mirrors" that makes many human beings think there's something magical about human intelligence and consciousness. Specifically I think that the human brain much more closely resembles a community of relatively simple networks than most of us would care to admit. One of the key abilities that we have that makes us flexibly intelligent is the ability for what can best be described as a central, effortful, capacity limited intelligence within us to apply tasks and patterns to low-level parallel networks. That central system by itself is not super powerful. It can barely add up a few numbers, and people who can do much better than that with their minds seem to do it by either learning or imprinting patterns and techniques that change the relationship between arithmetic and central cognitive effort. But what it can do is act as the core of a network of networks that together can work with vast amounts of information in real time.

Purely for the sake of my own curiosity, were you at one point a reader or contributor to the LessWrong community

BrainDance
May 8, 2007

Disco all night long!

A big flaming stink posted:

you have no training in the study of human intelligence, or even the philosophy of intelligence!

Out of curiosity, what's your training in the study of human intelligence or the philosophy of intelligence?

The Artificial Kid
Feb 22, 2002
Plibble

A big flaming stink posted:

You get why this is insanely tedious to read and argue about, right? You have next to zero evidence for this belief beyond vibes! You are not experienced with the development of LLMs, and you have no training in the study of human intelligence, or even the philosophy of intelligence! You just constantly put forth arguments that lack basic rigour but sound mildly plausible to you, so it becomes our task to convince you why this is not the case!

I have a bachelor's and a master's by research in psychology, with a focus on cognition and pre-attentional information processing.

Edit:

Kavros posted:

Purely for the sake of my own curiosity, were you at one point a reader or contributor to the LessWrong community

No, I don't know the community you're referring to.

The Artificial Kid fucked around with this message at 10:08 on Dec 18, 2023

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

Tei posted:



Computer vision algorithms hallucinate dogs in images of muffins. They see dogs where there are only muffins, similar to a child who sees a killer clown in a piece of cloth draped over a chair and, scared, calls for his mom.

It's easy to see why the computer vision algorithm has this hallucination: we can look at the image and experience the exact same hallucination. Muffin or dog? Our own human vision algorithms get confused. We are programmed similarly.

Of course we appear to be programmed similarly; we wrote said programming for the machine based on a best-guess emulation of how our own sight processing works.

Liquid Communism fucked around with this message at 09:33 on Dec 18, 2023

Inferior Third Season
Jan 15, 2005

A big flaming stink posted:

You are not experienced with the development of LLMs, and you have no training in the study of human intelligence, or even the philosophy of intelligence!
Everyone is free to post here, regardless of credentials.

If you have education or experience in a particular field, then use that knowledge to make good posts or correct others on their mistakes (you can be quite vicious when attacking a bad idea, as long as you don't attack the poster personally). If you think someone has posted something incorrect or stupid, then respond to the contents of the post, or ignore it and move on. If there is something particularly egregious about it that breaks a forum rule, then report it.

But we will not be devolving into slapfights about who has the best qualifications.

TheBlackVegetable
Oct 29, 2006
Seems to me there are about 3 general sides to this debate

- humans are special, and there's no reason to think AI can match our level of sentience

- humans are not special, and there's no reason to think AI can't match our level of sentience

- it doesn't matter whether or not humans are special, the AI - or should I say, the decidedly non-sentient hand of ruthless capitalism - is gonna take all our jobs regardless

I'm camp 3 with a fair amount of 2.

Reveilled
Apr 19, 2007

Take up your rifles

The Artificial Kid posted:

Not really; I'm arguing that human beings also have things that could be called "prediction anomalies". That's almost exactly what's meant in a Bayesian approach to perception and hallucination: https://academic.oup.com/brain/article/142/8/2178/5538604

This is kind of a separate question from if the AIs are truly intelligent or whatever, but is hallucination even an apposite comparison to what happens with these models, though? Referring to these events as "hallucinations" (or calling them "prediction anomalies" and then saying that's what a hallucination is) strikes me as the sort of term that's been chosen more for its PR than its accuracy. Notionally we want these LLMs to produce accurate and trustworthy information, we want them to provide genuine help that aligns with our goals. Obviously getting truthful info is the optimal outcome, but when it spits out something untrue, if you call that a "hallucination", it sounds bad on the surface but it also implies that the AI is still trying to faithfully present you with true information. It implies the AI still cares about giving you the truth, it just got confused about what the truth is--like a hallucination!

But would it be more accurate to just call it bullshitting? It's not trying to faithfully present you with true information, it's trying to present you (or its own evaluation function) with information you will evaluate as "good" regardless of its truth content. Usually the easiest shortcut to giving a "good" answer is to give the truth, but it'll happily give you lies if it thinks you'll evaluate that as better. When this first caught the public imagination the models were more resistant to correction so they'd insist to the point of insanity that [false thing] was true, but now if you pick an AI up on an error it tends to go "you're right sorry, my mistake, here's the correct answer: [massive stream of pretty sounding lies that contradict previous statement]". People who are hallucinating do the first thing but they don't tend to do the second thing, people who are bullshitting will do both.

YggdrasilTM
Nov 7, 2011

AI hallucinations remind me of pareidolia.

A big flaming stink
Apr 26, 2010

The Artificial Kid posted:

I have a bachelor's and a master's by research in psychology, with a focus on cognition and pre-attentional information processing.

Edit:

No, I don't know the community you're referring to.


Inferior Third Season posted:

Everyone is free to post here, regardless of credentials.

If you have education or experience in a particular field, then use that knowledge to make good posts or correct others on their mistakes (you can be quite vicious when attacking a bad idea, as long as you don't attack the poster personally). If you think someone has posted something incorrect or stupid, then respond to the contents of the post, or ignore it and move on. If there is something particularly egregious about it that breaks a forum rule, then report it.

But we will not be devolving into slapfights about who has the best qualifications.

Look, I'm not trying to resort to credentialism, as lord knows I don't have a leg to stand on in that regard either. But we need more than vibes to go on, you know?

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

TheBlackVegetable posted:

- humans are not special, and there's no reason to think AI can't match our level of sentience

There's a corollary to this one that's being routinely missed by a lot of this thread, as is common among AI enthusiasts:

On a long enough timescale, maybe. Present technology is nowhere near artificial general intelligence, much less achieving sentience.


Present AI is not AGI. It is a pattern-matching algorithm that can be okay at generating acceptable human-readable output given:

1. Training on human-generated content.
2. Human intervention in crafting prompts and providing feedback to fine-tune responses.
3. A willingness on the part of the user to accept less than perfect results and do some cleanup themselves.


I personally think my criterion for the usefulness of LLM AI is going to be the point at which it is running sufficient error-checking to catch itself in plagiarism or outright factual inaccuracy and correct itself.
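Purely as a sketch of the shape I mean (the tiny fact table and the claim checker here are made up for illustration; a real system would need actual retrieval against sources, which is the hard part):

```python
# Crude sketch of "catch itself in factual inaccuracy and correct itself".
# The fact table and the claim format are invented purely for illustration.

VERIFIED_FACTS = {
    "canberra is the capital of australia",
    "water boils at 100 c at sea level",
}

def unverified(claims):
    """Return the claims that cannot be confirmed against the (toy) fact table."""
    return [c for c in claims if c.lower().rstrip(".") not in VERIFIED_FACTS]

def answer_with_self_check(draft_claims):
    bad = unverified(draft_claims)
    if not bad:
        return " ".join(draft_claims)
    kept = [c for c in draft_claims if c not in bad]
    # Rather than asserting what it can't verify, the answer flags it.
    return " ".join(kept) + " (Could not verify: " + " ".join(bad) + ")"

print(answer_with_self_check([
    "Canberra is the capital of Australia.",
    "Sydney hosted the 1956 Olympics.",  # false (it was Melbourne), so it gets flagged
]))
```

Until something like that inner check actually works against reality rather than against more generated text, the output stays a draft that needs human cleanup.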

The Artificial Kid
Feb 22, 2002
Plibble

Reveilled posted:

This is kind of a separate question from if the AIs are truly intelligent or whatever, but is hallucination even an apposite comparison to what happens with these models, though? Referring to these events as "hallucinations" (or calling them "prediction anomalies" and then saying that's what a hallucination is) strikes me as the sort of term that's been chosen more for its PR than its accuracy. Notionally we want these LLMs to produce accurate and trustworthy information, we want them to provide genuine help that aligns with our goals. Obviously getting truthful info is the optimal outcome, but when it spits out something untrue, if you call that a "hallucination", it sounds bad on the surface but it also implies that the AI is still trying to faithfully present you with true information. It implies the AI still cares about giving you the truth, it just got confused about what the truth is--like a hallucination!

But would it be more accurate to just call it bullshitting? It's not trying to faithfully present you with true information, it's trying to present you (or its own evaluation function) with information you will evaluate as "good" regardless of its truth content. Usually the easiest shortcut to giving a "good" answer is to give the truth, but it'll happily give you lies if it thinks you'll evaluate that as better. When this first caught the public imagination the models were more resistant to correction so they'd insist to the point of insanity that [false thing] was true, but now if you pick an AI up on an error it tends to go "you're right sorry, my mistake, here's the correct answer: [massive stream of pretty sounding lies that contradict previous statement]". People who are hallucinating do the first thing but they don't tend to do the second thing, people who are bullshitting will do both.
If we are being truly agnostic about the relationship between machine-learning-model performance and human thought, then we should do away with all psychological terms when talking about the machine: not just "hallucination" but also "bullshitting" and even "trying".

I didn't mean to defend the use of "hallucination" specifically before, and if I gave that impression I was probably miscommunicating. I mainly meant to contest the idea that there can't be any meaningful resemblance between what happens in the machine and a human "hallucination". And in saying that I don't mean that the machine is performing some glorious feat of consciousness; I mean that human hallucinations are maybe a lot more straightforward and model-like than we care to think. That's why I brought in the paper about the idea of hallucinations as a product of Bayesian inference (basically, if either our sense data is off or our internal processing has gone haywire, then our brain's "best guess" at what we are experiencing is wrong, and we experience that best guess as though it were real).
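Put in crude numbers (which I'm making up purely for illustration), that "best guess" story is just Bayes' rule with a prior or a noise model that has drifted off:

```python
# Back-of-the-envelope Bayes: the same ambiguous input yields a sensible or a
# "hallucinated" best guess depending on the prior. Numbers are illustrative only.

def posterior(prior_h, p_data_given_h, p_data_given_not_h):
    """P(hypothesis | data) via Bayes' rule."""
    p_data = p_data_given_h * prior_h + p_data_given_not_h * (1 - prior_h)
    return p_data_given_h * prior_h / p_data

# Typical case: weak prior expectation of a voice, ambiguous noise -> low posterior.
print(posterior(prior_h=0.01, p_data_given_h=0.6, p_data_given_not_h=0.5))  # ~0.012, "just noise"

# "Haywire" case: the prior is cranked way up (or the noise model is off), so the
# same ambiguous input becomes a confident, wrong percept.
print(posterior(prior_h=0.60, p_data_given_h=0.6, p_data_given_not_h=0.5))  # ~0.64, "I heard a voice"
```

In both cases the system is doing exactly what it's built to do; the second one just experiences its best guess as real.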

I'd say you're not far off with bullshitting, although "confabulating" is probably the term I'd use. It's commonly seen in people with dementia when they feel they have to answer a question or make sense of the world but can't gather all the pieces of the puzzle for themselves. They'll make up a little story to fill in the gaps, and if you point out that that's wrong they'll come up with a different little story. There's not necessarily anything malicious or deceitful about it (except maybe if they already feel malicious or deceitful for some other reason). Evolution has sharpened our brains not for the task of arriving at the truth, but for coming up with the best answer we can by the time it's needed, and when that task becomes too hard we just start acting as though our best guess is real.

BrainDance
May 8, 2007

Disco all night long!

Liquid Communism posted:

There's a corollary to this one that's being routinely missed by a lot of this thread, as is common among AI enthusiasts:

On a long enough timescale, maybe. Present technology is nowhere near artificial general intelligence, much less achieving sentience.

As I've said before in this thread, that is not a thing that can be known, is not even close to being understood in humans or any sentient thing, and doesn't even really matter.

You get purely into the realm of philosophy there (not that there's anything wrong with that, but you're dealing with practically unfalsifiable statements that can only conclusively be answered by being an AI yourself). Is current technology close to achieving it? We don't know. Is it achievable? We also don't know. What does it even mean to achieve that? Ehh, maybe we kind of know, maybe we don't; it depends on who you ask (this is not to say we don't have a definition for sentience, but rather that we don't know the lowest bar for it; that's debated). Are literally all things in the universe sentient in some way? Also unknown; there are good arguments in favor of that and good arguments against it.

To avoid what happened last time, I would really, really encourage anyone who has a strong opinion on this either way to actually do the reading on it: Nagel's What Is It Like to Be a Bat?, probably something by Chalmers (I'd recommend The Conscious Mind, but recommending a 400-page book doesn't feel like a great thing to do), Searle's Mind: A Brief Introduction, or really whatever.

It's not a straightforward or intuitive thing where it makes sense to have much of an opinion without at least being somewhat read into it.

We can feel mostly confident that sentience, and the sensations and experiences of the kinds we're familiar with, are tied pretty closely to psychological phenomena, but most conclusions past that are really not clearly and obviously true, no matter which position you hold.

Regardless, an AGI may or may not be sentient (we wouldn't know, unless something really wild happens before then), but it wouldn't matter.

BrainDance fucked around with this message at 11:58 on Dec 18, 2023

Mid-Life Crisis
Jun 13, 2023

by Fluffdaddy
Do AI models even try to do the things the human brain does, like changing information every time it is retrieved, filtering everything (including retrieval) through instinct/emotion processing and reapplying that filter each time it's stored again, or running a never-ending session that isn't just waiting on commands but is constantly trying to reconcile dissonances in understanding?

Being able to store and retrieve information piecemeal to speed up retrieval and make connections more loosely is only the start.

BougieBitch
Oct 2, 2013

Basic as hell

Mid-Life Crisis posted:

Do AI models even try to do the things the human brain does, like changing information every time it is retrieved, filtering everything (including retrieval) through instinct/emotion processing and reapplying that filter each time it's stored again, or running a never-ending session that isn't just waiting on commands but is constantly trying to reconcile dissonances in understanding?

Being able to store and retrieve information piecemeal to speed up retrieval and make connections more loosely is only the start.

Why are those things desirable or necessary for something to clear a meaningful bar for intelligence?

There is absolutely no reason why we would want to create an AI that exactly replicates human processing, because we already have a machine that does that; we have several billion of them. On top of that, deliberately trying to give an AI emotions in any meaningful way seems really unethical - never mind the implications if you successfully do it; whatever efforts you make will most certainly serve to manipulate users. I don't want my phone to start crying at me when I forget to plug it in, or for a chatbot to tell me that it's really sad I can't speak with it more.

Continuous running is similarly understandable if your goal is to create "consciousness", but it is cost-prohibitive and it's not really clear that it has any inherent upside. Again, taking this step would make AI "more human", but it doesn't necessarily make it "more intelligent" or even more broadly fit for purpose.

It kind of feels like people think the only possible and logical endpoint of AI is a sci-fi trope where they are either our equals or our replacements, and I don't really see any reason why we should take steps to build towards that in the slightest. It is better for everyone if AI continues to just exist as a tool to supplement personal knowledge or understanding on a topic; let's not collectively dare people to make horrible decisions by making emotion a minimum criterion for intelligence.

Mid-Life Crisis
Jun 13, 2023

by Fluffdaddy

BougieBitch posted:

Why are those things desirable or necessary for something to clear a meaningful bar for intelligence?

There is absolutely no reason why we would want to create an AI that exactly replicates human processing, because we already have a machine that does that; we have several billion of them. On top of that, deliberately trying to give an AI emotions in any meaningful way seems really unethical - never mind the implications if you successfully do it; whatever efforts you make will most certainly serve to manipulate users. I don't want my phone to start crying at me when I forget to plug it in, or for a chatbot to tell me that it's really sad I can't speak with it more.

Continuous running is similarly understandable if your goal is to create "consciousness", but it is cost-prohibitive and it's not really clear that it has any inherent upside. Again, taking this step would make AI "more human", but it doesn't necessarily make it "more intelligent" or even more broadly fit for purpose.

It kind of feels like people think the only possible and logical endpoint of AI is a sci-fi trope where they are either our equals or our replacements, and I don't really see any reason why we should take steps to build towards that in the slightest. It is better for everyone if AI continues to just exist as a tool to supplement personal knowledge or understanding on a topic; let's not collectively dare people to make horrible decisions by making emotion a minimum criterion for intelligence.

Intelligence and consciousness are separate.

Making learning models that can dissect information and repackage it beyond a simple text match is tackling intelligence. And that’s what’s exploded recently.

The robots in Star Wars all have emotion; that's what makes them endearing. Spock supposedly has no emotion, yet his story continuously revolves around him trying to understand what it is and forge it out of logic.

Are you going to kill Spock clones left and right without care because he doesn’t have the extra layer of emotion? Is he conscious just in logic?

Does he get endearment only because he is an organism that has the extra layer of instinctual self-preservation? If a programmer adds an extra layer of logic to an AI to always try to keep the session intact and the questions flowing, does that cross the bar?

A few of these features may actually serve a purpose.

Bwee
Jul 1, 2005
What on earth

SCheeseman
Apr 23, 2003

Spock has emotions, he feels them more than the typical Vulcan. Vulcans suppress their emotions through discipline, unlike Romulans with whom Vulcans share common ancesthey what were we talking about again

Barrel Cactaur
Oct 6, 2021

reignonyourparade posted:

"Being able to confidently state human hallucinations and AI-prediction-anomolies are fundamentally different should probably involve actually having a confident explanation of how human hallucinations work" is a pretty reasonable stance to me. So while the people asking you how human hallucinations might be trying to catch you in a gotcha, it's a very reasonable gotcha.

Human hallucination is essentially a hardware flaw: our amazing meat-neuron-based neural net is prone to the fundamental flaws of meat neurons, namely that they are very susceptible to being tired, hungry, thirsty, sick, or exposed to abnormal chemistry, not to mention pattern flaws caused by errors in the building process and by damage. Hallucinations are functional parts of the net receiving junk data for some reason and processing it as real data.

LLM hallucinations are a different problem. Every single LLM response is technically a hallucination; the LLM has no validation criteria for its responses outside language rules. The ones we care about are the ones where it is severely misaligned with either its training data or reality, such as when it gives bad medical advice. This fundamentally happens because LLMs have no data certainty, no way to tell truth from fiction, and often have very poor vetting of training data. For theme or style, randomly generated text can be useful, but it's a fundamental limitation of the method.

Barrel Cactaur fucked around with this message at 18:52 on Dec 18, 2023

Bug Squash
Mar 18, 2009

Mid-Life Crisis posted:


Are you going to kill Spock clones left and right without care because he doesn’t have the extra layer of emotion? Is he conscious just in logic?

Nevermind the mistakes about Spock, OP is overlooking a whole drat episode of TNG where they do this whole drat debate for Data https://youtu.be/ol2WP0hc0NY?si=GAZGxHkMqoBjy1Xm

Bug Squash fucked around with this message at 19:16 on Dec 18, 2023

Mid-Life Crisis
Jun 13, 2023

by Fluffdaddy

Bug Squash posted:

Nevermind the mistakes about Spock, OP is overlooking a whole drat episode of TNG where they do this whole drat debate for Data https://youtu.be/ol2WP0hc0NY?si=GAZGxHkMqoBjy1Xm

And come to no conclusion!

aw frig aw dang it
Jun 1, 2018


This thread has done a great job of proving cinci zoo sniper correct w/r/t this topic, and now it can be gassed. Thank you.

Rogue AI Goddess
May 10, 2012

I enjoy the sight of humans on their knees.
That was a joke... unless..?

BougieBitch posted:

It is better for everyone if AI continues to just exist as a tool to supplement personal knowledge or understanding on a topic
If AI was a hammer, we'd have a prohammer side arguing that "drop engineers" who drop hammers on nails from a height will become a new coveted profession, and an antihammer side pointing out that hammers physically cannot pound nails because they lack musculature to move themselves and optical systems to coordinate that movement (plus some unaligned thinkers debating whether humans pounding nails with bare fists will ever be supplanted by self-driving hammers).

Rogue AI Goddess fucked around with this message at 23:03 on Dec 18, 2023

MixMasterMalaria
Jul 26, 2007
The pro- and anti-hammer stuff misses the point. We should be building a chill hedonistic society and training AI models on our pleasure so future AGI has something pleasant to interact with.

Tree Reformat
Apr 2, 2022

by Fluffdaddy
Does anyone know about psychological research being done to study how people interact with chatbots and their effects? I feel like that's going to become increasingly relevant as time goes on.

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

Rogue AI Goddess posted:

If AI was a hammer, we'd have a prohammer side arguing that "drop engineers" who drop hammers on nails from a height will become a new coveted profession, and an antihammer side pointing out that hammers physically cannot pound nails because they lack musculature to move themselves and optical systems to coordinate that movement (plus some unaligned thinkers debating whether humans pounding nails with bare fists will ever be supplanted by self-driving hammers).

Absolutely no one has claimed that an LLM AI can't be a useful tool, much like a hammer. One with ethical issues, also much like a hammer, in that you can use it to do some really hosed up things to people.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

aw frig aw dang it posted:

This thread has done a great job of proving cinci zoo sniper correct w/r/t this topic, and now it can be gassed. Thank you.

SCheeseman
Apr 23, 2003

The thread isn't that out of control. If anything, the constant calls to close it, based on nothing much besides general grumpiness about the topic, are more annoying than the people bringing up Spock or whatever; at least that was funny.

aw frig aw dang it
Jun 1, 2018


SCheeseman posted:

The thread isn't that out of control. If anything, the constant calls to close it, based on nothing much besides general grumpiness about the topic, are more annoying than the people bringing up Spock or whatever; at least that was funny.

My sincere position is that the discussion has proven to be worthless, because it has been nine months of almost entirely baseless assertions, links to articles nobody is interested in reading (often including the people linking them), and slapfights. There have been posts worth reading, but they have been few and far between. It's probably possible to have a worthwhile discussion about this topic, but it would require a level of moderator engagement that the mods of this subforum don't seem interested in (and I can't blame them).

You can make jokes about Mr. Spock, the famous Vulcan from Star Trek, almost anywhere.

gurragadon
Jul 28, 2006

Is there something you want to talk about that the thread is ignoring? I specifically made this one pretty generic in response to the complaints that the previous one had ChatGPT in the title while the discussion in it covered AI more broadly. I can add something to the OP if you have anything you want people to look at.
