Lemming
Apr 21, 2008

Gentleman Baller posted:

One thing I've been doing with Bing's AI lately is thinking of new puns and seeing if it can work out the theme and generate more puns that fit it, and it does this extremely well, I think.

When I asked it, "Chairman Moe, Vladimir Lenny, Carl Marx. Think of another pun that fits the theme of these puns," it gave me Homer Chi Minh and Rosa Luxembart (and a bunch of bad ones, ofc, that still fit the theme).

I did the same with "Full Metal Arceus, My Spearow Academia, Dragonite Ball Z," and it gave me Cowboy Beedrill and Tokyo Gimmighoul.

A common refrain I see online and even from ChatGPT itself is that it is just a text predictor, and is incapable of understanding or creating truly new things. But as far as I can tell, it can do something that is at least indistinguishable from understanding and unique creation, right?

Edit: I guess what I have been trying to wrap my head around is, if this isn't understanding and unique creation then what is the difference?

This is the important part, because all the "bad results" were equally valid compared to the "good results" from the perspective of the text generation. You picked out the ones you judged to actually have some value. The "intelligence" that came out was the understanding and curation of the human who was overseeing it, and the only reason it was able to produce any of those results at all is that it had a large enough data set of things created by people in the first place.

The difference is that if its input continued to be generated from its own output, it would drift further and further into complete garbage. It requires large data sets of intelligent input to be able to produce a facsimile of that intelligent input.


Lemming
Apr 21, 2008

BrainDance posted:

Just saying "there's a difference" doesn't answer it. That's a huge copout non-answer. As far as we can tell the human brain does exactly that: its lived experience, the context of its art, is "training data" that's used to generate what it does.

When a human being creates representational art to try and convey their lived experience to others, what you described an AI doing is roughly what their brain does.

The AI's "lived experience" (heavy scarequotes) is definitely less real and on its own not meaningful compared to a human's, but as of now all AIs have to be operated by a human who does have meaningful lived experience.

Just saying "there's a similarity" doesn't answer it either. Human brains are extremely similar to chimp brains, but chimps also aren't really creative in the same way humans are.

You can't just claim things are vaguely similar and wave your arms and squint, the situation is a lot more complex than that.

One of the key points is looking at how much the training data being fed into the different models affects how good or "creative" they appear to be. They've jammed everything they possibly could into them, more words and images than any person could consume in thousands of lifetimes, and they still haven't figured out how many legs people have. The entire reason these models are impressive is that interpolating between things you've already seen is more powerful than we thought it was, but it's still fundamentally completely reliant on the training data.

Humans are not reliant on training data, not in the same way. Despite having access to only a fraction of the data those models do, humans can generate new information in a way that reflects an understanding of what they're looking at. It's just fundamentally a completely different approach. You can use words that describe the processes similarly, but it's still not the same.

Lemming
Apr 21, 2008

KillHour posted:

On the input side, a system like Stable Diffusion might be trained on tons of images of hands, but it's never seen a hand move. We have stereoscopic vision along with the ability to manipulate our own body and see how it moves and how occluded parts come in and out of view. Even from a volume perspective, these systems have seen more pictures of hands than we have, but we've certainly seen more examples of hands - and especially the same hands over and over again in different contexts. And people STILL struggle to draw hands because hands are loving weird.

We're definitely reliant on training data. It's what we're doing the entire time we're babies and small children. We're just sitting there observing the world and testing our interactions with it and updating our brains with that new information. We just have more bandwidth in terms of depth and context of the data.

That's why I said "not in the same way," because the way we use training data is to create a mental model of the world and use that model to extrapolate and figure out how to adapt and react to things that we see and want to do. The LLMs don't have any understanding at all; they're just brute-forcing it, trained on so many things that hopefully they can respond in a reasonable-seeming way.

Lemming
Apr 21, 2008

BrainDance posted:

Human creativity really does just work that way. The alternative is what, magic? Pulling up new ideas from the ether? Your lived experiences and things you sense create connections in your brain that lead to you creating new things from the properties of those experiences, just like how an image-generating AI creates a new image in a style that hasn't existed yet out of everything it's been trained on.

This gets into what it means for a human to make a choice, and where choices come from. I've got my ideas, but I'm not going to speak as confidently on that. Choices come from somewhere too, though; they're also not magic.

The alternative isn't magic, and AI will eventually be able to do it, but the point is that these specific things that are impressing everyone, the Large Language Models, are one implementation; they don't represent all kinds of AI, and they specifically aren't creative in the way we're talking about.

https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/

quote:

But the company’s CEO, Sam Altman, says further progress will not come from making models bigger. “I think we're at the end of the era where it's going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We'll make them better in other ways.”

Training data is not the be-all and end-all, and they're already talking about how there are diminishing returns and how they can't just keep jamming training data into these models to make them "better." They'll need to be constructed differently in order to achieve that.

I agree, human brains aren't magic, and my argument is that LLMs also aren't magic and just jamming training data into a computer isn't magic.

Lemming
Apr 21, 2008

BrainDance posted:

Yes, it is, and the bolded part didn't say it wasn't at all?

It's an AI expanding on a literal classic...

People are responding negatively because it wasn't portrayed as "here's a fun experiment"; the Twitter thread that started it literally began with "Ever wonder what the rest of the Mona Lisa looks like?"

People are taking it as implying that (a) there's more to see beyond the edges of the painting that the artist somehow wasn't able to display themselves, and (b) only AI is able to understand what's "really" there and uncover it.

I think that's a fair characterization of how those expand-the-edges things are coming off, and it makes perfect sense why people would respond that negatively.

Lemming
Apr 21, 2008

Boris Galerkin posted:

I feel like this is picking nits because what if they were to have said this instead?

I mean, you wouldn't read that and assume that this particular artist is the reincarnation of Da Vinci sent to us to "finish" his art, would you?

Or that this version of an "expanded Mona Lisa" is the one true expanded Mona Lisa, as if another artist couldn't do something different?

A reasonable person would see the tweet and think "oh, this is just a demo to showcase how it can handle different art styles!", not "oh poo poo, someone call the Louvre, we just uncovered the missing pieces."

If there were a zeitgeist promoting that random person as someone who could possibly replace artists and creatives and people in all kinds of industries, and it was phrased like that, I think there would be a pretty similar response to it, yeah.

I feel like you can understand where the response is coming from though, right? In the context of AI, where it's being aggressively sold as this world-changing thing that is going to upend the way lots of things work, people are interpreting the concept of showing "the rest" of famous art pieces as an implicit argument that soon the machines will be able to replicate or create new art as impactful as some of the most famous and culturally relevant art in history. I'm not commenting on how reasonable some of those reactions are, but surely you can see where that perspective comes from; it's not going to be solely a response to those specific images.

Lemming
Apr 21, 2008

KillHour posted:

Adobe very much isn't billing their tech as making artists obsolete because artists are literally their target market. Some people are saying this tech will do that, but some people are also saying the government is putting microchips in vaccines. You really have to take what people and companies do in the context of what those people and companies say, not in the context of what completely different people and companies say.

If you are only able to see this as a good and evil fight between two factions of completely aligned groups, one pro AI and one pro artist, you have completely misunderstood both the issues and the players.

Nobody's responding to Adobe directly; they're responding to a series of images posted by an AI bro (previously NFT bro, previously metaverse bro, previously Bitcoin bro, previously AR bro, previously VR bro, etc.), because those people have unfortunately been given outsized influence in our society since they're perpetuating various pump-and-dump schemes backed by the incredibly rich.

It's completely fair to respond to those people even if they're patently morons, because they're loving everywhere. I think it's totally fair to have separate conversations about the specific realities of the technology (and I'd even agree those conversations aren't happening as much and would probably be more interesting than a lot of the "is it art" stuff), but it's also completely reasonable that those kinds of Twitter threads are getting pushback, especially when people are just directly responding to a specific one that was posted in the thread.

Lemming
Apr 21, 2008

KwegiboHB posted:

I can't help but see it as cerebrum, cerebellum, and brainstem. It won't be any one system but the complex interactions between many.

The foundation of your argument is that both systems seem vaguely similar on a cursory inspection, and therefore you're willing to believe they are substantively the same. This argument is more or less as strong as saying the sun is probably just a very large daisy because they're both yellow, or that the chimpanzee must be happy to see us because of how wide he's smiling. I believe you believe what you're saying, but you haven't provided any actual evidence for your claims yet.

Lemming
Apr 21, 2008

KwegiboHB posted:

Maybe those specific body parts aren't the best description. I'm interested in the complex interactions between multiple systems, not any one alone. That's the part I focus on. If you want me to present facts that it's exactly like a brain, no, I won't, because it's not.

Put simply, it's always the same Stable Diffusion model, but the LoRA changes every time.
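
(For anyone unfamiliar with that setup, here's a rough sketch of what the quoted post describes, using the Hugging Face diffusers library: one fixed base checkpoint with a different LoRA adapter swapped in each run. The LoRA repo names below are made-up placeholders, not real adapters.)

code:

import torch
from diffusers import StableDiffusionPipeline

# One fixed base model...
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# ...with a different LoRA adapter layered on top each run (placeholder repo names).
for lora_repo in ["some-user/watercolor-lora", "some-user/pixel-art-lora"]:
    pipe.load_lora_weights(lora_repo)        # swap in this run's adapter
    image = pipe("a lighthouse at dusk").images[0]
    image.save(lora_repo.split("/")[-1] + ".png")
    pipe.unload_lora_weights()               # the base model itself never changes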

Could you point to the part where I asked you to do that? My claim was pretty straightforward, that you hadn't provided any actual evidence for this extraordinary claim:

KwegiboHB posted:

Already crossed the line for sapience.

Like I said, you've talked through your experience of using AI and have made the jump (without evidence) that it was demonstrating sapience. I'm just pointing out that your arguments boil down to feelings arguments based on your personal experiences and beliefs, which just aren't that compelling.

Lemming
Apr 21, 2008

The Artificial Kid posted:

I wasn't aware that we'd discovered how human hallucinations work, but from what I do know about them "prediction anomalies" also seems like an apt description for them, either within our core consciousness or on the part of some pre-conscious network that mistakenly elevates false information to consciousness.

Responding to the core point (that using language to create a tenuous link between two unrelated phenomena is a bad method of argument) by using language to create a tenuous link between two unrelated phenomena is very funny.

Lemming
Apr 21, 2008

The Artificial Kid posted:

My point is more that many of these objections to the intelligence of LLMs rely on extremely airy and unjustified assumptions about how the human mind works. LLMs' "hallucinations" aren't actual hallucinations. Ok, what are actual hallucinations? One thing they definitely are is a cognitive system coming up with a wrong answer to a question (an explicit, conscious question, or an implicit question the animal constantly asks itself about what is happening and what to do next).

It's not unjustified to say that human minds work in a fundamentally different way than LLMs, and that when an LLM gets a wrong answer it's not the same thing as when a human has a hallucination. Vaguely handwaving at how they're similar by using imprecise language, waving away how entirely different they are, and saying that it's up to everyone else to prove they're not the same is just asinine. And it's very funny that you're just doing it again in hopes that maybe this time it will be convincing.

Lemming
Apr 21, 2008

The Artificial Kid posted:

Of course LLMs are different from human beings, that doesn't mean they don't constitute a form of intelligence (or that the combined process of model building and execution/consultation of the model doesn't). We don't know how human intelligence or consciousness work, so it's equally handwavy to say "LLMs are just different from us and therefore not intelligent". What is it about us that gives us intelligence?

As I said above, if something can perform on a task that requires intelligence, thought either takes place when it runs or has gone into its creation. As others have said, when machines encroach onto territory we considered to "require intelligence" our tendency is to say "would you look at that, turns out [activity x] never required intelligence after all". We treat intelligence like a magic trick, and every time we see automata perform a trick we think "oh that was never magic, it was just smoke and mirrors all along".

What's actually happening is that machines are rapidly approaching intelligence (or if you prefer, we ourselves are just smoke and mirrors).

See, now you're just responding to arguments that weren't made. I was responding to this post:

The Artificial Kid posted:

I wasn't aware that we'd discovered how human hallucinations work, but from what I do know about them "prediction anomalies" also seems like an apt description for them, either within our core consciousness or on the part of some pre-conscious network that mistakenly elevates false information to consciousness.

LLMs getting something wrong being called "hallucinations" is not the same as human hallucinations, which is what you were implying here. Humans hallucinating is completely different from a human getting the answer to a question wrong, as well. If you want to make an argument about intelligence or whatever, a really shaky basis is just trying to imply that two things that sound superficially similar are actually truly the same thing, which is what I was specifically objecting to.

Lemming
Apr 21, 2008

The Artificial Kid posted:

Can you explain to me how human hallucinations work? Because the closest thing I've ever seen to an explanation is handwaving about "reality monitoring", or talk about bayesian probabilities and "best guesses", neither of which seem impossibly removed from the activity of machine learning systems.

Can you just say what your point is instead of trying to catch me in some kind of stupid gotcha? Because what you quoted was me pointing out that humans get answers wrong for many different reasons that have nothing to do with each other, so there's no reason in particular to think that humans hallucinating is the same thing as an LLM getting a question wrong.

BougieBitch posted:

You can justify it, but the differences you think are fundamental probably don't really have that much to do with intelligence, and people constantly say dumb poo poo that makes it clear they believe that human thought is fundamentally inexplicable just because we don't have it figured out with certainty at present.

If your requirement for something to be intelligent involves cellular life, or specific neurotransmitters or whatever then no program or system will ever be able to clear your bar. If, however, you accept that there is at least conceptually a way that you could match inputs and output using a model that isn't 1-to-1, then it DOES make sense to make comparisons by analogy.

Modeling complex systems in parts is kind of what science is all about, so giving a generic "they aren't the same, so they can't be compared" is pretty useless and people need to show their work before drawing a conclusion like that.

Dog, of course the computers will be capable of being as or more intelligent than human beings. LLMs aren't. Like, that technology might be a necessary component of intelligence on some level, but it's not intelligent on its own.


Lemming
Apr 21, 2008

reignonyourparade posted:

"Being able to confidently state human hallucinations and AI-prediction-anomolies are fundamentally different should probably involve actually having a confident explanation of how human hallucinations work" is a pretty reasonable stance to me. So while the people asking you how human hallucinations might be trying to catch you in a gotcha, it's a very reasonable gotcha.

No, it's not. This goes back to the original point that using the word "hallucination" has made everyone discuss this situation in a really dumb way, but a great example is the fact that most people don't hallucinate (because hallucinations are a manifestation of some kind of mental illness, where your brain isn't working the way it's supposed to), while ALL LLMs "hallucinate" necessarily as a function of how they work (because they're text predictors and don't have any "understanding" of the underlying truth of a situation).
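
(To make the "text predictor" point concrete, here's a minimal sketch using the Hugging Face transformers library, with GPT-2 as a small stand-in model rather than anything Bing or ChatGPT actually run: the model will continue any prompt with fluent-looking text, and nothing in the generation step checks whether that text is true.)

code:

from transformers import pipeline

# Small, old model used purely for illustration.
generator = pipeline("text-generation", model="gpt2")

prompt = "The first person to walk on Mars was"
out = generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.9)

# Prints a fluent, confident continuation naming someone, even though nobody
# has walked on Mars: the model predicts likely text, it doesn't check facts.
print(out[0]["generated_text"])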

Lemming
Apr 21, 2008

The Artificial Kid posted:

So referring you back to what I and others have said, about how when machines do something we reclassify that task as actually never having required intelligence (or "understanding"), would it be fair to say that whatever "understanding" is, it turns out we never needed it to write passable college essays about something? If it seems unbelievable that a human could write such an essay without at least some "understanding", would you argue that the person's "understanding" is just an epiphenomenon of the essay writing process? What is "understanding"?

This is the least interesting conversation I can imagine having. I'm frustrated by bad arguments which is why I was responding to the "hallucination" digression. I could not care less about whatever this is

Lemming
Apr 21, 2008

Mega Comrade posted:

What are you on about?

Are you seriously saying we should write in such a way to try and influence further models?

I know your understanding of this technology has been shown to be lacking, but you're trolling at this point, surely?

For context, this is the person who thinks he has a sentient consciousness chained up inside his computer that he can talk to.

Lemming
Apr 21, 2008

KillHour posted:

This is starting to get better - Bing, for example, will now give a list of sources with the answer. I'm under NDA about the details for work stuff, but at a high level I can say that the way the industry is going is to use NLP/LLM to turn the question into a search that can be executed, and then use the LLM to summarize the search responses and insert citations where necessary. This has the extra benefit of being able to answer questions with material too new to be in the training data.

It doesn't guarantee that the result will be accurate or that the LLM won't make a mistake summarizing, but it helps a lot with hallucinations just making things up whole-cloth.

Edit: It also follows the very predictable pattern of new technologies making the transition from "This is revolutionary and will replace everything!" to "Okay, this has some strengths and some weaknesses, so let's see where we can fit it into the existing tech stack to enhance already-proven methods."

Double edit: If you want to learn more, the technique is called RAG (Retrieval Augmented Generation) and it's the new hotness everyone can't shut up about, which reminds me I need to add it to my LinkedIn keywords...
https://medium.com/artificial-corner/retrieval-augmented-generation-rag-a-short-introduction-21d0044d65ff
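
(Purely as an illustration of the retrieve-then-summarize flow described above, not whatever Bing actually runs: a minimal sketch in Python, assuming the OpenAI chat completions API and a made-up search_web() stand-in for whatever search backend would really be used.)

code:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search_web(query):
    # Stand-in for a real search backend (Bing API, Elasticsearch, a vector DB...).
    # Returns (title, url, snippet) tuples; replace with an actual search call.
    return [("Example page", "https://example.com",
             "Placeholder snippet about " + query)]

def ask(content):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content

def answer_with_citations(question):
    # Step 1: have the model rewrite the question as a search query.
    query = ask("Write a short web search query for: " + question)
    # Step 2: run the search outside the model.
    results = search_web(query)
    # Step 3: have the model answer from the retrieved snippets only, citing them,
    # which also lets it use material newer than its training data.
    sources = "\n".join(f"[{i}] {t} ({u}): {s}"
                        for i, (t, u, s) in enumerate(results, 1))
    return ask("Answer the question using only the sources below and cite them "
               "by number.\n\nQuestion: " + question + "\n\nSources:\n" + sources)

print(answer_with_citations("What did the James Webb telescope observe this week?"))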

Reframing the value of LLMs as a much more capable and straightforward way of searching for information is both a more accurate and a more clearly useful concept than "AI" for what they're doing. A minor, dumb example, but I was wondering what an old tank game I played was, and after a few iterations of telling ChatGPT why its suggestion was wrong and what I remembered that was different, it got the right answer. I feel like this is what's going on in most of the cases that people hype up as "oh look, it can code!" Well, no, but it can help you get to a solution that someone has already made for a small use case, so you can learn from that and understand it more quickly.

Obviously this is my layman's interpretation of the value of what you're saying, but this kind of thing doesn't make me roll my eyes like most of the AI hype stuff.


Lemming
Apr 21, 2008

smoobles posted:

This is true of everyone posting in this thread, but countless boomers on Facebook are already engaging with AI pictures thinking they're real. Not all generations have equal media literacy.

If you post a picture of someone with a caption of a quote, people will believe it's something they said. This problem goes way beyond AI.
