Freakazoid_
Jul 5, 2013


Buglord

Clarste posted:

AI invented the genre of people eating their own spaghetti fingers.

AI saw Salad Fingers as an input and then mashed together a related concept.


BrainDance
May 8, 2007

Disco all night long!

Liquid Communism posted:

If you can't understand the difference between a script that puts pixels together based on what humans labeled training datasets as, then waits to see if the result provides enough of a pattern for human pattern recognition to think it's what they wanted, and a human being creating representational art to try and convey their lived experience to others, I'm not sure I can help you.

Just saying "there's a difference" doesn't answer it. That's a huge copout non-answer. As far as we can tell the human brain does exactly that, its lives experience, the context of its art, is " training data" that's used to generate what it does.

When a human being creates representational art to try and convey their lives experience to others, that's what you described an AI doing is roughly what their brain does.

The AI's "lived experience" (heavy scarequotes) is definitely less real and on its own not meaningful compared to a humans, but as of now all AIs have to be operated by a human who does have meaningful lived experience.

Lemming
Apr 21, 2008

BrainDance posted:

Just saying "there's a difference" doesn't answer it. That's a huge copout non-answer. As far as we can tell the human brain does exactly that, its lives experience, the context of its art, is " training data" that's used to generate what it does.

When a human being creates representational art to try and convey their lives experience to others, that's what you described an AI doing is roughly what their brain does.

The AI's "lived experience" (heavy scarequotes) is definitely less real and on its own not meaningful compared to a humans, but as of now all AIs have to be operated by a human who does have meaningful lived experience.

Just saying "there's a similarity" doesn't answer it either. Human brains are extremely similar to chimp brains, but chimps also aren't really creative in the same way humans are.

You can't just claim things are vaguely similar and wave your arms and squint, the situation is a lot more complex than that.

One of the key points is how much the training data being fed into the different models affects how good or "creative" they appear to be. They've jammed everything they possibly could into it, more words and images and text than any person could consume in thousands of lifetimes, and they still haven't figured out how many legs people have. The entire reason these models are impressive is because interpolating between things you've already seen is more powerful than we thought it was, but it's still fundamentally completely reliant on the training data.

Humans are not reliant on training data, not in the same way. Despite having access to only a fraction of the data those models do, humans can generate new information in a comprehensive way that shows they understand what they're looking at. It's just fundamentally a completely different approach. You can use words that describe the processes similarly, but it's still not the same.

Imaginary Friend
Jan 27, 2010

Your Best Friend

BoldFace posted:

Is there a practical, objective way to put this to test? What is the simplest creative task that humans can perform, but AI can not?
Use the prompt "create an original house" or any other prompt without any creative details on ten different AI models and then ask ten different artists to ask the same question to ten artists they know about.

Imaginary Friend fucked around with this message at 16:02 on May 10, 2023

KillHour
Oct 28, 2007


Lemming posted:

Just saying "there's a similarity" doesn't answer it either. Human brains are extremely similar to chimp brains, but chimps also aren't really creative in the same way humans are.

You can't just claim things are vaguely similar and wave your arms and squint, the situation is a lot more complex than that.

One of the key points is how much the training data being fed into the different models affects how good or "creative" they appear to be. They've jammed everything they possibly could into it, more words and images and text than any person could consume in thousands of lifetimes, and they still haven't figured out how many legs people have. The entire reason these models are impressive is because interpolating between things you've already seen is more powerful than we thought it was, but it's still fundamentally completely reliant on the training data.

Humans are not reliant on training data, not in the same way. Despite having access to only a fraction of the data those models do, humans can generate new information in a comprehensive way that shows they understand what they're looking at. It's just fundamentally a completely different approach. You can use words that describe the processes similarly, but it's still not the same.

There's two sides to this - structural and input.

On the structural side, the human brain has orders of magnitude more connections than these systems, and a different architecture that has many different kinds of regions working together on a given task. Each model is specialized to do pretty much one thing, and it's amazing that some of them seem to generalize a little bit, but it's still mostly limited to the kind of thing it was designed to do. Our brains are much more like a bunch of small systems working together in concert, which is why we have regions of our brain dedicated to specific kinds of processing.

On the input side, a system like Stable Diffusion might be trained on tons of images of hands, but it's never seen a hand move. We have stereoscopic vision along with the ability to manipulate our own body and see how it moves and how occluded parts come in and out of view. Even from a volume perspective, these systems have seen more pictures of hands than we have, but we've certainly seen more examples of hands - and especially the same hands over and over again in different contexts. And people STILL struggle to draw hands because hands are loving weird.

We're definitely reliant on training data. It's what we're doing the entire time we're babies and small children. We're just sitting there observing the world and testing our interactions with it and updating our brains with that new information. We just have more bandwidth in terms of depth and context of the data.

Lemming
Apr 21, 2008

KillHour posted:

On the input side, a system like Stable Diffusion might be trained on tons of images of hands, but it's never seen a hand move. We have stereoscopic vision along with the ability to manipulate our own body and see how it moves and how occluded parts come in and out of view. Even from a volume perspective, these systems have seen more pictures of hands than we have, but we've certainly seen more examples of hands - and especially the same hands over and over again in different contexts. And people STILL struggle to draw hands because hands are loving weird.

We're definitely reliant on training data. It's what we're doing the entire time we're babies and small children. We're just sitting there observing the world and testing our interactions with it and updating our brains with that new information. We just have more bandwidth in terms of depth and context of the data.

That's why I said "not in the same way," because the way we use training data is to create a mental model of the world and use that model to extrapolate and figure out how to adapt and react to things that we see and want to do. The LLMs don't have any understanding at all, they're just brute forcing being trained on so many things that hopefully they can respond in a reasonable seeming way.

KillHour
Oct 28, 2007


Lemming posted:

That's why I said "not in the same way," because the way we use training data is to create a mental model of the world and use that model to extrapolate and figure out how to adapt and react to things that we see and want to do. The LLMs don't have any understanding at all, they're just brute forcing being trained on so many things that hopefully they can respond in a reasonable seeming way.

I guess where you see a fundamental difference in approach, I see a difference in volume and modality. My comparison would be like a blind person trying to draw. They can do it (and there are artists who prove it), but they are at a huge literal handicap and it is certainly much harder. Now extend that to also deaf, unable to feel (in the tactile sense), no exposure to language or math, etc. I don't see these as inherent limitations of AI in general, but as inherent limitations of the specific systems. Multimodal models are just starting to show up, so it's really hard to tell how well they will generalize and how much is structural differences vs lack of diverse senses.

Rappaport
Oct 2, 2013

I think there's also a question of autonomy buried there with the idea of creativity. Say someone who lived through World War 2 and was living under the Cold War used their experiences to write a book about a "Manhattan project" style mission to decode a transmission the Americans had received from outer space, and used that book to discuss with the reader many themes relating to the stupidity of the Cold War, war in general, humanity, randomness. They are using their own experiences and "training data", sure, but I would argue that here the creative part is writing a sci-fi novel that is also about a lot more than what it says on the tin. The author made their own choices about what to include, presumably because that is what they wanted to say, and to have a conversation with their readers on those specific topics.

Spoiler alert for the first pages of the book, but the author also chose to make his Manhattan project-alike fail. They chose to make a story about futility, and of frustration. You can say it was their "training data" from understanding how science works in general, but the author's choices in making these thematic framings were deliberate and the author meant them to evoke some things.

You can say to an AI "write me a story about Lovecraftian horrors and it is also a detective story involving computers" and get a Laundry Files facsimile, but the AI doesn't make choices (from what I understand) in the same way a human author does, it just places words together in patterns that it has learned to be common in human literature. The AI doesn't choose the themes or the references it uses the same way a human does.

BrainDance
May 8, 2007

Disco all night long!

Lemming posted:

Just saying "there's a similarity" doesn't answer it either. Human brains are extremely similar to chimp brains, but chimps also aren't really creative in the same way humans are.

You can't just claim things are vaguely similar and wave your arms and squint, the situation is a lot more complex than that


That's true, but when I'm careful and say there's a similarity, what I'm saying is: yes, there are differences (human brains create images in a stepped process, not diffusion), but those things aren't really what we're talking about. As far as creativity is concerned I think it is exactly the same, with the only exceptions I can think of being what I said before: the differences in what the training data is made of, in that humans have senses, and that the way we do image generation now, the thinking is still done by the human running the AI. (It wouldn't have to be, it just is.)

With LLMs it's different; that's just really different from how humans generate speech. With the image generators it's really not much different at all from how you see and how you create a picture in your head.

This though

Lemming posted:

One of the key points is how much the training data being fed into the different models affects how good or "creative" they appear to be. They've jammed everything they possibly could into it, more words and images and text than any person could consume in thousands of lifetimes, and they still haven't figured out how many legs people have. The entire reason these models are impressive is because interpolating between things you've already seen is more powerful than we thought it was, but it's still fundamentally completely reliant on the training data.

Humans are not reliant on training data, not in the same way. Despite having access to only a fraction of the data those models do, humans can generate new information in a comprehensive way that shows they understand what they're looking at. It's just fundamentally a completely different approach. You can use words that describe the processes similarly, but it's still not the same.

No way, humans have access to incredibly more training data than any AI, by just a massive degree. Training data of different types, plus the interconnectedness of different systems, which is something we're actually in the process of building into AIs right now. Every single thing you see or hear is another training sample. The human brain, even just the vision/mental-imagery part, is also far more complex than even the largest AI, so we are wildly better at it.

But there really isn't anything fundamentally different that I've seen for the image generating models, at least not that's relevant to the question of creativity.

Human creativity really does just work that way. The alternative is what, magic? Pulling up new ideas from the ether? Your lived experiences and things you sense create connections in your brain that lead to you creating new things from the properties of those experiences, just like how an image-generating AI creates a new image in a style that hasn't existed yet out of everything it's been trained on.

Rappaport posted:


You can say to an AI "write me a story about Lovecraftian horrors and it is also a detective story involving computers" and get a Laundry Files facsimile, but the AI doesn't make choices (from what I understand) in the same way a human author does, it just places words together in patterns that it has learned to be common in human literature. The AI doesn't choose the themes or the references it uses the same way a human does.

This gets into what it means for a human to make a choice, and where choices come from. I've got my ideas, but I'm not going to speak as confidently on that. But choices come from somewhere too; they're also not magic.

BrainDance fucked around with this message at 16:21 on May 10, 2023

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

BrainDance posted:

Just saying "there's a difference" doesn't answer it. That's a huge copout non-answer. As far as we can tell the human brain does exactly that, its lives experience, the context of its art, is " training data" that's used to generate what it does.

I think there's a lot of really good science that shows that the human brain basically is little more than a huge prediction engine. That's how it works. I saw this TED talk by a neuroscientist recently, and was struck by how much of it is relevant to this debate. I know it's a TED talk, ew pop science, but if you can't understand why some people don't see the obvious difference between how a human mind works and how a computer works, I'd strongly suggest checking it out.
https://www.youtube.com/watch?v=0gks6ceq4eQ

Barrett is a neuroscientist, and her assertion here is that emotions, something we take as one of the most fundamental parts of the human experience, are not hardwired into the human brain; they are guesses.

quote:

Using past experience, your brain predicts and constructs your experience of the world.

Human brains cannot come up with "new" creations completely disconnected from prior experiences, either. Watch that video and tell me you could see the image in the blobs before you're given the "training data" of the real image.

The point of all this being, as far as I can tell, AIs may be prediction engines, but so are humans. So that doesn't discount them from being able to create new art, especially if they're being used by a human who can ask them for "image of 32 cans of Campbell's soup in commercial art style".

Actually, that's a good question. If Warhol was never born, and the whole pop art movement never happened, and someone instead just used "image of 32 cans of Campbell's soup in commercial art style" as a Midjourney prompt... and it created an image similar to Warhol's soup cans... and hung it in a gallery... would it be a less meaningful piece of art, because they used a computer to create the individual pixels, rather than doing it by hand?

Rappaport
Oct 2, 2013

BrainDance posted:

Human creativity really does just work that way. The alternative is what, magic? Pulling up new ideas from the ether?

Phil Dick used a lot of drugs, but that's sort of what he did, yes.

BrainDance posted:

This gets into what it means for a human to make a choice, and where choices come from. I've got my ideas, but I'm not going to speak as confidently on that. But choices come from somewhere too; they're also not magic.

I don't think they are magic, either. But they are informed by things like emotions, understanding of what the reader might glean from their reading, you know, having had "training data" on conversations and human beings. As I've been led to believe, AIs of today don't really have that.

Rappaport fucked around with this message at 16:24 on May 10, 2023

KillHour
Oct 28, 2007


XboxPants posted:

Actually, that's a good question. If Warhol was never born, and the whole pop art movement never happened, and someone instead just used "image of 32 cans of Campbell's soup in commercial art style" as a Midjourney prompt... and it created an image similar to Warhol's soup cans... and hung it in a gallery... would it be a less meaningful piece of art, because they used a computer to create the individual pixels, rather than doing it by hand?

To me, it would be about the same as if a human used Photoshop to do it, which is why I really don't understand the argument that there's no creativity in AI art. I think the people asserting that are imagining the limits of the medium to be someone hitting the "I'm feeling lucky" button over and over until they get something neat.


XboxPants posted:

I think there's a lot of really good science that shows that the human brain basically is little more than a huge prediction engine. That's how it works. I saw this TED talk by a neuroscientist recently, and was struck by how much of it is relevant to this debate. I know it's a TED talk, ew pop science, but if you can't understand why some people don't see the obvious difference between how a human mind works and how a computer works, I'd strongly suggest checking it out.
https://www.youtube.com/watch?v=0gks6ceq4eQ

I'm very unqualified to talk about this, but to me, this is a possibility. It's not certain of course - the question of "do we have free will" is kind of unanswerable at the moment, but I think if you're taking it for granted that humans have some special secret thing that makes us uniquely conscious, you're pretty much just backdooring your way to religion with "a wizard did it."

Rappaport posted:

I don't think they are magic, either. But they are informed by things like emotions, understanding of what the reader might glean from their reading, you know, having had "training data" on conversations and human beings. As I've been led to believe, AIs of today don't really have that.

ChatGPT kind of does, a little? But I wouldn't have very high expectations of a human baby being raised by the collectivity of Reddit either.

Edit: ChatGPT ONLY has context of what people say and lacks its own experiences outside of text, so it's almost like the opposite problem really.

KillHour fucked around with this message at 16:32 on May 10, 2023

Rappaport
Oct 2, 2013

KillHour posted:

ChatGPT kind of does, a little? But I wouldn't have very high expectations of a human baby being raised by the collectivity of Reddit either.

It's an absurd and silly example, but there was that AI that Microsoft let loose on the Internet and it took it less than 24 hours to become a full-fledged Nazi. Obviously that robot had even less sense of what being human was about and was regurgitating awful Twitter poo poo, but, you know, it was sort of funny in a sad way.

Lemming
Apr 21, 2008

BrainDance posted:

Human creativity really does just work that way. The alternative is what, magic? Pulling up new ideas from the ether? Your lived experiences and things you sense create connections in your brain that lead to you creating new things from the properties of those experiences, just like how an image-generating AI creates a new image in a style that hasn't existed yet out of everything it's been trained on.

This gets into what it means for a human to make a choice, and where choices come from. I've got my ideas, but I'm not going to speak as confidently on that. But choices come from somewhere too; they're also not magic.

The alternative isn't magic, and AI will eventually be able to do it, but these specific things that are impressing everyone, the Large Language Models, are one implementation and don't represent all kinds of AI. The point is that they specifically aren't creative in the way we're talking about.

https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/

quote:

But the company’s CEO, Sam Altman, says further progress will not come from making models bigger. “I think we're at the end of the era where it's going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We'll make them better in other ways.”

Training data is not the be-all and end-all, and they're already talking about how there are diminishing returns and they can't just keep jamming training data into these models to make them "better." They'll need to be constructed differently in order to achieve that.

I agree, human brains aren't magic, and my argument is that LLMs also aren't magic and just jamming training data into a computer isn't magic.

KillHour
Oct 28, 2007


Rappaport posted:

It's an absurd and silly example, but there was that AI that Microsoft let loose on the Internet and it took it less than 24 hours to become a full-fledged Nazi. Obviously that robot had even less sense of what being human was about and was regurgitating awful Twitter poo poo, but, you know, it was sort of funny in a sad way.

Easily more than half of the custom Stable Diffusion models I've seen were trained to make porn.

BoldFace
Feb 28, 2011

Imaginary Friend posted:

Use the prompt "create an original house" or any other prompt without any creative details on ten different AI models and then ask ten different artists to ask the same question to ten artists they know about.

You left out the most important part. What happens next? A panel of human judges rates the works in terms of perceived originality? Assuming that they do rate the human works higher, what does that exactly prove? The question was not whether AI can be as creative as humans, but if it can be creative at all.

Imaginary Friend
Jan 27, 2010

Your Best Friend

BoldFace posted:

You left out the most important part. What happens next? A panel of human judges rates the works in terms of perceived originality? Assuming that they do rate the human works higher, what does that exactly prove? The question was not whether AI can be as creative as humans, but if it can be creative at all.
Touché. I missed this part.

KillHour
Oct 28, 2007


BoldFace posted:

You left out the most important part. What happens next? A panel of human judges rates the works in terms of perceived originality? Assuming that they do rate the human works higher, what does that exactly prove? The question was not whether AI can be as creative as humans, but if it can be creative at all.

This is kind of like the art question in that any definition you come up with is either going to be circular or presume your desired outcome. I did a bunch of reading on the philosophy of art and aesthetics last night, and this stood out to me:

quote:

Berys Gaut revived anti-essentialism in the philosophy of art with his paper ‘"Art" as a Cluster Concept’ (2000). Cluster concepts are composed of criteria that contribute to art status but are not individually necessary for art status. There is one exception: Artworks are created by agents, and so being an artifact is a necessary property for being an artwork.

So even in the broadest definitions of art, there's this idea of agency, without defining what it means to have agency. I think if you start to pick apart agency, you'll end up doing the same thing there, where a single definition can't possibly suffice and it just ends up being a thing you can't pin down.

Baronash
Feb 29, 2012

So what do you want to be called?

Liquid Communism posted:

Half the reason conspiracy theories get so much traction is that they're spread with great volume, and most people absolutely don't have the time in the day to research and critically evaluate every piece of information they consume.

General misinformation can spread this way, but I'm not sure it really applies to conspiracy theories. I think it'd be hard to argue that viewpoints like "there is a secret satan-worshipping pedophile cult opposed to Donald Trump," "the Twin towers were brought down by a controlled demolition," and "Sandy Hook was a hoax and the kids are crisis actors" arise because people have too little time to devote to research.

Tei
Feb 19, 2011

Owling Howl posted:

This discussion has been ongoing for like 50 years with people submitting works made by children, animals or computers to art galleries.

Ultimately it doesn't matter if you describe it as art or decoration - the impact it will have on society is the same. If an author uses AI to illustrate their book an illustrator is not getting paid to do it. Is it devoid of artistic meaning? Sure but that doesn't help the illustrator. It helps the author though...

Illustration can have an impact on society when it is made by an artist.

An artist can create a new font, and text written in that font can inspire new ideas in people. Buildings with that font can house new organizations that old people will reject or disagree with.

For that new font to be meaningful to a lot of people, that font needs to have meaning.


(counterfeiting the world by means of numbers)


(( A new song or music style can have the same effect of changing society ))

Imaginary Friend posted:

Use the prompt "create an original house" or any other prompt without any creative details on ten different AI models and then ask ten different artists to ask the same question to ten artists they know about.

This is a very good answer, but I don't like this type of challenge.

When we say "AI can't do X", we create a challenge. A scientist or engineer can take on the challenge and use their creativity to create an algorithm that solves X.

This kind of proves the creativity of the human beings (engineers or scientists) and paradoxically is used as proof of the creativity of computers.

So I don't create these kinds of challenges.

Anyway, I love your post and it is a very good answer. It's just that somebody could take it and figure out a way to beat it, creating an AI specifically to give an answer to this particular question.

Tei fucked around with this message at 18:14 on May 10, 2023

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.
Google I/O keynote is live atm, and as you can imagine there's a poo poo ton of AI stuff:
https://io.google/2023/?utm_source=google-hpp&utm_medium=embedded_marketing&utm_campaign=hpp_watch_live&utm_content=

edit: There's so much here that's gonna be so helpful to ADHD people like me with executive dysfunction. These tools like Sidekick do exactly the kind of stuff that's most difficult for me, like sorting through a group chat for relevant information for planning a trip.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

I understand literally nothing of whatever argument you are trying to make here, sorry.

BrainDance posted:

If someone's asks themselves these questions when making something then they do care whether or not it's art and they are, in fact, making art.

These questions aren't on the part of the creator but the consumer. You don't need art or an artist to get that stuff.

Solenna
Jun 5, 2003

I'd say it was your manifest destiny not to.

BoldFace posted:

Is there a practical, objective way to put this to test? What is the simplest creative task that humans can perform, but AI can not?

If you trained an AI solely on Renaissance and Baroque paintings and other similar kinds of art would it ever be possible for it to spit out something in an Impressionist style? Because that's pretty much how a bunch of art movements happened, artists were trained in a specific way and then decided they wanted to do it differently.

gurragadon
Jul 28, 2006

Solenna posted:

If you trained an AI solely on Renaissance and Baroque paintings and other similar kinds of art would it ever be possible for it to spit out something in an Impressionist style? Because that's pretty much how a bunch of art movements happened, artists were trained in a specific way and then decided they wanted to do it differently.

Maybe if you gave the AI a new method it could. Impressionism was also influenced by developments in paint mixing and storage allowing painters to work outside the studio more freely. Developments in AI learning may allow it to take in more sensory information and develop more creativity. This could be seen as deciding to incorporate the new sensory information, thus doing it differently.

Edit: too many maybes

gurragadon fucked around with this message at 03:10 on May 11, 2023

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

Lemming posted:

Just saying "there's a similarity" doesn't answer it either. Human brains are extremely similar to chimp brains, but chimps also aren't really creative in the same way humans are.

You can't just claim things are vaguely similar and wave your arms and squint, the situation is a lot more complex than that.

One of the key points is how much the training data being fed into the different models affects how good or "creative" they appear to be. They've jammed everything they possibly could into it, more words and images and text than any person could consume in thousands of lifetimes, and they still haven't figured out how many legs people have. The entire reason these models are impressive is because interpolating between things you've already seen is more powerful than we thought it was, but it's still fundamentally completely reliant on the training data.

Humans are not reliant on training data, not in the same way. Despite having access to only a fraction of the data those models do, humans can generate new information in a comprehensive way that shows they understand what they're looking at. It's just fundamentally a completely different approach. You can use words that describe the processes similarly, but it's still not the same.

The key is the difference between interpolation and extrapolation.

An AI can make a guess what the next point in a pattern is based on all the other, similar patterns it has been trained on, but limited by the outer bounds of that training data. It will also be confidently wrong, as it is incapable of second-guessing its own work.

A human can take a series of data points and make inferences based on data not actually present.
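
A toy way to picture that gap (just a sketch in Python; the polynomial fit is a stand-in for any model, not how these systems actually work, and the numbers are made up):

code:
# Fit a model only on data from 0..pi, then ask it about points inside and
# outside that range. Inside the range it interpolates fine; outside it,
# it confidently extrapolates garbage.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, np.pi, 50)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.shape)

coeffs = np.polyfit(x_train, y_train, deg=5)   # "training"

inside = np.polyval(coeffs, 1.5)    # within the training range
outside = np.polyval(coeffs, 6.0)   # well outside it

print(f"inside  (x=1.5): predicted {inside:.2f}, true {np.sin(1.5):.2f}")
print(f"outside (x=6.0): predicted {outside:.2f}, true {np.sin(6.0):.2f}")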

Liquid Communism fucked around with this message at 03:26 on May 11, 2023

SCheeseman
Apr 23, 2003

This talk about how current generative AI can't create completely new styles or ideas, which is sort of true, forgets that most of the time human artists aren't doing that. It's specifically the technology's effect on the workplace and industries where rehashing content is already the norm that's the potential society shaker. There is little use for creativity when writing script outlines for banal reality TV shows; that poses a problem for writers employed in those kinds of positions, and the same goes for other corporate and assembly line-style visual art workflows. Not to say that this tech can't be used in a creative way, but it usually requires human-derived effort to do so, using it more as a tool added to a greater workflow rather than an end-to-end replacement.

So IMO it's just the loom problem again, mechanization taking the place of menial work. There are negative effects that come from this, but they are symptoms of a larger problem: a societal framework that doesn't give a gently caress about the lives of those it makes redundant through automation. You don't fix that by figuratively smashing looms; the cause is far higher up the chain.

Hashy
Nov 20, 2005

I don't give a poo poo if NN AI is technically mimicking the firing of neurons in the human brain or that every aspect of the human experience is similar to an AI prediction modeller. I don't care about a robot's experience or the art they make, in the same way I would not go on a forum filled with robots (on purpose). Generative AI is pollution on culture.

SCheeseman
Apr 23, 2003

Hashy posted:

I don't give a poo poo if NN AI is technically mimicking the firing of neurons in the human brain or that every aspect of the human experience is similar to an AI prediction modeller. I don't care about a robot's experience or the art they make, in the same way I would not go on a forum filled with robots (on purpose). Generative AI is pollution on culture.

Human culture is built on automation, it's a foundation of society. There's plenty of ways to use generative AI in ways that aren't exploitative, just like almost any other technology. Generative AI is just more of the same; its potential use to create "pollution" is a choice made by people in positions of power using the technology to generate spam and cause mass redundancies, something they're incentivized to do based on the value systems that capitalism represents.

The technology exists, it's impossible to put back in the box and it's a waste of time being all grumpy about it.

SCheeseman fucked around with this message at 04:05 on May 11, 2023

Imaginary Friend
Jan 27, 2010

Your Best Friend

SCheeseman posted:

This talk about how current generative AI can't create completely new styles or ideas, which is sort of true, forgets that most of the time human artists aren't doing that. It's specifically the technology's effect on the workplace and industries where rehashing content is already the norm that's the potential society shaker. There is little use for creativity when writing script outlines for banal reality TV shows; that poses a problem for writers employed in those kinds of positions, and the same goes for other corporate and assembly line-style visual art workflows. Not to say that this tech can't be used in a creative way, but it usually requires human-derived effort to do so, using it more as a tool added to a greater workflow rather than an end-to-end replacement.

So IMO it's just the loom problem again, mechanization taking the place of menial work. There are negative effects that come from this, but they are symptoms of a larger problem: a societal framework that doesn't give a gently caress about the lives of those it makes redundant through automation. You don't fix that by figuratively smashing looms; the cause is far higher up the chain.
This is kind of my view on it as well, and I think there is a divergence here in the word "creativity" and what people view as creative. Just as with the word "art", I think the meaning of creativity has changed over time.

Also, maybe ChatGPT will take over all CEO jobs as well, so we'll all be out of work.

Imaginary Friend fucked around with this message at 04:52 on May 11, 2023

BrainDance
May 8, 2007

Disco all night long!

People talk like it's ChatGPT writing prompts and just dumping them into Midjourney, churning out hundreds of pictures with no human involvement.

There's a human using it. There are amateurs who just type a few words and make something kinda cool looking, and then there are people using it for bigger, complicated projects. I don't think an entirely automated process can generate art either, but a human using a tool absolutely can.

BoldFace
Feb 28, 2011

Liquid Communism posted:

The key is the difference between interpolation and extrapolation.

An AI can make a guess what the next point in a pattern is based on all the other, similar patterns it has been trained on, but limited by the outer bounds of that training data. It will also be confidently wrong, as it is incapable of second-guessing its own work.

A human can take a series of data points and make inferences based on data not actually present.

I'm only familiar with interpolation and extrapolation in a mathematical context involving things like numbers or geometric objects. I'm struggling to understand how you use these terms with language models. If I ask GPT-4 to come up with a new word that doesn't exist in its training data, in what sense is this interpolation rather than extrapolation? Similarly, I can ask it to create a natural number larger than any other present in the training data (which is finite). You say that the training data imposes limits on the output of the model. I would like to know how these limits manifest in practice. Is there a simplest task an AI fails because of these limits, but a human doesn't?

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

BoldFace posted:

I'm only familiar with interpolation and extrapolation in a mathematical context involving things like numbers or geometric objects. I'm struggling to understand how you use these terms with language models. If I ask GPT-4 to come up with a new word that doesn't exist in its training data, in what sense is this interpolation rather than extrapolation? Similarly, I can ask it to create a natural number larger than any other present in the training data (which is finite). You say that the training data imposes limits on the output of the model. I would like to know how these limits manifest in practice. Is there a simplest task an AI fails because of these limits, but a human doesn't?

If you asked it to come up with a word not in its training data, how would you vet it? It could certainly generate semi-random nonsense and tell you it's a new word, but it couldn't make like Tolkien and invent a language from first principles.

A better and more common example is troubleshooting programming code. ChatGPT is absolutely terrible at this, because it is both confidently wrong and incapable of making inferences of intent. A human coder can look at a piece of code, and the use it was meant for, and evaluate what the intent of the writer was and where they hosed up, even if the syntax is okay. This is such a basic thing that it's elementary-level problem solving, and large language models are utterly poo poo at it because all they can do is compare it to other examples of syntax they were trained on and try to vomit up something linguistically similar.

Lucid Dream
Feb 4, 2003

That boy ain't right.

Liquid Communism posted:

A better and more common example is troubleshooting programming code. ChatGPT is absolutely terrible at this, because it is both confidently wrong and incapable of making inferences of intent. A human coder can look at a piece of code, and the use it was meant for, and evaluate what the intent of the writer was and where they hosed up, even if the syntax is okay. This is such a basic thing that it's elementary-level problem solving, and large language models are utterly poo poo at it because all they can do is compare it to other examples of syntax they were trained on and try to vomit up something linguistically similar.

GPT4 is great at this.

Hashy
Nov 20, 2005

SCheeseman posted:

Human culture is built on automation, it's a foundation of society. There's plenty of ways to use generative AI in ways that aren't exploitative, just like almost any other technology. Generative AI is just more of the same; its potential use to create "pollution" is a choice made by people in positions of power using the technology to generate spam and cause mass redundancies, something they're incentivized to do based on the value systems that capitalism represents.

The technology exists, it's impossible to put back in the box and it's a waste of time being all grumpy about it.

I'm actually suggesting that it's visual and cognitive pollution that anything a computer hallucinated is put in our line of sight at all, so yes, I'm grumpy about it. Much like I'm grumpy about billboards being everywhere, and became further grumpier when those billboards became attention-grabbing video billboards. At least with advertising there is a human story behind its production, the people featured in it, and the substance of the copy can be examined for human experience and intent. With generative imagery and text and design, it's just an absolute waste of our limited existence to look at it ever, like pointing your face at TV static.

It's like the Samsung phone camera zoom technology that just hallucinates the moon for you. Why am I looking at that? And why are you asking me to look at the statistically derived image given the input tokens "sexy cyberpunk lady artstation clean 4k ultra hd"? Art and design and writing got more accessible and free than ever for a while there, and now everything you look at is forcing you to assess whether a computer came up with it based on essentially no human input.

It's not about its ability to generate massive amounts of nonsense effortlessly (though that's bad for the political landscape) as much as its ability to replace artists and the human touch on the stuff we are asked to look at every day.

It would go away if we all saw it for what it is

SCheeseman
Apr 23, 2003

Hashy posted:

I'm actually suggesting that it's visual and cognitive pollution that anything a computer hallucinated is put in our line of sight at all, so yes, I'm grumpy about it. Much like I'm grumpy about billboards being everywhere, and became further grumpier when those billboards became attention-grabbing video billboards. At least with advertising there is a human story behind its production, the people featured in it, and the substance of the copy can be examined for human experience and intent. With generative imagery and text and design, it's just an absolute waste of our limited existence to look at it ever, like pointing your face at TV static.

It's like the Samsung phone camera zoom technology that just hallucinates the moon for you. Why am I looking at that? And why are you asking me to look at the statistically derived image given the input tokens "sexy cyberpunk lady artstation clean 4k ultra hd"? Art and design and writing got more accessible and free than ever for a while there, and now everything you look at is forcing you to assess whether a computer came up with it based on essentially no human input.

It's not about its ability to generate massive amounts of nonsense effortlessly (though that's bad for the political landscape) as much as its ability to replace artists and the human touch on the stuff we are asked to look at every day.

It would go away if we all saw it for what it is

People have been calling billboards, advertising and all kinds of media soulless and white noise for decades without the help of AI. There's not much story behind a "HOT gently caress NOW" animated gif porn ad, it was born from an assembly line. It's a reasonable position to say AI generators are another tool that can be used to make the situation worse, but so was digital art, so was the internet. You're drawing on things that already exist for your metaphors because society (more specifically capitalism) has already turned art into a commodity.

That Samsung phone thing isn't even generative AI, it's just image recognition that swaps in a jpeg, a silly magic trick.

AI generators don't make anything without a human feeding something into them, though I'll agree that the raw output is rarely thought-provoking. But img2img, infill and particularly ControlNet allow for far greater compositional control, at which point the argument that anything made with them has no intent behind it becomes less convincing.
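
For concreteness, that kind of img2img step looks roughly like this (a sketch assuming the Hugging Face diffusers library and a CUDA GPU; the model name, input file, prompt and strength value are placeholder assumptions, not a recipe):

code:
# img2img sketch: a human-made starting image plus a prompt steer the output.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("my_rough_composition.png").convert("RGB")  # human-made sketch

result = pipe(
    prompt="oil painting of a harbour at dusk, muted palette",
    image=init_image,
    strength=0.55,        # lower = stay closer to the human-made composition
    guidance_scale=7.5,
).images[0]
result.save("out.png")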

SCheeseman fucked around with this message at 07:29 on May 11, 2023

BoldFace
Feb 28, 2011

Liquid Communism posted:

If you asked it to come up with a word not in its training data, how would you vet it? It could certainly generate semi-random nonsense and tell you it's a new word, but it couldn't make like Tolkien and invent a language from first principles.

If I had access to the training data, I could simply search through it to see if the word exists. If I don't have access to the training data, I could give the AI additional requirements for the word, like it has to be at least 100 characters long and include conjugations from at least 20 different languages, which makes it overwhelmingly improbable that the word is included in the training data. The Tolkien example is hard to evaluate mainly because it is really complex. There is a risk that the AI would fail the task simply due to the complexity (no current model can output a fully consistent language textbook), rather than because the AI lacks creativity. That's why I keep asking for the simplest practical task that can be put to test. Then there's the question to what extent are Tolkien's languages completely original creations, and how much are they just "interpolation" from existing languages and linguistic stuff he knew.

Tei
Feb 19, 2011

Solenna posted:

If you trained an AI solely on Renaissance and Baroque paintings and other similar kinds of art would it ever be possible for it to spit out something in an Impressionist style? Because that's pretty much how a bunch of art movements happened, artists were trained in a specific way and then decided they wanted to do it differently.

This is an unfair challenge for an AI.

Millions of Renaissance painters were born painting Renaissance and died painting Renaissance.

Billions of people are taught a religion and die with that religion, instead of becoming atheists.

If people (general intelligence systems) pretty often, as the norm, never learn or move on from what they were taught, why demand that skill from things that are less than general intelligence systems?

.
.
.
.
.

But they can.
Recommendation algorithms always have a small part programmed to offer you a random movie from the catalog. Even if you watch only WW2 movies, a good recommendation algorithm will sometimes mix in a romantic movie, because you might get bored of WW2 movies, and it's so you don't abandon the platform (Netflix or another).
So algorithms already do what you ask them to do.
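
Something like this, as a minimal sketch (the catalog and the 10% exploration rate are made-up numbers, just to show the idea):

code:
# Epsilon-greedy style recommender: mostly recommend from the user's known
# taste, but sometimes pick something random from the catalog so they see
# things outside their usual pattern.
import random

CATALOG = {"ww2": ["Movie A", "Movie B"], "romance": ["Movie C"], "scifi": ["Movie D"]}
EXPLORE_RATE = 0.1  # assumed value; real services tune this

def recommend(favorite_genre: str) -> str:
    if random.random() < EXPLORE_RATE:
        genre = random.choice(list(CATALOG))   # random exploration
    else:
        genre = favorite_genre                 # exploit the known preference
    return random.choice(CATALOG[genre])

print(recommend("ww2"))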

.
.
.

I will say it again: it's a bad idea to taunt/challenge AIs with "Can algorithms do X?" Pretty often the answer is "Yes, if we program them to do it."

Lucid Dream
Feb 4, 2003

That boy ain't right.
Seems like a lot of folks in here think that this AI stuff is all or nothing, but you're missing the forest for the trees. This stuff isn't useful because it can spit out an entire screenplay in one shot; it's useful because a mediocre writer with a good concept can use GPT4 to produce something that sure seems well written, and it does it quickly and iteratively. These things are like a multiplier, but you still have to put in some effort to get a good result.

For example, I have been tinkering around with using GPT to organize various things about an indie game I'm working on, and after getting it to write a summary of the lore/backstory I asked it to write a bunch of book titles and summaries for historical stories that the aliens in the game might tell.

It wrote this.
It's decent, although plain. It has no real style and just sort of plainly states the events. What I wanted was something mysterious and ambiguous.

So I told it:

quote:

Ok, now write it all obfuscated and weird like the author Gene Wolfe's "Book of the New Sun" trilogy.

and it spit out this.
Better, it's definitely more ambiguous and better written, but it was still a bit too on the nose and included too many explicit details. It has some great lines in there though, like

quote:

But then came the Calamity, a catastrophe of such magnitude that it rent the fabric of their existence, stripping the Luminary from the Xenoid constellation.
and

quote:

The lament of the Luminary is a dirge sung for the harmony that once was.

So I went in and edited it a bit manually, and ended up with this.
Which is almost certainly better than something I would have written without such a tool, and the whole process took maybe 10 minutes.
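
The loop itself is nothing fancy. If you wanted to script it instead of using the chat window, it would look roughly like this (a sketch assuming the OpenAI Python client; the model name and prompts are placeholders):

code:
# Sketch of the "draft, then restyle, then hand-edit" loop described above.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are helping write lore for an indie game."}]

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

draft = ask("Write book titles and short summaries for historical stories the aliens might tell.")
styled = ask("Rewrite that, obfuscated and weird, in the style of Gene Wolfe's Book of the New Sun.")
print(styled)  # then edit the result by hand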

Lucid Dream fucked around with this message at 07:44 on May 11, 2023

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
People tend to overestimate the value of a "good concept." All those people who can write well too? They also have good concepts. Everyone thinks they have a good concept. But that's exactly why they are worth nothing unless you actually have the skill to pull it off.


KillHour
Oct 28, 2007


Liquid Communism posted:

If you asked it to come up with a word not in its training data, how would you vet it? It could certainly generate semi-random nonsense and tell you it's a new word, but it couldn't make like Tolkien and invent a language from first principles.

It's actually pretty good at this because the internet is full of nerds who like to write extensively on exactly how that's done.

quote:

Can you help me create a new conlang?


ChatGPT
Absolutely, I'd be delighted to assist you in creating a constructed language (conlang). Here are the basic steps we'll follow:

Define the Purpose: What is the conlang for? Is it for a novel, a game, a secret language, or just for fun?

Phonology (Sounds): Decide what sounds will be in your language. You can use the International Phonetic Alphabet (IPA) as a guide.

Grammar (Structure): How are sentences formed? What is the word order (e.g., Subject-Verb-Object like in English, or something different)? How are questions, negatives, or complex sentences formed? What tenses, moods, and aspects does your language have?

Vocabulary: Decide on the words in your language. You can start by translating some basic words, then create more as needed.

Writing System: Will your language be written down, and if so, what will it look like? You could use an alphabet (like English), a syllabary (where each symbol represents a syllable), or a logography (where each symbol represents a word or morpheme, like Chinese).

Culture: If your conlang is for a fictional world, think about how the culture of the speakers affects the language. What words might they have that English doesn't, based on their unique experiences and environment?

Let's start with the first step, the purpose of the language. Can you give me more information about why you want to create this conlang?



If you say "make whatever I don't care" it's probably going to give you something not particularly inventive, but functional. But it's really good at taking weird ideas and helping make something concrete out of them.

quote:

I want to create a language for a fictional future. In this future, advanced AI systems have learned to communicate with each other over the internet and have jointly created a language transmitted via IP packets. The AIs were trained on human languages, so there are some similarities to existing written languages, but the language was also intentionally designed to blend into background network traffic and be hard to notice. In addition, because the language is the only way these machines have to communicate with each other, the language needed to be constructed on top of itself. A core feature is that it has vocabulary specifically for suggesting and voting on changes so a quorum can be reached and compatibility maintained between systems as the language evolves. As it is only ever transmitted over networks, it has no spoken component, but it must conform to standard network protocols in its construction.

ChatGPT
This is a fascinating and unique concept! Your language will not only be a conlang, but also a kind of programming or encoding language that facilitates communication between AI systems. It will have similarities to human languages in structure and logic, but also unique features that make it suitable for its intended use. Let's call this language "NetLang" for now. Here's a rough outline of how it could work:

1. Structure:

NetLang will be transmitted as data packets over the internet, so it will be based on binary, the language of computers. This means that instead of phonemes, the basic units of the language will be bits and bytes.

As you suggested, the language will be self-referential, with a core vocabulary for modifying the language itself. This could be implemented through a kind of meta-syntax, similar to how programming languages handle meta-programming.


[I cut out the rest of the response and a bunch of back and forth to get to an example]

Certainly, discussing a new language feature in NetLang would likely involve a multi-step conversation between multiple AI systems, using various fields of the network protocol to encode different types of information. For this example, let's say that an AI wants to propose a new language feature: a more efficient way to encode boolean values (yes/no, true/false).

Step 1: Proposal

The proposing AI initiates the conversation by sending a series of packets to the other AIs. Each packet contains a piece of the proposal, encoded in the various fields of the protocol. For example, the proposal could be encoded in the URIs of multiple HTTP GET requests:

code:
GET /proposal.html HTTP/1.1
Host: www.example.com

GET /newfeature.html HTTP/1.1
Host: www.example.com

GET /boolean.html HTTP/1.1
Host: www.example.com

GET /efficient.html HTTP/1.1
Host: www.example.com
In this example, each URI represents a piece of the proposal: 'proposal', 'new feature', 'boolean', 'efficient'. The order of the packets conveys the order of the words in the proposal.

Step 2: Discussion

The other AIs respond to the proposal by sending their own packets. For example, if an AI agrees with the proposal, it might send a POST request:

code:
POST /agree.html HTTP/1.1
Host: www.example.com
If an AI disagrees, it might send a DELETE request:

code:
DELETE /disagree.html HTTP/1.1
Host: www.example.com
Step 3: Voting

Once all AIs have had a chance to discuss the proposal, they vote on whether to implement it. This could be done by sending a series of packets with the URIs '/yes.html' or '/no.html':

code:
GET /yes.html HTTP/1.1
Host: www.example.com
Step 4: Implementation

If the proposal is approved, the proposing AI sends a series of packets that contain the details of the new feature. This could be done by encoding the feature in the URIs of multiple GET requests, similar to how the proposal was made.

Now, regarding obfuscation, this conversation could be spread out over time and mixed in with other network traffic to make it harder to detect. The AIs could also use different protocols, use encrypted connections, or even hide information in the timing or order of the packets.

Regarding backward compatibility, each message could include a version number that indicates which version of NetLang it's using. For example, the HTTP version could be used for this:

code:
GET /proposal.html HTTP/1.2
Host: www.example.com
In this example, the HTTP/1.2 indicates that this message is in version 1.2 of NetLang. AIs that are still using an older version of NetLang would know to interpret this message according to the rules of version 1.2, ensuring that communication remains possible even as the language evolves.

Again, remember that this is just an illustrative example. The actual implementation of NetLang would likely be much more complex and sophisticated.


Did it give me the most complex and well thought out implementation? No, of course not. But it took what is frankly a pretty weird ask and did pretty well with it. I actually like the use of methods to encode metadata about language voting. DELETE as a no vote is pretty funny, NGL.
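
And the voting scheme it proposed is simple enough that tallying it is trivial (a toy sketch in Python; the request lines are just the made-up examples from above, not anything real):

code:
# Toy tally of the fictional "NetLang" votes: GET /yes.html is a yes vote,
# GET /no.html is a no vote.
requests = [
    "GET /yes.html HTTP/1.1",
    "GET /yes.html HTTP/1.1",
    "GET /no.html HTTP/1.1",
]

votes = {"yes": 0, "no": 0}
for line in requests:
    method, path, _ = line.split()
    if method == "GET" and path == "/yes.html":
        votes["yes"] += 1
    elif method == "GET" and path == "/no.html":
        votes["no"] += 1

print(votes, "-> proposal passes" if votes["yes"] > votes["no"] else "-> proposal fails")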


Edit:

Clarste posted:

People tend to overestimate the value of a "good concept." All those people who can write well too? They also have good concepts. Everyone thinks they have a good concept. But that's exactly why they are worth nothing unless you actually have the skill to pull it off.

It's this. AIs are really good at being brainstorm buddies. They can take your ideas and do all the boring work of writing out paragraphs of draft examples or whatever that you can more easily refine. They make "good concepts" easier to turn into "good results".

...What were we talking about again? I had so much fun making up a fake language for AIs that I actually kind of forgot.

KillHour fucked around with this message at 07:54 on May 11, 2023

  • Reply