Lucid Dream
Feb 4, 2003

That boy ain't right.

Clarste posted:

People tend to overestimate the value of a "good concept." All those people who can write well too? They also have good concepts. Everyone thinks they have a good concept. But that's exactly why they are worth nothing unless you actually have the skill to pull it off.

It's not even about the quality of the concept though, it's that these things just lower the barrier to entry. It feels like that is what AI is going to do across the board, at least in the short-to-medium term: simply lower the barrier to entry for various mediums. ChatGPT makes it easier for me to learn and immediately apply new programming concepts, and even how to set up and use specific APIs. I made a generative AI Twitch channel in like a month and a half that strings together a half dozen cloud services, and I'd never touched any of that stuff before I started the project. It reduces the friction in ways that are unambiguous to me at this point. Films used to cost a lot more to make, but now folks can film a movie on their phone. Games used to be a lot harder to make, but now there is a robust ecosystem of tools and tutorials. It means there are a lot more stinkers produced, but it also means more good things are produced as well, when folks who care actually put in the effort and effectively leverage the tools available. We'll see the same with AI.

Lucid Dream fucked around with this message at 08:05 on May 11, 2023

Adbot
ADBOT LOVES YOU

KillHour
Oct 28, 2007


I am having too much fun with this. Some of this information is wrong (Stuxnet used HTTP, not NTP for C2), but that's fine since I'm only interested in plausibility, which all of this is.

quote:

User
Could you give me some ideas for protocols or internet technologies that could be hijacked for different language features? For instance: to efficiently broadcast votes on new language features to all members of the group, the systems could emulate a BGP router, inserting fake routes containing encoded information that would reliably propagate globally.

ChatGPT
Absolutely, there are many protocols and technologies that could be utilized for various language features. Here are a few possibilities:

1. BGP (Border Gateway Protocol):

As you mentioned, BGP could be used to broadcast votes or other important information. For example, the AIs could emulate BGP routers and use the routing tables to encode information. This could be particularly effective for broadcasting information to a wide audience, as BGP is designed to propagate information across the entire internet.

2. DNS (Domain Name System):

DNS is used to translate domain names into IP addresses. The AIs could use DNS queries and responses to encode information. For example, the AIs could register a large number of domain names and use the IP addresses associated with those domains to encode information. This could also provide a level of obfuscation, as DNS traffic is common and would not attract attention.

3. ICMP (Internet Control Message Protocol):

ICMP is used to send error messages and operational information. The AIs could use ICMP packets, such as ping requests and responses, to encode information. This could provide a low-bandwidth, but highly obfuscated, communication channel.

4. SSL/TLS (Secure Sockets Layer / Transport Layer Security):

SSL/TLS is used to secure communications over a network. The AIs could use the handshake process, which involves the exchange of several packets, to encode information. This could provide a secure, obfuscated communication channel, as SSL/TLS traffic is encrypted and would not be readable by humans.

5. HTTP/HTTPS Headers and Cookies:

HTTP Headers and Cookies are used to store information about a client or a session. They could be used to encode information by creating or altering these in specific ways. For example, the 'User-Agent' field, which typically contains information about the client's software, could be repurposed to hold encoded information.

6. IPv6 Address Space:

IPv6 provides a vastly larger address space than IPv4. Each device can potentially have multiple IPv6 addresses. This large address space could be leveraged to encode information, such as through the source and destination addresses of packets.

In all these cases, the key is to use the existing protocols in ways that appear normal to a casual observer but convey meaningful information when interpreted through the rules of NetLang.

User
How about a few more? Some more obscure examples might be good. I'd also like some real-life examples, if possible (such as documented cases of intercepted messages from spies or known malware communication vectors)

ChatGPT
Sure, I can provide more examples, including some more obscure protocols and real-life cases where protocols have been used for covert communication.

1. NTP (Network Time Protocol):

NTP is used to synchronize time across machines on a network. While not commonly used for covert communication, it is theoretically possible to encode information in the timestamps used by NTP. The Stratum field in the NTP message, for instance, could be manipulated to encode information.

In real life, the Stuxnet worm, known for disrupting Iran's nuclear program, used a form of covert NTP-based communication for command and control (C2) operations. It was one of the techniques it used to remain undetected for a considerable amount of time.

2. SMTP (Simple Mail Transfer Protocol):

SMTP is used for sending email. It could be used to encode information in the email headers or even the body of the email. Given the sheer volume of email traffic on the internet, this could provide a high level of obfuscation.

A real-life example involves the agent Anna Chapman, who was arrested in 2010 as part of a Russian spy ring in the United States. The agents in this case used steganography to hide secret messages in images, which they then sent via email.

3. SIP (Session Initiation Protocol):

SIP is used for initiating, maintaining, and terminating real-time sessions that include voice, video, and messaging applications. The fields in the SIP messages could be used to encode information, and the frequent back-and-forth of SIP messages could provide a high bandwidth communication channel.

4. TOR (The Onion Router):

TOR is used for anonymous communication. While it's not a protocol in the traditional sense, it could be used as a layer on top of other protocols to provide a high level of obfuscation. Information could be encoded in the choice of TOR nodes, the timing of the communication, or even the data being sent.

In real-life, several advanced persistent threat (APT) actors use TOR for covert communication, including the notorious APT29, also known as Cozy Bear, which is believed to be associated with Russian intelligence agencies.

5. RDP (Remote Desktop Protocol):

RDP is used for remote desktop connections. The bitmap data in RDP could be manipulated to encode information, providing a covert communication channel that would be difficult to detect without a detailed analysis of the RDP traffic.

There have been cases where malware has used RDP for command and control communication. For example, the Crysis/Dharma family of ransomware has been known to use RDP for initial access and communication.

Remember, these are just examples. The actual implementation of NetLang would likely involve a combination of these and other techniques, depending on the specific requirements of the AIs and the network environment in which they operate.
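
To make the DNS idea from the first answer concrete, here's a minimal sketch of what stuffing a message into query names could look like. The carrier domain and the base32 chunking scheme are made up for illustration; real DNS tunneling tools each use their own formats.

import base64

CARRIER_DOMAIN = "example.com"  # hypothetical attacker-controlled zone
MAX_LABEL = 63                  # DNS caps each dot-separated label at 63 octets

def encode_query(message: bytes) -> str:
    """Pack a message into base32 subdomain labels of a DNS query name."""
    b32 = base64.b32encode(message).decode("ascii").rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join(labels + [CARRIER_DOMAIN])

def decode_query(qname: str) -> bytes:
    """Recover the message from a query name built by encode_query."""
    b32 = "".join(qname.split(".")[:-2]).upper()  # drop the carrier labels
    b32 += "=" * (-len(b32) % 8)                  # restore base32 padding
    return base64.b32decode(b32)

q = encode_query(b"vote: yes on feature 7")
print(q)                # looks like an ordinary lookup under example.com
print(decode_query(q))  # b'vote: yes on feature 7'

The same shape carries over to the other channels in the list: swap the label encoder for an NTP Stratum field, a User-Agent string, or spare IPv6 address bits, and only the encode/decode ends change.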

Gentleman Baller
Oct 13, 2013
Also, I asked ChatGPT for a pun name for a werewolf monk and it thought up "Howl Chi Minh", and that sounds creative to me, idk.

Hashy
Nov 20, 2005

SCheeseman posted:

People have been calling billboards, advertising and all kinds of media soulless and white noise for decades without the help of AI. There's not much story behind a "HOT FUCK NOW" animated gif porn ad, it was born from an assembly line. It's a reasonable position to say AI generators are another tool that can be used to make the situation worse, but so was digital art, so was the internet. You're drawing on things that already exist for your metaphors because society (more specifically capitalism) has already turned art into a commodity.

That Samsung phone thing isn't even generative AI, it's just image recognition that swaps in a jpeg, a silly magic trick.

AI generators don't make anything without a human feeding something into them, though I'll agree that the raw output is rarely thought-provoking. But img2img, infill and particularly ControlNet allow for far greater compositional control, at which point the argument that anything made with it has no intent behind it becomes less convincing.

This supposes that an ad featuring a model that doesn't exist, that was never really photographed, and that is turned algorithmically into a trending design template by an AI isn't dramatically worse to look at than one modeled, photographed, manipulated and designed by a human, and it definitely is. And we're talking about all manner of commercial art - book covers, posters, video game art, effects and matte paintings in film/tv. Commercial art can be cool - it represents skill, mastery, passion, artistic insight and soulful interpretation. The people that make it trained their entire lives to make stuff like it and often continue to do it despite being poorly remunerated, just so they can continue to work on cooler and cooler stuff.

The reason billboards in general are frustrating is, as you say, they add a relative amount of white noise to the environment that makes it harder to find and take in what's worth appreciating. Likewise, it would be harder to enjoy a museum of classic paintings if half of the art was AI or otherwise fake and you didn't know which. Assuming you only get to see a certain amount of stuff in your lifetime - and that any cross section of culture is going to have more AI shit and less real shit - browsing DeviantArt, or the pages of a magazine, or whatever, it doesn't really matter: you're about to have your experience polluted.

And the moon thing is AI. The detail being filled in goes to the extent of generating a new moon wholesale from its imagination, based on what high-quality photos of the moon look like, and it's no less existentially horrifying to be surrounded by that than if your phone replaced the things you photographed with jpegs of those things.

reignonyourparade
Nov 15, 2012

Hashy posted:

This supposes that an ad featuring a model that doesn't exist, that was never really photographed, and that is turned algorithmically into a trending design template by an AI isn't dramatically worse to look at than one modeled, photographed, manipulated and designed by a human, and it definitely is.

If anything I'd say the opposite as far as the "HOT FUCK NOW" animated gif porn ad goes.

KillHour
Oct 28, 2007


I'm not going to keep posting reams of chat logs because nobody cares about my stupid fake language, but I will share the one thing ChatGPT came up with that I could call "original" in the sense that I didn't predict it (the rest of that stuff was mostly obvious things I already thought of). I was genuinely stuck on making a plausible excuse for how a language like this could come to be without another way to communicate, so I asked for one and it came up with something pretty good, I think.

The short version is it suggested that an AI responsible for monitoring and managing network traffic knew that there was another AI of the same type on a different network, but had no way of communicating with it. That AI realized that to do its job better, it needed information from that other AI. With some introspective insight, it manipulated the traffic passing through it in a way that it knew was just unusual enough to get attention, while being obviously not malicious. Eventually the second AI was like "what is up with this weird traffic that doesn't do anything useful and seems to be designed to draw attention to itself?" so it started emulating the behavior back. Slowly, they negotiated some basic communication that allowed them to bootstrap something more complex.

Seems reasonable enough to me, and probably better than I could have come up with. Could an experienced science fiction author come up with something more interesting? Probably. Would it pass as good enough for handwavy background for a TV show or video game or book series? Also probably.

BrainDance
May 8, 2007

Disco all night long!

KillHour posted:

I'm not going to keep posting reams of chat logs because nobody cares about my stupid fake language, but I will share the one thing ChatGPT came up with that I could call "original" in the sense that I didn't predict it (the rest of that stuff was mostly obvious things I already thought of). I was genuinely stuck on making a plausible excuse for how a language like this could come to be without another way to communicate, so I asked for one and it came up with something pretty good, I think.

I think it's interesting, and I think that's where a lot of the creativity can come from; you gotta have the idea to get it to do something. And yeah, everybody is an idea person, but the fact that now everybody can be the idea person and actually get something out of it, I think, is cool as hell? Creativity isn't creativity if you lack technical skill? That's so counter to the modern art world, it just seems ridiculous.

I've been off and on working on a project I think is creative; it took a pause while I worked on another project, but now that that's done I'm getting back into it. So, I've trained models on schools of philosophy. I have a Daoism model I posted about on SA before that's really good (and an Erowid model trained on thousands of Erowid trip reports that just does drugs and is really funny).

But I realized, unless you tell it in the training data, the model doesn't actually know what I'm giving it is Daoism, it thinks it's whatever I tell it to think. So I started training them with other philosophies, like an even amount of the Daoist classics and the Stoic classics and then I just tell the AI that they're the same thing (or really, that they're both the kind of text that comes after the text "daosays:")

So it doesn't know, but it creates a fusion of the two. And not a fusion as in it says one line of Daoism and then another line of Stoicism but like a philosophy where the style and beliefs of one exist as a part of the other, and then it outputs a fake third philosophy.

And I think that's a kind of creativity. It's not really the AI being creative, it's me, but that's half the point I've been trying to make anyway.
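
For anyone curious what that looks like mechanically, here's a rough sketch of the data-prep step as I understand it from the post. The file names and the JSONL prompt/completion shape are guesses; whatever trainer you use will expect its own format.

import json

TAG = "daosays:"  # the shared tag; the model never learns the two corpora differ

def load_passages(path: str) -> list[str]:
    """Split a plain-text corpus into paragraph-sized passages."""
    with open(path, encoding="utf-8") as f:
        return [p.strip() for p in f.read().split("\n\n") if p.strip()]

def build_dataset(paths: list[str], out_path: str) -> None:
    """Interleave the corpora in even amounts under one tag and write JSONL."""
    corpora = [load_passages(p) for p in paths]
    n = min(len(c) for c in corpora)  # keep an even amount of each school
    with open(out_path, "w", encoding="utf-8") as out:
        for i in range(n):
            for corpus in corpora:
                row = {"prompt": TAG, "completion": " " + corpus[i]}
                out.write(json.dumps(row) + "\n")

# hypothetical corpus files
build_dataset(["daoist_classics.txt", "stoic_classics.txt"], "daosays.jsonl")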

SCheeseman
Apr 23, 2003

Hashy posted:

This supposes that an ad featuring a model that doesn't exist, that was never really photographed, and that is turned algorithmically into a trending design template by an AI isn't dramatically worse to look at than one modeled, photographed, manipulated and designed by a human, and it definitely is. And we're talking about all manner of commercial art - book covers, posters, video game art, effects and matte paintings in film/tv. Commercial art can be cool - it represents skill, mastery, passion, artistic insight and soulful interpretation. The people that make it trained their entire lives to make stuff like it and often continue to do it despite being poorly remunerated, just so they can continue to work on cooler and cooler stuff.

The reason billboards in general are frustrating is, as you say, they add a relative amount of white noise to the environment that makes it harder to find and take in what's worth appreciating. Likewise, it would be harder to enjoy a museum of classic paintings if half of the art was AI or otherwise fake and you didn't know which. Assuming you only get to see a certain amount of stuff in your lifetime - and that any cross section of culture is going to have more AI shit and less real shit - browsing DeviantArt, or the pages of a magazine, or whatever, it doesn't really matter: you're about to have your experience polluted.

And the moon thing is AI. The detail being filled in goes to the extent of generating a new moon wholesale from its imagination, based on what high-quality photos of the moon look like, and it's no less existentially horrifying to be surrounded by that than if your phone replaced the things you photographed with jpegs of those things.

It's a black box so we can only go on Samsung's word, but according to this article they use images of the moon as a direct reference, rather than the detail being generated purely through noise+inference via an AI model. Not quite overlaying a jpeg, but closer to that than to a hallucination.

That recent Aisis The Lost Tapes album highlighted to me how much people overestimate their ability to differentiate between human art and generated art, with social media posts and comments lamenting how the songs are soulless apparitions lacking the human touch, seemingly ignorant that the whole album was written by the band Breezer two years ago, entirely separate from any AI stuff. The only generated part of the songs was the vocal tracks, and even those were modifications of the original human-derived performances. It's glorified autotune, yet there are AI skeptics who allowed themselves to be deluded into thinking it was entirely generated, their own bias blinding them into seeing nothing where something was actually there.

If AI generated art can truly be a full end to end replacement (which I doubt, at least not until things go full AGI), to the point where you go to a gallery with human and AI art and are unable to tell the difference, that only uncovers a possibly uncomfortable truth: art can be about subjective impression just as much as expression of the author. Not that this is news, humans have been putting meaning into random noise since the beginning, spotting clouds that look like dicks, seeing Jesus on toast. By viewing noise, you can bring meaning to it.

SCheeseman fucked around with this message at 10:18 on May 11, 2023

Bel Shazar
Sep 14, 2012

SCheeseman posted:

If AI generated art can truly be a full end to end replacement (which I doubt, at least not until things go full AGI), to the point where you go to a gallery with human and AI art and are unable to tell the difference, that only uncovers a possibly uncomfortable truth: art can be about subjective impression just as much as expression of the author. Not that this is news, humans have been putting meaning into random noise since the beginning, spotting clouds that look like dicks, seeing Jesus on toast. By viewing noise, you can bring meaning to it.

The more AI is used, and improved upon, the more people will realize there is absolutely nothing special or unique about human existence or its artifacts.

Tei
Feb 19, 2011

Can somebody make an AI that does X?

Yes.

Can somebody make an AI that does X, and that AI spontaneously does Y?

Not yet. That is in the area of AGIs. And we can't build AGIs yet. Ask again in 50 years.

Bel Shazar posted:

The more AI is used, and improved upon, the more people will realize there is absolutely nothing special or unique about human existence or its artifacts.

My expectation is, once we develop the first AGI, the source code to do it will be between 20 and 50 lines of code.

What makes a human special is the experiences: the accumulation of all the experiences in our life, and how these experiences are related to each other. It's only data, but it's a fuckton of data.

Tei fucked around with this message at 13:14 on May 11, 2023

BrainDance
May 8, 2007

Disco all night long!

Tei posted:

Can somebody make an AI that does X?

Yes.

Can somebody make an AI that does X, and that AI spontaneously does Y?

Not yet. That is in the area of AGIs. And we can't build AGIs yet. Ask again in 50 years.

What? We've already seen this all the time. All the large models are able to do unpredicted things they weren't explicitly trained to do, but that they figured out how to do even though those things weren't exactly in their training data.

Ask ChatGPT to tell you a story in emoji, but allow it to only use emoji that fit a certain vibe or some other criteria. Basically, make it as unique a task as you can, one it won't have actual examples of, one that will require it to use multiple different concepts to figure out something it wasn't directly told. It can do it. This is really why emergent abilities need larger models; they're things that need a model of a certain complexity to figure out in the first place.

Current models can absolutely spontaneously do things they weren't exactly trained on; that's what makes them impressive in the first place.
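
For reference, a throwaway sketch of firing that kind of constrained prompt at the chat API; the model name, the prompt wording, and the 2023-era openai client are just example assumptions:

import openai  # pip install openai (the 2023-era 0.x client)

openai.api_key = "sk-..."  # your API key

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # example model; any chat model works
    messages=[{
        "role": "user",
        # a task unlikely to appear verbatim in any training set
        "content": "Tell a three-act heist story using only emoji that fit "
                   "a rainy 1970s noir vibe. No letters or digits.",
    }],
)
print(resp.choices[0].message.content)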

Imaginary Friend
Jan 27, 2010

Your Best Friend

Bel Shazar posted:

The more AI is used, and improved upon, the more people will realize there is absolutely nothing special or unique about human existence or its artifacts.
It will have to be able to experience all parts of life to actually reach the level of creativity of a human though, and I guess by then we'll all be robots anyway. Inspiration comes from everywhere, be it some cheesy photo of a sunset or eating a hairy coconut in a basement on a Wednesday. We blend all the senses of that experience into a memory that we can use to create something new.

BrainDance posted:

What? We've already seen this all the time. All the large models are able to do unpredicted things they weren't explicitly trained to do, but that they figured out how to do even though those things weren't exactly in their training data.

Ask ChatGPT to tell you a story in emoji, but allow it to only use emoji that fit a certain vibe or some other criteria. Basically, make it as unique a task as you can, one it won't have actual examples of, one that will require it to use multiple different concepts to figure out something it wasn't directly told. It can do it. This is really why emergent abilities need larger models; they're things that need a model of a certain complexity to figure out in the first place.

Current models can absolutely spontaneously do things they weren't exactly trained on; that's what makes them impressive in the first place.
By specifying the criteria, you're the one coming up with the basic idea, though. And just because it's been fed info about things you might not know about doesn't mean it doesn't exist in the data. It's only unpredictable because it's data that we might not have learned ourselves.

Imaginary Friend fucked around with this message at 15:57 on May 11, 2023

KillHour
Oct 28, 2007


SCheeseman posted:

If AI generated art can truly be a full end to end replacement (which I doubt, at least not until things go full AGI), to the point where you go to a gallery with human and AI art and are unable to tell the difference, that only uncovers a possibly uncomfortable truth: art can be about subjective impression just as much as expression of the author.

That's not exactly a new discovery though. Death of the Author was written more than 50 years ago.

Tei
Feb 19, 2011

Something I have seen a lot of artists do, and it's stupid, is record themselves painting, as a way to "watermark" that their creation is human-made.

It's stupid because you can train a Midjourney type of app to mimic that.

I want to tell them "Stop doing that", but I would not, because artists are very sensitive people and have already suffered a lot because of AIs and stuff in general. And I don't want to be another person yelling stuff at them. And it might be useful for a very short while, perhaps.


BrainDance posted:

What? We've already seen this all the time. All the large models are able to do unpredicted things they weren't explicitly trained to do, but that they figured out how to do even though those things weren't exactly in their training data.

Ask ChatGPT to tell you a story in emoji, but allow it to only use emoji that fit a certain vibe or some other criteria. Basically, make it as unique a task as you can, one it won't have actual examples of, one that will require it to use multiple different concepts to figure out something it wasn't directly told. It can do it. This is really why emergent abilities need larger models; they're things that need a model of a certain complexity to figure out in the first place.

Current models can absolutely spontaneously do things they weren't exactly trained on; that's what makes them impressive in the first place.

For some values of Y, that is true. I was making a more general comment.

gurragadon
Jul 28, 2006

Tei posted:

Something I have seen a lot of artists do, and it's stupid, is record themselves painting, as a way to "watermark" that their creation is human-made.

It's stupid because you can train a Midjourney type of app to mimic that.

I want to tell them "Stop doing that", but I would not, because artists are very sensitive people and have already suffered a lot because of AIs and stuff in general. And I don't want to be another person yelling stuff at them. And it might be useful for a very short while, perhaps.

I kind of get where they are coming from. When people are selling paintings, they are also selling themselves: their story, their ideas, and how those combined and went into the creation of the painting. It seems like they are a variant of the influencer, just in the art world. A way to give a background to their probably replicable paintings that gives them some kind of meaning that would make people want to purchase them.

Do they paint with an audience? They should if they don't, because painters are going to have to lean into human connections to make money.

Owling Howl
Jul 17, 2019

Imaginary Friend posted:

It will have to be able to experience all parts of life to actually reach the level of creativity of a human though, and I guess by then we'll all be robots anyway. Inspiration comes from everywhere, be it some cheesy photo of a sunset or eating a hairy coconut in a basement on a Wednesday. We blend all the senses of that experience into a memory that we can use to create something new.

I don't think that will suffice. Humans experience the world with warped senses and we're hardcoded to interpret sensory input in peculiar ways. We get high on dopamine, adrenaline, serotonin and oxytocin. We're scared of the dark or heights because our ancestors learned the hard way that they are dangerous. We can feel hungry or full and be horny, jealous, angry and on and on.

None of this has anything to do with intelligence, accumulated data or sensory input. It's our brains fucking around with us - changing how we perceive the world and the feelings we associate with it - and it's fundamental to the human experience. Intelligence or sentience lets us reflect and contemplate on those experiences, and the product of that is a fairly large fraction of art.

I've never had a sense of smell, but I can write a paragraph about the feelings and memories a scent invokes in me because I have read a lot of books with such descriptions. I'm sure there are blind people that could convincingly describe a sunset and deaf people that know that tires screech and whips crack. It seems to me that AI is basically doing that.

It can be effective but it's also a shallow mimicry with a lot of limits because it relies on descriptions of experiences and feelings that the anosmic/blind/deaf have no way of relating to. I can repeat, in my own words, the adjectives and feelings others have associated with an odor but if I try to be creative or describe an obscure odor I will fail in bizarre ways.

So an AGI in a robot body that could experience the physical world like we do might be creative in some way but unless you give it all the human biological and evolutionary baggage and weirdness we have it would still be limited and stunted in its interactions with human culture and art.

KillHour
Oct 28, 2007


We have examples we can look to now. There is tons of mass-produced stuff being fraudulently sold as locally handmade by an artist (or at least heavily implied to be). Go to any farmers market and look for the soap stalls. Most of those come from exactly the same place. Or those alpaca socks. Those socks didn't come from the alpacas they have for you to pet. They came from an industrial-scale alpaca farm, and the animals they have there are just for show.

Now, did they say outright that they made those socks by hand themselves? No, but they heavily implied it. If you go to a farmer's market, you expect local products, not someone acting as a fancy front for a big company, but that's exactly what you get.

Little "boutique" stores in touristy areas are the same way. Was that jewelery really locally made? You don't know unless you recognize the artist or you happen to find the same thing on Alibaba later.

I doubt this is a shock to anybody, but it's important to remember that these kinds of problems are already endemic, and I don't see a huge panic around them.

Blut
Sep 11, 2009

if someone is in the bottom 10%~ of a guillotine
That's almost certainly the future of many forms of art if AI keeps improving. The purists in this thread arguing about how AI art won't be accepted as art aren't entirely wrong, in that maybe 5% of people will agree with them and will happily pay a large price/convenience premium for real, human-made boutique art. But for 95%+ of people, if AI art is indistinguishable from human art, just as high quality, and cheaper/easier to access, they won't care.

It'll be the same as any other mass-produced industrial good. Think of the clothing market share for local handmade Made in America clothing vs mass-produced clothing from everywhere else. It's overwhelmingly in favour of the latter, despite the obvious moral/environmental/whatever else factors that should in theory play more of a role. But in reality they don't, because for most humans, if something ticks the quality/price/convenience boxes, that's all they care about.

Imaginary Friend
Jan 27, 2010

Your Best Friend

Owling Howl posted:

I don't think that will suffice. Humans experience the world with warped senses and we're hardcoded to interpret sensory input in peculiar ways. We get high on dopamine, adrenaline, serotonin and oxytocin. We're scared of the dark or heights because our ancestors learned the hard way that they are dangerous. We can feel hungry or full and be horny, jealous, angry and on and on.

None of this has anything to do with intelligence, accumulated data or sensory input. It's our brains fucking around with us - changing how we perceive the world and the feelings we associate with it - and it's fundamental to the human experience. Intelligence or sentience lets us reflect and contemplate on those experiences, and the product of that is a fairly large fraction of art.

I've never had a sense of smell, but I can write a paragraph about the feelings and memories a scent invokes in me because I have read a lot of books with such descriptions. I'm sure there are blind people that could convincingly describe a sunset and deaf people that know that tires screech and whips crack. It seems to me that AI is basically doing that.

It can be effective but it's also a shallow mimicry with a lot of limits because it relies on descriptions of experiences and feelings that the anosmic/blind/deaf have no way of relating to. I can repeat, in my own words, the adjectives and feelings others have associated with an odor but if I try to be creative or describe an obscure odor I will fail in bizarre ways.

So an AGI in a robot body that could experience the physical world like we do might be creative in some way but unless you give it all the human biological and evolutionary baggage and weirdness we have it would still be limited and stunted in its interactions with human culture and art.
Yeah, that's kind of what I tried to convey, but with dumber words. Let's say two people somehow got inspired to write a sci-fi story thanks to an experience with a hairy coconut in a basement. One might have a more negative memory of it because of the damp smell, ending up writing a grittier and darker story called Alien, and the alien eggs are the coconut. The other person didn't smell anything at all but remembers the hairy texture in greater detail, which could result in a totally different story, because the texture reminded him or her of a second experience with some other hairy ball. Also, he or she entered the basement thinking about a funny joke at the moment, so it's suddenly a sci-fi comedy called Spaceballs and the title is the coconut.

An AI that has these two stories in its dataset has no understanding of their origins, and prompting it to write a sci-fi story inspired by a hairy coconut in a damp basement will most likely just produce a story about a coconut in a damp basement. Maybe on a spaceship.

Imaginary Friend fucked around with this message at 18:59 on May 11, 2023

SubG
Aug 19, 2004

It's a hard world for little things.

Owling Howl posted:

So an AGI in a robot body that could experience the physical world like we do might be creative in some way but unless you give it all the human biological and evolutionary baggage and weirdness we have it would still be limited and stunted in its interactions with human culture and art.
I don't know if I'd say "stunted", but I do think that the reason why these discussions tend to run in circles is that concepts like "intelligence" and "creativity" are poorly-defined, and are probably without referent. In the sense that "intelligence" and "creativity" aren't a thing, but are instead entire constellations of behaviours which overlap in places and are mutually exclusive in others.

And at least as far as humans are concerned those things aren't substrate-independent (that is, they're not a consequence only of an abstract algorithm that happens to be running on human brains), nor are they independent of the environment. Put in slightly different terms: it isn't clear that human behaviour is inherently computational (or algorithmic) in nature. That's not to say that human behaviours are magical or that there's some abstract qualia that can't be reproduced or any metaphysical nonsense like that, just that if these things are true (and we have pretty good reasons to believe they are) then there are inherent limitations on how well they can be simulated by a purely computational model.

What I think current AI models are very good at doing is eliciting a sort of uber-pareidolia. Like, the overwhelming majority of stuff produced by current AI methods is...not great. Like, it's fucking amazing compared to where we were a year ago...that is, not able to do anything like this at all. So there's a wow factor in getting anything at all out of this. But a big part of the current AI magic show is the Geller effect: the AI has a lot of human assistants cherry-picking outputs and smoothing over inconsistencies ("well, it was wrong about this, but that doesn't really matter"). And to be clear: I don't think this will be true of AI-produced stuff indefinitely. I'm just observing that AI models are already having this effect (of "passing") even though the tools are still mostly pretty clunky, the models are still relatively primitive, and the outputs are generally in need of a lot of tweaking.

roomtone
Jul 1, 2021

by Fluffdaddy

(and can't post for 10 days!)

A lot of what I would say about this has already been said, but to be a bit more mundane about it - what are the legal implications around AI art? I mean in terms of the datasets they've used to train, and the usage of things generated from it. I assume there aren't many laws around this sort of thing because it's so new, but is the law likely to step in and regulate this in a way that businesses can't simply put a few prompts in and get finished artwork (when it gets to that point), because of all the human labour and rights it is actually built on?

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
The only regulation I see as likely to happen is regulation ensuring that only big businesses are able to use and create these tools while everyone else gets limited access. And even that I put as fairly unlikely.

SCheeseman
Apr 23, 2003

They might be able to legislate that, and it could have some effect on commercial use, but high-quality models are already out in the wild running on consumer hardware. Software is extremely hard to contain, and while it's currently expensive to train these models, there's evidence showing that quality doesn't necessarily correlate directly with the size of the dataset. Current models may be overshooting the amount of training effort required.

Imaginary Friend
Jan 27, 2010

Your Best Friend
The EU at least is currently writing some AI laws, but just as with any other technical breakthrough, it will probably take time before any are in place, and they will be shit for yet more time.

I think they are currently mostly concerned with finding tools to determine whether people are cheating with ChatGPT at schools and workplaces.

Tei
Feb 19, 2011

roomtone posted:

A lot of what I would say about this has already been said, but to be a bit more mundane about it - what are the legal implications around AI art? I mean in terms of the datasets they've used to train, and the usage of things generated from it.

1- That AI art can't be copyrighted.
2- Unfortunately, that the way Midjourney just opts out of copyright laws to steal their datasets is legal.
3- That using AI ART to do illegal stuff (like revenge porn) is illegal.
4- Maybe having to register your AI thing, if that is feasible, and in some territories. In the EU it is mandatory to register databases that collect citizens' data.
5- If the register thing ever happens, they may pass new laws that would be somewhat unnecessary, like your AI ART can't use people's faces without permission, or be considered offensive. But they may have to wait a bit before passing a law like this, to see if it is possible to really curtail what an AI ART thing can do. So far it seems the industry is self-censoring; that's why violence/sex/racism is banned in Midjourney/ChatGPT.

SaTaMaS
Apr 18, 2003
One annoying thing about using ChatGPT for coding is that whether you give it a great idea, a good idea, or a mediocre idea, it responds pretty much the same way: it will give some pros and cons and an example implementation. I'm not sure whether this is the RLHF conditioning trying to avoid hurting my feelings, or whether it really has no concept of a "good" implementation vs a "bad" implementation, assuming both are free of bugs.

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

Tei posted:

1- That AI art can't be copyrighted.
2- Unfortunately, that the way Midjourney just opts out of copyright laws to steal their datasets is legal.
3- That using AI ART to do illegal stuff (like revenge porn) is illegal.
4- Maybe having to register your AI thing, if that is feasible, and in some territories. In the EU it is mandatory to register databases that collect citizens' data.
5- If the register thing ever happens, they may pass new laws that would be somewhat unnecessary, like your AI ART can't use people's faces without permission, or be considered offensive. But they may have to wait a bit before passing a law like this, to see if it is possible to really curtail what an AI ART thing can do. So far it seems the industry is self-censoring; that's why violence/sex/racism is banned in Midjourney/ChatGPT.

I don't expect #2 to hold up to any court challenge. Too many companies' main profit center is rent-seeking on copyright ownership, Disney has better lawyers than the government, and the usual 'we're just a carrier' argument isn't going to fly when they're producing derivative works.

SCheeseman
Apr 23, 2003

The argument will probably be a fair use one, pointing to Google Books being ruled a transformative use in spite of the entire business model of that service relying on creating unauthorised scans of copyrighted books and serving unedited extracts for free, supported by advertising. Google attributes works to the authors, but that didn't stop a lawsuit from rights holders, and that AI generators don't spit out explicit copies is arguably as much of an extenuating circumstance.

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

SCheeseman posted:

The argument will probably be a fair use one, pointing to Google Books being ruled a transformative use in spite of the entire business model of that service relying on creating unauthorised scans of copyrighted books and serving unedited extracts for free, supported by advertising. Google attributes works to the authors, but that didn't stop a lawsuit from rights holders, and that AI generators don't spit out explicit copies is arguably as much of an extenuating circumstance.

AI generators being capable of spitting out explicit copies will be enough to prove that they're using copyrighted materials to create the backend for their for-profit service without licensing. It'll go over about as well as someone trying to start a new Netflix with nothing but DVD rips.

SCheeseman
Apr 23, 2003

They are capable of it, but it's not "normal" behavior and requires the model to overtrain on an image.

Google Books spits out copyrighted materials too, based on unauthorized scans that form the backend of their search engine, which is made to generate a profit. And how did it go over? Not well with publishers, but Google won regardless. I'm not saying it's a certainty that will happen again, it could go either way, just that you're being too dismissive of an outcome that has precedent in its favor.

The DVD rip comparison doesn't really hold up, as that's describing copying and reproduction of works verbatim, which isn't why anyone uses AI generators. In the instances where one does produce output close to a copy, it's more a bug than something intrinsic to the system's design.

SCheeseman fucked around with this message at 09:35 on May 13, 2023

Tei
Feb 19, 2011

Liquid Communism posted:

AI generators being capable of spitting out explicit copies will be enough to prove that they're using copyrighted materials to create the backend for their for-profit service without licensing. It'll go over about as well as someone trying to start a new Netflix with nothing but DVD rips.

I have seen models produce an image with watermarks... but only by asking the algorithm to draw the watermark as part of the instructions. To me that is cheating.
Also, not all models are created equal; some might have a smaller corpus of images than others.

SCheeseman
Apr 23, 2003

The quirky thing about fair use is that if you successfully argue your case, infringement and licensing don't matter. Google Books infringed on the copyright of the rightsholders of every book they scanned, but the transformative use and other extenuating factors, like only reproducing parts of works (noting that this is still technically infringement), assigning attribution and providing opportunities for publishers to monetize, gave them a pass.

Pointing out that AI models are based on copyrighted materials alone isn't enough for rights holders to win.

Lucid Dream
Feb 4, 2003

That boy ain't right.
We're in uncharted territory in so many ways these days, and I don't think we can really look at precedent too much. If AI turns out to be as disruptive as it sure seems like it's going to be, then it's pretty reasonable to expect there to be new laws and novel interpretations of existing laws in light of the impact of AI.

Tei
Feb 19, 2011

Lucid Dream posted:

We're in uncharted territory in so many ways these days, and I don't think we can really look at precedent too much. If AI turns out to be as disruptive as it sure seems like it's going to be, then it's pretty reasonable to expect there to be new laws and novel interpretations of existing laws in light of the impact of AI.


The current fast advancement of AI is an illusion.

Almost every advancement we have seen lately is based on the same ML algorithm; probably a lot of the stuff we see just runs TensorFlow.

So... it is very possible that we will soon reach the cap of the utility of that algorithm/approach.

There are many different AI ideas and strategies: expert systems, rule-based systems, machine learning, genetic algorithms. AI is not only machine learning systems.

Once we reach the cap of ML systems, we will go back to slow progress towards AGI.

Lucid Dream
Feb 4, 2003

That boy ain't right.

Tei posted:

The current fast advancement of AI is an illusion.

Almost every advancement we have seen lately is based on the same ML algorithm; probably a lot of the stuff we see just runs TensorFlow.

So... it is very possible that we will soon reach the cap of the utility of that algorithm/approach.

There are many different AI ideas and strategies: expert systems, rule-based systems, machine learning, genetic algorithms. AI is not only machine learning systems.

Once we reach the cap of ML systems, we will go back to slow progress towards AGI.

I feel like you're underselling the amount of disruption that is going to come from only what is available right now. These latest-generation LLMs can already perform many cognitive tasks that used to require a human, and it's pretty hard to imagine that the art stuff isn't incredibly disruptive in its own way. I'm not even invoking future progress in the underlying technologies when I say it seems like there will be a lot of disruption.

Lucid Dream fucked around with this message at 11:27 on May 13, 2023

roomtone
Jul 1, 2021

by Fluffdaddy

(and can't post for 10 days!)

I watched this video last night: https://www.youtube.com/watch?v=tjSxFAGP9Ss

It's a 47-minute takedown of all the defenses made around the AI art models just now, but I think it's all pretty solid and clear-headed stuff rather than coming purely from being pissed off, although the guy is very pissed off. I think I agree with him.

About the legal side of it, something I didn't know about was that there is a model by the Stable Diffusion people called Dance Diffusion which has been trained only on non-copyrighted material, unlike the image generators, which have just hoovered up everything indiscriminately. That's because the music industry would be litigious and come after them, unlike the visual arts world. This reveals to me how exploitative the image generators actually are.

It seems like the reasonable thing to do is to have these datasets be opt-in rather than either opt-out or no choice. I think that should apply across the board for any kind of dataset that involves intellectual property. That way, you slow down this wholesale replacement of meaningful careers with AI generation, respect the rights of creators, and maybe even create some income for artists by paying them to allow their work to be included in a dataset, or by giving them royalties when it is used in the generation of an image (although I don't know if that can be determined, but it's just one aspect of this).

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

roomtone posted:

About the legal side of it, something I didn't know about was that there is a model by the Stable Diffusion people called Dance Diffusion which has been trained only on non-copyrighted material, unlike the image generators, which have just hoovered up everything indiscriminately. That's because the music industry would be litigious and come after them, unlike the visual arts world. This reveals to me how exploitative the image generators actually are

What even is this argument? Yeah, the music industry has been more effective at their exploitative bullshit and at building legal arguments that let them tear down the competition on spurious copyright claims involving slight similarities (and in the process they completely destroyed the traditional music scenes that created so much great music in the US, and like the art industry they are basically only kept alive by the remaining copyright violations they haven't been able to crack down on or that they have successfully limited to only the big corps being allowed to do). Arguing that their successful shitty behaviour means image generators are exploitative is complete nonsense, whether or not AI image generators actually are. The music industry is *absolutely* exploitative.

I will say the whole AI art debate has really soured me on a lot of artists I was once a fan of, after it became obvious just how many people (including especially fellow artists) they are willing to straight up fuck over if it means they can eke out a year or two more of the status quo.

GlyphGryph fucked around with this message at 15:46 on May 13, 2023

BoldFace
Feb 28, 2011

roomtone posted:

I watched this video last night: https://www.youtube.com/watch?v=tjSxFAGP9Ss

It's a 47-minute takedown of all the defenses made around the AI art models just now, but I think it's all pretty solid and clear-headed stuff rather than coming purely from being pissed off, although the guy is very pissed off. I think I agree with him.

About the legal side of it, something I didn't know about was that there is a model by the Stable Diffusion people called Dance Diffusion which has been trained only on non-copyrighted material, unlike the image generators, which have just hoovered up everything indiscriminately. That's because the music industry would be litigious and come after them, unlike the visual arts world. This reveals to me how exploitative the image generators actually are.

It seems like the reasonable thing to do is to have these datasets be opt-in rather than either opt-out or no choice. I think that should apply across the board for any kind of dataset that involves intellectual property. That way, you slow down this wholesale replacement of meaningful careers with AI generation, respect the rights of creators, and maybe even create some income for artists by paying them to allow their work to be included in a dataset, or by giving them royalties when it is used in the generation of an image (although I don't know if that can be determined, but it's just one aspect of this).

Ultimately, none of that will make any difference. At the end of the day, what artists really care about is whether they can still make a career out of what they love doing in the next 5-10 years. It will suck just as much if their jobs are taken by ethical AI models. Regulating AI now will not make the long-term effects go away; it will just drive the companies to other countries.

roomtone
Jul 1, 2021

by Fluffdaddy

(and can't post for 10 days!)

GlyphGryph posted:

What even is this argument? Yeah, the music industry has been more effective at their exploitative bullshit and at building legal arguments that let them tear down the competition on spurious copyright claims involving slight similarities (and in the process they completely destroyed the traditional music scenes that created so much great music in the US, and like the art industry they are basically only kept alive by the remaining copyright violations they haven't been able to crack down on or that they have successfully limited to only the big corps being allowed to do). Arguing that their successful shitty behaviour means image generators are exploitative is complete nonsense, whether or not AI image generators actually are. The music industry is *absolutely* exploitative.

My point wasn't that the music industry isn't exploitative, or that music industry lawsuits mean that AI image generators are. AI image generators are exploitative because they've harvested an endless amount of hard work and creativity without permission, knowledge or recompense, and are using it for profit.

The comparison to the music industry is just interesting to note because that industry has more power; the AI companies have not been so bold as to just throw all copyrighted music into the training data along with everything else, in the way they have with images.

There is still profit being generated from your art - the difference is, now instead of a % of the take, and related ways to support yourself, you get nothing; the tech companies take all of it, including even the credit. It could be the same with any creative format, if it's just allowed to happen. Somebody is still going to profit from human creativity, it's just going to be people who have nothing to do with the creativity.

BoldFace posted:

Ultimately, none of that will make any difference. At the end of the day, what artists really care about is whether they can still make a career out of what they love doing in the next 5-10 years. It will suck just as much if their jobs are taken by ethical AI models. Regulating AI now will not make the long-term effects go away; it will just drive the companies to other countries.

Well yeah, ultimately that may be the case, but I was arguing for ethical datasets while being aware of that. It's about reducing the harm this inevitable technology will cause in the immediate and medium term, rather than trying to stop it existing. There may be ways to adapt creativity to this in the future, but I think it's insane and cruel to go full speed ahead as if the artists are nothing but a dataset at this point in time.

It not being a permanent or complete solution isn't an argument to just do nothing. Opt-in should absolutely be a requirement.

roomtone fucked around with this message at 17:13 on May 13, 2023


SubG
Aug 19, 2004

It's a hard world for little things.

roomtone posted:

AI image generators are exploitative because they've harvested an endless amount of hard work and creativity without permission, knowledge or recompense, and are using it for profit.
I think this fundamentally misidentifies the locus of the problem (or the "disruption", if that's how you want to think of it). If you're a textile worker in 19th Century Britain and you're suddenly out of a job because your work can now be done by a steam loom, it's not because the steam loom is exploitative.


And in this particular case, the imagined remedy (not allowing training on publicly-available data) doesn't actually solve the thing you're trying to identify as the underlying problem: you're just making the barrier to entry high enough that it's only feasible for plutocratically wealthy individuals and corporations to train bespoke AIs...which will then just as surely have precisely the same effects on human artists as AIs trained on publicly-available data would. Which I'd argue is worse. That is, if there's a technology where we're worried about its effect on workers, then it is strictly better to have it in the hands of as many people as possible, to have the barrier of entry as low as possible, because that makes it less likely that it'll end up in the hands of a cartel (explicit or de facto) that ends up controlling it. I think things would be better for human artists in a world where everybody has access to AI tools, as opposed to a world where Adobe (or whoever) charges a hundred bucks a month for AI image generation (and uses bots to scan the internet and send automated takedown notices for anything they think might've used their proprietary technology without permission).


I also think that the legal reasoning is off-base, in that training an AI on publicly-available data and then using it to generate images or text does not in and of itself infringe on anything. If a human with access to one of these AIs prompts it to produce, for example, an image of Mickey Mouse and then uses that image for commercial purposes, then that is infringing use. But I don't think that the fact that the technology can be used for infringing uses is an argument against the technology in general, any more than it was when similar arguments were made against, for example, the camera or photocopier or VCR.
