BrainDance
May 8, 2007

Disco all night long!

I don't know too much about being a lawyer besides Better Call Saul and the stuff this woman I went to high school with (who went to the worst law school in America) posts online, but I imagine there has to be a point in the process where lawyers just realize AI is useful, right?

It doesn't have to be an AI sitting there in court arguing for the defendant, but if one lawyer realizes an AI can do a ton of the work in the process or construct arguments as good as or better than a human's before trial, they're gonna use it. Because not using it then would be a disadvantage.


BrainDance
May 8, 2007

Disco all night long!


There are a bunch of these emergent skills it taught itself.

And it's very cool. Yeah, some people are gonna blow themselves up following ChatGPT's guide to super meth. But then it'll also potentially teach itself biology and pharmaceutical chemistry and create a new cancer drug with synthesis instructions that might actually work.

BrainDance
May 8, 2007

Disco all night long!

archduke.iago posted:

This isn't going to happen. The rate limiting steps in pharmaceutical research are all related to experimental validation of efficacy, synthesis, and safety, none of which can be inferred by memorizing and regurgitating textbooks, or even papers.

It's an example to illustrate a point, Christ.

And it's literally already happening:
https://www.bcg.com/publications/2022/adopting-ai-in-pharmaceutical-discovery

https://www.technologyreview.com/2023/02/15/1067904/ai-automation-drug-development/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7577280/

I am regurgitating (and not very seriously) the thoughts of a childhood friend who does exactly that: he has a doctorate in pharmaceutical chemistry and looks for novel cancer drugs at UofM. I'm not literally saying ChatGPT will cure cancer, but that emergent properties of AIs can let them discover patterns in the information we give them that we otherwise wouldn't, and use those patterns to create new things like potential medicines. And for a lot of that work, the discovery itself is open to new suggestions.

BrainDance fucked around with this message at 13:18 on Mar 27, 2023

BrainDance
May 8, 2007

Disco all night long!

archduke.iago posted:

Did you actually read the articles you posted? Or did you just Google "ai in drug discovery" and pick three hits on the first page? Nothing in them comes close to your example of an AI-proposed novel molecule with a synthesis to boot, nor does the description from your friend. We *already* have far more proposed drugs than we have the capacity to test and approve, which is a problem that "AI" does nothing to solve.

The constant breathless misrepresentation of the capabilities of AI systems does nothing but further the interests of the programmers who develop them. If the computers are scary, dangerous, and capable, the only ones who can rein them in are the caste of AI priests at the top.

The "synthesis" was my own guess in my quick one off example that I typed literally as an example of potential capabilities of AI, not as me stating this is the thing that AI will definitely do in exactly this way. But as an example to show the kinds of things emergent abilities in AIs lead to which you took incredibly literally and as a complete thesis for some reason.

The articles are from my bookmarks, and absolutely do show the potential for AI in drug development among many, many other things. A quote from one itself, "AI can be used effectively in different parts of drug discovery, including drug design, chemical synthesis, drug screening, polypharmacology, and drug repurposing."

So, you can go ahead and argue that AIs won't do a thing I literally never said they would do. We may have plenty more drugs than we can currently test, but that doesn't mean help in identifying new pharmaceutical pathways for targeting different diseases is unhelpful or unwanted (otherwise, what are they even paying my friend for?)

"nor does the description from your friend" I didn't even tell you what his description was.

BrainDance
May 8, 2007

Disco all night long!

Main Paineframe posted:

I'm sure that "AI" will have some role to play in drug discovery, but that's very different from "ChatGPT" playing a role.

ChatGPT itself likely won't, but it's hard to say what even larger language models will be capable of because, like I was saying, we've seen a bunch of unexpected emergent abilities appear from them as they get larger.

What I was getting at from the start is this: it's not a surprise GPT-4 learned to do chemistry better than GPT-3.5, even if it wasn't really intentionally trained to. https://www.jasonwei.net/blog/emergence

Just making the models larger makes them more capable of doing things, sometimes things they weren't even explicitly trained to know or expected to know. Even when it comes to being factual, GPT-4 still hallucinates a lot, but look at very small models and they're not coherent enough to be accurate; look at larger models and they lie a lot and are wildly off base; larger, and they're more accurate; larger still, more accurate; etc.

A thing I've been saying, since I sorta stumbled into finetuning an incredibly small model (1.3B) into being roughly as good as GPT-3 on one specific task (but only that specific task), is that I think transformers can potentially be very useful for a huge variety of things we haven't really tried them on yet, if they're purpose-trained for that specific thing. GPT is very general, so it's going to be a limited jack of all trades. But a model that follows the same principles and is only trained to do one specific task, while being of the same overall complexity, might be very, very capable and very useful at that one task.

BrainDance fucked around with this message at 16:21 on Mar 27, 2023

BrainDance
May 8, 2007

Disco all night long!

Main Paineframe posted:

And what I'm getting at is that there's no real evidence that ChatGPT is capable of "doing chemistry" (a phrase that, by itself, really deserves to be specifically defined in this context), outside of a senator having an :awesomelon: moment.

It's like a couple of you guys are trying to take things extremely literally and completely missing the point. Yes, I am aware it can't really do chemistry. You need arms to do chemistry. ChatGPT is in some sense immaterial and has no real corporeal form, which you also need to do chemistry.

The whole point was that emergent abilities appear in AIs as they get larger, sometimes unexpectedly. Language models at a certain complexity are sometimes able to create a kind of new information from the things they were trained on. That was literally it. AI in a broader sense has applications in many areas. I believe LLMs as they get more complex, non-general ones especially, may be able to do some of this (not literally GPT-4 ChatGPT). What this will actually be in the future? It's not entirely predictable.

Edit:

Main Paineframe posted:

We just went from someone talking about ChatGPT doing chemistry to someone linking papers about ML drug discovery models as proof that it's plausible. That's a real apples-and-oranges comparison.

That is not at all what happened, and I can't see a way you could read the conversation and come to that conclusion. A person said AI wouldn't be used that way because we don't need assistance discovering drugs; we have too many drugs we can't test already. And those papers were cited to show that we already use AI for this, that there is already a drug discovery niche for AI.

BrainDance fucked around with this message at 18:30 on Mar 27, 2023

BrainDance
May 8, 2007

Disco all night long!

Main Paineframe posted:

I'm not just trying to do an inane "well it isn't actually doing physical actions" thing. There's also other questions like "is this actually a task that requires novel reasoning abilities" or "has any expert validated the output to confirm that ChatGPT isn't just 'hallucinating' the results". For example, plenty of people have gotten ChatGPT3 to play chess, only to find it throwing in illegal moves halfway through.

I know, I was taking you extremely literally there. I thought that would be obvious, given that I was interpreting what you said in an absurd way, right down to just nitpicking.

That's what I mean: it's obvious you're not being that literal, and it's insane to take it that literally. That was clearly the wrong way to take what you were saying, and it's what you were doing with my posts.

Maybe this is the problem with an AI thread, which was kind of mentioned in one of the other threads. People have very strong opinions and beliefs about it that they're going to project onto other people, whether that's what the person is saying or not. See the post above, where someone wildly misinterprets another person's attempt at simplifying a feature of ChatGPT and Bing AI.

BrainDance fucked around with this message at 01:20 on Mar 28, 2023

BrainDance
May 8, 2007

Disco all night long!

Good Dumplings posted:


The current generators infamously have problems consistently answering 1+1 = ? correctly, and that's the most obvious sign that pattern-matching is categorically different from reasoning: you can't add processing power or more examples of math and be able to be 100% sure that it won't answer fundamental problems like that wrong. You can be 99% sure, but then that 1% of the time comes up in an engineering spec, or a legal contract, and suddenly the theoretical difference between those concepts is painfully clear.

It's not that I disagree (I'm not sure), but I'm not really sure this is all that important. Language models aren't ever going to be a kind of copy of a whole brain, and brains don't work that way either: they're complexly interconnected parts, with some parts capable of some things and others capable of other things.

You probably do just know what 1+1 is without having to calculate it, but that's from exposure to the answer countless times, so it's just a fact to you. For other, more complex calculations, the part of you that knows your name or facts or how to speak doesn't really know the answer either; it gets it from another place. That's why people can have things like dyscalculia and otherwise function completely fine.

I made a script before that passes a prompt to W|A for dice rolls (which is more than you need for that, but it wasn't a serious project, more just a proof of concept) and then passes the results to GPT-Neo, and that worked. That seems to be the idea OpenAI has with the plugin things too. And that's basically how actual brains do it, so there's nothing too weird about doing it that way. And if the AI is able to interpret the input from another system or AI correctly then, I mean, it is what it is, right? If it works it works. I guess what I'm saying is, being bad at doing math isn't a sign of not having the capacity to reason, and if another system can integrate with the AI to give it that ability, that doesn't show it can't reason either.
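
The whole thing fits in a few lines. This is just a minimal sketch of the idea, not my original script: it assumes the requests and transformers packages and Wolfram's real "Short Answers" API, with a placeholder app ID and made-up prompt wiring.

```python
# Minimal sketch of the tool-use idea above: route the math/dice part of a
# prompt to Wolfram|Alpha, then hand the result to a local model to narrate.
# WOLFRAM_APPID and the prompt wiring are placeholders, not my original script.
import requests
from transformers import pipeline

WOLFRAM_APPID = "YOUR-APPID-HERE"  # placeholder; free keys exist on Wolfram's dev site

def ask_wolfram(query: str) -> str:
    # Wolfram|Alpha's Short Answers API returns one plain-text result
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

roll = ask_wolfram("roll 2 six-sided dice")  # e.g. "9"
prompt = f"The dice came up {roll}. The dungeon master says:"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```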

BrainDance
May 8, 2007

Disco all night long!

Carp posted:

BrainDance, I meant to ask this earlier: which small model were you fine-tuning? GPT-3 is open source and uses TensorFlow and other common libraries, so it could be fun to translate it from Python to C#. However, that's likely laborious. Alternatively, I could try fine-tuning a small model for a specific task, as you suggested, and spend less time coding.

By the way, have you come across the paper that discusses fine-tuning, or maybe training with supervised learning, a small model using the GPT-3 API? The paper claimed that it performed almost as well as GPT-3.

I was finetuning GPT-Neo, mostly 1.3B, because anything larger needed a lot of RAM (DeepSpeed offloads some of the VRAM stuff to normal RAM) and if you're using swap instead, the training time jumps from hours to days. LLaMA got a lot of people paying attention to this though, and now we can use LoRA with the language models, so I've been doing that with LLaMA.
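
For anyone curious what the LoRA bit looks like in practice, here's a rough sketch using Hugging Face's peft library. The rank, alpha, and target modules are illustrative defaults, not my exact settings, and the actual training loop (Trainer, your dataset, etc.) is omitted:

```python
# Rough sketch of attaching LoRA adapters to a small causal LM with peft.
# Hyperparameters here are illustrative, not my actual settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "EleutherAI/gpt-neo-1.3B"  # swap in a LLaMA checkpoint if you have the weights
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

lora = LoraConfig(
    r=8,                                  # adapter rank: small and cheap to train
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # the usual attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the full model
# ...then train as usual with transformers' Trainer on your dataset
```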

Didn't see that paper but if you find it again let me know.

BrainDance
May 8, 2007

Disco all night long!

StratGoatCom posted:

A human was controlling the mouse.

And a human is running the AI

But you're somewhat right that, currently, purely AI-generated stuff doesn't seem to be copyrightable because it doesn't have enough human creative input. Though whether it can have enough, and where that line is, is currently unknown (but it's being tested right now, and I suspect we will find out sooner rather than later now that Adobe's implemented AI stuff into Photoshop).

Like, if I draw an image and dump it into img2img, is that copyrightable? We don't know. My Deforum videos that rely heavily on math to direct the AI into animating things? We have no idea. Etc.

There is a ControlNet picture up for application right now, I think, so we'll see how that goes.

And it's not because an AI can't copyright anything. That was the justification in the Thaler case because he refused at every step of the way to acknowledge a human was at all involved. He wasn't trying to actually get a copyright but to make a statement and push copyright law to its limits, like he had done with some other things. And in that ruling they explicitly said, in effect, "we could be testing what level of human involvement is necessary, but we're not, because he wants the AI to be listed as the owner of the work and non-human things just can't do that."

In all other, real AI copyright scenarios the person running the AI is trying to get copyright, not the AI, and so the nonhuman nature of AI doesn't matter.

BrainDance
May 8, 2007

Disco all night long!

StratGoatCom posted:

So? It's no different then throwing dice, and we don't allow copyright of stuff from that either. Procedures are not copyrightable.

Well, like I said, that's why we can't just say AI-generated stuff is uncopyrightable on the grounds that an AI is not a human. That's not the argument of the USCO for the vast majority of copyright registration attempts, and that's the reason there is probably some line of human interaction where it is copyrightable.

Like the example of the current thing up for copyright: using ControlNet is not throwing dice, but will it be non-random enough? The only thing we can actually say right now is "we don't know."

BrainDance
May 8, 2007

Disco all night long!

reignonyourparade posted:

Not 'probably' there's some line where it's copyrightable, we already know there is in fact a line where it's copyrightable because it's already established to be somewhere below 'arrangement into the panels of a comic book,' it's just the exact place of the line that's in dispute.

Well, in that case the actual images in the panels are not copyrighted, which I think a lot of people wouldn't consider a big enough protection, at least for commercial use.

I can still take the Zarya of the Dawn images and do what I want with them, because they're the part that didn't get copyright protection.

BrainDance
May 8, 2007

Disco all night long!

StratGoatCom posted:

Because it is useful enough to be ignored to the original writer. This is blatant violation of significant international law that is only happening because the authorities are lagging, not some 'fair use case to liberate the poor oppressed computer touchers from the tyranny of the pen and paper users'.

What do you mean "useful enough to be ignored"? The Google Books thing wasn't ignored; it turned into a very large lawsuit, which Google won.

BrainDance
May 8, 2007

Disco all night long!

There's EleutherAI, which, while GPT-NeoX is behind now because the LLaMA models came out of nowhere and blew everyone away, is still a significant group with their open-source models and the Pile dataset.

BrainDance
May 8, 2007

Disco all night long!

Whoops, wrong thread.

BrainDance
May 8, 2007

Disco all night long!

Mega Comrade posted:

Well we got to 8 pages. A good run.

It's not an absurd thing; the absurd thing would be thinking we know anything one way or another. We don't even know a solid way to tell if anything else is conscious other than "we're conscious and it has a brain like ours, so good chance it's conscious."

If you're a panpsychist (which is as good an explanation as any) then it's not even special for it to be sentient. And David Chalmers, from what I remember, doesn't think AIs are conscious now (but definitely sentient, I guess, because everything is to some degree to him) but believes they will be.

Though I don't know what an experience would really be like without memory of its experiences; definitely not conscious in the way we are. Or, if it was, whether it would even have "bad" experiences, though I guess that just raises the question of what makes our bad experiences feel bad. But the memory thing's not even necessarily a roadblock, outside of it taking intense hardware to fine-tune large models.

BrainDance
May 8, 2007

Disco all night long!

There was this really cool artificial life game series called Creatures back in the day. It was really what got me into the early internet (it had a scripting language to create objects, and the creatures had artificial DNA and could evolve, so you could export them and share them. It was actually incredibly cool and way better than I'm explaining it). Like a really complex Tamagotchi.

But it really kinda mirrored some of this. Some people took it very seriously. And then one guy, Antinorn, started a website, "Tortured Norns," that was exactly what it sounds like. poo poo hit the fan and the community was divided. But what I remember is him getting a poo poo ton of death threats and stuff.

I guess this is a stupid story and not interesting at all. I have no real idea why Antinorn did it, but I think it's just a thing people are gonna do with anything that looks alive but they know isn't actually alive. Maybe because it feels kinda taboo?

BrainDance
May 8, 2007

Disco all night long!

reignonyourparade posted:

By my understanding LAION only ever itself contained a record of what the images were and where they were stored, not a copy of any of the individual images themselves.

Common Crawl (which is what LAION was based on) did actually contain copies of everything. But Common Crawl uses a fair use justification for it the same way archive.org does, and they actually do have a DMCA takedown process (which they probably aren't legally required to have, but they do anyway) and they identify themselves when they scrape.

BrainDance
May 8, 2007

Disco all night long!

Ridiculous compressors like the PAQ family do exist, and actually use neural networks in some way (though I don't know how, I never looked deeply into them), and have specific formats for compressed JPEGs.

But, yeah, it's exactly that. When something takes days to decompress, it's not super practical.

BrainDance
May 8, 2007

Disco all night long!

I know it's Reddit, so of course it's insane, but it's still a lot of people. Maybe it came up in this thread, but go check out the subreddit for Paradot to see a whole lot of people very seriously anthropomorphizing AI in a way that's not really just funny.

I think whatever Paradot is, it's not even a very complex model, since there was something about them advertising "millions to billions of parameters," which is not as exciting if you know how large most models are. But regardless, there are people taking it very seriously. As far as I can tell it's just a role-play thing for some people, but there were enough people that, while I'm not completely sure, I think it wasn't just that.

And I wasn't so much thinking "what if the AI tells them to kill themselves?" before, but more like: what if the company pulls the plug and they've made a really messed-up emotional attachment to this thing like it was real, and now it's just gone? Or what if they change the model in some way that ruins it for a bunch of people? Or start heavily monetizing things that they need to continue their "relationship"?

Like, I'm not saying "you better not take these dudes' AI they fell in love with!" I think that shouldn't be a thing that's happening (though I don't know a way to keep it from happening), I just think it could be really bad when it does.

BrainDance
May 8, 2007

Disco all night long!

duck monster posted:

Although that limitation would appear to be an artefact of design. It can't remember poo poo. Orthodox transformers aren't really *supposed* to learn. But it's not hard to imagine a fairly trivial update to the design to feed its buffers back into its training in real time.

I'm not disagreeing with you (this is a thing you could do now, literally with just a bunch of cron jobs or something to fine-tune it every night on what it learned that day)

But the hardware requirements are what makes that not practical with the very large models. Like, who's gonna pay for all those A100s? Cuz it'd be a lot, like a lot a lot, given how much it'd have to learn. We've gotten it down with LoRAs (which might be safer: if it learns something the wrong way that makes it worse, with LoRA you can just yank that day out), but still, that's intense for something like GPT-4. I don't even know how long it would take to do that even with OpenAI-type setups; the training time might still be over a day.
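
The "yank that day out" part is what makes the LoRA version appealing. A rough sketch of how that could look with peft's multi-adapter support (the adapter names and paths here are hypothetical):

```python
# Hypothetical sketch of per-day LoRA adapters: keep one adapter per nightly
# fine-tune, and delete any day that made the model worse.
from transformers import AutoModelForCausalLM
from peft import PeftModel

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

# Loading the first adapter wraps the base model in a PeftModel
model = PeftModel.from_pretrained(model, "adapters/2023-03-26", adapter_name="mar26")
model.load_adapter("adapters/2023-03-27", adapter_name="mar27")

model.set_adapter("mar27")    # use last night's learning
# ...evaluate; if mar27 made things worse, just pull that day back out:
model.delete_adapter("mar27")
model.set_adapter("mar26")    # fall back to the previous day
```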

I think the more practical way is a change in how transformers work, like an updated, more efficiently fine-tunable form of model, maybe one designed with a very intentional "memory spot" that can be added to quickly and that the model knows exactly what it is. Or just really upping the token limit to an insane degree, so that you really can fit everything it's learned into the prompt.

BrainDance
May 8, 2007

Disco all night long!

Clarste posted:

I assume the reason they immediately jumped to Intelligent Design is that saying humans are "designed," even in scare quotes, tends to neglect how arbitrary a lot of biological processes are. It presumes a goal that these systems strive for, rather than it simply being an accident that these things made themselves more likely to exist.

That's a thing, and yeah, that's completely true. But jumping to that kind of conclusion ignores the context of the post. It was obviously not meant as design with an intentional designer, just processes that work towards some sometimes-arbitrary thing (like, I get it, fitness to environment for evolution).

GlyphGryph posted:

I genuinely don't think the problem is the words people are using at this point, I think it's the people who insist on interpreting them in the most insane possible way that are the problem.

I said pretty much that earlier. A part of it is just that that's SA, but I really think it's something more particular to AI. Maybe it's that people immediately have strong opinions on it, but a lot of the AI questions are actually not things you can be super confident about yet; there are just things that haven't been established or aren't all that knowable now. But that doesn't sit right with people.

BrainDance
May 8, 2007

Disco all night long!

Tei posted:

AI ART is not creating art for people that are not artist. Is creating "Cool looking images". Artist could create art with "AI ART" tools, but only a minority are interested, many are put off by projects like Midjourney stealing the images, ignoring the copyright of these images, taking them in bulk, with metadata like "created by <name of author>" so people can write "make foo in the style of <name of author>".

Yeah, I heard the same thing about digital art in the mid-2000s, and back then it was my job to deal with that question. It was true then (and now): there was a ton of stuff just showing off cool-looking but hollow things people could do with Photoshop.

Amateur art is for "cool looking images"; the stuff that floods Reddit is, the stuff from people just messing around with a medium is. The internet is flooded with art that is just very cool but nothing else.

It's a new medium like any other medium and can be used to do all kinds of things. I actually don't think most professional artists are put off by it; some that I've seen are, but the majority are just sorta neutral and kinda have the attitude that it's not their medium, so whatever. But since it's a thing that's existed in any real way for less than a year, we haven't even seen what artists will come from it.

GlyphGryph posted:

Yeah but when people talk about AI art they don't care if it's actually art.

Is it beautiful? Does it represent what I want to see in a pleasing way? Does it generate the right response in the viewer?

If someone asks themselves these questions when making something, then they do care whether or not it's art, and they are, in fact, making art.

BrainDance
May 8, 2007

Disco all night long!

Owling Howl posted:

This discussion has been ongoing for like 50 years with people submitting works made by children, animals or computers to art galleries.

Way longer than that; that was like the whole lesson of the 20th century for art. Art can't be contained by really stiff rules.

I've been to outsider art/folk art/intuitive art exhibits that are absolutely, unquestionably art. Meaningful art. But stuff that wouldn't actually fit in a lot of people's definitions of art they give to exclude AI.

The art world, at least, is pretty unanimous on the artistic validity of appropriation, even when it's just "I changed the context for this commercial thing by calling it art, so now it's art."

BrainDance fucked around with this message at 04:12 on May 10, 2023

BrainDance
May 8, 2007

Disco all night long!

Liquid Communism posted:

The bigger point to me is that it's a force of stagnation. AI can't create, it can only interpolate from its training set.


But, how is that different from what humans do?

If it comes down to creativity, creativity isn't magically creating something from nothing. The only difference I can see is that for humans our "training data" includes our senses and AIs don't (yet) have that, but our sensory input is, I guess, our training data. Though that's not for much of a technical reason; there's nothing stopping someone from sticking a webcam somewhere, having everything it captures get captioned with CLIP, then finetuning Stable Diffusion with it.
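
A rough sketch of what that webcam pipeline could look like, assuming OpenCV for capture and the clip-interrogator package for the CLIP-based captioning (the Stable Diffusion fine-tuning step itself is omitted; this just builds the image/caption pairs that step would consume):

```python
# Hypothetical webcam-to-training-data loop: grab frames, caption them with a
# CLIP-based interrogator, and save image/caption pairs for an SD fine-tune.
import os
import cv2
from PIL import Image
from clip_interrogator import Config, Interrogator

os.makedirs("dataset", exist_ok=True)
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
cam = cv2.VideoCapture(0)

for i in range(100):  # grab 100 frames
    ok, frame = cam.read()
    if not ok:
        break
    img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    caption = ci.interrogate(img)  # CLIP-guided caption for this frame
    img.save(f"dataset/{i:04d}.png")
    with open(f"dataset/{i:04d}.txt", "w") as f:
        f.write(caption)           # sidecar caption, the usual SD training layout

cam.release()
```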

Anyone who thinks their creativity can do more than that should try to create a new color not connected to any colors we've already seen.

AI can absolutely create things that, while being connected to what it was trained on (or many, many things it was trained on), are conceptually and stylistically new. Especially when guided by a human who has an understanding of how to bring certain things out of latent space.

Warhol was a weird pick because his style could definitely be created with AI even if it wasn't trained at all on any pop art.

BrainDance fucked around with this message at 11:53 on May 10, 2023

BrainDance
May 8, 2007

Disco all night long!

Liquid Communism posted:

If you can't understand the difference between a script that puts pixels together based on what humans labeled training datasets as, then waits to see if the result provides enough of a pattern for human pattern recognition to think it's what they wanted and a human being creating representational art to try and convey their lived experience to others, I'm not sure I can help you.

Just saying "there's a difference" doesn't answer it. That's a huge copout non-answer. As far as we can tell the human brain does exactly that, its lives experience, the context of its art, is " training data" that's used to generate what it does.

When a human being creates representational art to try and convey their lived experience to others, what you described the AI doing is roughly what their brain does.

The AI's "lived experience" (heavy scarequotes) is definitely less real and on its own not meaningful compared to a humans, but as of now all AIs have to be operated by a human who does have meaningful lived experience.

BrainDance
May 8, 2007

Disco all night long!

Lemming posted:

Just saying "there's a similarity" doesn't answer it either. Human brains are extremely similar to chimp brains, but chimps also aren't really creative in the same way humans are.

You can't just claim things are vaguely similar and wave your arms and squint, the situation is a lot more complex than that


That's true, but when I'm careful and say there's a similarity, what I'm saying is: yes, there are differences (human brains create images in a stepped process, not diffusion), but those things aren't really what we're talking about. As far as creativity is concerned, I think it is exactly the same, with the only exceptions being what I said before: the differences in what the training data is made of, in that humans have senses, and that the way we do image generation now, the thinking is still done by the human running the AI. (It wouldn't have to be; it just is.)

With LLMs it's different; that really is unlike how humans generate speech. With the image generators it's really not much different at all from how you see and how you create a picture in your head.

This though

Lemming posted:

One of the key points is looking at how much training data being fed into the different models is affecting how good or "creative" they appear to be. They've jammed everything they possibly could into it, more words and images and text than any person could consume in thousands of lifetimes, and they still haven't figured out how many legs people have. The entire reason these models are impressive is because interpolating between things you've already seen is more powerful than we thought it was, but it's still fundamentally completely reliant on the training data.

Humans are not reliant on training data, not in the same way. Despite having access to only a fraction of the data those models are, humans can generate new information in a comprehensive way that understands what they're looking at. It's just fundamentally a completely different approach. You can use words that describe the processes similarly, but it's still not the same

No way; humans have access to far more training data than any AI, by a massive degree. Training data of different types, and the interconnectedness of different systems, which we're actually in the process of building into AIs right now. Every single thing you see or hear is another training sample. The human brain, even just the vision/thinking-in-images part, is also far more complex than even the largest AI, so we are wildly better at it.

But there really isn't anything fundamentally different that I've seen for the image generating models, at least not that's relevant to the question of creativity.

Human creativity really does just work that way. The alternative is what, magic? Pulling up new ideas from the ether? Your lived experiences and the things you sense create connections in your brain that lead to you creating new things from the properties of those experiences, just like how an image-generating AI creates a new image in a style that hasn't existed yet out of everything it's been trained on.

Rappaport posted:


You can say to an AI "write me a story about Lovecraftian horrors and it is also a detective story involving computers" and get a Laundry Files facsimile, but the AI doesn't make choices (from what I understand) in the same way a human author does, it just places words together in patterns that it has learned to be common in human literature. The AI doesn't choose the themes or the references it uses the same way a human does.

This gets into what it means for a human to make a choice: where do choices come from? I've got my ideas, but I'm not going to speak as confidently on that. Choices come from somewhere too, though; they're also not magic.

BrainDance fucked around with this message at 16:21 on May 10, 2023

BrainDance
May 8, 2007

Disco all night long!

People talk like it's ChatGPT writing prompts and just dumping them into Midjourney, churning out hundreds of pictures with no human involvement.

There's a human using it. There are amateurs who just type a few words and make something kinda cool looking, and then there are people using it for bigger, complicated projects. I don't think an entirely automated process can generate art either, but a human using a tool absolutely can.

BrainDance
May 8, 2007

Disco all night long!

KillHour posted:

I'm not going to keep posting reams of chat logs because nobody cares about my stupid fake language, but I will share the one thing ChatGPT came up with that I could call "original" in the sense that I didn't predict it (the rest of that stuff was mostly obvious things I already thought of). I was genuinely stuck at making a plausible excuse for how a language like this could come to be without another way to communicate, so I asked for one and it came up with something pretty good, I think.

I think it's interesting, and I think that's where a lot of the creativity can come from; you gotta have the idea to get it to do something. And yeah, everybody is an idea person, but the fact that now everybody can be the idea person and actually get something out of it, I think, is cool as hell? Creativity isn't creativity if you lack technical skill? That's so counter to the modern art world, it just seems ridiculous.

I've been off and on working on a project I think is creative; it took a pause while I worked on another project, but now that that's done I'm getting back into it. So, I've trained models on schools of philosophy. I have a Daoism model I posted about on SA before that's really good (and an Erowid model trained on thousands of Erowid trip reports that just does drugs and is really funny).

But I realized that, unless you tell it in the training data, the model doesn't actually know what I'm giving it is Daoism; it thinks it's whatever I tell it to think. So I started training them with other philosophies: an even amount of the Daoist classics and the Stoic classics, and then I just tell the AI that they're the same thing (or really, that they're both the kind of text that comes after the text "daosays:").

So it doesn't know, but it creates a fusion of the two. And not a fusion as in it says one line of Daoism and then another line of Stoicism, but a philosophy where the style and beliefs of one exist as a part of the other, and then it outputs a fake third philosophy.
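
The data prep for this is dead simple. A toy sketch (the file names are stand-ins for the actual source texts):

```python
# Toy sketch of the corpus trick: interleave two philosophies under one shared
# tag ("daosays:") so the model learns them as a single "philosophy".
import random

def load_passages(path: str) -> list[str]:
    # assumes one passage per blank-line-separated block
    with open(path, encoding="utf-8") as f:
        return [p.strip() for p in f.read().split("\n\n") if p.strip()]

daoist = load_passages("daoist_classics.txt")  # stand-in file names
stoic = load_passages("stoic_classics.txt")

# Even amounts of each, shuffled together, all under the same tag, so nothing
# in the training data distinguishes the two sources.
n = min(len(daoist), len(stoic))
mixed = random.sample(daoist, n) + random.sample(stoic, n)
random.shuffle(mixed)

with open("train.txt", "w", encoding="utf-8") as f:
    for passage in mixed:
        f.write(f"daosays: {passage}\n\n")
```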

And I think that's a kind of creative. It's not really the AI being creative it's me, but that's half the point I've been trying to make anyway.

BrainDance
May 8, 2007

Disco all night long!

Tei posted:

Can somebody make a AI that do X?

Yes.

Can somebody make a AI that do X, and that AI spontaneously do Y?

No yet. That is in the area of AGI's. And we can't build AGI's yet. Ask again in 50 years.

What? We've already seen this, all the time. All the large models are able to do unpredicted things they weren't explicitly trained to do, things they figured out how to do even though they weren't exactly in their training data.

Ask ChatGPT to tell you a story in emoji, but allow it to only use emoji that fit a certain vibe or some other criteria. Basically, make it as unique a task as you can, one it won't have actual examples of, one that will require it to combine multiple different concepts to figure out something it wasn't directly told. It can do it. This is really why emergent abilities need larger models: they're things that need a model of a certain complexity to figure them out in the first place.
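
If you want to try that kind of probe yourself, it's a few lines against the API. A sketch assuming the openai Python client; the task string is just one example of a compositional request that won't have verbatim training examples:

```python
# Sketch of probing a model with a novel compositional task via the openai
# client (expects OPENAI_API_KEY in the environment). The task is an example.
from openai import OpenAI

client = OpenAI()

task = (
    "Tell the story of the Odyssey entirely in emoji, using only emoji "
    "related to weather and sea travel. No letters or numbers."
)
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": task}],
)
print(resp.choices[0].message.content)
```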

Current models can absolutely spontaneously do things they weren't exactly trained on, that's what makes them impressive in the first place.

BrainDance
May 8, 2007

Disco all night long!

Tei posted:


poo poo like this:

You do not need to do any of this to fine-tune a model, and almost no one does. You can, and it's how I do it (I use the JoePenna repo for Dreambooth because I have a 4090, dammit, and I'm gonna use it), but the vast majority of people making Dreambooth models now don't write the little scripts, which actually aren't even hard, because you're just copying and pasting and then changing a couple of things.


Most Dreambooth training now is done in a little GUI in auto1111, though there are a bunch of others, some standalone, which involve just pressing some buttons and are no harder than any other graphic design tool, and much easier than learning how to use Photoshop.

If you want to talk confidently about AI you really should go learn more about where things are actually at in the AI world and how this stuff actually works.

I even wrote some guides on all this stuff that are on my website, and some I posted on SA, that are just long "mostly just copy and paste what I do, I'm holding your hand through the complicated stuff" walkthroughs to get people training their own language models (which has become infinitely easier since I wrote my guide, actually, but I still think it's important to understand the nuts and bolts if you can).

And there are tons of other people doing the same. All the crazy nerd stuff? The nerds are doing it for you, because we all recognized how massively important accessibility is for the open-source models.

BrainDance fucked around with this message at 23:51 on May 18, 2023

BrainDance
May 8, 2007

Disco all night long!

Liquid Communism posted:

The AI's entire 'memory' consists of its training set. Hence why you cannot remove something from said training set without retraining the AI, or it will continue to use what has been indexed.

It is incapable of creativity. It is simply pulling elements from training data that is tagged similarly to the prompt given.

This is a large part of why the EU is looking at it sideways, as present designs cannot comply with the GDPR both in proving they do not contain PII, or obeying right to be forgotten.

Literally every single detail about this is wrong.

I'm not trying to be a jerk; it just is. As others have mentioned, you don't seem to understand how AIs actually work, but you also don't seem to understand how human creativity works either. Human creativity is pulling things from our senses and memory and manipulating them into something new.

I don't paint, but I write, and I have had a good amount of creative work published. All of it comes entirely from things outside of me: things I've read before, things I've seen, things I've heard, etc. I made something new out of them, but it was that, out of them, not without them. Someone is just as capable of doing that with an AI, and we've seen that already. We've seen people have ideas for something new that they then create with the AI.

Like all my projects: the AI language models I've made by training the model on two kinds of philosophy and not differentiating them, to create something that fuses them. That is creativity, and I used the AI to accomplish it. The foundation for that creativity comes from the two philosophies themselves and, I dunno, life?

This should be very obvious, because the alternative would be either a completely random process or just magic, stuff coming from nowhere.

BrainDance
May 8, 2007

Disco all night long!

StratGoatCom posted:

You put in IP, it makes IP very much like it without paying the author. That's all that really needs to be known for this poo poo as far as regulation goes, anything else is being drawn into the weeds.

So then what about this?



This is by Sam Does Arts, the very anti-AI professional YouTuber who believes similarly.

I actually think he should be able to make and sell artworks like this, but if the criterion is "put in IP, it makes IP" in the vaguest sense possible, then it seems like this is off the table.

BrainDance
May 8, 2007

Disco all night long!

Tree Reformat posted:

If no legal action was taken, that implies either a licensing agreement was made, or the SG rights holders either don't know about it or don't care enough to pursue legal action.

I am almost certain he doesn't have one, unless he kept it very secret; while he's decently popular on YouTube, he's not at that level. Squid Game, though, Netflix has actually been protecting the IP for it more than most things (though mostly the name and logo). Not enough that they're going after every fan artist, but enough to know where they stand.

Tree Reformat posted:

I think that specific example actually does qualify as infringement, since so much is similar in the two compositions.

I don't think it would, but drat is it close. And with the Warhol thing, who knows anymore. Composition factors into whether infringement happened, and fair use is a case-by-case type of thing, but I actually think the complete change in style makes it transformative enough. We've seen far less transformative works be protected.

But, yeah, again, the Warhol thing. Before that I'd have said the bar for transformative was incredibly low. Pop art appropriation where just the context is changed had, apparently, been enough before. I think this is still more transformative than that, though.

I think it should be considered transformative; that's my belief about what's right. There's just an incredible irony that this is the work of a person who is one of the louder "AI is plagiarism because the models are derived from other artwork" people in the popular anti-AI culture, and he's even profiting from it.

Regardless, it's definitely far less transformative than using a huge number of pictures to create an algorithm based on denoising toward averages of them, which then creates something that contains no pixels or lines from those artworks, only a general sense of them when asked for it (in the vast majority of cases; the roughly 1% of cases of overfitting are considered a failure of the model and a thing to be fixed, and even those technically don't contain any actual pixels of the original image).


If it is infringing because of the Warhol decision, something that was expected to affect AI too (though I actually don't think it does very much, besides leaving more questions unanswered), then that is a really good example of how decisions that affect AI can also negatively affect other artists. I wouldn't say collateral damage here, because this was aimed at traditional art, not AI, but many of the things people are proposing would definitely include artists like him as collateral damage. That would be very hard to avoid.

BrainDance fucked around with this message at 00:41 on May 23, 2023

BrainDance
May 8, 2007

Disco all night long!

SCheeseman posted:

The bar for consciousness is pretty low, given the dictionary definition they pass it. Hell, trees might too.

They are very, very unlikely to be, and I don't know many people who would say they were. Consciousness is super fuzzy, but it usually needs some level of self-awareness, which ants very likely don't have.

They are probably sentient though, which just means there is an experience of what it's like to be an ant.

BrainDance
May 8, 2007

Disco all night long!

Bel Shazar posted:

Plants act with intention and present some amount of problem-solving capacity. Whatever consciousness is, everything living appears to have a modicum of consciousness.

But a modicum of consciousness is not consciousness; that's why words like sentience or protoconscious exist.

Plants do generate action potentials though, so I dunno. Though I don't think it matters, and I'd take it even further, because I think this is our best explanation for why things have experience. As much as anyone can think panpsychism is true, like Chalmers even says, it's kinda a "I believe this cuz I don't really believe all the other stuff, so whatever" kind of thing.

Yeah, I know the panpsychists talk about everything being conscious. I think that's really sloppy language, because it contradicts the major definition of what conscious is. They're arguing for universal sentience, and they should say that. A microexperience is not self-aware.

BrainDance fucked around with this message at 12:08 on May 24, 2023

BrainDance
May 8, 2007

Disco all night long!

StratGoatCom posted:

That you will get away with, but be bloody careful that you keep away from uses that infringe.

Infringing in this hypothetical future where the laws have changed or been decided to make it infringement?

I don't see this going the way you seem to (I can see some restrictions, some regulation, but not "delete all SD 1.5s from every hard drive immediately!")

It's already a moot point, as Adobe's model wasn't trained with any images they didn't have a license to, and at this point you're just making statements about some hypothetical future that we have no real sign of coming, other than it being what you want.

Which, then, what's even the point?

Yeah you better watch out in the future where it's illegal not to use AI.

That's a pointless argument. And that's basically what yours is.

BrainDance
May 8, 2007

Disco all night long!

Jaxyon posted:

LOL what is this poo poo.

Read the whole thing.

That's his argument; it's stupid on purpose. He's just making up a future that won't happen and then acting like that's just how it's gonna be. His argument is stupid in the same way that one is.

You can make any argument that starts with "in the future, assume X" and then reach any kind of conclusion you want. Except that's stupid and not a real argument.

BrainDance
May 8, 2007

Disco all night long!

Bar Ran Dun posted:

That’s amazing to me. These models are only copyrighted? They aren’t patenting these models?

They're all the same technology, variations of the same thing: transformer models and diffusion models. So there really isn't any new method to patent.


BrainDance
May 8, 2007

Disco all night long!

StratGoatCom posted:

If your model ate someone's stuff and it emulates it, you are not covered under fair use.

Capice?

Why?

Like, legally
