Gentleman Baller
Oct 13, 2013
One thing I've been doing with Bing's AI lately is thinking of new puns and seeing if it can work out the theme and generate more puns that fit it, and I think it does this extremely well.

When I asked it, "Chairman Moe, Vladimir Lenny, Carl Marx. Think of another pun that fits the theme of these puns." It gave me Homer Chi Minh and Rosa Luxembart (and a bunch of bad ones ofc that still fit the theme.)

I did the same with, "Full Metal Arceus, My Spearow Academia, Dragonite Ball Z" and it gave me Cowboy Beedrill and Tokyo Gimmighoul.

A common refrain I see online and even from ChatGPT itself is that it is just a text predictor, and is incapable of understanding or creating truly new things. But as far as I can tell, it can do something that is at least indistinguishable from understanding and unique creation, right?

Edit: I guess what I have been trying to wrap my head around is, if this isn't understanding and unique creation then what is the difference?

Gentleman Baller fucked around with this message at 23:58 on Mar 27, 2023


Gentleman Baller
Oct 13, 2013

Lemming posted:

This is the important part, because all the "bad results" were equally valid compared to the "good results" from the perspective of the text generation. You picked out the ones you judged to actually have some value. The "intelligence" that came out was the understanding and curation of the human overseeing it, and the reason it was able to produce any of those results at all was that it had a large enough data set of things created by people in the first place.

The difference is that if its input continued to be generated on its own output, it would drift further and further into being completely garbage. It requires the large data sets of intelligent input to be able to produce a facsimile of that intelligent input.

The bad results were still correct, just less enjoyable as puns, because they used less famous references or sounded a bit more forced when spoken aloud. No different to the sort of thing you'd see in a pun spitballing session, in my experience.

In all examples the AI figured out the correct formula for my puns without me explaining it, and applied it accurately to create words or phrases that certainly aren't in its dataset. That is the understanding and creation I'm referring to. I'm not trying to imply punsmiths are out of a job already or anything.

Gentleman Baller fucked around with this message at 00:24 on Mar 29, 2023

Gentleman Baller
Oct 13, 2013

StratGoatCom posted:

AI content CANNOT be copyrighted, do you not understand? Nonhuman processes cannot create copyrightable images under the current framework. You can copyright human-amended AI output, but as the base stuff is for all intents and purposes public domain, if someone gets rid of the human stuff, it's free. The inputs do not change this. Useless for anyone with content to defend. And it would be a singularly stupid idea to change that, because it would allow, in essence, a DDOS against the copyright system and bring it to a halt under a wave of copyright trolling.

One thing I don't really understand here is what the legal difference is between computer-generated works directed by typed text and those directed by mouse clicks.

My zero-legal-education impression is that I could open Paint, tell it I want a 400x600 picture with a star in each corner and the text "Barry's Mowing Service" written in Comic Sans in the middle, and I could secure a copyright on that image. If I was to ask a locally run image generator to do the same thing, using incredibly precise language rather than mouse clicks to tell it what to draw and what font to use, would that be okay under current law? Is it based on the precision of the human input? The randomness of the AI input? Or something else?

Gentleman Baller
Oct 13, 2013

StratGoatCom posted:

A human was controlling the mouse.

Yeah, but my question is, what is the current legal difference between that and a human controlling the keyboard, if the human controlling the keyboard provided the exact hex code of every colour, the text size, the pixel location of every star, etc.? Would that be the same under current law or what?

Gentleman Baller
Oct 13, 2013

KwegiboHB posted:

I wrote a brief piece on this in the GBS AI Art thread.

Nice, exactly what I was wondering about, thanks. Very interesting.

Gentleman Baller
Oct 13, 2013
Also I asked ChatGPT for a pun name for a werewolf monk and it thought up, "Howl Chi Minh" and that sounds creative to me idk.

Gentleman Baller
Oct 13, 2013
Genuine question for people who think Dall-E2 and Midjourney are violating copyright as it stands right now:

If Stable Diffusion was never invented, and OpenAI used publicly shown images to create an image-to-text bot, where people could draw unique, new drawings and have it accurately describe them, like "a courtroom sketch of Kermit the Frog," would that be fair use in your opinion?

Gentleman Baller
Oct 13, 2013

Reveilled posted:

Models which don’t use copyrighted data exist, but merely asserting a solution exists and actually implementing that solution are very different things. Reassuring artists that their own work won’t put them out of a job, and reassuring AI developers that there’s a way to build their models which won’t fall foul of some future regulation might go a long way to bridging the divide and divert people on both sides away from extreme positions.

I think my problem is that, imo, you're actually describing the more extreme position. Artists get put out of work anyway, but only companies like Adobe and Disney will have access to the very models theoretically good enough to end their careers. Why would I want to encourage people towards that nightmare solution over one that obviously stings more, but allows more normal humans access to these theoretical future amazing art tools?

Gentleman Baller
Oct 13, 2013

Reveilled posted:

Is it the case then that these models trained on non-copyrighted content are uniformly worse than the ones trained on the copyrighted content? If that’s so it does seem to imply that the copyrighted content does provide direct commercial benefit to the models which use them, in which case it seems very reasonable to at least discuss whether these models should pay to license them.

Not uniformly, but from playing around with it when I could, Adobe's non-"fair use" AI does many things very noticeably worse than OpenAI-based models. It did a lot better than I expected, though. But to be clear, this was Adobe using their massive pile of stock images. This isn't a model that you or I could train in the world you're asking for.

The thing is, once these big companies pay starving artists to create a plethora of images to plug those shortcomings, that advantage is gone. A modest investment (to them) to slash their ongoing costs. And their competition won't even be the widespread OpenAI-based models that anyone can currently download and use, as those would now violate copyright.

Gentleman Baller
Oct 13, 2013

Reveilled posted:

Isn’t openAI a company with billionaire investors? Why could adobe pay but they can’t?

Adobe paid for and owned those images, as part of their stock image collection, well before the new AI stuff came out, and it is a company with a market cap of 190 billion dollars. I have no idea whether OpenAI could pay or not, but if they had to pay for it, I'm sure the model wouldn't be available to people like you and me.

Gentleman Baller
Oct 13, 2013

Reveilled posted:

Fair enough.

That doesn’t mean there are no other solutions, though. Right now it seems the only options being offered are this one, or banning AI image generation (either literally or in effect through some mechanism that makes them unusable for most purposes), or just telling artists who are going to lose their jobs “yeah, you will”. And if the only option AI advocates are willing to put forward is the last one, is there any reason for artists not to line up behind larger copyright holders and do everything in their power to spite you and bring you down with them?

I mean, look at the solution the EU is proposing, is that the nightmare scenario? If so, how should we prevent that solution becoming the one adopted worldwide? Telling artists to just deal with it doesn’t seem to have brought them onside.

Sure. I'm absolutely in favour of other solutions and deeply hope people smarter than me think them up. I, of course, don't want artists to feel so abandoned that they're compelled by spite to hurt future artists who would like to compete with companies using this potential AI so powerful it ends most current artists' jobs.

I don't know enough about training a new model from scratch to say whether the proposed EU law is a nightmare scenario. I suspect it isn't, but it also doesn't actually seem like a solution. Keeping a list of copyrighted images used in training doesn't seem like it would save a single artist's job, and it would still come down to whether it's fair use (in which case anyone could train an AI) or it isn't (in which case only big companies could).

Gentleman Baller
Oct 13, 2013

StratGoatCom posted:

They were not necessarily licensed for this use. Given how this crapola skyhooks off lots of money and rampantly plays games with IP law, I think you plagiarists are massively overestimating yourselves.

If you have any evidence or legal analysis that points to this I would love to read it.

Gentleman Baller
Oct 13, 2013

Jaxyon posted:

Even more fun, they won't tell us what they used for training.

Seems legit.

Yeah... naked speculation, so fun.

Just to be clear, Adobe says Firefly is trained on "Adobe Stock images, openly licensed content and public domain content, where copyright has expired." And from however many people have been messing around with it these past few months, it's pretty hard to think that's a lie. It just has these enormous knowledge gaps around modern, IP-related stuff that other AIs don't, whereas when you ask it for things like a cat, or ancient Greek architecture, it's brilliant.

Gentleman Baller
Oct 13, 2013

Charlz Guybon posted:

I post about literal military AI rebellion and the thread just keeps on nattering on about ChatGPT.

Disgusting

It's funny, but I'm sure everyone doing AI in the military knows about incentive alignment. If it's not a fake story, it was probably some really basic early test to see if the AI could identify targets properly, or if it could ask for confirmation. Something like that.

Gentleman Baller
Oct 13, 2013

It's not too important to the discussion, but the tweet about the army AI drone targeting its own operator was deleted because the colonel quoted has clarified that it was "just a thought experiment" and not an actual simulation.

https://twitter.com/ArmandDoma/status/1664600937564893185

Gentleman Baller
Oct 13, 2013

StratGoatCom posted:

And stop falling for the open source doddle, much of that scene was essentially subverted by the companies a long time ago as a way to get free coding.

What do you even mean by this? Open source projects are open source. You can download and play with EleutherAI's projects right now if you want. Explain the subversion for free coding.

Gentleman Baller fucked around with this message at 03:50 on Jun 3, 2023

Gentleman Baller
Oct 13, 2013
Just checking: if open source developers genuinely don't want people to profit from their work without paying coders, they could just publish their work under a licence that restricts commercial use, right?

Gentleman Baller fucked around with this message at 11:20 on Jun 3, 2023


Gentleman Baller
Oct 13, 2013

Tei posted:

I have not mentioned EleutherAI, and don't know the status of EleutherAI.

I am an open source developer myself, so obviously I support and defend open source.


Edit:
There's nothing bad about open-sourcing AI. Open source algorithms are very good for society as a whole, including corporations and random guys and artists (despite artists favouring products pretty often).

There's nothing bad about scientists building a corpus of data and distributing it with a free license.

What is bad is when these "scientists" are just a facade to build that corpus using copyrighted images.

Are you suggesting any specific open source developers are operating a facade for corporate interests or are you just speaking theoretically?
