StratGoatCom
Aug 6, 2019

Our security is guaranteed by being able to melt the eyeballs of any other forum's denizens at 15 minutes notice


Kavros posted:

I expect very little good from AI because of what it is being purpose-built towards. it's not going to be here to make our lives easier, it will be here to better extract profit for the benefit of a specific class of investor or owner-of-many-things.

One thing I expect it will be really good at in the meantime is further destroying "community," since so much of community is online and it will be even more numbingly difficult to navigate seas of inauthentic communication and signal-to-noise ratios blown into bits even more efficiently than it was in an age of mere botting

It is a technology well tuned to atomize and disenfranchise; and bear in mind, the false image generation abilities are getting pretty good and 2024 is a year and change away. This technology is a societal disaster.


StratGoatCom


Insanite posted:

I'm already seeing job postings for senior writers that mention using generative AI to augment productivity, and the tech appears good enough to do that--no need for a technical editor, nor junior writers to help with scut work. Machine translation already annihilated the technical translation labor market, and what's left there seems like it'll disappear almost completely.


I would avoid those people, because generative AI is poison for IP.

StratGoatCom


Main Paineframe posted:

I think you're exaggerating the Copyright Office's decision a bit.

I'd say the Zarya of the Dawn outcome is of little consequence to your average closed-source software company. Even if the actual code itself isn't copyrightable, the code isn't usually being made available in the first place. And, to quote the Zarya decision, even if AI-generated material itself is uncopyrightable, the "selection, coordination, and arrangement" of that material by humans is still copyrightable, which is a big part of software development. Moreover, even the uncopyrightable parts can still become copyrightable if sufficiently edited by humans.

When you say "poison", it makes me think of "viral" IP issues like the GPL that will spread and "infect" anything they're mixed with, but the Copyright Office was pretty clear that the uncopyrightable status of generated material is quite limited and doesn't spread like that.

Nope, it is very long-standing doctrine that machine output cannot be copyrighted; attempts to brute-force the copyright system have been anticipatable since Orwell's book-writing kaleidoscopes. Machine- or animal-generated work, as opposed to stuff a human has touched up, is not going to be allowed.

StratGoatCom


Yes, but if you wiped away the human stuff - something that will likely be in the pipeline for both AI detection and further training of models - the assets themselves are free. That is a big problem for defending your IP, and something no one with sense will touch.

StratGoatCom


The correct response is to flatly ban it, because epistemic pollution is an existential risk.

https://twitter.com/stealcase/status/1642019617609506816

Typical. This poo poo won't work long term any longer than crypto did, but the damage is spectacular.

StratGoatCom


Bar Ran Dun posted:

I agree epistemic pollution is an existential risk, but we already have that without AI, as long as social media - maybe media at all - exists.

Self-targeting AI is the methane to the current media's CO2.

StratGoatCom


Char posted:

Main Paineframe expressed most of my opinions, so I'd rather reverse the question now - why should a generative model be treated closer to how a human being is treated, rather than how a factory is treated? It "speaks like a person" but is still a factory of content.

Part of the evaluation of human "training datasets" is that they're priced according to human capabilities - would you really let a generative model add a book to its dataset for $16 when it can write 10-100 stories per day?

Probably, not letting anything model-generated be eligible for copyright would be a good start, even if pretty much impossible technically; even then, there would be exactly zero money to be made in the art business, so society would need to find a way to incentivize the production of art, or be condemned to being fed the iterative regurgitation of previously created art.

The whole point is that this stuff should not be seen as human. No "if a person is allowed to do this, then" - it's completely off base.

There is also another strong reason not to shift from where we stand now: it would allow giving the Clarkesworld treatment to the copyright system, and it would happen fairly quickly; it takes copyright trolling from a problem to an existential threat to the functioning of the system. Hell, there is a drat good argument for the least charitable copyright interpretations vis-à-vis AI specifically to avert this risk.


StratGoatCom


gurragadon posted:


But if you made OpenAI get permission from every content creator to create ChatGPT, it would be prohibitively expensive in time and money. I mean, you're right, that does read as me complaining it would be too inconvenient to pay people for their work. But ChatGPT needs so much text, and it needs to be diverse if it's going to be useful, that I think the complaint is valid. It just seems fundamentally unworkable, with the way AI technology currently works, for one company to pay copyright fees on every piece of writing ChatGPT saw.


Good.

If it can't operate within data law and basic ethics, then it shouldn't at all.

StratGoatCom


IShallRiseAgain posted:

I think AI falls under the fair use category; it's pretty hard to argue that what it is doing isn't transformative. There are super rare instances when AI can produce near copies, but this isn't desirable and stems from an image being over-represented in the training data.

Also, the requirement to have a "copyright-safe" dataset just means media companies like Disney or governments will have control over the technology and nobody else will be able to compete. They have the rights to more than enough copyrighted content that they could build a pretty good model without paying a cent to artists. The only people that will be screwed over are the general public.

Although hopefully since the models are out there, no government will be able to contain AIs even if they crack down hard on them.

AI content CANNOT be copyrighted, do you not understand? Nonhuman processes cannot create copyrightable images under the current framework. You can copyright human-amended AI output, but since the base material is for all intents and purposes public domain, if someone strips out the human contributions, it's free. The inputs do not change this. Useless for anyone with content to defend. And it would be a singularly stupid idea to change that, because it would in essence allow a DDoS against the copyright system and bring it to a halt under a wave of copyright trolling.

Stop listening to the AI hype dingdongs, they're as bad as the buttcoiners.

StratGoatCom


IShallRiseAgain posted:

I'm not saying anything about it being copyrighted? I'm saying that AI-generated content probably doesn't violate copyright because it's transformative. The only reason it wouldn't is that fair use is so poorly defined.


Machines should not get the considerations people do, especially ones backed by billion-dollar companies. And don't bring 'fair use' into this; it's an Anglo weirdness, not found elsewhere. If you can't pay to honestly compete - though that isn't really meaningful, considering the lack of copyrightability consigns this tech to bubble status - then you can get hosed, it is really that simple.


StratGoatCom


IShallRiseAgain posted:

It's people using AI to produce content, though. Like I said before, it's not an issue for large corporations or governments. They already have access to the images to make their own dataset without having to pay a cent to artists. It's the general public that will suffer, not the corporations.


Art is a luxury, one whose manufacture often keeps some very marginalized folks in house and home. I don't give a poo poo, especially as the companies have no real use for it, and indeed may go to lengths to ensure they're not exposed to it.

StratGoatCom


Owling Howl posted:

Moreover, others will make these systems available no matter what. It doesn't really help an artist that Microsoft's system isn't trained on his art while a Russian or Chinese system is flooding the internet with infinite "in the style of" images.

Meanwhile, there is a lot of stuff that isn't copyrighted: everything on Wikipedia, Pixabay, tonnes of stuff on Flickr, etc. Every day there's more of it, and there will be people who want these systems to exist and will contribute content for them. The tech giants can probably even afford to build up their own portfolios over time. Strip out all the copyrighted stuff and you'll still end up with these systems.

That the damage has already been done by shithead techbros is no argument to double down, whether it was Uber or this. Now we can at least try to stem it somewhat.

StratGoatCom


SCheeseman posted:

Going the copyright route is less stemming the flow and more like trying to cover one of the outflows in a Y valve with a finger, you're just redirecting it to Adobe, Disney et al. How nice for them, I'm sure they'll be responsible with their monopoly.

As said before, uncopyrightable content is useless to those folks. There will always be dipshit Russian internet lawbreakers; that's not an argument against laws.

StratGoatCom


Gentleman Baller posted:

One thing I don't really understand here is what the legal difference is between text-interpreted computer-generated works and mouse-click-interpreted generated works?


A human was controlling the mouse.

StratGoatCom


SCheeseman posted:


If you want works to be uncopyrightable and any works based on them uncopyrightable, I don't think there is any precedent for this.

The issue is that you can at least potentially pull that poo poo out of the human-amended work and be untouchable. It's a radioactive mess vis-à-vis chain of title, and btw? It's not unprecedented; it is literally how things already stand for nonhuman-made work.

StratGoatCom


Gentleman Baller posted:

Yeah, but my question is, what is the current legal difference between that and a human controlling the keyboard, if the human controlling the keyboard provided the exact hex code of every colour, text size, pixel location of every star, etc.? Would that be the same under current law or what?

No, because a human did it.

StratGoatCom


SCheeseman posted:

Sure, if everyone distributed everything in PSD format with unmerged layers. Even a relatively small change is enough to turn it into a derivative work. You're way overblowing the consequences of using uncopyrightable works in a greater work, something Disney has made billions of dollars doing.

A work created by a human that includes non-human works doesn't have its copyright invalidated by the non-human work.

By doing a lot of visual asset work and their own writing. If you're gonna process it heavily enough that you're sure nothing of the original work is extractable from, say, an AI, at that point what was even the point of using the AI in the first place?


StratGoatCom


KwegiboHB posted:

I'm physically disabled.

I have diagnosed motor issues among other problems and draw a disability pension. I don't touch that poo poo because I am not a person who shits on the disabled who make a living there.

StratGoatCom


SCheeseman posted:

I know. You should, though.

Then you want these scraped models nuked flat. If they can't pay or otherwise secure permission, then they can go to hell if 'fairness' is to mean anything beyond computer touchers getting cheap toys.

StratGoatCom


SCheeseman posted:

That mass societal damage happens over and over again as a result of automation indicates to me that the problem sits higher up the chain and probably isn't solvable by using a framework created by a cartel of IP holders to further entrench their monopolies.

This isn't some IP cartel; this is enforcement of long-standing rights in law in basically every legal system. The scraped material was under copyright from the instant the job was done.

StratGoatCom


cat botherer posted:

the answer is not to increase the power of IP holders.

For the hundredth time, this isn't. This is merely using already-existing rules. Allowing it to be otherwise will in fact have the effect you fear, because it makes literally anything free real estate for billionaire bandits. Indeed, the point is laundering this behavior, much as crypto was laundering for securities BS.

quote:

And a human is running the AI

So? It's no different than throwing dice, and we don't allow copyright of stuff from that either. Procedures are not copyrightable.

StratGoatCom


SCheeseman posted:

There is no natural right to IP.

:hmmwrong:

poo poo you'd know if you knew even the slightest bit of copyright law, in America, Canada, or the EU, in order posted:

Copyright protection automatically exists as soon as a work is created and fixed in a tangible medium of expression. Protection applies to works at any stage of creation (from initial drafts to completed works) and to published and unpublished works...

...Generally, an original work is automatically protected by copyright the moment you create it. By registering your copyright, you receive a certificate issued by the Canadian Intellectual Property Office that can be used in court as evidence that you own it...

...When you create an original literary, scientific and artistic work, such as poems, articles, films, songs or sculptures, you are protected by copyright. Nobody apart from you has the right to make the work public or reproduce it


:wrongful:

StratGoatCom


SCheeseman posted:

Those are legal rights.

So? By training those models, they clearly crossed long-established lines in copyright law.

StratGoatCom


SCheeseman posted:

You said IP had long-standing rights in law in basically every legal system, which I took to mean that it's a natural law, given your talk of long-standing, universal rights.


So? Those rights are automatic for actual human work, as opposed to machine output, unless otherwise specified.

They were violated quite openly on the assumption that artists could not exercise them.

StratGoatCom


cat botherer posted:

They haven't though. Training models on copyrighted things is nothing new. It's been going on for well over a decade. If it was crossing established lines, there would be case law on it by now. Can you actually point to evidence of your legal theories?

Given the commercial nature of these models and that they create similar outputs WITHOUT permission, no, I do not think the fair use safe harbor applies.

SCheeseman posted:

Except in cases where they aren't, given the jurisdiction and timeframe we're talking about. Right now the jurisdiction is the US, at a time when the government is (mostly) favorable to big tech. Given court outcomes that find a fair use argument compelling, that precedent is likely to spread beyond the US' borders. You'll probably find it harder to defer to copyright law as if it's an arbiter of ethics once that has happened.

That is debatable, given the long-standing treaties this would violate, not to mention things like EU law, which may well be less charitable.

StratGoatCom


SCheeseman posted:

Google Books takes books scanned without permission and makes chunks of their contents available for free, also without permission, while selling ads. You might think it shouldn't apply, but that's an opinion, one that wasn't shared by the court.

There is a difference between this and creating unauthorized derivative works. And fair use is an Anglicism; it does not apply, for example, in Europe, where there is far less transfer of IP in for-pay work.

StratGoatCom


SCheeseman posted:

Yet Google Books is accessible in Europe, in spite of the legal challenges being fought off using a fair use argument.

Because it is useful enough to the original writer to be ignored. This is a blatant violation of significant international law that is only happening because the authorities are lagging, not some 'fair use case to liberate the poor oppressed computer touchers from the tyranny of the pen and paper users'.

StratGoatCom


Main Paineframe posted:



The sticker price of ChatGPT is irrelevant. It's a for-profit service run by a for-profit corporation that's currently valued at about 20 billion US dollars. Stability AI is fundraising at a $4 billion valuation. Midjourney is valued at around a billion bucks. There's no plucky underdog in the AI industry, no poor helpless do-gooders being crushed by the weight of government regulation. It's just Uber all over again - ignore the law completely and hope that they'll be able to hold off the lawyers long enough to convince a big customer base to lobby for the laws to be changed.

StratGoatCom


SCheeseman posted:

Those were just refutations to arguments no one here has made.

No, they are rather complete refutations of your nonsense about transformative work; the book indexing is tolerable because, among other things, it drives business to the holder. This, by contrast, is plainly an attack on copyright holders with little individual recourse.

StratGoatCom


SCheeseman posted:

lmao, no they're not? It's griping about how terrible all the players are, though the singling out of the AI startups makes me think they should open their scope a bit. I agree that they are all terrible.

The book indexing wasn't merely tolerated; Google won and set precedent that you can take a bunch of copyrighted works without permission, run them through a machine, have it spit out bits of them piecemeal without paying the owners a dime, and call it a transformative work under fair use when challenged in court. Driving business to the holder is a big ol' maybe; sometimes you just need to pull the quote, and then suddenly it's competing with the original work. It's all commercial too and is used outside the US' legal jurisdiction.

I would refer you to Main Paineframe's post above.

StratGoatCom


SCheeseman posted:

It's a pretty good summary, notable in that it makes this out to be not so clear cut while you've been pretty firm that there is no legal basis at all.

It would seem to be a whole lot more clear-cut than that Google thing; it seems to me to be a classic unauthorized derivative work and a blatant violation of existing laws, without the compensating factors of what Google did.

StratGoatCom


XboxPants posted:

Yeah, that doesn't even seem to be the big issue to me. Let's say I'm the small artist who draws commissions for people for their original DnD or comic book or anime characters, and I'm worried how AI gen art is gonna hurt my business. Worried my customers will use an AI model instead of patronizing me. So, we can decide that we're going to treat it as a copyright infringement if a model uses art that they don't have the copyright for. That will at least keep my work from being used as part of the model.

But, that won't protect me.

Disney, as has been noted, has an incredible library of art including every Marvel comic going back 70+ years. They may not have my works to use, but they'll just train a model based on the art they have access to.

So, as that artist who was doing commissions for people, I'm just as hosed now. My old customers just get a license to use Disney's AI now, instead of using Midjourney. StratGoatCom's suggestion hasn't helped me at all.

I'm not sure Disney would ever use such a thing, tbh; they don't want anything that has to be disclaimed on a copyright holding. Outside of maybe some very early concepting stages (not least because it may make it harder to go after leakers), I don't think they'd let it in, because it could make for hairy chains of title, and I suspect they'd be leery of selling or exposing to the public anything they didn't have rock-solid control over the outputs of - and right now, the USCO might make that control iffy.


StratGoatCom


KwegiboHB posted:

That means you completely missed the shady fundraiser they've been doing to get the laws changed.

Do you have any links from outside the AI sphere?

StratGoatCom


Pvt. Parts posted:

To me, this extra middle-man step of "use one model to feed another model tokens" has no bearing on the ultimate effort. In this case, the two models can be treated as acting as one whole. You don't get to transfer responsibility by abstracting away individual steps of the causal process. We say "trespassing" for both the act of entering a protected space and the lingering therein; nothing useful is gained legally by considering each act separately if in the first place the act was implied to be committed by one party. You have committed the transformation of data from one form to a highly related other. Whether or not you have committed plagiarism is a function of how similar the end product is to its zygote, not the process by which it came to be.

It's like a crypto tumbler - it exists to obfuscate the origin of what went in; basically the same, but for IP.

Frankly, both tagging and an ability to determine dataset contents are inevitable, because (a) misinfo needs to be found efficiently, and (b) any other outcome basically destroys the ability of the IP system to function. Granted, again: as crypto is to banks, this is to copyright.

Also, lmao at that open-source BS; it's not gonna be good for that scene if it's used as an end run around the law, and CC specifically needs a rewrite to prevent LAION-style chicanery. And the copyright infringement isn't in the output; it's the model itself.


StratGoatCom


cat botherer posted:

This would be a good point if these models copy images, but they don’t.

They do fairly regularly, and it is quite possible to figure out what a model was trained on.

StratGoatCom


Private Speech posted:

See my post above about that, the more complicated one.

Or the Campbell soup example, that works too.

Basically you can figure out similarities in the training dataset to a given image, but it doesn't mean that image was created with that dataset.

Then you had better be able to produce a searchable dataset to prove whether it does or not, and if you can't, you need to train your net harder to avoid infringing outputs.

StratGoatCom


Clarste posted:

Case law can go wherever it wants, but if people with money don't like where it went they can buy a senator or 50. All I have ever been saying is that the law can stop it if it wants to, and all these arguments about the internal workings of the machine or the nature of art are pretty irrelevant to that.

More to the point, the stuff needed to make AI fully functional fucks with the complex systems of law and finance - international law and finance - that make media work, and especially now, you're not getting those reforms.

StratGoatCom


SubG posted:

I really don't think that "let's regulate a new technology without understanding anything about what it is or how it works" has been a winning strategy in the past.

I also think that "just make inputting copyrighted material into machines illegal" is so comically overbroad that it's impossible to even estimate the fallout of such a regulatory approach. Like, are you suggesting that the Betamax case was wrongly decided?

You put in IP, and it makes IP very much like it without paying the author. That's all that really needs to be known about this poo poo as far as regulation goes; anything else is getting drawn into the weeds.

StratGoatCom


SubG posted:

So the capacity for infringing use is the only criterion? Is this unique to AI? Or does it also apply to cameras? Cell phone audio and video? Pencils?

For AI, or anything very likely to have been trained on such, yes.


StratGoatCom


cat botherer posted:

It sounds like you've decided "using something as training data" is not fair use, and you're working backward from there.

Because it isn't.

GlyphGryph posted:

It can make IP very much like your IP without ever seeing your IP, is the problem. Because it turns out for 99.99% of artists, their work is composed entirely of elements that they did not create.

Does that make it okay to use? As long as it didn't look at your stuff in particular, no problem, even if it gens the same stuff?

The thing is, if you don't make it very clear what's in it....
