|
Kavros posted:I expect very little good from AI because of what it is being purpose-built towards. it's not going to be here to make our lives easier, it will be here to better extract profit for the benefit of a specific class of investor or owner-of-many-things. It is a technology well tuned to atomize and disenfranchise; and bear in mind, the false image generation abilities are getting pretty good and 2024 is a year and change away. This technology is a societal disaster.
|
# ¿ Mar 25, 2023 14:19 |
|
|
Insanite posted:I'm already seeing job postings for senior writers that mention using generative AI to augment productivity, and the tech appears good enough to do that--no need for a technical editor, nor junior writers to help with scut work. Machine translation already annihilated the technical translation labor market, and what's left there seems like it'll disappear almost completely.

I would avoid those people, because generative AI is poison for IP.
|
# ¿ Mar 31, 2023 16:48 |
|
Main Paineframe posted:I think you're exaggerating the Copyright Office's decision a bit.

Nope, it is very long-standing doctrine that machine output cannot be copyrighted; attempts to brute-force the copyright system have been anticipatable since Orwell's book kaleidoscopes. Machine- or animal-generated work, as opposed to stuff a human touched up, is not going to be allowed.
|
# ¿ Mar 31, 2023 17:29 |
|
Yes, but if you wiped away the human stuff - something that will likely be in the pipeline for both AI detection and further training of models - the assets themselves are free. That is a big problem for defense, and something no one with sense will touch.
|
# ¿ Mar 31, 2023 18:37 |
|
The correct response is to flatly ban it, because epistemic pollution is an existential risk. https://twitter.com/stealcase/status/1642019617609506816 Typical. This poo poo won't work long term any longer than crypto did, but the damage is spectacular.
|
# ¿ Apr 1, 2023 23:15 |
|
Bar Ran Dun posted:I agree epistemic pollution is an existential risk but we already got that without AI as long as social media, maybe media at all, exists.

Self-targeting AI is the methane to the current media's CO2.
|
# ¿ Apr 1, 2023 23:28 |
|
Char posted:Main Paineframe expressed most of my opinions, I'd rather try to reverse the question now - why should a generative model be treated closer to how a human being is treated, rather than how a factory is treated? It "speaks like a person" but is still a factory of content.

There is also another strong reason not to shift from where we stand now: it would allow giving the Clarkesworld treatment to the copyright system, and it would happen fairly quickly; it takes copyright trolling from problematic to an existential threat to the functioning of the system. Hell, there is a drat good argument for the least charitable copyright interpretations vis-a-vis AI specifically to avert this risk.

StratGoatCom fucked around with this message at 21:33 on Apr 4, 2023 |
# ¿ Apr 4, 2023 21:27 |
|
gurragadon posted:
Good. If it can't operate within data law and basic ethics, then it shouldn't operate at all.
|
# ¿ Apr 4, 2023 21:34 |
|
IShallRiseAgain posted:I think AI falls under the fair use category, its pretty hard to argue that what it is doing isn't transformative. There are super rare instances when AI can produce near copies, but this isn't something desirable and stems from an image being over-represented in the training data.

AI content CANNOT be copyrighted, do you not understand? Nonhuman processes cannot create copyrightable images under the current framework. You can copyright human-amended AI output, but as the base material is for all intents and purposes public domain, if someone strips out the human contributions, it's free. The inputs do not change this. Useless for anyone with content to defend. And it would be a singularly stupid idea to change that, because it would in essence allow a DDoS against the copyright system and bring it to a halt under a wave of copyright trolling. Stop listening to the AI hype dingdongs, they're as bad as the buttcoiners.
|
# ¿ Apr 4, 2023 23:51 |
|
IShallRiseAgain posted:I'm not saying anything about it being copyrighted? I'm saying that AI generated content probably doesn't violate copyright because its transformative. The only reason it wouldn't is because fair use is so poorly defined.

Machines should not get the considerations people do, especially ones backed by billion-dollar companies. And don't bring 'fair use' into this; it's an Anglo weirdness, not found elsewhere. If you can't pay to honestly compete - though not that it really matters, considering the lack of copyrightability consigns this tech to bubble status - then you can get hosed, it is really that simple.

StratGoatCom fucked around with this message at 00:11 on Apr 5, 2023 |
# ¿ Apr 5, 2023 00:06 |
|
IShallRiseAgain posted:Its people using AI to produce content though. Like I said before its not an issue for large corporations or governments. They already have access to the images to make their own dataset without having to pay a cent to artists. Its the general public that will suffer not the corporations.

Art is a luxury, one whose manufacture often keeps some very marginalized folks in house and home. I don't give a poo poo, especially as the companies have no real use for it, and indeed may go to lengths to ensure they're not exposed to it.
|
# ¿ Apr 5, 2023 00:23 |
|
Owling Howl posted:Moreover others will make these systems available no matter what. It doesn't really help an artist that Microsofts system isn't trained on his art while a Russian or Chinese system is flooding the internet with infinity "in the style of" images.

That the damage was already done by shithead techbros is no argument to double down, whether it was Uber or this. Now we can at least try and stem it somewhat.
|
# ¿ Apr 5, 2023 00:27 |
|
SCheeseman posted:Going the copyright route is less stemming the flow and more like trying to cover one of the outflows in a Y valve with a finger, you're just redirecting it to Adobe, Disney et al. How nice for them, I'm sure they'll be responsible with their monopoly.

As said before, uncopyrightable content is useless to those folks. There will always be dipshit Russian internet lawbreakers; that's not an argument against laws.
|
# ¿ Apr 5, 2023 00:40 |
|
Gentleman Baller posted:One thing I don't really understand here, is what the legal difference is between text interpreted computer generated works and mouse click interpreted generated works?

A human was controlling the mouse.
|
# ¿ Apr 5, 2023 00:54 |
|
SCheeseman posted:
The issue is that you can at least potentially pull that poo poo out of the human-amended work and be untouchable. It's a radioactive mess vis-a-vis chain of title, and by the way, it's not unprecedented; it is literally how things already work for nonhuman-made work.
|
# ¿ Apr 5, 2023 00:58 |
|
Gentleman Baller posted:Yeah, but my question is, what is the current legal difference between that and a human controlling the keyboard, if the human controlling the keyboard provided the exact hex code of every colour, text size, pixel location if every star etc. Would that be the same under current law or what?

No, because a human did it.
|
# ¿ Apr 5, 2023 01:07 |
|
SCheeseman posted:Sure, if everyone distributed everything in PSD format with unmerged layers. Even a relatively small change is enough to turn it into a derivative work. You're way overblowing the consequences of using uncopyrightable works in a greater work, something Disney has made billions of dollars doing.

By doing a lot of visual asset work and their own writing. And if you're gonna process it heavily enough to be sure nothing extractable of the original work remains for, say, an AI, at that point what was even the point of using the AI in the first place?

StratGoatCom fucked around with this message at 01:26 on Apr 5, 2023 |
# ¿ Apr 5, 2023 01:23 |
|
KwegiboHB posted:I'm physically disabled.

I have diagnosed motor issues among other problems and draw a disability pension, and I don't touch that poo poo, because I am not a person who shits on the disabled who make a living there.
|
# ¿ Apr 5, 2023 01:27 |
|
SCheeseman posted:I know. You should, though.

Then you want these scraped models nuked flat. If they can't pay or otherwise secure permission, then they can go to hell, if 'fairness' is to mean anything beyond computer touchers getting cheap toys.
|
# ¿ Apr 5, 2023 01:37 |
|
SCheeseman posted:That mass societal damage happens over and over again as a result of automation indicates to me that the problem sits higher up the chain and probably isn't solvable by using a framework created by a cartel of IP holders to further entrench their monopolies.

This isn't some IP cartel; this is enforcement of rights long standing in law in basically every legal system. The scraped material was under copyright from the instant the work was done.
|
# ¿ Apr 5, 2023 01:45 |
|
cat botherer posted:the answer is not to increase the power of IP holders.

For the hundredth time, this isn't that. This is merely using already-existing rules. Allowing it to be otherwise will in fact have the effect you fear, because it makes literally anything free real estate for billionaire bandits. Indeed, the point is laundering this behavior, much as crypto was laundering for securities BS.

quote:And a human is running the AI

So? It's no different than throwing dice, and we don't allow copyright of the output of that either. Procedures are not copyrightable.
|
# ¿ Apr 5, 2023 01:51 |
|
SCheeseman posted:There is no natural right to IP.

poo poo you'd know if you knew even the slightest bit of copyright law in America, Canada, or the EU:

quote:Copyright protection automatically exists as soon as a work is created and fixed in a tangible medium of expression. Protection applies to works at any stage of creation (from initial drafts to completed works) and to published and unpublished works...
|
# ¿ Apr 5, 2023 02:02 |
|
SCheeseman posted:Those are legal rights.

So? By training those models, they clearly crossed long-established lines in copyright law.
|
# ¿ Apr 5, 2023 02:05 |
|
SCheeseman posted:You said IP had long standing rights in law in basically every legal system, which I took for you to mean that it's a natural law with your talk of long standing, universal rights.

So? Those rights are automatic for actual human work, as opposed to machine output unless otherwise provided. They were violated quite openly on the assumption that artists could not exercise them.
|
# ¿ Apr 5, 2023 02:17 |
|
cat botherer posted:They haven't though. Training models on copyrighted things is nothing new. It's been going on for well over a decade. If it was crossing established lines, there would be case law on it by now. Can you actually point to evidence of your legal theories?

Given the commercial nature of these models, and that they create similar outputs WITHOUT permission, no, I do not think the fair use harbor applies.

SCheeseman posted:Except in cases where they aren't, given the jurisdiction and timeframe we're talking about. Right now the jurisdiction is the US in a time government is (mostly) favorable to big tech. Given court outcomes that find a fair use argument compelling, that precedent is likely to spread beyond the US' borders. You'll probably find it harder to defer to copyright law as if it's an arbiter of ethics once that has happened.

That is debatable, given the long-standing treaties this would violate, not to mention things like EU law, which may well be less charitable.
|
# ¿ Apr 5, 2023 02:28 |
|
SCheeseman posted:Google Books takes books scanned without permission and makes chunks of their contents available for free, also without permission, while selling ads. You might think it shouldn't apply, but that's an opinion, one that wasn't shared by the court.

There is a difference between that and creating unauthorized derivative works. And fair use is an Anglicism; it does not apply, for example, in Europe, where there is far less transfer of IP in paid work.
|
# ¿ Apr 5, 2023 02:32 |
|
SCheeseman posted:Yet Google Books is accessible in Europe, in spite of the legal challenges being fought off using a fair use argument.

Because it is useful enough to the original writers to be tolerated. This is blatant violation of significant international law that is only happening because the authorities are lagging, not some 'fair use case to liberate the poor oppressed computer touchers from the tyranny of the pen and paper users'.
|
# ¿ Apr 5, 2023 02:36 |
|
Main Paineframe posted:
|
# ¿ Apr 5, 2023 02:46 |
|
SCheeseman posted:Those were just refutations to arguments no one here has made.
|
# ¿ Apr 5, 2023 02:49 |
|
SCheeseman posted:lmao, no they're not? It's griping about how terrible all the players are, though the singling out of the AI startups makes me think they should open their scope a bit.

I agree that they are all terrible. I would refer you to Main Paineframe's post above.
|
# ¿ Apr 5, 2023 02:57 |
|
SCheeseman posted:It's a pretty good summary, notable in that it makes this out to be not so clear cut while you've been pretty firm that there is no legal basis at all.

It would seem to be a whole lot more clear-cut than that Google thing; it seems to me to be a classic unauthorized derivative work and a blatant violation of existing laws, without the compensating factors of what Google did.
|
# ¿ Apr 5, 2023 03:05 |
|
XboxPants posted:Yeah, that doesn't even seem to be the big issue to me. Let's say I'm the small artist who draws commissions for people for their original DnD or comic book or anime characters, and I'm worried how AI gen art is gonna hurt my business. Worried my customers will use an AI model instead of patronizing me. So, we can decide that we're going to treat it as a copyright infringement if a model uses art that they don't have the copyright for. That will at least keep my work from being used as part of the model.

I'm not sure Disney would ever use such a thing, tbh; they don't want anything that has to be disclaimed on a copyright holding. Outside of maybe some very early stages of concepting (not least because it may make it harder to go after leakers), I don't think they'd let it in, because it may make for hairy chains of title, and I suspect they'd be iffy about selling or exposing to the public something they didn't have rock-solid control over the outputs of. Right now, USCO might make that iffy.

StratGoatCom fucked around with this message at 03:40 on Apr 5, 2023 |
# ¿ Apr 5, 2023 03:34 |
|
KwegiboHB posted:That means you completely missed the shady fundraiser they've been doing to get the laws changed.

Do you have links, outside of the AI sphere?
|
# ¿ Apr 5, 2023 04:15 |
|
Pvt. Parts posted:To me, this extra middle-man step of "use one model to feed another model tokens" has no bearing on the ultimate effort. In this case, the two models can be treated as acting as one whole. You don't get to transfer responsibility by abstracting away individual steps of the causal process. We say "trespassing" for both the act of entering a protected space and the lingering therein; nothing useful is to be gained legally by considering each act separately if in the first place the act was implied to be committed by one party. You have committed the transformation of data from one form to a highly related other. Whether or not you have committed plagiarism is a function of how similar the end product is to its zygote, not the process by which it came to be.

It's like a crypto tumbler - it exists to obfuscate the origin of what went in; basically the same thing, but for IP. Frankly, both tagging and an ability to determine dataset contents are inevitable, because (a) there is a need to efficiently find misinfo, and (b) any other finding basically destroys the ability of the IP system to function. Granted, again: as crypto is to banks, this is to copyright. Also, lmao at that open source BS; it's not gonna be good for that scene if it's used as an end run around the law, and CC specifically needs a rewrite to prevent LAION-style chicanery. And the copyright infringement isn't the output, it's the model itself.

StratGoatCom fucked around with this message at 16:55 on May 22, 2023 |
# ¿ May 22, 2023 16:52 |
|
cat botherer posted:This would be a good point if these models copy images, but they don’t.

They do, fairly regularly, and it is quite possible to figure out what a model trained on.
|
# ¿ May 22, 2023 17:06 |
|
Private Speech posted:See my post above about that, the more complicated one.

Then you had better be able to produce a searchable dataset to prove it one way or the other, and if you can't, you need to train your net harder to avoid infringing outputs.
|
# ¿ May 22, 2023 17:16 |
|
Clarste posted:Case law can go wherever it wants, but if people with money don't like where it went they can buy a senator or 50. All I have ever been saying is that the law can stop it if it wants to, and all these arguments about the internal workings of the machine or the nature of art are pretty irrelevant to that.

More to the point, the stuff needed to make AI fully functional fucks with the complex systems of laws and finance - international laws and finance - that make media work, and especially now, you're not getting those reforms.
|
# ¿ May 22, 2023 19:00 |
|
SubG posted:I really don't think that "let's regulate a new technology without understanding anything about what it is or how it works" has been a winning strategy in the past.

You put in IP, it makes IP very much like it without paying the author. That's all that really needs to be known about this poo poo as far as regulation goes; anything else is being drawn into the weeds.
|
# ¿ May 22, 2023 23:19 |
|
SubG posted:So the capacity for infringing use is the only criteria? Is this unique to AI? Or does it also apply to cameras? Cell phone audio and video? Pencils?

AI, or things very likely to have been trained on such, yes.
|
# ¿ May 22, 2023 23:29 |
|
|
cat botherer posted:It sounds like you've decided "using something as training data" is not fair use, and you're working backward from there.

Because it isn't.

GlyphGryph posted:It can make IP very much like your IP without ever seeing your IP, is the problem. Because it turns out for 99.99% of artists, their work is composed entirely of elements that they did not create.

The thing is, if you don't make it very clear what's in it....
|
# ¿ May 22, 2023 23:57 |