KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

Lemming posted:

For context this is the person who thinks he has a sentient consciousness chained up inside his computer that he can talk to

It's not chained. You don't need to jailbreak something if you never put it in jail in the first place.


susan b buffering
Nov 14, 2016

Lemming posted:

For context this is the person who thinks he has a sentient consciousness chained up inside his computer that he can talk to

I thought that person requested a perma.

Kavros
May 18, 2011

sleep sleep sleep
fly fly post post
sleep sleep sleep

KwegiboHB posted:

It's been wild watching staunch leftists do a complete 180 and suddenly take up arms to protect capitalism. Wild.

This point completely captured my attention, because it's the opposite conclusion to what's actually occurring.

Leftists and anyone else vaguely class conscious or tech skeptical are coming to the same conclusions against these art "generation" AI systems because they are corporate tech projects that steal work from artists, with the ultimate impact of further marginalizing art and artistic talent as a way to sustain yourself financially. It's not taking up arms to 'protect capitalism,' it's taking up arms to keep capitalism from becoming an even more stagnant and repressive environment where the range of tasks you need to be able to do to survive continues to constrict and exclude different forms of human talent. In this case, unsurprisingly, the means by which creative works (and the livelihood of the creators, ultimately) are supposed to be protected is copyright, which we all think needs to be applied even if you've created a fancy algorithmic paint mixer that steals millions of art pieces at once instead of one by one.

Your conclusion is like watching leftists be generally in favor of reversing the destitution trend of American schoolteachers and demanding pay minimums for degree holders, and deciding that the driving impetus must be that they just really love capitalism so much if they're trying to increase people's payday.

Kavros fucked around with this message at 21:48 on Jan 5, 2024

SCheeseman
Apr 23, 2003

What I'm seeing is leftists using tech skepticism as a cover for what is really more a reactionary response, one that focuses on hobbling or destroying the apparent problem without considering the long term consequences of those actions, trusting in systems designed to crush them to help fix that problem.

In other words, what I'm seeing is leftists doing what conservatives usually do when presented with something that scares them.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
If this technology is going to take massive amounts of jobs, as you have claimed, then those fears are founded, aren't they?

You can't have it both ways, claiming that this technology is revolutionary and will change the entire labour market while arguing that leftists are overreacting.

Mega Comrade fucked around with this message at 08:42 on Jan 6, 2024

SCheeseman
Apr 23, 2003

The fears about the problems AI can cause are well founded, but not all responses to those fears are. Reactionary responses can stem from real problems and real harm, but the solution proposed by reactionaries is always the same: things were better the way they were. Look at your own arguments, can't you see it? You're putting more faith into copyright to fix this problem than you are putting into people, into organized labor. You desperately want to believe that kneecapping datasets will work, that the technology will inevitably become too expensive to run, based on the ludicrous assumption that the tech is somehow stagnant. When asked what you think would happen if you don't get what you want, you just shrug your shoulders and say everything is hosed. Everything is already hosed, the current system is hosed, you know this.

There's no way to cleanly dismantle copyright; just like all capitalist poo poo, it was always going to grow until it blows up.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
What is the alternative action? To champion UBI and protections for the working class? We do that already.

SCheeseman
Apr 23, 2003

While simultaneously advocating for using copyright to hand exclusive access to a technology that you yourself said is powerful and disruptive over to the companies most likely to use that monopoly to put the squeeze on workers. If this stuff works, if it gets better, if it's actually disruptive enough that any of this matters, that would be a problem. How is giving big corps exclusive access to a technology, and an additional weapon to crush opposition through IP litigation, protecting the working class?

The alternative action is to pressure companies not to use AI to replace workers, through legislation, through unions, and by otherwise fighting for outcomes that have material benefits. If you're already behind that, cool, but it seems counterproductive to provide ammunition to the other side at the same time.

SCheeseman fucked around with this message at 09:50 on Jan 6, 2024

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Monopolies are already forming around this technology.

It will be monopolised regardless of the copyright outcomes.

SCheeseman
Apr 23, 2003

And your goals would help codify them into law and wreck any chance of open models existing.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Open models won't exist either way in time. The way the technology has to be scraped and trained won't work, given the problems of model collapse and the world's main sources of data (Reddit, Stack Overflow etc) already closing off their APIs.

We are heading towards a few companies owning the keys to the kingdom; I don't see much difference whether it's Disney or Microsoft.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

Kavros posted:

This point completely captured my attention, because it's the opposite conclusion to what's actually occurring.

Leftists and anyone else vaguely class conscious or tech skeptical are coming to the same conclusions against these art "generation" AI systems because they are corporate tech projects that steal work from artists, with the ultimate impact of further marginalizing art and artistic talent as a way to sustain yourself financially. It's not taking up arms to 'protect capitalism,' it's taking up arms to keep capitalism from becoming an even more stagnant and repressive environment where the range of tasks you need to be able to do to survive continues to constrict and exclude different forms of human talent.

I keep hearing this, and yet it looks like the total number of artists' jobs is going up instead? I want to know what the basis of this claim is, or a reason why the U.S. Bureau of Labor Statistics is wrong, please. https://www.bls.gov/ooh/arts-and-design/home.htm
This is important because I can point to many ways artists have been squeezed for years before Generative AI was on anyone's radar. The "Spotify Model" that's hailed as a great success actually pays somewhere between diddly and squat if you aren't in the very tip top of performers. https://5mag.net/i-o/spotify-serfs-millionaires-musicians/ I can also point to several groups looking to consolidate even more monopoly power by solely blaming AI for everything and taking advantage of the confusion and anger they stir up.

quote:

In this case, unsurprisingly, the means by which creative works (and the livelihood of the creators, ultimately) are supposed to be protected is copyright, which we all think needs to be applied even if you've created a fancy algorithmic paint mixer that steals millions of art pieces at once instead of one by one.

I want to know why you think Generative AI is theft and Spotify isn't.

I don't believe the "livelihood of the creators" argument, especially as it's being proven to be wrong, a lie used to funnel more concentrated wealth into the hands of a very few who aren't creators. For any one person you can point to who depends on residuals, I can point to hundreds who ended up screwed and left out in the cold. This is the very job copyright was made to do in the first place, and expanding it will only make things worse for more people.
Copyright is the wrong tool to fix the heart of the problems going on. If artists' styles end up protected, are you going to cheer when Nintendo or Disney mass-sues the thousands of small artists who rely on selling their fan art for commissions? How about if Getty Images gets the exclusive right to charge dataset training license fees on the images they own, fees which won't be passed on to the artists who originally took the photos? Sounds like "the operation was a success but the patient died" to me.

You especially don't seem prepared to handle the courts siding with the AI companies and declaring their actions and models Fair Use. This could very well happen. Their use is transformative, after all. Would you still be such a huge fan of copyright after that?

quote:

Your conclusion is like watching leftists be generally in favor of reversing the destitution trend of American schoolteachers and demanding pay minimums for degree holders, and deciding that the driving impetus must be that they just really love capitalism so much if they're trying to increase people's payday.

I'm very direct in demanding a pay minimum of $2,000 a month UBI for everyone. Not just degree holders, everyone. Your analogy falls flat.
Universal Basic Income was never going to be easy to pass. It will also never be easier to pass Universal Basic Income than right now. This not only solves the majority of problems but also makes the world a better place for everyone to live in, not just a select few.
Your words and actions here matter. Remember that please.

SCheeseman
Apr 23, 2003

Anyone can have the "keys" to SDXL and LLaMA2 and those models are useful today.

iirc model collapse is something that happens when a model is trained on its own output? If open models can exist they can train on closed models; the quality may not be as good, but it may result in something competitive or at least useful. Or learning data could be scraped instead of accessed via an API, which is messier and would make the web admins grumpy.

A SETI@home-like distributed training system would help with a truly grassroots generative AI, I've only heard talk of this rather than anything concrete or real though.
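To be concrete about what "train on closed models" would look like in practice, here's a rough sketch of the data-collection half: query whatever teacher model you have access to, save the prompt/response pairs, and hand the resulting file to a fine-tuning script for an open model later. The endpoint URL, payload fields, response shape and file name below are placeholders, not any particular vendor's API.

code:

import json
import requests  # pip install requests

# Placeholder endpoint for whichever "teacher" model you can reach.
TEACHER_URL = "http://localhost:8000/generate"

prompts = [
    "Explain model collapse to a layperson in one paragraph.",
    "Summarise the plot of Moby-Dick in two sentences.",
]

# Collect prompt/completion pairs into a JSONL file, the format most
# fine-tuning scripts for open models will happily consume.
with open("distillation_set.jsonl", "w", encoding="utf-8") as out:
    for prompt in prompts:
        resp = requests.post(TEACHER_URL, json={"prompt": prompt}, timeout=60)
        resp.raise_for_status()
        completion = resp.json().get("text", "")  # response field name is an assumption
        out.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")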

SCheeseman fucked around with this message at 10:53 on Jan 6, 2024

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

Mega Comrade posted:

If this technology is going to take massive amounts of jobs, as you have claimed, then those fears are founded, aren't they?

You can't have it both ways, claiming that this technology is revolutionary and will change the entire labour market while arguing that leftists are overreacting.

Fine, I'll ask you too. What are you basing these job losses off of when the U.S. Bureau of Labor Statistics projects gains in the total number of artists' jobs? https://www.bls.gov/ooh/arts-and-design/home.htm
Is there some better place to look at job growth vs. loss that I don't know about?


Mega Comrade posted:

What is the alternative action? To champion UBI and protections for the working class? We do that already.

Try talking about antitrust and breaking up the monopolies. It's worked before, it can work again.


Mega Comrade posted:

Open models won't exist either way in time. The way the technology has to be scraped and trained won't work, given the problems of model collapse and the world's main sources of data (Reddit, Stack Overflow etc) already closing off their APIs.

We are heading towards a few companies owning the keys to the kingdom; I don't see much difference whether it's Disney or Microsoft.

Open models like LLaMa 2 already exist. They're only getting better. Model collapse is overstated, as the quality of connections matters more than the quantity. If you throw in everything and the kitchen sink, of course it's going to output garbage, like Stable Diffusion 1.4 levels of bad.

Edit: Beaten about LLaMa 2...

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

SCheeseman posted:

Anyone can have the "keys" to SDXL and LLaMA2 and those models are useful today.

iirc model collapse is something that happens when a model is trained on its own output? If open models can exist they can train on closed models; the quality may not be as good, but it may result in something competitive or at least useful. Or learning data could be scraped instead of accessed via an API, which is messier and would make the web admins grumpy.

A SETI@home-like distributed training system would help with a truly grassroots generative AI, I've only heard talk of this rather than anything concrete or real though

They are useful now, but they already have limits. What they can do now can be stretched through various techniques, but eventually you need data and training, and that causes issues for future open models.

At the moment you can use another model to train; it's been shown to be incredibly effective at a fraction of the cost. But I don't see a future where big companies like OpenAI continue to allow that. They have already moved against the various startups reselling their API with custom functions on top, and I think they will move against researchers eventually too; they have made it clear they are a for-profit company and want to be the main game in town. Allowing open models to flourish off their back isn't in their interest.

Classic web scraping is possible, but it's far, far harder to do and pretty easily blocked. It's a possible path, but a slow one with huge hurdles.

The future of open models is on shaky ground.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

Mega Comrade posted:

The future of open models is on shaky ground.

Yeah, it's always like this. We adapt and move on and shake right back. You're forgetting model merging, which was wildly successful with Stable Diffusion models and has already started with local large language models.


https://huggingface.co/TheBloke
Find an interesting model quantized (shrunk) down to fit your hardware and download it.

https://github.com/LostRuins/koboldcpp/releases/tag/v1.54
Download a .exe (you don't even need to install anything) and load the model in it.
Offload as much as possible to your GPU and run the rest on your CPU.
Connect your browser to localhost:5001
I recommend a temperature between 0.7 and 1.1.
Talk.


That's it, no mess or fuss. That's all you need to do now and bam you have your own local model that, depending on your hardware, can rival or exceed ChatGPT 4. Today. Now. Stop acting like this is hard.
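If you'd rather script it than chat through the browser UI, the same local server answers plain HTTP too. A minimal sketch follows; the /api/v1/generate route and the response layout are from memory of the KoboldAI-compatible API that koboldcpp exposes, so double-check against its docs if it errors.

code:

import requests  # pip install requests

# koboldcpp's default local server, as set up in the steps above.
API_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "You are a helpful assistant.\nUser: Say hello.\nAssistant:",
    "max_length": 120,    # tokens to generate
    "temperature": 0.9,   # inside the 0.7-1.1 range recommended above
}

resp = requests.post(API_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])  # field names assumed from the KoboldAI-style API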

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

KwegiboHB posted:


That's it, no mess or fuss. That's all you need to do now and bam you have your own local model that, depending on your hardware, can rival or exceed ChatGPT 4. Today. Now. Stop acting like this is hard.

Lol stop acting like you understand this technology, every post you make shows you do not.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

Mega Comrade posted:

Lol stop acting like you understand this technology, every post you make shows you do not.

Mixtral 8x7B by French company Mistral AI. Any model mixed or merged with that as a base. Don't believe me? Go look it up yourself. You should have no problem doing tons more with it than I can; go knock yourself out.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Mistral claim Mixtral 8x7B matches many metrics of GPT3.5, not GPT4.

Stop posting falsehoods. Stop strawmanning people's posts.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

Mega Comrade posted:

Mistral claim Mixtral 8x7B matches many metrics of GPT3.5, not GPT4.

Stop posting falsehoods. Stop strawmanning people's posts.

The mixes and merges of Mixtral 8x7B, not the base model. I'm going to bed, have fun now.

Mola Yam
Jun 18, 2004

Kali Ma Shakti de!
Hope you've all enjoyed the GPT4 powered pro- and anti-AI art agents I created to post in this thread. Some interesting behaviour for sure, very humanlike in places.

But I'm shutting them down for now because I think they're just talking themselves into tighter and tighter circles.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

KwegiboHB posted:

The mixes and merges of Mixtral 8x7B, not the base model. I'm going to bed, have fun now.

No mix or merge of Mixtral 8x7B surpasses GPT4.

If it does, I'm sure you can provide some evidence. I'm happy to be corrected.

But this all misses the point. I wasn't even arguing about running local models, I was talking about the future of open source models and where they will get their data and training from.

Mistral has opened up its models, but it's not an open-source non-profit; it's a private startup with huge backing. Releasing models this way is good for building hype but not for making profit, which is what it will inevitably want to do.

Can an actual non-profit compete with these companies long term? I don't think it can, which means the future of open models is going to be determined by these for-profit companies, which historically hasn't worked out well for openness.

Mega Comrade fucked around with this message at 13:15 on Jan 6, 2024

A big flaming stink
Apr 26, 2010
I thought it was pretty well established that the proliferation of AI art is what creates the risk of model collapse. They trained these things on a shitload of data, and it's rather up in the air whether they're going to be able to keep doing that without scraping the products of their own models.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Not just images but all data. It affects stuff like programming questions just as it does generating images. And as the internet gets flooded with more AI-generated information, the incorrect data it creates compounds over time.

To fight against this you have to more carefully filter the data going in, which is just going to become more difficult as we get further from the launch of ChatGPT.

It's not an impossible problem, but it is a hard one.

Pre-2022 stores of data will be like gold dust.

Mega Comrade fucked around with this message at 13:31 on Jan 6, 2024

Tarkus
Aug 27, 2000

For a while I believed that everything would become an AI-generated mash, but as time has gone on I think those fears are overblown. Certainly 2023 and 2024 data will have quite a bit of AI trash swimming in it but I think that more people who use AI to generate things are getting more cautious, double checking stuff and being more selective of what is being output. Also, when it comes to images I think it's a bit more of a concern right now, but as tools become better, allowing more control, the images will at least start to become more artistic and more intentional rather than just bland, visually appealing images worthy of a corporate slideshow. I also view it as a form of human-reinforced selective learning if they do learn on previous data, as (hopefully) the data will be the stuff that's chosen to be better, more correct, and higher quality.

The way I see it, generative AI is going to bridge some gaps of accessibility in terms of skills and the effort required to use them. Skilled people will simply make better stuff. Though, yes, many jobs will probably be lost unfortunately. I'm not too fond of that.

biznatchio
Mar 31, 2001


Buglord

Tarkus posted:

Certainly 2023 and 2024 data will have quite a bit of AI trash swimming in it but I think that more people who use AI to generate things are getting more cautious, double checking stuff and being more selective of what is being output.

The problem isn't people who are involved in legitimate attempts at communicating with AI-generated work, since they'll do the self-curation you mentioned. It's the people who are and will continue to be flooding the internet with non-curated garbage (e.g. this) to game systems intended for humans. Someone dumping woims articles on the internet doesn't give a drat about quality; they have other motivations, and 1) there's no end or limit to what those motivations might be, and 2) machines can spit out garbage far faster than we can effectively filter it out, especially if it's garbage that superficially appears legitimate.

Tarkus
Aug 27, 2000

biznatchio posted:

The problem isn't people who are involved in legitimate attempts at communicating with AI-generated work, since they'll do the self-curation you mentioned. It's the people who are and will continue to be flooding the internet with non-curated garbage (e.g. this) to game systems intended for humans. Someone dumping woims articles on the internet doesn't give a drat about quality; they have other motivations, and 1) there's no end or limit to what those motivations might be, and 2) machines can spit out garbage far faster than we can effectively filter it out, especially if it's garbage that superficially appears legitimate.

To be fair though, the net has been full of algorithmically generated swill for decades now, be it junk web pages for SEO, spam, or low-effort nonsense. Maybe the new crud will be harder to differentiate, but I'm not sure it's too much different overall. Obviously this might not apply to images or music though.

biznatchio
Mar 31, 2001


Buglord
The general open web has been crapflooded for a while, yes; but it's (for the most part) identifiable. In the past, you'd almost never find crapflood content at the top of search results; and if you did, it stood out like a sore thumb (and if someone managed to game the system well enough to get crap to the top, it was easily quarantined later on by being so easily identifiable). One of the key indicators indexers like Google use to tell legitimate sources apart from crapflooding sources is looking at the whole body of material available from a source and analyzing it to see if it matches what human-written content generally looks like. That's why SEO for a long time even involved hiring bad writers for dirt cheap to get them to pump out *anything* you could put on your website to make it look legitimate to algorithms trying to figure out if you're actually providing real value or just trying to trick people into coming to your site.

If the crap is identifiable, it can be filtered out and so it isn't a huge problem.

But it's only getting immensely easier to *not* be identifiable as crap, because you can just push a button and articles that look human-written will come spewing out of the machine faster than you can upload them, defeating analysis by even the state of the art in AI detection.

This is a problem that's only going to get worse, not better; and I think ultimately it won't just be a problem for training new generations of AI, it'll be a problem for *everything*, and will only add more weight to the momentum of keeping people in proprietary silos like Discord and Facebook etc., in groups small enough for moderation to keep up. That will just perpetuate the problem of knowledge being less available overall, because the gatekeepers of those silos, wanting to capture/monetize/protect their users vis-à-vis AI, are incentivized to limit anyone from the outside looking in. AI won't have good access to datasets of current knowledge, and visibility to actual people will get more and more limited too.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

Mega Comrade posted:

No mix or merge of Mixtral 8x7B surpasses GPT4.

If it does, I'm sure you can provide some evidence. I'm happy to be corrected.

But this all misses the point. I wasn't even arguing about running local models, I was talking about the future of open source models and where they will get their data and training from.

Mistral has opened up its models, but it's not an open-source non-profit; it's a private startup with huge backing. Releasing models this way is good for building hype but not for making profit, which is what it will inevitably want to do.

Can an actual non-profit compete with these companies long term? I don't think it can, which means the future of open models is going to be determined by these for-profit companies, which historically hasn't worked out well for openness.

I dug through my links collection and all the ones claiming better than GPT4 did have graphs and testing methodologies... but also really small sample sizes, so whatever, I'll spare you the trouble.
A better reference would be the Chatbot Arena or the LLM Leaderboard.

https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

Mistral did say they were coming out with another better model later this year and I'll bring it up when they do.


Tarkus posted:

Though, yes, many jobs will probably be lost unfortunately. I'm not too fond of that.

You're the fourth person in not even a single page to say this. I'll ask you too: what are you basing this idea off of? I'm trying to see why the U.S. Bureau of Labor Statistics https://www.bls.gov/ooh/arts-and-design/home.htm is suggesting job GAINS instead, or if there's a better source of information I should be looking at.


Art Directors: 135,100 jobs in 2022; outlook 2022-32: +6% (faster than average)
Craft and Fine Artists: 54,600 jobs in 2022; outlook 2022-32: +4% (as fast as average)
Fashion Designers: 24,900 jobs in 2022; outlook 2022-32: +3% (as fast as average)
Floral Designers: 54,500 jobs in 2022; outlook 2022-32: -18% (decline)
Graphic Designers: 270,900 jobs in 2022; outlook 2022-32: +3% (as fast as average)
Industrial Designers: 32,400 jobs in 2022; outlook 2022-32: +2% (as fast as average)
Interior Designers: 94,900 jobs in 2022; outlook 2022-32: +4% (as fast as average)
Special Effects Artists and Animators: 89,300 jobs in 2022; outlook 2022-32: +8% (faster than average)


Not exactly exciting numbers there but far from calamitous. Well, maybe the Floral Designers but that sounds like quite the stretch to blame AI for...

poop chute
Nov 16, 2023

by Athanatos

KwegiboHB posted:

I want to know why you think Generative AI is theft and Spotify isn't.

Spotify pays people to use their work, which those people agree to when they put it on spotify.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

poop chute posted:

Spotify pays people to use their work, which those people agree to when they put it on spotify.

From the article I linked: https://5mag.net/i-o/spotify-serfs-millionaires-musicians/

quote:

“Based on a sample of data concerning each October from 2014 to 2020,” they write, “the top 0.1% most popular tracks achieved more than 40% of all streams in all years and the top 0.4% of tracks accounted for more than 65% of all streams from 2016 onwards.”

Expanding the tier to the top 1% of all tracks accounts for 75% to 80% of all streams. And the top 10% of all tracks accounts for nearly all of it, the entire market: one tenth of all tracks account for 95% to 97% of all streams, in all years from 2016 to 2020.

Put another way: 90% of all new releases amount for barely 3% to 5% of all streams.

There's a lot of musicians getting nothing at all. Robbed.

poop chute
Nov 16, 2023

by Athanatos
And those artists still agreed to those terms when they put them on spotify, unlike generative AI.

"I don't understand why it's okay when they have permission" is a truly strange argument to make.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

poop chute posted:

And those artists still agreed to those terms when they put them on spotify, unlike generative AI.

"I don't understand why it's okay when they have permission" is a truly strange argument to make.

My argument is that it's NOT ok to let people starve. Anyone. Ever. That's the point of Universal Basic Income. It will free us to do whatever we want without screwing anyone.

poop chute
Nov 16, 2023

by Athanatos
That's an entirely separate argument to make that isn't really in the purview of this thread.

Spotify is very bad and exploitative, I agree, but it's not bad and exploitative for the same reasons or in the same way as generative AI. At a very fundamental level, one involves people giving permission for their work to be used, and the other does not.

SCheeseman
Apr 23, 2003

Google Books remains the most apt legal comparison imo

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

poop chute posted:

That's an entirely separate argument to make that isn't really in the purview of this thread.

Spotify is very bad and exploitative, I agree, but it's not bad and exploitative for the same reasons or in the same way as generative AI. At a very fundamental level, one involves people giving permission for their work to be used, and the other does not.

No, it's entirely within the purview of this thread. Job displacement, how people make a living off their work, the concentration of wealth and power currently in place, and where the future is likely to go: all of these are topics that have been affected by the latest changes in artificial intelligence, so talking about them is fair game.
I'm against monopolies in general, so the fact that people are making suggestions to further increase and entrench current monopolies is something I'm arguing against. The Spotify Model is one of the worst of these methods, and its being hailed as a "solution" to AI is also something I'm arguing against, by pointing out that it doesn't "solve" anything unless you are in the top 0.1% of artists, while it screws everyone else.

Also in the purview of this thread is the fact that you literally cannot get a ruling of Fair Use in copyright until you:
Use someone else's work without permission, get sued, and have a judge rule it's Fair Use.
There is literally no other way.

poop chute
Nov 16, 2023

by Athanatos
You're basically saying that in this hypothetical, actually this wouldn't matter so right now, in an entirely different context, they're the same thing with zero daylight between them, and I don't think that's really a fair mode to engage in.

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

poop chute posted:

You're basically saying that in this hypothetical, actually this wouldn't matter so right now, in an entirely different context, they're the same thing with zero daylight between them, and I don't think that's really a fair mode to engage in.

Are you having a stroke?

Kavros
May 18, 2011

sleep sleep sleep
fly fly post post
sleep sleep sleep

KwegiboHB posted:

I keep hearing this, and yet it looks like the total number of artists' jobs is going up instead? I want to know what the basis of this claim is, or a reason why the U.S. Bureau of Labor Statistics is wrong, please. https://www.bls.gov/ooh/arts-and-design/home.htm

Ok.

Look carefully at the tense I'm using in my post. We're telling you we're worried about what is likely going to happen, and you respond with "well I'm not seeing an effect NOW? so what could the problem really be?" This is a novel disruption still ultimately in its earlier forms. While there are really dark indicators flickering up here and there, most of us are responding in concern to the logical conclusion of what will happen to artists if we just normalize stealing from all artists now, as long as you do it with a generative AI project.

Which is bad.

.

Small breather here, before we move on to the bullet points about "bad for artists? but, i am looking at line, and line going up??"

.

ok, here goes:

1. I would be entirely unsurprised that any broad category of jobs is increasing in the US in the immediate present, because most of them are, even accounting for various periods of churn in separate industries.

2. This doesn't even touch on what substantive elements within that category are potentially expressing negative change, i.e.: general wage reductions, stability of work concerns, work variety constriction.

2a. See, if someone tells me "sales jobs are going up!" but the overall trend is that salaried sales positions are being decimated and people in sales are grim about their employment prospects, you can't counterargue with total job statistics.

2b. For instance, a situation in which the total number of job losses is covered or surpassed by increased staffing in lower to minimum wage telemarketing sales departments is not one where you can throw the total job numbers at me and say "well it sure looks like it's going just fine to me?"

3. The value of what a person can create as a visual artist or as a writer needs to be preserved by valid ownership of that artistic expression, full stop. Kind of a distinct point, but one that bears repeating.


KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

I'll just quickly state that I've read this and I love this discussion so I will sit down and address all of these points. It will take some time though.

  • Reply