 
Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

SCheeseman posted:

Google Books remains the most apt legal comparison imo

Pretty much. It will be a fundamental part of almost all the cases coming up on the matter.

I'm pretty sure it will stand for a lot of them, though it might not for others. In the NYT one, as I mentioned before, the models are able to spit out entire articles in some of the cited examples. Google Books only ever showed a small part of a book, so you could never argue it was replacing the original source. One could argue this isn't the case for the NYT lawsuit.

Google has already lost in lots of countries over Google News allowing people to read the bulk of an article without going to the actual source, so it could be argued tools like ChatGPT are doing the same.

Mega Comrade fucked around with this message at 09:30 on Jan 7, 2024


Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

KwegiboHB posted:

I dug through my links collection and all the ones claiming better than GPT-4 did have graphs and testing methodologies... but also really small sample sizes, so whatever, I'll spare you the trouble.
so you posted falsehoods


KwegiboHB posted:

You're the fourth person in not even a single page to say this. I'll ask you too, what are you basing this idea off of?

The wealth of research that's come out in the last 2 years?

two-thirds of jobs in the U.S. and Europe are exposed to some degree of AI automation, and around a quarter of all jobs could be performed by AI entirely.

https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html


AI will replace as many as two million manufacturing workers by 2025

https://www.forbes.com/sites/ariann...sh=4b7d49235957


By 2030, at least 14% of employees globally could need to change their careers due to digitization, robotics, and AI advancements

https://www.semanticscholar.org/paper/The-Impact-of-AI-and-Machine-Learning-on-Job-and-Tiwari/d44b651ee25cd6f8717a9b049d2f2e0de8a965f5

Now, it's possible that eventually new jobs will be created, as has happened before with automation; I'm actually quite hopeful of this. But it seems almost a certainty that in the near future a lot of people are going to face professional hardship due to this technology.

Mega Comrade fucked around with this message at 10:37 on Jan 7, 2024

Count Roland
Oct 6, 2013

Are we not yet at the point where we can agree this latest round of "AI taking my job" fervour is hype, along with the past ones?

I had an extended debate with a friend of mine over this. He was arguing that this time it was different, this time the technology was so powerful and so universally applicable that it would be a jobs apocalypse. I argued that new technology was always changing jobs, making some obsolete, but creating new ones along the way. He was so fervent that he almost had me convinced. The year was 2012, and I don't even remember what technology was causing the panic. Voice assistants like Alexa, maybe.

Yes this new technology is powerful, allows for some very interesting things. Having used it myself, I am quite comfortable in saying there will be no jobs apocalypse. Some methods and professions will become obsolete, but not differently from the past 100+ years where no job has gone untouched.

The artists I see on my social media grasping at copyright law as some sort of solution are a pitiful sight. The ones I know can barely afford to eat; they certainly will never afford a lawyer to sue some large company for copyright infringement of the watercolour mountain-vistas they sell to tourists. The digital artists I know I have even less sympathy for: the tools they use rendered obsolete past generations of art-making. Why is AI art treated differently than, say, Photoshop or Bandcamp?

If indeed there is mass unemployment then the state should intervene to protect people. But then I think the state should do that anyway: unemployment insurance, skills training, UBI, etc. As it stands now, unemployment is near record lows in North America, a year after ChatGPT burst onto the scene, apparently threatening much of the workforce. I for one am not worried.

vegetables
Mar 10, 2012

biznatchio posted:

This is a problem that's only going to get worse, not better; and I think ultimately it won't just be a problem for training new generations of AI, it'll be a problem for *everything* and will only serve to add more weight to the momentum of keeping people in proprietary silos like discord and facebook etc. in small enough groups for moderation to keep up; which will just perpetuate the problem of knowledge being less available overall because the gatekeepers of those silos wanting to capture monetize protect their users vis-à-vis AI are incentivized to limit anyone from the outside looking in. AI won't have good access to datasets of current knowledge, and visibility to actual people will get more and more limited too.

I think this is correct, and is a generalisable concern: an algorithm which is designed to be useful for people gets gamed in ways people can't control or understand, to the point it ends up having no useful human function.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Count Roland posted:

Why is AI art treated differently than, say, Photoshop or Bandcamp?

Because of scale and speed of the automation.
Photoshop has indeed taken jobs. But it's been a slow steady pace for 30 years.

Generative AI has advanced at a significantly faster pace and can be applied to many different fields. The speed at which it could disrupt the marketplace could be something that hasn't been seen for a long time. Or it might not be; even the people making these models aren't really sure.

For my own work we have already cancelled a contract with a third party that did work for us aligning data. AI can do the job in a fraction of the time and cost, I don't think that company will even exist in 5 years.

DavidCameronsPig
Jun 23, 2023
Photoshop also largely developed concurrently with the web, which created a lot more jobs in design. It didn't matter that one person could do the work of five if there were also five times as many briefs out there.

The real impact Photoshop had wasn't some jobpocalypse, but it did lower the barrier to entry for someone to claim they were 'a designer', which in turn depressed the amount anyone was willing to pay for a project, and subsequently earnings.

Mega Comrade posted:

Now, it's possible that eventually new jobs will be created, as has happened before with automation; I'm actually quite hopeful of this. But it seems almost a certainty that in the near future a lot of people are going to face professional hardship due to this technology.

Eh. I don't see it. People will become professional AI wranglers instead of professional junior colleague wranglers. Actually, that's where I see the real damage - doing the boring gruntwork is how people learn a profession, and if companies are turning to AI to do it instead in order to save a bit of money, I don't know what the first rung of a lot of ladders looks like.

Tarkus
Aug 27, 2000

KwegiboHB posted:

You're the fourth person in not even a single page to say this. I'll ask you too, what are you basing this idea off of? I'm trying to see why the U.S. Bureau of Labor Statistics https://www.bls.gov/ooh/arts-and-design/home.htm is suggesting job GAINS instead or if there's a better source of information I should be looking at.

Not exactly exciting numbers there but far from calamitous. Well, maybe the Floral Designers but that sounds like quite the stretch to blame AI for...

I'll be honest, when I think of job losses, I don't mean art directors and floral arrangers. I mean telemarketers, office admin, paralegals, warehouse workers, draftspeople and a whole swath of semi professional jobs that are incentivized to be replaced. poo poo is moving at a breakneck pace and I think denying that there will be job losses is naive. Sure, some will be created and maybe it'll ease certain ideas around who does what but there will be an adjustment period and I think it'll be painful.

Nervous
Jan 25, 2005

Why, hello, my little slice of pecan pie.

Tarkus posted:

I'll be honest, when I think of job losses, I don't mean art directors and floral arrangers. I mean telemarketers, office admin, paralegals, warehouse workers, draftspeople and a whole swath of semi professional jobs that are incentivized to be replaced. poo poo is moving at a breakneck pace and I think denying that there will be job losses is naive. Sure, some will be created and maybe it'll ease certain ideas around who does what but there will be an adjustment period and I think it'll be painful.

I'm with you on this. I think we're going to see massive job displacement over the next quarter century as these AI models and robotics keep developing and getting integrated. Yes, new jobs will arise overseeing/maintaining/repairing these models and robots, but I think there will be a severe skill mismatch between the people losing jobs and the new ones being created. I don't see a good way forward beyond broad social safety nets that this country actively hates.

Count Roland
Oct 6, 2013

Edit: nevermind

Blut
Sep 11, 2009

if someone is in the bottom 10%~ of a guillotine

DavidCameronsPig posted:

Eh. I don't see it. People will become professional AI wranglers instead of professional junior colleague wranglers. Actually, that's where I see the real damage - doing the boring gruntwork is how people learn a profession, and if companies are turning to AI to do it instead in order to save a bit of money, I don't know what the first rung of a lot of ladders looks like.

The way likely AI-related job losses have been explained to me by fairly senior/knowledgeable IT and law people is that it'll not be cuts to existing jobs so much as just large hiring cutbacks at entry level.

ie if a current org structure is 1 director > 3 managers > 30 staff, heavy AI use will change that to 1 director > 2 managers > 10 staff. Not much management loss, but a lot of the boring grunt work doers gone. The remaining entry level staff will largely just be AI wranglers/editors/proof readers essentially.

So the first rung does get pretty massive cuts, but in theory should still have enough people coming through to the more senior positions since they've always been fewer in number anyway. And most people already in a job at the moment will be fine - by the time AI really gets going in 3-5 years they'll all be reasonably senior.

Where the large numbers of the 2030ish college grads looking to start their careers go is what'll be the real question.

Freakazoid_
Jul 5, 2013


Buglord

Count Roland posted:

Are we not yet at the point where we can agree this latest round of "AI taking my job" fervour is hype, along with the past ones?

I had an extended debate with a friend of mine over this. He was arguing that this time it was different, this time the technology was so powerful and so universally applicable that it would be a jobs apocalypse. I argued that new technology was always changing jobs, making some obsolete, but creating new ones along the way. He was so fervent that he almost had me convinced. The year was 2012, and I don't even remember what technology was causing the panic. Voice assistants like Alexa, maybe.

Yes this new technology is powerful, allows for some very interesting things. Having used it myself, I am quite comfortable in saying there will be no jobs apocalypse. Some methods and professions will become obsolete, but not differently from the past 100+ years where no job has gone untouched.

The artists I see on my social media grasping at copyright law as some sort of solution are a pitiful sight. The ones I know can barely afford to eat; they certainly will never afford a lawyer to sue some large company for copyright infringement of the watercolour mountain-vistas they sell to tourists. The digital artists I know I have even less sympathy for: the tools they use rendered obsolete past generations of art-making. Why is AI art treated differently than, say, Photoshop or Bandcamp?

If indeed there is mass unemployment then the state should intervene to protect people. But then I think the state should do that anyway: unemployment insurance, skills training, UBI, etc. As it stands now, unemployment is near record lows in North America, a year after ChatGPT burst onto the scene, apparently threatening much of the workforce. I for one am not worried.

https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf

Automation, including AI, has been chipping away at middle class jobs.

The notion that technology creates more jobs than it replaces is misdirection. The jobs we're getting out of technology are no longer good paying middle class jobs. We're all being dog piled into the lower class and being told to be thankful we even have a job.

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

Mega Comrade posted:

Not just images but all data. It affects stuff like programming questions just as it does generating images. And as the internet gets flooded with more AI-generated information, the incorrect data it creates becomes compounded over time.

To fight against this you have to more carefully filter the data going in, which is just going to become more difficult as we get further from the launch of chatgpt.

It's not an impossible problem, but it is a hard one.

Pre 2022 stores of data will be like gold dust.

Yeah, that's precisely the problem. Generative models are often confidently wrong. The more of that confidently wrong data enters the dataset being scraped to train further models, the more likely they become to repeat confidently wrong data.
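That compounding can be sketched as a toy model (my own illustration, not a result from the thread): each retraining round scrapes a corpus that is part human-written, part output of the previous model, and the synthetic share carries the previous model's error rate plus some fresh mistakes. The corpus-wide wrong-fact rate then ratchets up toward a higher equilibrium.

```python
# Toy model: the corpus error rate after one generation of retraining,
# where half the scraped data is synthetic and inherits the old model's
# errors plus a little new error. All numbers are made up for illustration.

def next_error_rate(prev_error, synthetic_share=0.5, new_error=0.02,
                    human_error=0.02):
    """Corpus error rate after one generation of retraining."""
    synthetic_error = min(1.0, prev_error + new_error)
    return (1 - synthetic_share) * human_error + synthetic_share * synthetic_error

rate = 0.02  # start at the human-written baseline
rates = []
for _ in range(10):
    rate = next_error_rate(rate)
    rates.append(rate)
```

With these made-up parameters the rate climbs every generation and settles at double the human baseline; the point is only the direction of travel, not the numbers.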

DavidCameronsPig posted:

Eh. I don't see it. People will become professional AI wranglers instead of professional junior colleague wranglers. Actually, that's where I see the real damage - doing the boring gruntwork is how people learn a profession, and if companies are turning to AI to do it instead in order to save a bit of money, I don't know what the first rung of a lot of ladders looks like.

The goal is for there to be no ladder. If you want to see an MBA have a religious experience, tell them they can turn everyone into gig workers.

Liquid Communism fucked around with this message at 16:45 on Jan 8, 2024

repiv
Aug 13, 2009

i don't know how search engines are going to be able to filter this crap out, the cracks are already showing today

if you search for yoji shinkawa right now the headline image is an AI generated picture of aquaman sourced from a reddit post with 28 upvotes

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Freakazoid_ posted:

https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf

Automation, including AI, has been chipping away at middle class jobs.

The notion that technology creates more jobs than it replaces is misdirection. The jobs we're getting out of technology are no longer good paying middle class jobs. We're all being dog piled into the lower class and being told to be thankful we even have a job.

If it wasn't "AI," there would be some other excuse du jour to justify the continued chipping away at the well-being of working class people in the increasingly doomed quest to slow the falling rate of profit on capital investment.

Tei
Feb 19, 2011

repiv posted:

i don't know how search engines are going to be able to filter this crap out, the cracks are already showing today

It will be interesting to see what new horrors it spawns.

It could be worse than the Elsagate stuff:
https://www.theverge.com/culture/2017/11/21/16685874/kids-youtube-video-elsagate-creepiness-psychology

Tarkus
Aug 27, 2000

I will say though that AI poisoning of photos is a concern of mine. For example, my home town:

AI image with the prompt "Edmonton, Alberta, Canada":


This is the closest image I could get in terms of looking like my city... and it looks nothing like it. It's more a blend of Edmonton and Calgary; most of the generated photos are.

If people are using it to generate images of places or things, it's going to be very wrong and this is going to be filtered back through. The good thing is that it's super-easy to watermark an image with both metadata and in the image to identify it as AI generated. Not sure if Midjourney is actually doing that.
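The "in the image" half of that is easy to sketch crudely (this is my own toy, not any vendor's actual scheme): hide a short tag in the least significant bit of each pixel byte. A watermark this naive is destroyed by any re-encode or resize; production watermarks are far more robust.

```python
# Toy LSB watermark: write tag bits, most significant bit first, into the
# low bit of successive pixel bytes, and read them back out.

def embed_tag(pixels: bytes, tag: bytes) -> bytes:
    """Return a copy of pixels with the tag hidden in the low bits."""
    bits = [(byte >> shift) & 1 for byte in tag for shift in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_tag(pixels: bytes, tag_len: int) -> bytes:
    """Read tag_len bytes back out of the low bits."""
    out = bytearray()
    for i in range(tag_len):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)
```

Each pixel byte changes by at most one in value, so the mark is invisible, but equally it vanishes as soon as the image is recompressed.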

feedmyleg
Dec 25, 2004

Tarkus posted:

AI image with the prompt "Edmonton, Alberta, Canada":

This is misunderstanding how prompts work, though. You've given it three sets of images to pull from: ones labeled Edmonton, ones labeled Alberta, and ones labeled Canada.

But yeah, this poo poo is about to get out of hand.

Kavros
May 18, 2011

sleep sleep sleep
fly fly post post
sleep sleep sleep

repiv
Aug 13, 2009

Tarkus posted:

The good thing is that it's super-easy to watermark an image with both metadata and in the image to identify it as AI generated. Not sure if Midjourney is actually doing that.

AFAIK they are not, OpenAI made a token effort to identify DALL-E generated images (those colored squares in the corner) but nobody else bothered to implement a clear watermark, and even if there were a less obvious watermark the public version of stable diffusion could simply be patched to turn it off

in a way companies like midjourney are incentivised to not watermark their output, because even without a watermark they can keep fingerprints of images they have generated and filter them out when scraping the web for fresh training material, while their competitors have no easy way to filter them
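That fingerprint-and-filter idea can be sketched in a few lines (class and method names are mine): the generator records a hash of everything it emits, then drops matches when scraping new training data. Plain SHA-256 only catches byte-identical copies; a real pipeline would presumably use a perceptual hash so re-encoded or resized copies still match.

```python
import hashlib

class GenerationLedger:
    """Private record of generated images, used to filter scraped data."""

    def __init__(self):
        self._fingerprints = set()

    def record(self, image_bytes: bytes) -> None:
        """Remember an image this service generated."""
        self._fingerprints.add(hashlib.sha256(image_bytes).hexdigest())

    def filter_scraped(self, scraped_images):
        """Keep only images we did not generate ourselves."""
        return [img for img in scraped_images
                if hashlib.sha256(img).hexdigest() not in self._fingerprints]
```

The asymmetry is the point: the ledger is cheap to keep and never has to be published, so the company that generated the images can filter them out while competitors can't.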

repiv fucked around with this message at 13:37 on Jan 9, 2024

SCheeseman
Apr 23, 2003

You could feed it through another AI to get rid of the watermark anyway. Watermarks are dead, it's another thing that generative AI has made considerably less useful.

What would arguably be useful to track the origins of a photograph or work is an end-to-end cryptographically signed workflow, from the camera, through processing in an editor, to a final jpeg. Of course, this would mean some kind of master/root key held by a large company and of course, you would need to pay for the privilege of proving your work is "real". Thanks Adobe. Thadobe.

Christ. The incredible irony if artists start actually using blockchain.

SCheeseman fucked around with this message at 13:28 on Jan 9, 2024

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Leica already released a camera that tries to tackle that.
Sony, Nikon and Canon are working on something similar.

Basically the camera will digitally sign the photo on creation, and edits will be recorded to the file also. It will help organisations check the authenticity of photos, but it won't stop the spread of misinformation online.


https://contentcredentials.org/

Mega Comrade fucked around with this message at 11:57 on Jan 10, 2024

repiv
Aug 13, 2009

on the legal debate front, getty just announced a genAI product which is trained exclusively on their licensed stock photo library, so that's more fuel for the fire that AI is viable without indiscriminate scraping of unlicensed material; companies like midjourney would just prefer not to have to license anything

SCheeseman
Apr 23, 2003

Mega Comrade posted:

Leica already released a camera that tries to tackle that.
Sony, Nikon and Canon are working on something similar.

Basically the camera will digitally sign the photo on creation, and edits will be recorded to the file also. It will help organisations check the authenticity of photos, but it won't stop the spread of misinformation online.


https://contentcredentials.org/

Until the system is compromised, which it will inevitably be, at which point it'll also be useful to bad actors for passing altered photographs as the real thing.

feedmyleg
Dec 25, 2004
Nobody is going to legitimately engage with that because it will take a ton of time, effort, energy, and money to implement, and the only person it benefits is some poor artist. And nobody, from camera manufacturers and photo editing software companies to AI giants, has a profit incentive to do so. It's far too complex a thing to regulate, and different players within the industry aren't going to coordinate amongst themselves for the same reasons. The government is hopelessly out of touch with technology to begin with, and something moving this quickly means that even the most stringent laws would be quickly outdated. And besides, why would they write laws that hurt the AI giants who advise them on how the laws should be written?

In the short term, the only thing to be done seems to be, from an industry standards and practices standpoint, for artists and third-party vendors to supply not only final assets but also artifacts that show their work: layered Photoshop files, timelapses of the illustration process, identifying details of photo subjects, etc. But even then, that not only requires more work from artists (either unpaid, or making such a system more cost-prohibitive), but most companies also don't care enough to check and review those artifacts.

Hell, every major drawing tablet manufacturer has been caught using AI art in their advertising already. These are companies whose bread and butter are digital artists who draw things by hand and are most at risk, and even they couldn't help themselves.

feedmyleg fucked around with this message at 13:54 on Jan 9, 2024

SCheeseman
Apr 23, 2003

You film a cop beating up a dude using some budget level Android phone? Could be altered with AI. Inadmissible. iPhones will probably have it by default, great if you can afford one.

Such authentication schemes aren't something to be celebrated, they're just another commercially minded grift to shave more bucks off people and put more power into the hands of those who already have too much.

SCheeseman fucked around with this message at 13:57 on Jan 9, 2024

Ran Mad Dog
Aug 15, 2006
Algeapea and noodles - I will take your udon!
While other groups were experimenting and running free tests, Midjourney, I believe, was the first to start charging people for appropriated art and were in such a hurry to do so that they ran (run?) their entire business out of a goddamn Discord channel, so if they are the first to go down then good riddance.

SCheeseman
Apr 23, 2003

A lot of AIs have APIs that interface through Discord for some reason.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

feedmyleg posted:

Nobody is going to legitimately engage with that because it will take a ton of time, effort, energy, and money to implement, and the only person it benefits is some poor artist. And nobody, from camera manufacturers and photo editing software companies to AI giants, has a profit incentive to do so. It's far too complex a thing to regulate, and different players within the industry aren't going to coordinate amongst themselves for the same reasons.

Adobe, Nikon, Sony, Canon and Leica have already signed up and are implementing the standard into upcoming products.

SCheeseman posted:

Until the system is compromised, which it will inevitably be, at which point it'll also be useful to bad actors for passing altered photographs as the real thing.


They are industry standard certs, tried and tested cryptography methods that are used to secure the web. If they are compromised you have more to worry about than fake images.

SCheeseman posted:

You film a cop beating up a dude using some budget level Android phone? Could be altered with AI. Inadmissible. iPhones will probably have it by default, great if you can afford one.

Such authentication schemes aren't something to be celebrated, they're just another commercially minded grift to shave more bucks off people and put more power into the hands of those who already have too much.

The main point of this standard is journalism. We might see it in phones eventually though.

Mega Comrade fucked around with this message at 14:17 on Jan 9, 2024

Bug Squash
Mar 18, 2009

The Blockchain post is obviously the worst suggestion possible, but I can see artists filming themselves drawing becoming the standard for bespoke work. It's already pretty common on twitch for community engagement purposes, so it wouldn't be all that dramatic a change.

SCheeseman
Apr 23, 2003

Bug Squash posted:

The Blockchain post is obviously the worst suggestion possible, but I can see artists filming themselves drawing becoming the standard for bespoke work. It's already pretty common on twitch for community engagement purposes, so it wouldn't be all that dramatic a change.

It wasn't a suggestion.

SCheeseman
Apr 23, 2003

Mega Comrade posted:

They are industry standard certs, tried and tested cryptography methods that are used to secure the web. If they are compromised you have more to worry about than fake images.
I already assume they are.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Based on what?

repiv
Aug 13, 2009

Mega Comrade posted:

They are industry standard certs, tried and tested cryptography methods that are used to secure the web. If they are compromised you have more to worry about than fake images.

crypto is only as secure as its keys, and if the signing keys are embedded somewhere inside a camera then there's potential for them to be pulled out by a determined attacker

the reason why such crypto holds up in the case of say, web servers or games consoles is the encryption/signing keys aren't exposed to the outside world at all

SCheeseman
Apr 23, 2003

Mega Comrade posted:

Based on what?

There's lots of theoretical attacks that could easily be pulled off by a nation state with the legal ability to compel tech organizations to secretly give access to their systems, which we know they already do.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

SCheeseman posted:

You could feed it through another AI to get rid of the watermark anyway. Watermarks are dead, it's another thing that generative AI has made considerably less useful.

You could, sure, but the point here was that the watermarks would be exceptionally useful to the AI companies themselves to have. That they aren't doing it is more a problem of extremely short-sighted thinking than anything else, and it will likely, eventually, bite them in the rear end.

repiv
Aug 13, 2009

crypto-cameras are a can of worms in general because if the crypto scheme includes a per-device key then the same system which (ostensibly) proves a photo is real could also be used to trace its source, which is bad news for whistleblowers and dissidents

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

repiv posted:

crypto is only as secure as its keys, and if the signing keys are embedded somewhere inside a camera then there's potential for them to be pulled out by a determined attacker

the reason why such crypto holds up in the case of say, web servers or games consoles is the encryption/signing keys aren't exposed to the outside world at all

The signing key is hard bound to the asset and has a companion key that's with a CA. So if it is somehow compromised it can be invalidated by the CA.

https://c2pa.org/specifications/specifications/1.0/explainer/Explainer.html#_how_is_trust_in_digital_assets_established


It's not a perfect system (no system is), but for its intended purpose of journalism, it's a pretty good, well thought out one.
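To make the sign-then-verify flow concrete, here's a minimal sketch (my own illustration: an HMAC with a shared key stands in for the certificate-based asymmetric signatures the real C2PA spec uses, and the field names are invented). The signature covers both the asset hash and the recorded edit history, so tampering with either breaks verification.

```python
import hashlib
import hmac
import json

def sign_manifest(key: bytes, asset: bytes, history: list) -> str:
    """Sign the asset hash together with its edit history."""
    payload = json.dumps({"asset_sha256": hashlib.sha256(asset).hexdigest(),
                          "history": history}, sort_keys=True)
    return hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()

def verify_manifest(key: bytes, asset: bytes, history: list,
                    signature: str) -> bool:
    """Recompute the signature; any change to pixels or history breaks it."""
    return hmac.compare_digest(sign_manifest(key, asset, history), signature)
```

In the real standard the device signs with a private key and anyone can verify against the CA-anchored certificate chain; the shared-key version here only shows the shape of the check.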

SCheeseman posted:

There's lots of theoretical attacks that could easily be pulled off by a nation state with the legal ability to compel tech organizations to secretly give access to their systems, which we know they already do.

True, but this isn't out to fix that problem. You will have to trust the CA as we do now, the difference being the CA will be the Associated Press, BBC, Reuters etc.

Mega Comrade fucked around with this message at 15:38 on Jan 9, 2024

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.
Who actually wants that kind of tech and isn't planning on using it for nefarious purposes? It doesn't seem like it would have any real utility to anyone not looking to track down problematic photographers.

Bug Squash
Mar 18, 2009

GlyphGryph posted:

Who actually wants that kind of tech and isn't planning on using it for nefarious purposes? It doesn't seem like it would have any real utility to anyone not looking to track down problematic photographers.

To be fair, I don't think there's any nefarious plan behind the tech (although the threat to journalists from bad actors is obvious). This nonsense exists because the companies selling the products are trying to upsell with a non-solution to a problem that doesn't exist. Tech fetishists are going to fetishise the wizz-bang crypto, and spin the hype cycle. But at the end of the day everything still relies on the reputation of the journalist for honesty, and that's going to continue.


Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
News media organisations want it. When they purchase photographs they want to be sure they haven't been edited.

It's not even AI-specific; this issue has been around forever. It's just come to a head with how easy AI has made image manipulation, and with the rise of fake news.

Not sure how it could be used nefariously. You just don't buy a camera with the chip? It's a premium feature and requires actively setting up.
If you aren't a journalist who intends to sell their photos, you have no need to do that.

Mega Comrade fucked around with this message at 17:12 on Jan 9, 2024
