Monglo
Mar 19, 2015
Ai is gonna take over the world! Lolol B)

(USER WAS PUT ON PROBATION FOR THIS POST)


Walked
Apr 14, 2003

Noam Chomsky posted:

So how long before it puts web developers and programmers out of work? I’m asking for a friend.

It’s me. I’m the friend.

I'm in tech leadership and have acknowledgments in four printed tech books, so I'm fairly qualified to answer, but I also recognize I have a certain bias and cynicism, so take this with a grain of salt.

Faster than you expect, in my (anecdotal) experience.
It's not so much that it's going to simply cause them to dump folks (imminently), but it's yet another workforce multiplier that is going to impact the low and middle performers the most by making the majority of their capability AI-redundant.

So basically, the bottom is going to fall out and supply is going to vastly exceed demand. For a while the middleware tooling isn't going to exist, so developers are going to be around to help glue all the new stuff together, but that work will be both less technical (and thus lower paid) and a shrinking body of work as the tech gets better.

Again, my perspective is the cynical view in a way, and I understand the "new tech, new jobs" argument, about which I'm hopeful but not personally optimistic.

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Honestly, I think once it reaches the point an AI can replace programmers, it's basically capable of replacing any job, except for jobs that strongly rely on social interaction. There might be a slight delay for physical labor, but the tech is almost already there.

SaTaMaS
Apr 18, 2003

Noam Chomsky posted:

So how long before it puts web developers and programmers out of work? I’m asking for a friend.

It’s me. I’m the friend.

Most large projects I've worked on have had a small collection of senior programmers, a small number of interns/junior programmers that the senior programmers are supposed to train in case they get hit by a bus, and a bunch of in-between programmers that are usually contract workers. ChatGPT can make the junior and senior programmers so much more productive that the in-between programmers are mostly unnecessary, but it can't replace the senior programmers or their hit-by-a-bus replacements in the foreseeable future.

Insanite
Aug 30, 2005

Walked posted:

I'm in tech leadership and have acknowledgments in four printed tech books, so I'm fairly qualified to answer, but I also recognize I have a certain bias and cynicism, so take this with a grain of salt.

Faster than you expect, in my (anecdotal) experience.
It's not so much that it's going to simply cause them to dump folks (imminently), but it's yet another workforce multiplier that is going to impact the low and middle performers the most by making the majority of their capability AI-redundant.

So basically, the bottom is going to fall out and supply is going to vastly exceed demand. For a while the middleware tooling isn't going to exist, so developers are going to be around to help glue all the new stuff together, but that work will be both less technical (and thus lower paid) and a shrinking body of work as the tech gets better.

Again, my perspective is the cynical view in a way, and I understand the "new tech, new jobs" argument, about which I'm hopeful but not personally optimistic.

Speaking as someone who has orbited development without being a full-time developer (tech writing now, and UX design + information architecture in the past) for over a decade now, this is also how I see it.

I'm already seeing job postings for senior writers that mention using generative AI to augment productivity, and the tech appears good enough to do that--no need for a technical editor, nor junior writers to help with scut work. Machine translation already annihilated the technical translation labor market, and what's left there seems like it'll disappear almost completely.

Personally, I've used ChatGPT to refactor reasonably meaty code, think through problems in our CI, and quickly format + stylize documentation that developers drafted. I felt guilty about it, but I was curious. It works better than I'd like. It's not going to take my job immediately, but, at some point soon, will it allow my company to employ 15 folks in my group where previously it might've had 20? I think so.

Based on my experiences so far and the startling rates of improvement we see with generative AI, I don't expect to work in this field in five years. If I somehow still do, I would be surprised if my salary were half of what it is now. That sucks, as I love writing, and technical writing is just about the only writing job that can reliably sustain a middle-class lifestyle.

I think that these technologies will be broadly disruptive for people whose jobs revolve around moving and structuring information. If I didn't have a family and childcare bills to pay, I'd be retraining into healthcare or perhaps a trade.

Aramis
Sep 22, 2009



SaTaMaS posted:

Most large projects I've worked on have had a small collection of senior programmers, a small number of interns/junior programmers that the senior programmers are supposed to train in case they get hit by a bus, and a bunch of in-between programmers that are usually contract workers. ChatGPT can make the junior and senior programmers so much more productive that the in-between programmers are mostly unnecessary, but it can't replace the senior programmers or their hit-by-a-bus replacements in the foreseeable future.

As another person in tech leadership, this is my read as well.

Insanite posted:

I think that these technologies will be broadly disruptive for people whose jobs revolve around moving and structuring information.

More specifically, anything related to structuring information for human consumption is definitely going to be dead in the water real quick. Technical writers, copy editors, etc...

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Aramis posted:

More specifically, anything related to structuring information for human consumption is definitely going to be dead in the water real quick. Technical writers, copy editors, etc...
I think that's a bit premature. Technical writing especially is pretty exacting, and usually involves custom products or w/e, where there's not going to be anything too similar in the training set. If it becomes a productivity multiplier, there could be fewer of those jobs, but I don't see them going away.

Carp posted:

That's a pretty good summary. Much better than my notes earlier in the thread, which are a little confused.
Thanks!

Count Roland
Oct 6, 2013

It isn't so much an AI being a 1-to-1 replacement for a human, in case anyone was thinking that. In the programmer context, I think it would look like a team of 5 being cut to, say, a team of 3, as those 3 are very productive with AI tools to help them do their jobs.

Learning these AI tools is going to be absolutely essential for workers in... maybe every field. Just like we all use computers and the internet in one way or another. Of course this time it seems like the change is going to be even more rapid. A lot of people will be left behind.
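The head-count arithmetic behind that scenario fits in a few lines. A toy model (the 1.7x productivity multiplier is an assumption for illustration, not a measurement):

```python
import math

def team_size_needed(workload_units: float, output_per_dev: float) -> int:
    """Smallest whole team that still covers the workload."""
    return math.ceil(workload_units / output_per_dev)

# Baseline: 5 devs, each producing 1.0 unit of work.
print(team_size_needed(5.0, 1.0))  # 5

# Same workload, but AI tooling makes each dev ~1.7x as productive.
print(team_size_needed(5.0, 1.7))  # 3
```

Same workload, two fewer seats: the tool never replaces anyone 1-to-1, it just shrinks the denominator.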

Aramis
Sep 22, 2009



cat botherer posted:

I think that's a bit premature. Technical writing especially is pretty exacting, and usually involves custom products or w/e, where there's not going to be anything too similar in the training set. If it becomes a productivity multiplier, there could be fewer of those jobs, but I don't see them going away.

Oh yes, to be clear, I didn't mean that it's entirely going away. I'd expect the labor market for these positions to shrink to the point where, for most people in it, it might as well be gone.

Aramis fucked around with this message at 16:24 on Mar 31, 2023

Insanite
Aug 30, 2005

Aramis posted:

More specifically, anything related to structuring information for human consumption is definitely going to be dead in the water real quick. Technical writers, copy editors, etc...

Basically, yeah.

There'll be a short-lived boom in documenting the stuff that is automating lots of other folks out of work, but it's not a pretty picture, nope.

This is a really hot topic among the writers I talk to, and there is a strong streak of denialism there. "Sure, it can regurgitate things that people have already written, but talking to subject matter experts? Writing brand new stuff? Impossible."

You don't need a full-time writer to interpret some code, comments, and notes from a developer! ChatGPT can already do that pretty well right now! Might there be a human at the end of the process to edit/curate/question? Sure, but they'll have replaced ten other people.

If dev teams will be decimated in the historical sense by these technologies, allied tech roles will be decimated in the modern sense. Not gone, but pay will be reduced and competition will be vicious.

This uncertainty bothers me--in part, because, like any good American, I identify too much with my job, but also because I doubt that productivity gains will be distributed democratically.

Apparently, it's also not great for your health: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8685638/

quote:

Unemployment has been found to be associated with worsening mental health, while job insecurity may affect those who remain employed. Having a job where one has a higher risk of being laid off can cause stress and greater risk of anxiety and depression. Perceived job security and stressful working conditions are associated with the risk of new technologies displacing one’s job. Employees whose jobs face automation may be more likely to fear job displacement, and studies report associations between low job security and worsened health conditions for employees and their families, as well as fear of job displacement and reduced mental health. Norwegian data indicate a rise in the use of antidepressants and anxiety-reducing prescription drugs several months before a job loss occurs.

Insanite fucked around with this message at 16:20 on Mar 31, 2023

SaTaMaS
Apr 18, 2003
Another promising area is the possibility that ChatGPT can look at really old languages like COBOL and Fortran and not just improve the documentation but translate the code into modern languages using cleaner code.
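For a concrete sense of what such a translation means, here's a toy before/after: a COBOL-style loop paraphrased in a comment, with a hand-written Python equivalent (illustrative only; no claim that ChatGPT emits exactly this):

```python
# COBOL-ish original (paraphrased):
#     MOVE 0 TO WS-TOTAL.
#     PERFORM VARYING WS-I FROM 1 BY 1 UNTIL WS-I > 10
#         ADD WS-I TO WS-TOTAL
#     END-PERFORM.

def running_total(n: int) -> int:
    """The structured, 'cleaner code' equivalent of the PERFORM loop above."""
    return sum(range(1, n + 1))

print(running_total(10))  # 55
```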

Aramis
Sep 22, 2009



SaTaMaS posted:

Another promising area is the possibility that ChatGPT can look at really old languages like COBOL and Fortran and not just improve the documentation but translate the code into modern languages using cleaner code.

No. Nope. Nope nope nope, at least for COBOL.

The main reason that code is still in that archaic language is not because no one can read it, but because the risk-to-reward ratio of converting it to anything else is not worth it. It's the epitome of "if it ain't broke, don't fix it". It'll get replaced with modern code during a generational shift to a different infrastructure entirely and not a second before that.

Risk-intolerant systems are going to be the last bastion of human-written code, and this is notoriously one of the most risk-intolerant systems of all time.

Aramis fucked around with this message at 16:21 on Mar 31, 2023

StratGoatCom
Aug 6, 2019

Our security is guaranteed by being able to melt the eyeballs of any other forum's denizens at 15 minutes notice


Insanite posted:

I'm already seeing job postings for senior writers that mention using generative AI to augment productivity, and the tech appears good enough to do that--no need for a technical editor, nor junior writers to help with scut work. Machine translation already annihilated the technical translation labor market, and what's left there seems like it'll disappear almost completely.


I would avoid those people, because generative AI is poison for IP.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

SaTaMaS posted:

Another promising area is the possibility that ChatGPT can look at really old languages like COBOL and Fortran and not just improve the documentation but translate the code into modern languages using cleaner code.
I actually think that would be one of the less likely areas, although it could still help. Legacy COBOL systems are all byzantine unstructured code written mostly by people who are now dead. These systems don't have any real specs for how exactly they should work or what they should do, but they're usually vital and must keep doing whatever it is they're doing. Correctness of a replacement basically means that it should be identical in behavior to the old system, but there's no way of actually verifying that.

I think it's one area where ChatGPT would really be led astray. ChatGPT only understands text (including code) and textual contexts. Good code written in a modern structured programming language will usually have a pretty decent mapping between syntax and computational semantics. ChatGPT has no idea about any kind of computational semantics, but it is possible that there exists a faithful enough mapping,
code:
(program semantics) <- (syntax) -> (ChatGPT's internal representation) -> (generated syntax) -> (generated semantics)
such that the semantics of the generated code are faithful enough to the original semantics. This would work best within the same language, but would probably also work translating between, e.g., Python and Ruby.

COBOL is not modern or structured - the relationship between syntax and semantics, in CS terminology, is "hosed up." Because the code is unstructured, it's a massively complex ball of entropy with all sorts of non-local interactions. A piece of code might do very different things depending on current program state, etc. The program can only be understood as a whole, and only fully upon running it many, many times with different inputs - so it's just something that ChatGPT can't do.
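The closest practical stand-in for "verifying" that a translation is faithful is behavioral testing: run the legacy routine and the candidate side by side on lots of inputs and compare. A minimal sketch (both functions are made-up stand-ins, not real COBOL); note that sampling can only falsify equivalence, never prove it, which is exactly the problem described above:

```python
import random

def legacy_interest(balance_cents: int) -> int:
    # Stand-in for the old routine: truncating integer arithmetic.
    return balance_cents * 105 // 100

def translated_interest(balance_cents: int) -> int:
    # Candidate translation. A naive float rewrite like
    # int(balance_cents * 1.05) could silently diverge on some inputs.
    return (balance_cents * 105) // 100

def agree_on_samples(f, g, trials: int = 10_000) -> bool:
    """Compare f and g on random inputs; True means no divergence found."""
    rng = random.Random(0)
    return all(f(x) == g(x) for x in (rng.randrange(10**9) for _ in range(trials)))

print(agree_on_samples(legacy_interest, translated_interest))  # True
```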

cat botherer fucked around with this message at 17:28 on Mar 31, 2023

gurragadon
Jul 28, 2006

Insanite posted:

Basically, yeah.

There'll be a short-lived boom in documenting the stuff that is automating lots of other folks out of work, but it's not a pretty picture, nope.

This is a really hot topic among the writers I talk to, and there is a strong streak of denialism there. "Sure, it can regurgitate things that people have already written, but talking to subject matter experts? Writing brand new stuff? Impossible."

You don't need a full-time writer to interpret some code, comments, and notes from a developer! ChatGPT can already do that pretty well right now! Might there be a human at the end of the process to edit/curate/question? Sure, but they'll have replaced ten other people.

If dev teams will be decimated in the historical sense by these technologies, allied tech roles will be decimated in the modern sense. Not gone, but pay will be reduced and competition will be vicious.

This uncertainty bothers me--in part, because, like any good American, I identify too much with my job, but also because I doubt that productivity gains will be distributed democratically.

Apparently, it's also not great for your health: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8685638/

I see this as the automation of factories happening to the white-collar workforce. It was really bad for factory workers and it's going to be bad for knowledge workers.

The problem isn't necessarily with unemployment, though. The problem is that unemployed people aren't provided with money to keep them alive. I have a lot less fear about losing my job if I know that I won't be treated as if it's my fault, and that society is kind enough to take care of me. Another problem is one you pointed out: people put way too much of their identity into their jobs, and we are taught to do so.

Humans will need to learn to find meaning in themselves outside of what they do for a living, and for a lot of people, that is going to be very difficult. You can see the detrimental effects of clinging to your job as if it were your whole being; look at coal mining in West Virginia. If you build your identity on your job, that identity shatters when you lose it, and keeping a job is something the employee really can't completely control.

gurragadon fucked around with this message at 17:06 on Mar 31, 2023

Main Paineframe
Oct 27, 2010

StratGoatCom posted:

I would avoid those people, because generative AI is poison for IP.

I think you're exaggerating the Copyright Office's decision a bit.

I'd say the Zarya of the Dawn outcome is of little consequence to your average closed-source software company. Even if the actual code itself isn't copyrightable, the code isn't usually being made available in the first place. And, to quote the Zarya decision, even if AI-generated material itself is uncopyrightable, the "selection, coordination, and arrangement" of that material by humans is still copyrightable, which is a big part of software development. Moreover, even the uncopyrightable parts can still become copyrightable if sufficiently edited by humans.

When you say "poison", it makes me think of "viral" IP issues like the GPL that will spread and "infect" anything they're mixed with, but the Copyright Office was pretty clear that the uncopyrightable status of generated material is quite limited and doesn't spread like that.

Seyser Koze
Dec 15, 2013

Mucho Mucho
Nap Ghost

gurragadon posted:

I see this as the automation of factories happening to the white-collar workforce. It was really bad for factory workers and it's going to be bad for knowledge workers.

The problem isn't necessarily with unemployment, though. The problem is that unemployed people aren't provided with money to keep them alive. I have a lot less fear about losing my job if I know that I won't be treated as if it's my fault, and that society is kind enough to take care of me. Another problem is one you pointed out: people put way too much of their identity into their jobs, and we are taught to do so.

Humans will need to learn to find meaning in themselves outside of what they do for a living, and for a lot of people, that is going to be very difficult. You can see the detrimental effects of clinging to your job as if it were your whole being; look at coal mining in West Virginia. If you build your identity on your job, that identity shatters when you lose it, and keeping a job is something the employee really can't completely control.

So all we need is a society completely unlike the one we live in, run by people completely unlike the ones running it, and a ton of people losing their jobs will be no issue. Great.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Seyser Koze posted:

So all we need is a society completely unlike the one we live in, run by people completely unlike the ones running it, and a ton of people losing their jobs will be no issue. Great.
It's pretty incredible that labor-saving technologies hurt workers, rather than freeing them from rote tasks. One could say it is a contradiction, even.

StratGoatCom
Aug 6, 2019

Our security is guaranteed by being able to melt the eyeballs of any other forum's denizens at 15 minutes notice


Main Paineframe posted:

I think you're exaggerating the Copyright Office's decision a bit.

I'd say the Zarya of the Dawn outcome is of little consequence to your average closed-source software company. Even if the actual code itself isn't copyrightable, the code isn't usually being made available in the first place. And, to quote the Zarya decision, even if AI-generated material itself is uncopyrightable, the "selection, coordination, and arrangement" of that material by humans is still copyrightable, which is a big part of software development. Moreover, even the uncopyrightable parts can still become copyrightable if sufficiently edited by humans.

When you say "poison", it makes me think of "viral" IP issues like the GPL that will spread and "infect" anything they're mixed with, but the Copyright Office was pretty clear that the uncopyrightable status of generated material is quite limited and doesn't spread like that.

Nope, it is very long-standing doctrine that machine output cannot be copyrighted; attempts to brute-force the copyright system have been anticipated since the Orwellian book kaleidoscopes. Machine- or animal-generated work, versus stuff merely touched up with it, is not going to be allowed.

gurragadon
Jul 28, 2006

Seyser Koze posted:

So all we need is a society completely unlike the one we live in, run by people completely unlike the ones running it, and a ton of people losing their jobs will be no issue. Great.

I don't think it will be pretty, but it's not the first time society has changed, and if society can adapt it won't be the last time either. The difference I see is that AI technologies are making the transition way faster than humans are accustomed to, so whether we'll be able to manage is still up in the air to me.

Main Paineframe
Oct 27, 2010

StratGoatCom posted:

Nope, it is very long-standing doctrine that machine output cannot be copyrighted; attempts to brute-force the copyright system have been anticipated since the Orwellian book kaleidoscopes. Machine- or animal-generated work, versus stuff merely touched up with it, is not going to be allowed.

Machine output can certainly be copyrighted. For example, photographs are machine output. What matters in whether something is copyrightable is whether it's the direct result of an expression of human creativity. This isn't due to worries about "brute-forcing" or anything like that; it's a practical result of the fact that only humans are legally entitled to hold copyright. Since creative involvement with the work is necessary to claim initial copyright over it, a work without human creative involvement is simply uncopyrightable.

But we don't have to speak in these broad, vague terms, because there is a recent and specific Copyright Office ruling covering the use of generative AI output. They clearly state that while Midjourney output itself cannot be copyrighted, human arrangements of Midjourney output are copyrightable, and sufficient human editing would make it copyrightable.

quote:

The Office also agrees that the selection and arrangement of the images and text in the Work are protectable as a compilation. Copyright protects “the collection and assembling of preexisting materials or of data that are selected, coordinated, or arranged” in a sufficiently creative way. 17 U.S.C. § 101 (definition of “compilation”); see also COMPENDIUM (THIRD) § 312.1 (providing examples of copyrightable compilations). Ms. Kashtanova states that she “selected, refined, cropped, positioned, framed, and arranged” the images in the Work to create the story told within its pages. Kashtanova Letter at 13; see also id. at 4 (arguing that “Kashtanova’s selection, coordination, and arrangement of those images to reflect the story of Zarya should, at a minimum, support the copyrightability of the Work as a whole.”). Based on the representation that the selection and arrangement of the images in the Work was done entirely by Ms. Kashtanova, the Office concludes that it is the product of human authorship. Further, the Office finds that the compilation of these images and text throughout the Work contains sufficient creativity under Feist to be protected by copyright. Specifically, the Office finds the Work is the product of creative choices with respect to the selection of the images that make up the Work and the placement and arrangement of the images and text on each of the Work’s pages. Copyright therefore protects Ms. Kashtanova’s authorship of the overall selection, coordination, and arrangement of the text and visual elements that make up the Work.

quote:

The Office will register works that contain otherwise unprotectable material that has been edited, modified, or otherwise revised by a human author, but only if the new work contains a “sufficient amount of original authorship” to itself qualify for copyright protection. COMPENDIUM (THIRD) § 313.6(D). Ms. Kashtanova’s changes to this image fall short of this standard. Contra Eden Toys, Inc. v. Florelee Undergarment Co., 697 F.2d 27, 34–35 (2d Cir. 1982) (revised drawing of Paddington Bear qualified as a derivative work based on the changed proportions of the character’s hat, the elimination of individualized fingers and toes, and the overall smoothing of lines that gave the drawing a “different, cleaner ‘look’”)

quote:

To the extent that Ms. Kashtanova made substantive edits to an intermediate image generated by Midjourney, those edits could provide human authorship and would not be excluded from the new registration certificate.

Practically, what does all this mean? If you went through Midjourney's archive and took the original images that were used in Zarya of the Dawn, you could use them freely, they're public domain. However, you can't print off and sell your own bootleg Zarya of the Dawn comics, because the comic panels and comic pages are copyrighted. Although the individual images are unprotected, the way in which she assembled those images onto comic pages contains sufficient human creativity to qualify for copyright.

In other words, AI involvement doesn't wipe away human involvement and spread uncopyrightability through the entire finished work. In fact, it's exactly the opposite - human involvement wipes away AI involvement and removes uncopyrightability from the finished work.

StratGoatCom
Aug 6, 2019

Our security is guaranteed by being able to melt the eyeballs of any other forum's denizens at 15 minutes notice


Yes, but if you wiped away the human stuff - something that will likely be in the pipeline for both AI detection and further training of models - the assets themselves are free. That is a big problem for defense, and something no one with sense will touch.

Small White Dragon
Nov 23, 2007

No relation.

gurragadon posted:

I see this as the automation of factories happening to the white-collar workforce. It was really bad for factory workers and it's going to be bad for knowledge workers.

The question is, if a bunch of people are suddenly unemployed, will there be new areas for them to be employed in?

Owling Howl
Jul 17, 2019

StratGoatCom posted:

Yes, but if you wiped away the human stuff - something that will likely be in the pipeline for both AI detection and further training of models - the assets themselves are free. That is a big problem for defense, and something no one with sense will touch.

Depends what your motivation is. Obviously it's unacceptable for corporate media publishers but someone who is passionate about making a game/book/song/movie/image may be more interested in simply realizing a vision and making a name for themselves. With or without copyright people will make things.

Count Roland
Oct 6, 2013

Small White Dragon posted:

The question is, if a bunch of people are suddenly unemployed, will there be new areas for them to be employed in?

Each time some technology comes in and renders some types of work obsolete, there's always been new work created from it.

So, yes, in theory. Will these unemployed people have the skills and training to be able to do these new jobs? That's a bigger question. I think a computer programmer is going to be more flexible than say a coal miner or a loom operator but these things are hard to predict. The faster this happens, the harder it is for most people to adapt.

Small White Dragon
Nov 23, 2007

No relation.

Count Roland posted:

Each time some technology comes in and renders some types of work obsolete, there's always been new work created from it.

Right, but just because that's been true so far doesn't mean it will always be so.

Also, sometimes the number of jobs created are far less than the number replaced.

Blut
Sep 11, 2009

if someone is in the bottom 10%~ of a guillotine

gurragadon posted:

I see this as the automation of factories happening to the white-collar workforce. It was really bad for factory workers and it's going to be bad for knowledge workers.

The problem isn't necessarily with unemployment though. The problem is unemployed people aren't provided with money to keep the alive. I have a lot less fear about losing my job if I know that I won't be treated as if it is my fault, and society is kind enough to take care of me. Another problem is one you pointed out, people put way too much of their identity into their jobs and we are taught to do so.

Human's will need to learn to find meaning in themselves outside of what they do for a living, and for a lot of people, that is going to be very difficult. You can see the detrimental effects of clinging to your job as if it was your whole being, look at coal mining in West Virginia. If you build your identity on your job your identity is just shattered when you lose it and keeping a job is something the employee really can't completely control.

I'm mid-level in a big tech MNC, and anyone senior I've talked to about this shares that view. It's what I've come around to, too.

It's going to be very interesting (or horrifying, depending on outcome) to see how our societies react to a culling of middle-class jobs over the next decade or two. Our first-world governments largely ignored the working class being automated from unionized factory jobs into driving an Uber; will they do the same when it happens to tech workers/lawyers/etc.? Or will the middle class have more political pull to get things like UBI implemented?

I'd suspect parts of the world with more of a history of social democracy/social supports like Scandinavia will handle it a lot better than the more gently caress you, got mine places like the US. But we'll see I guess.

porfiria
Dec 10, 2008

by Modern Video Games
The thinking among economists, as I understand it, is that if technology reduces the labor input for each unit of production, the cost and therefore the unit price falls, which stimulates demand, which causes more people to get employed.

This makes some sense but I have doubts it holds in all cases, particularly when the amount of human labor becomes extremely small.
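That mechanism fits in a toy constant-elasticity model (all numbers are illustrative assumptions): whether total employment rises or falls depends on whether demand is elastic enough to offset the drop in labor per unit.

```python
def total_labor(labor_per_unit: float, elasticity: float) -> float:
    """Toy model: price tracks unit labor cost; demand Q = 100 * p^(-elasticity),
    with the baseline price and labor-per-unit normalized to 1.0."""
    price = labor_per_unit
    demand = 100.0 * price ** (-elasticity)
    return labor_per_unit * demand

# Baseline: 1.0 labor per unit -> 100 units of labor employed.
# Now technology halves the labor needed per unit:
print(total_labor(0.5, 1.5))  # elastic demand: ~141, employment rises
print(total_labor(0.5, 0.5))  # inelastic demand: ~71, employment falls
```

Which regime real demand is in, especially as the human share of labor gets very small, is exactly the open question.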

Bar Ran Dun
Jan 22, 2006
Probation
Can't post for 5 hours!
So Krugman has written this on AI, with a point that boils down to: well, we will see its impacts a decade from now.

https://www.nytimes.com/2023/03/31/opinion/ai-chatgpt-jobs-economy.html?smid=nytcore-ios-share&referringSource=articleShare

Here he could have gone a more interesting place. So I’m going to ask the question he should have.

If we look to the foundations of cybernetics, we find that tools always have two parts: the tool itself, and the suite of ideas that allow for the full use of the tool's potential. Right now a lot of folks are rather enchanted with the tool itself (the language and image models).

For the language models I don’t think the suite of ideas for full use exists yet.

What do y’all think it looks like?

Bar Ran Dun fucked around with this message at 22:21 on Apr 1, 2023

Cephas
May 11, 2009

Humanity's real enemy is me!
Hya hya foowah!
How much of an impact, collectively, does it make for users to interact with these programs? If random users input a million prompts to Dall-E, does that have some kind of impact on its function (or on the way the owners of such programs can analyze input trends)? Basically, does interacting with an AI model out of curiosity empower and further legitimize it?

I'm curious if using art AI might be beneficial for brainstorming ideas for drawing. If an AI image generator is being used solely for reference, much the same way one would browse Pinterest or Google Images for reference, I don't know if that would necessarily be an ethical issue in and of itself. But I recognize and am very wary of art AI specifically because it is trained on people's art without their permission; I think that really poisons the well. On the other hand, gathering reference images from the internet is fine, and the ability to gather reference images is a great benefit for the artistic community as a whole. So using an AI image generator solely to brainstorm may not be a problem. It would only be a problem if interacting with AI generation programs at all somehow empowers and further entrenches the owners of said programs to continue to do unethical actions.

Does any of that make sense?? I would love to hear people's perspectives.

Cephas fucked around with this message at 16:07 on Apr 1, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
Depends on the model and interface. Some say they might analyse your inputs; others claim they won't.
There are many models you can now download and run yourself locally if you want complete control. They won't be the state-of-the-art ones, though.

Adobe has created an 'art' generator trained only on public-domain and stock images it owns the rights to, if how the models were trained is your main issue.

Count Roland
Oct 6, 2013

A side effect of Dall-e and similar programs is that there's a lot of AI art being generated, which shows up on the internet, which is trawled for data, which is then presumably fed back into AI models. I wonder if AI generated content is somehow filtered out to prevent feedback loops.

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

Bar Ran Dun posted:

So Krugman has written this on AI, with a point that boils down to: we'll see its impacts a decade from now.

https://www.nytimes.com/2023/03/31/opinion/ai-chatgpt-jobs-economy.html?smid=nytcore-ios-share&referringSource=articleShare

Here he could have gone a more interesting place. So I’m going to ask the question he should have.

If we look to the foundations of cybernetics, we find that tools always have two parts: the tool itself and the suite of ideas that allow for the full use of the potential of the tool. Right now a lot of folks are rather enchanted with the tool itself (the language and image models).

For the language models I don’t think the suite of ideas for full use exists yet.

What do y’all think it looks like?

I think we're about to see some very targeted advertising. Based on my very simple and probably flawed understanding of what cat botherer said about being able to look at a user review and predict a rating from that, it seems that turning text into some useful number is a potentially big use.

In that case, am I correct in understanding that someone should be able to train an AI to read tweets in real time, assign them a score on a scale of, say, conservativeness to liberalness, and use that to target campaign ads or alt-right pipeline stuff? (Unless the advertising industry is already machine-reading tweets to find targets.) With a better and better AI, you could, say, find people who seem to have transgender children but are showing concern, and blast them with memes about children getting mutilated. It's a less risky use case since the failure tolerance is pretty high; worst case, you advertise to an unintended audience.
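For what it's worth, the "text in, useful number out" part doesn't even need a big model to illustrate. Here's a toy, purely hypothetical sketch; the word lists, weights, and ad buckets are all made up for the example, and a real system would use a trained classifier rather than a lookup table:

```python
# Toy illustration of scoring text on a made-up political axis,
# from liberal (-1.0) to conservative (+1.0), via keyword weighting.
# All words and weights here are invented for illustration only.

WEIGHTS = {
    "tradition": 0.5, "borders": 0.6, "taxes": 0.3,
    "equality": -0.5, "climate": -0.4, "union": -0.3,
}

def leaning_score(text: str) -> float:
    """Average the weights of known words; 0.0 if none match."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [WEIGHTS[w] for w in words if w in WEIGHTS]
    if not hits:
        return 0.0
    score = sum(hits) / len(hits)
    return max(-1.0, min(1.0, score))

def target_ad(text: str) -> str:
    """Pick an ad bucket from the score, the way a campaign might."""
    s = leaning_score(text)
    if s > 0.2:
        return "conservative-ads"
    if s < -0.2:
        return "liberal-ads"
    return "generic-ads"
```

So `target_ad("Secure the borders, cut taxes!")` lands in the conservative bucket. The failure-tolerance point above is exactly why even a crude scorer like this is usable for ad targeting: a misclassification just wastes one ad impression.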

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

Count Roland posted:

A side effect of Dall-e and similar programs is that there's a lot of AI art being generated, which shows up on the internet, which is trawled for data, which is then presumably fed back into AI models. I wonder if AI generated content is somehow filtered out to prevent feedback loops.

We've already seen this happening in Bing. People asked it for records from previous chat logs, and it was able to provide some; at first people were freaked out because they thought it meant the logs were actually being stored. Eventually they realized Bing had just found a chat log that someone uploaded to Reddit or wherever, and was getting details from that.

I do wonder how this is going to affect it, as more and more examples of "this is what a Bing user session looks like" get uploaded online. Especially since atypical ones are more likely to be uploaded.

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

Count Roland posted:

A side effect of Dall-e and similar programs is that there's a lot of AI art being generated, which shows up on the internet, which is trawled for data, which is then presumably fed back into AI models. I wonder if AI generated content is somehow filtered out to prevent feedback loops.

AI being used to train AI is not the problem that people think it is. In fact, using AI to generate more training data is sometimes actually desirable, because you can better control the input. Midjourney, for example, uses RLHF (Reinforcement Learning from Human Feedback) to improve its model, and Stable Diffusion is going to release a model using the same technique. The controls used to gather the original dataset will work fine with a bunch of AI data, because even before AI there was a lot of really bad data out there. (You can check the LAION database, search for a term, and see there is a lot of unrelated garbage.)

As for the full potential of AI, I see a future where anybody can create their own TV show, game, or movie if they are willing to put in the effort, without working themselves to death or relying on a team. I don't mean somebody just types in a prompt and automatically generates one, but that they focus on their strengths and use AI as an assistant to generate the other stuff. Then there is using AI with robots; I could see a future where people do have robot housekeepers. Then there are the medical advantages, with earlier detection of symptoms and hopefully more accurate diagnoses.

My biggest concern is that getting the hardware/training data required for AI will require a large organization, either government or corporate. I don't want a future where you have to pay a subscription for everything, corporations or governments have firm control over what is acceptable or not, and it's very easy for them to know everything about you. I have hope that the Open Source community won't let that scenario happen. It's been pretty good at keeping up with technological advancements, even though higher-quality LLMs are a bit hard to run on consumer hardware at the moment.

-edit VVVV Local Stable Diffusion doesn't put a watermark on the images for most GUIs, and it's fairly simple to remove the watermark.

IShallRiseAgain fucked around with this message at 19:11 on Apr 1, 2023

Main Paineframe
Oct 27, 2010

Count Roland posted:

A side effect of Dall-e and similar programs is that there's a lot of AI art being generated, which shows up on the internet, which is trawled for data, which is then presumably fed back into AI models. I wonder if AI generated content is somehow filtered out to prevent feedback loops.

The major algorithms put an invisible digital watermark in the images. It's fairly simple to check for that watermark and remove anything with that watermark from the training data. It's not perfectly reliable at the level of "mass-scraping random poo poo off the web", since modifying the image (such as resizing or cropping it) may damage the watermark, but it should at least substantially reduce the amount of AI-generated media in a training set.
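To make the filtering step concrete, here's a toy sketch; this is not how any real generator's watermark actually works (real schemes, like Stable Diffusion's invisible watermark, operate in the frequency domain precisely so resizing doesn't destroy them), but the pipeline shape is the same: the generator embeds a tag, and the scraper drops anything that decodes to it. The `AIGEN` tag and least-significant-bit scheme here are invented for illustration:

```python
# Toy LSB watermark, purely for illustration. A real generator would use
# a more robust frequency-domain scheme; the filtering pipeline is the same.

TAG = b"AIGEN"  # hypothetical marker the generator embeds

def embed(pixels: list[int]) -> list[int]:
    """Write TAG's bits into the low bit of the first len(TAG)*8 pixels."""
    bits = [(byte >> i) & 1 for byte in TAG for i in range(8)]
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the low bit, then set it
    return out

def is_watermarked(pixels: list[int]) -> bool:
    """Decode the low bits of the leading pixels and compare to TAG."""
    n = len(TAG) * 8
    if len(pixels) < n:
        return False
    bits = [p & 1 for p in pixels[:n]]
    decoded = bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(len(TAG))
    )
    return decoded == TAG

def filter_training_set(images: list[list[int]]) -> list[list[int]]:
    """Keep only images that do not carry the watermark."""
    return [img for img in images if not is_watermarked(img)]
```

This also shows why the scheme is fragile in the way described above: any operation that touches those low bits (resizing, re-encoding, cropping off the leading pixels) silently breaks the check, and the image slips back into the training set.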

Bar Ran Dun
Jan 22, 2006
Probation
Can't post for 5 hours!

XboxPants posted:

I think we're about to see some very targeted advertising. Based on my very simple and probably flawed understanding of what cat botherer said about being able to look at a user review and predict a rating from that, it seems that turning text into some useful number is a potentially big use.

I think I’m already seeing that, I know I’m seeing AI generated images in ads.

That still strikes me as a substitution, not a new mode. Here's what I mean by that. Take electricity: on the industrial side, at the beginning, you start to get electric motors replacing belt-driven machinery. So you had a steam engine turning a wheel, with complicated systems of belts driving the factory machinery. That becomes just individual machines driven by a motor.

For a long time, like decades, that was the extent of what changed. Then eventually it was realized that the positions of the machines in the factory could be reordered. That's the idea suite; that's the big deal. That's where things like the assembly line pop up.

So the targeted ads seem like the motors replacing belt drives. Advertisers could already micro-target ads before this, especially Facebook.

What's the idea where this new thing utterly changes the system?

StratGoatCom
Aug 6, 2019

Our security is guaranteed by being able to melt the eyeballs of any other forum's denizens at 15 minutes notice


The correct response is to flatly ban it, because epistemic pollution is an existential risk.

https://twitter.com/stealcase/status/1642019617609506816

Typical. This poo poo won't work long term any longer than crypto did, but the damage is spectacular.

Bar Ran Dun
Jan 22, 2006
Probation
Can't post for 5 hours!

StratGoatCom posted:

The correct response is to flatly ban it, because epistemic pollution is an existential risk.

I agree epistemic pollution is an existential risk, but we already had that without AI, as long as social media, maybe media at all, exists.


StratGoatCom
Aug 6, 2019

Our security is guaranteed by being able to melt the eyeballs of any other forum's denizens at 15 minutes notice


Bar Ran Dun posted:

I agree epistemic pollution is an existential risk, but we already had that without AI, as long as social media, maybe media at all, exists.

Self-targeting AI is the methane to the current media's CO2.
