XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

PT6A posted:

I think it's quite stunning how much an AI merely passing the bare minimum requirements of a given vocation is being lauded. Like, loving Rudy Giuliani passed the bar! It's not really that impressive, and it doesn't mean poo poo. The basic licensure requirements of any profession are usually not that difficult, and they don't represent expert practice in that profession. What's happening here is people looking at this thing and saying "it knows poo poo I don't, it must be an expert!!!" when that's really not the case.

The reason it impressed me wasn't that the result itself was impressive; it was the rate of improvement. GPT-3.5 scored in the bottom 10% of bar exam takers, and then just a few months later GPT-4 came out and scored around the 90th percentile. That's very rapid improvement. It doesn't mean GPT-4 is capable of practicing law or anything, but it is shockingly fast progress.

It feeds into a larger pattern: a new version of one of these models gets released, it seems impressive at first, but people quickly find weaknesses, things it can't do. Those weaknesses get held up as evidence of how far modern AI is from being useful in any meaningful way. And then, very rapidly, a new version comes out that can do the exact thing the old one couldn't.

That makes it difficult for me to make any firm guesses about what the state of AI is going to look like 5, 10, or 20 years from now, which is exciting and also concerning. We want to prepare for which fields of employment are going to be displaced, and when, and that's turning out to be exceedingly difficult to predict.

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

Bar Ran Dun posted:

So Krugman has weighed in on AI, with a point that boils down to: well, we'll see its impacts a decade from now.

https://www.nytimes.com/2023/03/31/opinion/ai-chatgpt-jobs-economy.html?smid=nytcore-ios-share&referringSource=articleShare

Here he could have gone a more interesting place. So I’m going to ask the question he should have.

If we look to the foundations of cybernetics, we find that tools always have two parts: the tool itself, and the suite of ideas that allow the tool's full potential to be used. Right now a lot of folks are rather enchanted with the tool itself (the language and image models).

For the language models I don’t think the suite of ideas for full use exists yet.

What do y’all think it looks like?

I think we're about to see some very targeted advertising. Based on my very simple and probably flawed understanding of what cat botherer said about looking at a user review and predicting a rating from the text alone, it seems like turning text into some useful number is potentially a big use.

In that case, am I correct in understanding that someone could train an AI to read tweets in real time, assign each one a score on a scale from, say, conservative to liberal, and use that to target campaign ads or alt-right pipeline stuff? (Unless the advertising industry is already machine-reading tweets to find targets.) And with a better and better AI you could, say, find people who seem to have transgender children but are showing concern, and blast them with memes about children getting mutilated. It's a low-risk use case since the failure tolerance is pretty high; worst case, you advertise to an unintended audience.
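For anyone curious what the "text to number" thing looks like in practice, here's a rough sketch in Python with scikit-learn. To be clear, the tweets, the labels, and the whole four-example training set are invented for illustration; a real system would need a large labeled corpus and actual evaluation. This is just the shape of it:

```python
# Minimal sketch of "turn text into a useful number", assuming a tiny
# made-up training set. Not a real targeting system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 0 = liberal-coded, 1 = conservative-coded.
train_texts = [
    "we need universal healthcare now",
    "protect the second amendment",
    "tax the billionaires",
    "secure the border first",
]
train_labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Score an unseen tweet on a 0.0-1.0 scale: the predicted probability of
# class 1 is the "conservativeness" number an ad targeter could threshold on.
score = model.predict_proba(["cut government spending"])[0][1]
print(f"conservativeness score: {score:.2f}")
```

Once every tweet is reduced to a number like that, the targeting part is just thresholding, which is why it seems like such a plausible use.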

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

Count Roland posted:

A side effect of Dall-e and similar programs is that there's a lot of AI art being generated, which shows up on the internet, which is trawled for data, which is then presumably fed back into AI models. I wonder if AI generated content is somehow filtered out to prevent feedback loops.

We've already seen this happening with Bing. People asked it to produce records of previous chat sessions, and it was able to provide some, and at first people freaked out because they thought it meant the logs were actually being stored. Eventually they realized Bing had just found a chat log that someone had uploaded to reddit or wherever, and was pulling the details from that.

I do wonder how this is going to affect it as more and more examples of "this is what a Bing user session looks like" get uploaded online, especially since atypical ones are more likely to be uploaded.
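If anyone's wondering how you'd even try to filter that stuff out, here's a toy sketch of where a filter would sit in a crawl pipeline. The detector below is completely hypothetical (real AI-text detectors exist but are notoriously unreliable); this just shows the mechanism people mean when they talk about keeping model output out of the next training set:

```python
# Toy sketch of filtering suspected AI-generated text out of a training
# corpus. The detector is a made-up placeholder heuristic, not a real one.
def looks_ai_generated(doc: str) -> float:
    """Hypothetical detector: returns a 0.0-1.0 'probably AI' score."""
    telltales = ("as an ai language model", "i'm sorry, but i cannot")
    return 1.0 if any(t in doc.lower() for t in telltales) else 0.0

def filter_corpus(docs: list[str], threshold: float = 0.5) -> list[str]:
    """Drop documents the detector flags, before they reach training."""
    return [d for d in docs if looks_ai_generated(d) < threshold]

crawl = [
    "a normal blog post about gardening",
    "As an AI language model, I cannot provide that information.",
]
print(filter_corpus(crawl))  # keeps only the first document
```

The hard part, obviously, is that nothing this simple actually catches most AI output, which is why the feedback-loop question is a real one.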

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

StratGoatCom posted:

For the hundredth time, this isn't. This is merely using already existing rules. Allowing it to be otherwise will in fact have the effect you fear, because it makes literally anything free real estate for billionaire bandits. Indeed, the point is laundering this behavior, much as crypto was laundering for securities bs.

So? It's no different than throwing dice, and we don't allow copyright of stuff from that either. Procedures are not copyrightable.

You sure about that? I think if someone uses a noise tool to generate a 3D landscape, they can copyright the result, and that's just fancy dice. Or am I wrong about that?

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

SCheeseman posted:

Output not being direct copies of the original works is a compensating factor, one not shared by Google Books, which copied verbatim.

Ugh this law stuff is boring. AI generators are going to be a hydra of problems regardless of whether the data that's thrown into them is licensed properly or not.

Yeah, that doesn't even seem like the big issue to me. Let's say I'm a small artist who draws commissions of people's original DnD or comic book or anime characters, and I'm worried about how AI-generated art is going to hurt my business, worried my customers will use an AI model instead of patronizing me. So we decide to treat it as copyright infringement if a model is trained on art it doesn't hold the copyright for. That will at least keep my work from being used as part of the model.

But, that won't protect me.

Disney, as has been noted, has an incredible library of art, including every Marvel comic going back 70+ years. They may not have my works to train on, but they'll just train a model on the art they do have access to.

So, as the artist who was doing those commissions, I'm just as hosed. My old customers just get a license to use Disney's AI instead of using Midjourney. StratGoatCom's suggestion hasn't helped me at all.

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.

BrainDance posted:

Just saying "there's a difference" doesn't answer it. That's a huge copout non-answer. As far as we can tell the human brain does exactly that: its lived experience, the context of its art, is "training data" that's used to generate what it does.

I think there's a lot of really good science showing that the human brain is basically little more than a huge prediction engine. That's how it works. I recently saw a TED talk by a neuroscientist and was struck by how much of it is relevant to this debate. I know, it's a TED talk, ew, pop science, but if you can't understand why some people don't see an obvious difference between how a human mind works and how a computer works, I'd strongly suggest checking it out.
https://www.youtube.com/watch?v=0gks6ceq4eQ

Barrett is a neuroscientist, and her assertion here is that emotions, something we take as one of the most fundamental parts of the human experience, are not hardwired into the human brain; they are guesses.

quote:

Using past experience, your brain predicts and constructs your experience of the world.

Human brains can't come up with "new" creations completely disconnected from prior experience, either. Watch that video and tell me you could see the image in the blobs before you were given the "training data" of the real image.

The point of all this being: as far as I can tell, AIs may be prediction engines, but so are humans. So that alone doesn't disqualify them from creating new art, especially when they're used by a human who can ask for "an image of 32 cans of Campbell's soup in a commercial art style".
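To make "prediction engine" concrete, here's a toy one in Python: a bigram model that only ever guesses the next word from what followed that word in its "experience". It's nothing like the scale of an LLM, let alone a brain, but the underlying move, predicting what comes next from past experience, is the same one:

```python
# Toy "prediction engine": a bigram model over a tiny made-up corpus.
# All it can do is predict the next word from what it has seen before.
import random
from collections import defaultdict

corpus = "the brain predicts the world the brain constructs the world".split()

# Record, for each word, every word that has ever followed it.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def predict_next(word: str) -> str:
    # With no prior experience of this word, there's nothing to predict from.
    return random.choice(following[word]) if following[word] else "<unknown>"

print(predict_next("brain"))  # 'predicts' or 'constructs', per past experience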

Actually, that's a good question. If Warhol had never been born, and the whole pop art movement had never happened, and someone instead just used "an image of 32 cans of Campbell's soup in a commercial art style" as a Midjourney prompt... and it created an image similar to Warhol's soup cans... and they hung it in a gallery... would it be a less meaningful piece of art because a computer produced the individual pixels rather than a hand painting them?

XboxPants
Jan 30, 2006

Steven doesn't want me watching him sleep anymore.
Google I/O keynote is live atm, and as you can imagine there's a poo poo ton of AI stuff:
https://io.google/2023/?utm_source=google-hpp&utm_medium=embedded_marketing&utm_campaign=hpp_watch_live&utm_content=

edit: There's so much here that's gonna be so helpful to ADHD people like me with executive dysfunction. Tools like Sidekick do exactly the kind of stuff that's most difficult for me, like sorting through a group chat for the relevant information when planning a trip.
