BoldFace
Feb 28, 2011

Liquid Communism posted:

AI can't create, it can only interpolate from its training set.

Is there a practical, objective way to put this to the test? What is the simplest creative task that humans can perform but AI cannot?

BoldFace
Feb 28, 2011

Imaginary Friend posted:

Use the prompt "create an original house", or any other prompt without any creative details, on ten different AI models, and then ask ten different artists to put the same question to ten artists they know.

You left out the most important part. What happens next? A panel of human judges rates the works in terms of perceived originality? Assuming that they do rate the human works higher, what exactly does that prove? The question was not whether AI can be as creative as humans, but whether it can be creative at all.

BoldFace
Feb 28, 2011

Liquid Communism posted:

The key is the difference between interpolation and extrapolation.

An AI can make a guess at what the next point in a pattern is, based on all the other, similar patterns it has been trained on, but it is limited by the outer bounds of that training data. It will also be confidently wrong, as it is incapable of second-guessing its own work.

A human can take a series of data points and make inferences based on data not actually present.

I'm only familiar with interpolation and extrapolation in a mathematical context involving things like numbers or geometric objects. I'm struggling to understand how you use these terms with language models. If I ask GPT-4 to come up with a new word that doesn't exist in its training data, in what sense is this interpolation rather than extrapolation? Similarly, I can ask it to produce a natural number larger than any present in the training data (which is finite). You say that the training data imposes limits on the output of the model; I would like to know how these limits manifest in practice. Is there a simplest task that an AI fails at because of these limits but a human doesn't?
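
To make concrete the only sense of these terms I know, here's a minimal Python sketch of the numeric case (a toy polynomial fit, not a claim about how language models actually work internally):

    # Toy example: fit a curve to samples, then query inside and
    # outside the sampled range.
    import numpy as np

    # "Training data": 20 samples of sin(x) on [0, 2*pi].
    xs = np.linspace(0, 2 * np.pi, 20)
    ys = np.sin(xs)

    # Fit a degree-7 polynomial to the samples.
    model = np.poly1d(np.polyfit(xs, ys, deg=7))

    # Interpolation: a query inside the sampled range is accurate.
    print(model(np.pi / 3), np.sin(np.pi / 3))   # close agreement

    # Extrapolation: outside the range the fit diverges badly,
    # because nothing constrained it beyond the bounds of the data.
    print(model(4 * np.pi), np.sin(4 * np.pi))   # wildly off

In this setting the distinction is crisp; what I don't see is what the analogous "bounds" are supposed to be for a language model.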

BoldFace
Feb 28, 2011

Liquid Communism posted:

If you asked it to come up with a word not in its training data, how would you vet it? It could certainly generate semi-random nonsense and tell you it's a new word, but it couldn't make like Tolkien and invent a language from first principles.

If I had access to the training data, I could simply search through it to see whether the word exists. If I don't have access to the training data, I could give the AI additional requirements for the word, like having to be at least 100 characters long and include conjugations from at least 20 different languages, which makes it overwhelmingly improbable that the word appears in the training data. The Tolkien example is hard to evaluate mainly because it is really complex. There is a risk that the AI would fail the task simply due to its complexity (no current model can output a fully consistent language textbook) rather than because the AI lacks creativity. That's why I keep asking for the simplest practical task that can be put to the test. Then there's the question of to what extent Tolkien's languages are completely original creations, and how much they are just "interpolation" from the existing languages and linguistic material he knew.
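
The first case is trivially mechanical. A minimal sketch, assuming the training data were available as plain-text files under some directory (the path and the candidate word below are made up):

    from pathlib import Path

    def word_in_corpus(word: str, corpus_dir: str) -> bool:
        """Return True if `word` occurs anywhere in the corpus files."""
        for path in Path(corpus_dir).rglob("*.txt"):
            if word in path.read_text(encoding="utf-8", errors="ignore"):
                return True
        return False

    # Vet a candidate "new" word against the corpus (hypothetical path).
    print(word_in_corpus("qloxibrantemurivask", "/data/training_corpus"))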

BoldFace
Feb 28, 2011

roomtone posted:

I watched this video last night: https://www.youtube.com/watch?v=tjSxFAGP9Ss

It's a 47-minute takedown of all the defenses currently being made of the AI art models, but I think it's all pretty solid and clear-headed stuff rather than coming purely from being pissed off, although the guy is very pissed off. I think I agree with him.

On the legal side of it, something I didn't know was that there is a model by the Stable Diffusion people called Dance Diffusion which has been trained only on non-copyrighted material, unlike the image generators, which have just hoovered up everything indiscriminately. They did this because the music industry is litigious and would come after them, unlike the visual arts world. This reveals to me how exploitative the image generators actually are.

It seems like the reasonable thing to do is to make these datasets opt-in rather than opt-out or no choice at all. I think that should apply across the board for any kind of dataset involving intellectual property. That way, you slow down this wholesale replacement of meaningful careers with AI generation, respect the rights of creators, and maybe even create some income for artists, either by paying them to allow their work to be included in a dataset or by giving them royalties when it is used in the generation of an image (though I don't know whether that can even be determined; it's just one aspect of this).

Ultimately, none of that will make any difference. At the end of the day, what artists really care about is whether they can still make a career out of what they love doing in the next 5-10 years. It will suck just as much if their jobs are taken by ethical AI models. Regulating AI now will not make the long-term effects go away; it will just drive the companies to other countries.

BoldFace
Feb 28, 2011
A sufficiently intelligent AGI should be able to make its own case and convince any reasonable human being that it's at least as intelligent as them.

BoldFace
Feb 28, 2011
Many would consider robots destroying social media a net positive for humanity.

BoldFace
Feb 28, 2011
Uncanniness is part of the charm for many people. A subject who doesn't know how to hold a spoon correctly makes the image more interesting, not less.

BoldFace
Feb 28, 2011
It seems to me that whenever the discussion turns to sentience, consciousness, awareness, experience, etc., it devolves into arguments about how these concepts are defined in the first place. What I would like to know is: what are the practical implications of any of this? Whichever definition you use, how does a sentient intelligence differ from a non-sentient one?

If you are put in a room with a sentient human and a non-sentient human, how do you tell them apart? Do you just talk to them, or do you need to probe their brains? What if their brain activity is identical and the non-sentient one mistakenly thinks it is sentient? Does anything change if the humans in this scenario are swapped for a sentient and a non-sentient AGI? If you can't tell the difference and still deny someone's or something's sentience, is it because you chose a definition that was biased against them from the beginning?

BoldFace
Feb 28, 2011
I want to see all the OpenAI employees quit and join Altman's new company, just because of how hilariously catastrophic it would be for Microsoft.

BoldFace
Feb 28, 2011
Looks like Sam is not coming back after all.

https://twitter.com/_akhaliq/status/1726467905527611441

BoldFace
Feb 28, 2011
There's a huge flurry of OpenAI employees posting "OpenAI is nothing without its people" on Twitter. I don't know if it's just a call for the OAI board to resign or a sign that they're willing to follow Sam over to Microsoft.

BoldFace
Feb 28, 2011
https://twitter.com/bene25_/status/1762631362597519859
