|
Liquid Communism posted:AI can't create, it can only interpolate from its training set. Is there a practical, objective way to put this to test? What is the simplest creative task that humans can perform, but AI can not?
|
# ¿ May 10, 2023 09:01 |
|
Imaginary Friend posted:Use the prompt "create an original house" or any other prompt without any creative details on ten different AI models and then ask ten different artists to ask the same question to ten artists they know about. You left out the most important part: what happens next? Does a panel of human judges rate the works for perceived originality? And assuming they do rate the human works higher, what exactly would that prove? The question was not whether AI can be as creative as humans, but whether it can be creative at all.
|
# ¿ May 10, 2023 16:38 |
|
Liquid Communism posted:The key is the difference between interpolation and extrapolation. I'm only familiar with interpolation and extrapolation in a mathematical context, involving things like numbers or geometric objects, and I'm struggling to understand how these terms apply to language models. If I ask GPT-4 to come up with a new word that doesn't exist in its training data, in what sense is that interpolation rather than extrapolation? Similarly, I can ask it to produce a natural number larger than any that appears in the training data (which is finite). You say that the training data imposes limits on the output of the model. I would like to know how these limits manifest in practice. Is there a simple task that an AI fails because of these limits but a human doesn't?
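For concreteness, here is the mathematical sense of those terms that I'm familiar with, as a toy sketch (the numbers are illustrative, chosen by me):

```python
# Toy sketch of interpolation vs. extrapolation in the mathematical sense:
# sample y = 3x + 1 at the ends of [0, 10], then estimate y at points
# inside and outside that sampled range.

samples = {0.0: 1.0, 10.0: 31.0}  # two points on the line y = 3x + 1

def linear_estimate(x):
    """Estimate y at x by extending the line through the two sample points."""
    (x0, y0), (x1, y1) = sorted(samples.items())
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (x - x0)

interpolated = linear_estimate(5.0)    # 5.0 lies inside [0, 10]
extrapolated = linear_estimate(100.0)  # 100.0 lies far outside it

print(interpolated)   # 16.0
print(extrapolated)   # 301.0
```

Inside the sampled range it's interpolation; outside it, extrapolation. What the analogous "range" would even be for a language model's training text is exactly what I'm unclear on.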
|
# ¿ May 11, 2023 05:48 |
|
Liquid Communism posted:If you asked it to come up with a word not in its training data, how would you vet it? It could certainly generate semi-random nonsense and tell you it's a new word, but it couldn't make like Tolkien and invent a language from first principles. If I had access to the training data, I could simply search it to see whether the word exists. If I don't have access to the training data, I could give the AI additional requirements for the word, like being at least 100 characters long and including conjugations from at least 20 different languages, which makes it overwhelmingly improbable that the word appears in the training data. The Tolkien example is hard to evaluate mainly because it is so complex. There is a risk that the AI would fail the task simply because of its complexity (no current model can output a fully consistent language textbook), rather than because the AI lacks creativity. That's why I keep asking for the simplest practical task that can be put to the test. Then there's also the question of to what extent Tolkien's languages are completely original creations, and how much they are just "interpolation" from the existing languages and linguistics he knew.
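The vetting step, assuming the training corpus were available as plain-text files (a hypothetical setup; real training corpora aren't actually distributed this way), would just be a substring search:

```python
# Minimal sketch of the vetting step: check whether a candidate word
# occurs anywhere in a directory of plain-text corpus files.
from pathlib import Path

def word_in_corpus(word, corpus_dir):
    """Return True if `word` occurs (case-insensitively) in any .txt file."""
    target = word.lower()
    for path in Path(corpus_dir).glob("*.txt"):
        if target in path.read_text(encoding="utf-8", errors="ignore").lower():
            return True
    return False
```

If this returns False for the model's candidate word, the word genuinely isn't in the corpus, whatever we then decide to call that act of producing it.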
|
# ¿ May 11, 2023 07:25 |
|
roomtone posted:I watched this video last night: https://www.youtube.com/watch?v=tjSxFAGP9Ss Ultimately, none of that will make any difference. What artists really care about is whether they can still make a career out of what they love doing over the next 5-10 years. It will suck just as much if their jobs are taken by ethically trained AI models. Regulating AI now will not make the long-term effects go away; it will just drive the companies to other countries.
|
# ¿ May 13, 2023 15:59 |
|
A sufficiently intelligent AGI should be able to make its own case and convince any reasonable human being that it's at least as intelligent as them.
|
# ¿ Jul 14, 2023 11:11 |
|
Many would consider robots destroying social media a net positive for humanity.
|
# ¿ Jul 23, 2023 03:11 |
|
Uncanniness is part of the charm for many people. A subject who doesn't know how to hold a spoon correctly makes the image more interesting, not less.
|
# ¿ Oct 1, 2023 20:58 |
|
It seems to me that whenever the discussion turns to sentience, consciousness, awareness, experience, etc., it devolves into arguments about how these concepts are defined in the first place. What I would like to know is: what are the practical implications of any of this? Regardless of which definition you use, how does a sentient intelligence differ from a non-sentient one? If you are put in a room with a sentient human and a non-sentient human, how do you tell them apart? Do you just talk to them, or do you need to probe their brains? What if their brain activity is identical and the non-sentient one mistakenly thinks it is sentient? Does anything change if the humans in this scenario are swapped for a sentient and a non-sentient AGI? If you can't tell the difference and still deny someone's or something's sentience, is that because you chose a definition that was biased against them from the beginning?
|
# ¿ Oct 5, 2023 03:03 |
|
I want to see all OpenAI employees quit and join Altman's new company just because of how hilariously catastrophic it would be for Microsoft.
|
# ¿ Nov 19, 2023 07:07 |
|
Looks like Sam is not coming back after all. https://twitter.com/_akhaliq/status/1726467905527611441
|
# ¿ Nov 20, 2023 06:29 |
|
There's a huge flurry of OpenAI employees posting "OpenAI is nothing without its people" on Twitter. I don't know if it's just a call for the OAI board to resign or a sign that they're willing to follow Sam over to Microsoft.
|
# ¿ Nov 20, 2023 11:24 |
|
https://twitter.com/bene25_/status/1762631362597519859
|
# ¿ Feb 28, 2024 11:31 |