|
It probably knows that images similar to this are usually labeled “chair” on the internet. Test it on a 3D model of a chair or a shop fabrication drawing of one.
|
# ? Jan 28, 2024 03:31 |
|
|
TGLT posted:Yes but it does not know that a chair is for sitting, and if you rotate that chair it's not likely to identify it correctly - and god forbid you show it a completely new chair because it'll have to work all over again. It does not understand the qualities of a chair. If you train it on chairs that have four legs, what happens when it gets hit with a bauhaus chair or a stool? It's a blind association between pixels and a meaningless-to-it string value. These models aren't terrible at coming to correct conclusions with new information. The AI model that my robot vacuum uses to scan a room and identify objects gets things right most of the time, despite it being unlikely that it was trained on any of my furniture specifically, and despite the unique (very low) angle it photographs its environment from. Can they gently caress up? Of course, but so can humans. I'm not sure if AI models could be considered intelligent, but that's mostly because intelligence is extremely hard to define and I'm not sure where the line is drawn.
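The "blind association between pixels and a string" point can be caricatured in a few lines (this is a deliberately dumb, hypothetical toy, not how any real vision model works): a classifier that only knows exact pixel patterns has nothing to fall back on when the same object shows up rotated.

```python
# Toy, hypothetical "classifier": it associates exact pixel grids with
# labels and nothing else -- no concept of what a chair *is*.
CHAIR = (
    (1, 0, 0, 1),
    (1, 1, 1, 1),
    (1, 0, 0, 1),
    (1, 0, 0, 1),
)

TRAINING = {CHAIR: "chair"}

def rotate90(img):
    """Rotate a tuple-of-tuples 'image' 90 degrees clockwise."""
    return tuple(zip(*img[::-1]))

def classify(img):
    # Pure lookup: a label exists only for pixel grids seen in training.
    return TRAINING.get(img, "unknown")
```

Here `classify(CHAIR)` gives `"chair"` while `classify(rotate90(CHAIR))` falls through to `"unknown"`: the association is with the pixels, not the object. Real convolutional models generalize far better than this caricature, which is exactly the disagreement in the posts above.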
|
# ? Jan 28, 2024 03:38 |
|
The core issue is you're conflating identification in the sense of "an algorithm can associate a set of pixels with a string value" with meaningful understanding of a thing. Your roomba lacks a theory of chairs, and will never discover how crucial chairs are to subjugating the human race. e: I am also going to bet it gets that identification process wrong a lot more often than a human being does, in large part because it does not understand what things are.
|
# ? Jan 28, 2024 03:43 |
|
TGLT posted:The core issue is you're conflating identification in the sense of "an algorithm can associate a set of pixels with a string value" with meaningful understanding of a thing. Your roomba lacks a theory of chairs, and will never discover how crucial chairs are to subjugating the human race.
|
# ? Jan 28, 2024 03:52 |
|
"Intelligence" is fundamentally the capacity to reason about things' thing-ness and connect those concepts to derive new information and solutions. This is a higher form of learning that most animals do not have. Many animals can learn behaviors they're shown, but that's training. AIs can do training.
|
# ? Jan 28, 2024 03:58 |
|
The higher understanding is the intelligence, is the thing. It does not matter what humans do most of the time - a human's or another animal's capacity to hold a concept in our mind is a useful floor for defining intelligence. That's like trying to reduce language to just being able to string words together. Even if humans usually put zero thought into syntax, we are still capable of it in a way not otherwise demonstrated in other species. And similarly, I do not know of a single AI model that has demonstrated it can actually understand a concept.
|
# ? Jan 28, 2024 03:59 |
|
Kyte posted:"Intelligence" is fundamentally the capacity to reason about things' thing-ness and connect those concepts to derive new information and solutions. This is a higher form of learning that most animals do not have. Many animals can learn behaviors they're shown, but that's training. AIs can do training. intelligence is so fundamentally undefinable with our current knowledge that the best suggested test for it is “can you tell the difference between this and a human”
|
# ? Jan 28, 2024 04:03 |
|
I guess what it comes down to is that AIs can't currently conceptualize? Given that's a necessary component of intelligence (I would agree with this), it's a pretty good argument against calling current generative AI truly intelligent.
SCheeseman fucked around with this message at 04:12 on Jan 28, 2024 |
# ? Jan 28, 2024 04:09 |
|
Clarste posted:I think the problem is more that intelligence has been rather poorly defined. The way it's commonly used basically means "thinks like a human" which isn't very helpful. It's not very helpful, but that doesn't mean it's poorly defined. Sometimes stuff just isn't useful. "Intelligent" is a wholly arbitrary concept that we created mostly for philosophical reasons, not because it's actually useful for anything.
|
# ? Jan 28, 2024 04:36 |
|
LLMs can be described using the Chinese room model, which proves they're not intelligent nor can they ever be intelligent. I'm getting tired of AI companies promising or even threatening general AI is just around the corner when their technical progress is easily twenty years or more from that since they're currently funneling resources into a dead end technology.
|
# ? Jan 28, 2024 04:39 |
|
LLMs might be the best they will ever be. As the internet fills up with LLM-generated stuff, eventually the LLMs will literally just be regurgitating themselves
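That feedback loop can be illustrated with a toy simulation (my own sketch with made-up parameters, not how any real lab trains): repeatedly fit a Gaussian to samples drawn from the previous generation's fit. Each maximum-likelihood fit slightly underestimates the spread, and the errors compound, so the distribution's diversity erodes generation by generation.

```python
import random
import statistics

def regurgitation_demo(generations=500, n_samples=20, seed=42):
    """Toy 'model collapse' sketch: each generation is fit only to
    samples produced by the previous generation's fitted model."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the original human-made data
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)
        sigma = statistics.pstdev(samples)  # MLE estimate, biased low
    return sigma
```

With these (arbitrary) settings the fitted spread ends up well below the original 1.0. The analogy to LLMs retraining on LLM output is loose, but the compounding-sampling-error mechanism is the same basic worry.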
|
# ? Jan 28, 2024 04:42 |
|
I think LLMs are as good at language generation as they need to be. Things like the GPTs feel like they're the start of the next stage, where it's about how to give LLMs guardrails and "intelligence" through nested, linked, and specialized LLMs. All the GPTs I've played with are still dumb as rocks, but it feels like that's where things are headed.
|
# ? Jan 28, 2024 05:00 |
|
Intelligence doesn’t have to be explicitly defined to have a meaningful discussion about AI in the two primary contexts it seems to be discussed most - replacing labor en masse or destroying civilization. If we’re talking about replacing human labor, then it’s really a question of accuracy and how much accuracy actually matters in the jobs where humans may eventually be replaced. Maybe it’ll get better or maybe LLMs will regurgitate each other’s hallucinations endlessly, but neither of those has to do with intelligence. Intelligence is just a red herring in these discussions. If we’re talking about destroying civilization, is any LLM going to start turning into Skynet at any point? No, because it’s not intelligent - it can’t think about concepts, and therefore it can’t do things like take your last 10 questions, deduce your motivation behind them, and start willfully misleading you down a path of its own choosing with deliberately constructed false information.
|
# ? Jan 28, 2024 05:08 |
|
Kwyndig posted:LLMs can be described using the Chinese room model, which proves they're not intelligent nor can they ever be intelligent. I'm getting tired of AI companies promising or even threatening general AI is just around the corner when their technical progress is easily twenty years or more from that since they're currently funneling resources into a dead end technology. llms can be trivially differentiated from a human, which is why they’re not intelligent. there is nothing at all that tells me you cannot be described by the Chinese room model, which is why that’s not why they’re not intelligent
|
# ? Jan 28, 2024 05:34 |
|
evilweasel posted:llms can be trivially differentiated from a human, which is why they’re not intelligent How, assuming all you have is a text interface?
|
# ? Jan 28, 2024 05:40 |
|
evilweasel posted:there is nothing at all that tells me you cannot be described by the Chinese room model, which is why that’s not why they’re not intelligent Huh?
|
# ? Jan 28, 2024 05:45 |
|
BabyFur Denny posted:How, assuming all you is have a text interface? I’m sorry, but I can’t provide the requested response as it violates OpenAI’s use case policy.
|
# ? Jan 28, 2024 05:47 |
|
evilweasel posted:llms can be trivially differentiated from a human, which is why they’re not intelligent Mm, this whole "is machine intelligence possible" debate runs right into the heart of the whole Problem of Other Minds, which normally we just kind of take on faith that other humans (and maybe some animals) have, but we generally don't worry about that. There's no way around it here: to know if machines can ever "think", we need to finally decode how we do, which I consider the truly exciting possibility.
|
# ? Jan 28, 2024 05:49 |
|
feedmyleg posted:I think LLMs are as good at language generation as they need to be. Things like the GPTs feel like they're the start of the next stage, where it's about how to give LLMs guardrails and "intelligence" through nested, linked, and specialized LLMs. All the GPTs I've played with are still dumb as rocks, but it feels like that's where things are headed.

I had Google's Bard argue with me and double down on its sources and the numbers it was using. tl;dr I'm a WW2 navy nerd and this gets into the weeds. I couldn't convince it its numbers were off; it stuck with its sources and gave reasons why.

I was watching Drachinifel's video on naval history myths, and during the segment on why Japan went with the kamikaze strategy I decided to run the numbers. I gave Bard this prompt:

quote:Analyze the effectiveness of Japanese air attacks on the US Navy in World War 2 in terms of aircraft lost per aircraft carrier sunk, 1000 tons of warship sunk, and 1000 USN lives lost. Break the results into periods covering the start of the war, up to the introduction of the F6F Hellcat, up to the Battle of the Philippine Sea, and then the kamikaze attacks through the end of the war. Display the results in tabular form. Show your sources.

I got a table with realistic numbers in it. The first three time periods looked good, but the period where the kamikazes were the dominant strategy seemed way off. They were way more effective than a conventional air attack, but 5.5 kamikaze sorties per US aircraft carrier sunk is way off. So I challenged it, but it came back that the key point wasn't aircraft carriers; the dozens of landing craft sunk were a major impact on the allied war effort. And the numbers did support that: sorties per 1000 tons sunk does match up with reality.

tl;dr again: Bard schooled me in an area I consider myself knowledgeable in.
|
# ? Jan 28, 2024 06:55 |
|
The models right now are quite funny with how they will stick to their guns on some topics and then just fold on others. It doesn't seem to matter if the answer they are defending is correct or not.
|
# ? Jan 28, 2024 07:26 |
|
and outside the interesting theoretical discussion around reinventing the Turing test, the focus on the intelligence part of AI seems to be in defense of people's jobs. My job requires intelligence, AI doesn't have intelligence, so therefore my job can't be done by AI. Of course, most jobs have large portions that don't require any particularly high level of intelligence, and are quite susceptible to large parts of them being done by automation.

A good example is to extend the "AI can't identify a chair if you give it a bad enough picture" argument. Well, identifying good fruit in a bunch of bad fruit is a bit like that (it should be more difficult, really), and sorting fruit has been automated to a large degree. You can youtube tomatoes being conveyed up over an edge while little bursts of air or a paddle whack out the bad/unripe fruit at hundreds per minute. The good fruit is then taken further and sorted into bins by individual weight. You give an untrained human that task and they will gently caress it up or work exceedingly slowly. Once the human is trained up and well experienced, it is possible to outperform the machine on accuracy (unless the machine is doing some sort of internal look by x-ray for bugs, but that is uncommon; most are just camera based from what I understand).

AI isn't limited to deterministic tasks, either. Chess is not like tic-tac-toe, where every path is well understood, and playing opponents can't be purely based on percentage plays (because the opponent will pick up the intent and play a counter designed to overcome that play). It requires planning without certainty of the result, and after much development, humans can't be trained well enough to beat a good automated chess player these days.

Where AI is likely to be really challenged is in subjective tasks. "Chip me out an aesthetically pleasing human form sculpture from this hunk of marble - and it better not be a copy-paste of David!" But that is a matter of the beholder's fashion more than anything else (which is often just as much driven by the source of the art as the art itself - you tell people AI carved this rock and all of a sudden, their opinion on the piece will drop several points).
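The sorting line described above can be sketched as a toy two-stage pipeline (every threshold and number here is made up for illustration; real graders are far more sophisticated):

```python
def grade_tomatoes(fruits, min_redness=0.6):
    """Toy sorter: each fruit is (weight_g, redness in 0..1).
    Stage 1: a camera check rejects unripe fruit (the air-burst step).
    Stage 2: accepted fruit is binned by individual weight."""
    bins = {"small": [], "medium": [], "large": []}
    rejected = []
    for weight_g, redness in fruits:
        if redness < min_redness:
            rejected.append((weight_g, redness))  # paddle whacks it out
            continue
        if weight_g < 80:
            bins["small"].append(weight_g)
        elif weight_g < 150:
            bins["medium"].append(weight_g)
        else:
            bins["large"].append(weight_g)
    return bins, rejected
```

A trained human can beat something like this on accuracy, as the post notes, but not at hundreds of fruit per minute.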
|
# ? Jan 28, 2024 12:00 |
|
Electric Wrigglies posted:which is often just as much driven by the source of the art as much as the art itself - you tell people AI carved this rock and all of a sudden, their opinion on the piece will drop several points
|
# ? Jan 28, 2024 12:46 |
|
I sometimes try to get ChatGPT to do busywork, like organizing data or making tables of values that I don't want to type in manually, and it's not even really good at that because it's so loving lazy. It gives the absolute minimum output, which is often wrong, and then is like "ok here's something like 25% of what you asked for, as an example!"
|
# ? Jan 28, 2024 16:34 |
|
Mercury_Storm posted:I sometimes try to get ChatGPT to do busywork, like organizing data or making tables of values that I don't want to type in manually, and it's not even really good at that because it's so loving lazy. It gives the absolute minimum output, which is often wrong, and then is like "ok here's something like 25% of what you asked for, as an example!" Stop trying to make ChatGPT sound so relatable!
|
# ? Jan 28, 2024 16:36 |
|
Kwyndig posted:LLMs can be described using the Chinese room model, which proves they're not intelligent nor can they ever be intelligent. I'm getting tired of AI companies promising or even threatening general AI is just around the corner when their technical progress is easily twenty years or more from that since they're currently funneling resources into a dead end technology. The Chinese Room thought experiment is a bullshit appeal to incredulity. The sleight of hand lies in declaring that we’ve laid out rules for the person in the room that produce responses that represent an understanding of Chinese. We are supposed to believe that those rules can do the task perfectly but at the same time are so rote and mechanical as to prevent any room-scale “understanding” of Chinese. And yet nothing provably happens in our brains that can’t be reduced to a rule being followed in a room (even if you needed enough of a room to simulate every atom in our brains). The problem isn’t your concept of how limited AI is, it’s the idea that we are somehow doing something different and special in our heads. I would argue it’s more likely we are just doing something more complex and cross-domain integrated.
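For readers who haven't met the thought experiment: the "room" is pure mechanical rule-following, which a few lines can caricature (the rulebook entries here are invented for illustration):

```python
# A caricature of Searle's Chinese Room: the operator mechanically
# matches incoming symbols against a rulebook and copies out the
# response, understanding neither side of the exchange.
RULEBOOK = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你会说中文吗？": "会一点。",  # "do you speak Chinese?" -> "a little."
}

def room_operator(message: str) -> str:
    # Whether "understanding" exists anywhere in this system --
    # operator, rulebook, or room as a whole -- is the entire debate.
    return RULEBOOK.get(message, "请再说一遍。")  # fallback: "please say that again."
```

Searle's claim is that no amount of scaling this up yields understanding; the reply above is that a sufficiently rich rulebook is not obviously different in kind from what neurons do.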
|
# ? Jan 28, 2024 21:52 |
|
How tech companies are using/selling ML (or whatever the current industry branding campaign is calling it) is probably relevant in this thread, but we have a whole thread to talk about AI, how it works, and how "intelligent" it is right here.
|
# ? Jan 28, 2024 22:38 |
Literally all of the ads on my kindle (already a tech nightmare IMO) are now for obviously AI-generated, search-engine-optimized books, which are all subtitled "cute fairy tale bedtime story for kids" and which all feature obvious AI-generated cover art. I don't even have kids. The Algorithm was never great for kindle recommendations, but it used to at least have the decency to recommend human-written fantasy erotica.
|
|
# ? Jan 29, 2024 00:01 |
|
VikingofRock posted:Literally all of the ads on my kindle (already a tech nightmare IMO) are now for obviously AI-generated, search Lmao, I just started getting this as well, despite reading nothing but dry political non-fiction for the past several years. I also have no children.
|
# ? Jan 29, 2024 02:13 |
|
Have you guys tried torrenting some books from your local library?
|
# ? Jan 29, 2024 02:19 |
|
Dry political non-fiction erotica?
|
# ? Jan 29, 2024 02:19 |
|
feedmyleg posted:Dry political non-fiction erotica? Ron Chernow, right this way.
|
# ? Jan 29, 2024 04:09 |
Mercury_Storm posted:Have you guys tried torrenting some books from your local library? This is actually how I get most of my books nowadays (i.e., through Libby), and then I read them on my Kindle. Kindles have ads on the lock screen unless you pay $20 to get ad-free reading.
|
|
# ? Jan 29, 2024 04:57 |
|
Best $20 I ever spent.
|
# ? Jan 29, 2024 04:59 |
|
VikingofRock posted:This is actually how I get most of my books nowadays (i.e., through Libby), and then I read them on my Kindle. Kindles have ads on the lock screen unless you pay $20 to get ad-free reading. I have a kindle touch from like 2010 that I really need to figure out how to get talking to stuff again. None of that poo poo on it.
|
# ? Jan 29, 2024 05:40 |
|
I went with a Kobo instead specifically because Kindles jumped the shark years ago. Mine displays the cover art of the book I am reading. My next reader will have physical buttons again, though. gently caress touch interfaces while lying in bed holding the thing in one hand.
|
# ? Jan 29, 2024 09:20 |
|
Antigravitas posted:I went with a Kobo instead specifically because Kindles jumped the shark years ago. The Kobo Libra II has nice buttons and a touch interface, and weighs basically nothing, I love it dearly.
|
# ? Jan 29, 2024 11:43 |
I got a reMarkable 2 and am super-happy with it. Turns out what I was missing from pdfs to keep my attention was the ability to scribble in the margins to keep me occupied.
|
|
# ? Jan 29, 2024 12:12 |
|
The Artificial Kid posted:The Chinese Room thought experiment is a bullshit appeal to incredulity. The sleight of hand lies in declaring that we’ve laid out rules for the person in the room that produce responses that represent an understanding of Chinese. We are supposed to believe that those rules can do the task perfectly but at the same time are so rote and mechanical as to prevent any room-scale “understanding” of Chinese. And yet nothing provably happens in our brains that can’t be reduced to a rule being followed in a room (even if you needed enough of a room to simulate every atom in our brains). Also, the thought experiment was meant to apply to expert systems from the 80s, which worked by rules of logic and symbol processing, which is completely different from how LLMs work. The relevant metaphor is the "China brain" thought experiment (https://en.wikipedia.org/wiki/China_brain), which is actually older than the Chinese Room thought experiment.
|
# ? Jan 29, 2024 19:38 |
|
VikingofRock posted:This is actually how I get most of my books nowadays (i.e., through Libby), and then I read them on my Kindle. Kindles have ads on the lock screen unless you pay $20 to get ad-free reading. Or never enable Wifi and you get a generic image on the lockscreen. That's what I do.
|
# ? Jan 31, 2024 12:14 |
|
|
busalover posted:Or never enable Wifi and you get a generic image on the lockscreen. That's what I do. This move also allows you to keep a Libby book until you're done because it won't return the book until it can phone home.
|
# ? Jan 31, 2024 13:45 |