|
Boris Galerkin posted:I get what you're trying to say here but it's also wrong. Every day I use a fancy calculator to solve an equation similar to this: I really think it's a semantic issue more than anything. Modern LLMs aren't inherently reliable for math, but an LLM can act as a sort of delegator and make a high level plan and send sub-tasks to specialized systems (such as a calculator, a database, whatever) and then incorporate that data into its response. Folks can argue whether that counts as the LLM "learning" math, but what really matters to me is the capability of the system as a whole. Lucid Dream fucked around with this message at 23:05 on Mar 16, 2024 |
# ? Mar 16, 2024 23:03 |
|
Main Paineframe posted:Judging from what you're saying about AlphaZero here, I think it's time to suggest that we all agree on an important ground rule here: "LLM" is not a synonym of "transformer", "neural network", "machine learning", or "AI". LLMs are a kind of transformer, which is a kind of neural network, which is a kind of machine learning, which is a kind of AI. But that doesn't mean that LLMs can do anything that AI, machine learning, neural networks, or transformers can do. I get what you're saying, not disagreeing or anything, but the "LLMs are a kind of transformer" thing just made me think: are they necessarily transformers? I can't remember it right now, but I swear a model was released a while ago that used an older approach than transformers and still performed decently; I just don't remember specifically what it was. Wouldn't a several-billion-parameter RNN also count as an LLM? Or if we find a new approach that's better than transformers, that would be an LLM too, I'd imagine? A computer doing math: I feel like there's some difference between a calculator doing math and a brain doing math that you just kind of know in your gut. A brain doing math is going through a non-deterministic process to figure it out. But the whole idea of one AI doing math and another AI doing language, I don't see what the issue is. The AI is doing math if you think of "the AI" as not the single model but the whole thing: interconnected models. That's how actual brains work anyway; specialized parts of the brain each do a specific thing, connected to all the other parts. BrainDance fucked around with this message at 06:16 on Mar 17, 2024
# ? Mar 17, 2024 06:11 |
|
Lucid Dream posted:I really think it's a semantic issue more than anything. Modern LLMs aren't inherently reliable for math, but an LLM can act as a sort of delegator and make a high level plan and send sub-tasks to specialized systems (such as a calculator, a database, whatever) and then incorporate that data into its response. Folks can argue whether that counts as the LLM "learning" math, but what really matters to me is the capability of the system as a whole. But that's not how we describe systems. I'm completely baffled by your argument. The LLM doesn't calculate, attempt to calculate, or even see the answer; it simply provides another strut when the main AI cannot resolve the question. It doesn't matter if you "feel" the LLM is solving math; the reality is that it is not. Mega Comrade fucked around with this message at 07:58 on Mar 17, 2024
# ? Mar 17, 2024 07:45 |
|
Mega Comrade posted:But that's not how we describe systems. I'm completely baffled by your argument. It's really not very complicated. The LLM itself is not very good at math, but it can be trained to utilize external tools such as a calculator. Whether you think that counts as the LLM "learning to do math" or whether it counts as the LLM "solving the problem" is obviously subjective. I think it counts, but we can agree to disagree. Mega Comrade posted:The LLM doesn't calculate, attempt to calculate or even see the answer, it simply provides another strut when the main AI cannot resolve the question. Edit: To clarify a bit, I'm talking about using multiple LLM calls as part of a response to the user. The first call determines the need for the tool and requests it, the second call formats the response after the tool returns the answer. Neither call is individually "solving the problem" but the problem is solved by the system enabled by the LLM. Lucid Dream fucked around with this message at 08:34 on Mar 17, 2024 |
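The two-call pattern described here can be sketched in a few lines. This is a toy illustration, not any real API: `fake_llm` is a hypothetical stand-in for the model, but the control flow (call 1 requests a tool, a dispatcher runs it, call 2 formats the result) is the pattern being argued about.

```python
# Toy sketch of tool delegation: two LLM calls bracket one tool call.
def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call. First pass: emit a
    tool request. Second pass: wrap the tool's result in prose."""
    if "TOOL_RESULT:" in prompt:
        result = prompt.split("TOOL_RESULT:")[1].strip()
        return f"The answer is {result}."
    return "CALL calculator: 127 * 449"

def calculator(expression: str) -> str:
    # The specialized system: an ordinary, deterministic evaluator.
    left, _, right = expression.partition("*")
    return str(int(left) * int(right))

def answer(question: str) -> str:
    # Call 1: the model plans and names the sub-task.
    plan = fake_llm(question)
    if plan.startswith("CALL calculator:"):
        expr = plan.split(":", 1)[1].strip()
        result = calculator(expr)
        # Call 2: the model formats the tool's output for the user.
        return fake_llm(f"{question}\nTOOL_RESULT: {result}")
    return plan

print(answer("What is 127 * 449?"))  # The answer is 57023.
```

Neither `fake_llm` call does arithmetic; the multiplication happens entirely inside `calculator`, which is exactly the point of contention.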
# ? Mar 17, 2024 08:24 |
|
You made a comment on semantics and it's clear that they matter, because it seems like you're conflating "LLM" with "computer" or "software" in general. Control logic in programming has been a thing since the first programming languages were invented. Fortran programs on punch cards were capable of "knowing" whether to multiply two vectors or multiply a vector with a matrix or whatever. The LLM that takes your natural language input and delegates smaller bite-sized commands to other systems does the same thing, but differently. Also, you keep talking about "it" doing things, and it's sounding awfully close to you thinking that the system is acting like a person with agency instead of just following code.
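The dispatch-on-input point above is just ordinary control logic; a minimal sketch (in Python rather than Fortran, for brevity):

```python
# Plain code has always been able to inspect its input and route to the
# right operation: no model required.
def multiply(a, b):
    """Dispatch on shape: vector . vector -> scalar,
    matrix . vector -> vector."""
    if isinstance(a[0], list):                       # a is a matrix
        return [sum(x * y for x, y in zip(row, b)) for row in a]
    return sum(x * y for x, y in zip(a, b))          # both are vectors

multiply([1, 2, 3], [4, 5, 6])        # 32
multiply([[1, 0], [0, 1]], [7, 9])    # [7, 9]
```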
|
# ? Mar 17, 2024 13:01 |
|
quote:A computer doing math, I feel like theres some difference between a calculator doing math and a brain doing math that you just kinda know in your gut. A brain doing math is going through a non-deterministic process to figure it out. If by math you mean calculation, then there is or should be no difference as to how a computer or a person does it. Remember that calculation methods are programmed by people, who developed a method/algorithm and implemented it as code. If by math you mean like coming up with these methods and such, then perhaps. But as I said before, having the "here's what you do" is not entirely useful until you know why it's done this way.
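One concrete instance of the point above, that calculation methods are developed by people and then implemented as deterministic code: Newton's iteration for square roots. Any machine (or person) following the same steps gets the same answer.

```python
# Newton's method for sqrt(n): a human-devised algorithm, written down
# as code. Nothing non-deterministic about it.
def newton_sqrt(n: float, tolerance: float = 1e-12) -> float:
    x = n if n > 1 else 1.0              # initial guess
    while abs(x * x - n) > tolerance:
        x = (x + n / x) / 2              # average the guess with n/guess
    return x

newton_sqrt(2.0)   # 1.41421356...
```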
|
# ? Mar 17, 2024 13:04 |
|
Well, personally I don't normally do maths in base 2.
|
# ? Mar 17, 2024 15:06 |
|
Boris Galerkin posted:You made a comment on semantics and it's obviously clear that they matter because it seems like you're conflating LLM with "computer" or "software" in general. When folks type a question into ChatGPT, the system that responds is what I’m talking about. Boris Galerkin posted:Control logic in programming has been a thing since the first programming languages were invented. Fortran programmed with punch cards were capable of "knowing" whether to do multiply two vectors or multiply a vector with a matrix or whatever. The LLM that takes your natural language input and delegates smaller bite sized commands to other systems does the same thing but differently. Boris Galerkin posted:Also you keep talking about "it" doing things and it's sounding awfully close to you thinking that the system is acting like a person with agency instead of just following code.
|
# ? Mar 17, 2024 15:16 |
|
Lucid Dream posted:It's really not very complicated. The LLM itself is not very good at math, but it can be trained to utilize external tools such as a calculator. Whether you think that counts as the LLM "learning to do math" or whether it counts as the LLM "solving the problem" is obviously subjective. I think it counts, but we can agree to disagree. It obviously doesn't. If someone said "this 5-year-old human knows how to do division", you would expect that human to be able to solve "20 / 5" without a calculator. If their math demonstration consisted solely of just entering the problem into a calculator and repeating the result, and if they were completely unable to solve the problem without the help of a calculator, then I think most people would get annoyed at the parent for wildly exaggerating their child's abilities. "The LLM can do math" and "a system including both a LLM and a math-solver can do math" are very different statements. I don't see why so many people are so incredibly eager to treat them as synonymous. quote:Well, they could attempt to calculate the answer and they'd probably do a hell of a lot better than me if you gave me the same problems See, this is what I mean with people losing track of the limitations of LLMs. LLMs are fundamentally incapable of doing this sort of mathematical calculation. They can't attempt to calculate arithmetic, because they're highly specialized tools that are capable of doing exactly one thing: statistically calculating what word a human would use next. That's basically the only thing they can do. That's an enormously powerful thing, of course, because it turns out that drat near everything we do can be expressed in words. But despite how broad their abilities seem to be, they are not general-purpose AIs. Word-prediction is a technique that can be enormously versatile, but there are fields where it just doesn't hold up, and one of them is arithmetic. 
If you ask ChatGPT to solve "two plus two", it might be able to answer "four", but only because "two plus two" shows up fairly frequently in human writings and is typically followed by the words "four" or "equals four". It can't actually do numerical addition.
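The "two plus two shows up in human writings" mechanism can be made concrete with a deliberately tiny model: count which word follows which in a corpus, then answer by lookup. Real LLMs are vastly more sophisticated than a bigram table, but this is the category of computation, and it shows how "two plus two" can come out as "four" without any addition happening anywhere.

```python
# A toy next-word predictor: pure co-occurrence counting, no arithmetic.
from collections import Counter, defaultdict

corpus = ("two plus two equals four . two plus two equals four . "
          "one plus one equals two .")

follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    # Return the statistically most common successor seen in the corpus.
    return follows[word].most_common(1)[0][0]

predict_next("equals")   # 'four' (seen after "equals" more often than "two")
```

Swap the corpus for one where "two plus two equals five" is more common and the "prediction" changes accordingly, which is the sense in which no calculation is taking place.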
|
# ? Mar 17, 2024 17:00 |
|
Main Paineframe posted:It obviously doesn't. If someone said "this 5-year-old human knows how to do division", you would expect that human to be able to solve "20 / 5" without a calculator. If their math demonstration consisted solely of just entering the problem into a calculator and repeating the result, and if they were completely unable to solve the problem without the help of a calculator, then I think most people would get annoyed at the parent for wildly exaggerating their child's abilities. Lucid Dream fucked around with this message at 17:36 on Mar 17, 2024 |
# ? Mar 17, 2024 17:34 |
|
Ok it can do math. E: I think it's impressive that the models can take natural language and make calls to other models to write a simple math program etc but I still don't consider this to be the LLM "doing" math. Boris Galerkin fucked around with this message at 17:43 on Mar 17, 2024 |
# ? Mar 17, 2024 17:40 |
|
Boris Galerkin posted:
I said one API call; ChatGPT is using code analysis and multi-stage prompts. Retry it and tell it to only use inference. When I did that it skipped the code analysis, broke the problem down into words, and gave me the answer with only inference. Edit: I tried it a few more times, and telling it *not* to use code analysis is more likely to skip that part. You can tell when it's not doing it because it doesn't give you a link to let you expand the Python part. Edit2 (on the playground, so just the bare model here): Lucid Dream fucked around with this message at 18:10 on Mar 17, 2024
# ? Mar 17, 2024 17:45 |
what if we build an A.I. that has a chronic upscaling issue for over 9 years? how much will that cost us?
|
# ? Apr 17, 2024 17:56 |
|
A lot of the AI we can see and use is real and practical, but then we have the promises that border on "hard AI", and those are ridiculous and will not deliver. Like in other areas, though, the louder guy can get more eyeballs on his thing, and more money in his pockets.
|
# ? Apr 17, 2024 23:03 |
|
Rappaport posted:Can we make a Bob Ross AI? Not like the digital ghouls Disney keeps conjuring up, just train an AI to chat soothing small nothings and making nice paintings on the user's screen. Maybe throw in some Mister Rogers for the terminally online doom-scrolling 4-year-olds, too. Happy little decision trees
|
# ? Apr 20, 2024 14:56 |