|
Boris Galerkin posted:I get what you're trying to say here but it's also wrong. Every day I use a fancy calculator to solve an equation similar to this: I really think it's a semantic issue more than anything. Modern LLMs aren't inherently reliable for math, but an LLM can act as a sort of delegator and make a high level plan and send sub-tasks to specialized systems (such as a calculator, a database, whatever) and then incorporate that data into its response. Folks can argue whether that counts as the LLM "learning" math, but what really matters to me is the capability of the system as a whole. Lucid Dream fucked around with this message at 23:05 on Mar 16, 2024 |
# ? Mar 16, 2024 23:03 |
|
|
|
Main Paineframe posted:Judging from what you're saying about AlphaZero here, I think it's time to suggest that we all agree on an important ground rule here: "LLM" is not a synonym of "transformer", "neural network", "machine learning", or "AI". LLMs are a kind of transformer, which is a kind of neural network, which is a kind of machine learning, which is a kind of AI. But that doesn't mean that LLMs can do anything that AI, machine learning, neural networks, or transformers can do. I get what you're saying, not disagreeing or anything, but the "LLMs are a kind of transformer" thing just made me think: are they necessarily transformers? I can't remember it right now, but I swear a model was released a while ago that used an older approach than transformers and still performed decently; I just don't remember specifically what it was. Wouldn't a several-billion-parameter RNN also count as an LLM? Or if we find a new approach that's better than transformers, that would be an LLM too, I'd imagine? As for a computer doing math, I feel like there's some difference between a calculator doing math and a brain doing math that you just kind of know in your gut. A brain doing math is going through a non-deterministic process to figure it out. But the whole idea of one AI doing math and another AI doing language, I don't see what the issue is. The AI is doing math if you think about "the AI" as not the single model but the whole thing: interconnected models. That's how actual brains work anyway, specialized parts of the brain doing a specific thing, connected to all the other parts of the brain. BrainDance fucked around with this message at 06:16 on Mar 17, 2024
# ? Mar 17, 2024 06:11 |
|
Lucid Dream posted:I really think it's a semantic issue more than anything. Modern LLMs aren't inherently reliable for math, but an LLM can act as a sort of delegator and make a high level plan and send sub-tasks to specialized systems (such as a calculator, a database, whatever) and then incorporate that data into its response. Folks can argue whether that counts as the LLM "learning" math, but what really matters to me is the capability of the system as a whole. But that's not how we describe systems. I'm completely baffled by your argument. The LLM doesn't calculate, attempt to calculate or even see the answer, it simply provides another strut when the main AI cannot resolve the question. It doesn't matter if you "feel" the LLM is solving math, the reality is it is not. Mega Comrade fucked around with this message at 07:58 on Mar 17, 2024 |
# ? Mar 17, 2024 07:45 |
|
Mega Comrade posted:But that's not how we describe systems. I'm completely baffled by your argument. It's really not very complicated. The LLM itself is not very good at math, but it can be trained to utilize external tools such as a calculator. Whether you think that counts as the LLM "learning to do math" or whether it counts as the LLM "solving the problem" is obviously subjective. I think it counts, but we can agree to disagree. Mega Comrade posted:The LLM doesn't calculate, attempt to calculate or even see the answer, it simply provides another strut when the main AI cannot resolve the question. Edit: To clarify a bit, I'm talking about using multiple LLM calls as part of a response to the user. The first call determines the need for the tool and requests it, the second call formats the response after the tool returns the answer. Neither call is individually "solving the problem" but the problem is solved by the system enabled by the LLM. Lucid Dream fucked around with this message at 08:34 on Mar 17, 2024 |
# ? Mar 17, 2024 08:24 |
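The two-call pattern described above (first call decides a tool is needed, second call phrases the tool's result) can be sketched roughly as below. This is a hedged toy, not any real API: the tool-choice step is stubbed with a simple heuristic where an actual system would ask the model, and all names are made up for illustration.

```python
# Sketch of the delegator pattern: the "LLM" never computes the arithmetic
# itself; it only decides which tool to call and then phrases the tool's
# answer. All names here are hypothetical stand-ins, not a real API.
import ast
import operator

def calculator(expression: str):
    # The specialized system: deterministic arithmetic, no LLM involved.
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

def answer(question: str) -> str:
    # Call 1: the "LLM" decides the question needs the calculator tool.
    # (Stubbed with a heuristic; a real system asks the model to decide.)
    if any(ch.isdigit() for ch in question):
        result = calculator(question.rstrip("?=. "))  # sub-task goes to the tool
        # Call 2: the "LLM" formats the tool's answer for the user.
        return f"The answer is {result}."
    return "No tool needed."

print(answer("123 * 457"))  # prints "The answer is 56211."
```

Neither call computes anything on its own; the capability being argued about belongs to the loop as a whole.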
|
You made a comment on semantics, and it's clear that they matter, because it seems like you're conflating LLM with "computer" or "software" in general. Control logic in programming has been a thing since the first programming languages were invented. Fortran programs on punch cards were capable of "knowing" whether to multiply two vectors or multiply a vector by a matrix or whatever. The LLM that takes your natural language input and delegates smaller bite-sized commands to other systems does the same thing, just differently. Also, you keep talking about "it" doing things, and it's sounding awfully close to you thinking that the system is acting like a person with agency instead of just following code.
|
# ? Mar 17, 2024 13:01 |
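The punch-card Fortran example above is just shape-based dispatch, and the same control logic is a few lines in any language. A toy sketch (function names are illustrative, and there is no ML anywhere in it):

```python
# Ordinary control logic "knowing" which operation to perform, purely
# from the structure of its inputs. No learning, no agency: an if-statement.

def dot(u, v):
    # vector . vector -> scalar
    return sum(x * y for x, y in zip(u, v))

def mat_vec(m, v):
    # matrix . vector -> vector
    return [dot(row, v) for row in m]

def multiply(a, b):
    # The "decision" is just a branch on input shape, the same kind of
    # dispatch punch-card Fortran could do.
    if isinstance(a[0], list):      # first argument is a matrix
        return mat_vec(a, b)
    return dot(a, b)                # otherwise: two vectors

print(multiply([1, 2, 3], [4, 5, 6]))       # 32: vector dot vector
print(multiply([[1, 0], [0, 1]], [7, 9]))   # [7, 9]: matrix times vector
```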
|
quote:A computer doing math, I feel like theres some difference between a calculator doing math and a brain doing math that you just kinda know in your gut. A brain doing math is going through a non-deterministic process to figure it out. If by math you mean calculation, then there is or should be no difference as to how a computer or a person does it. Remember that calculation methods are programmed by people, who developed a method/algorithm and implemented it as code. If by math you mean like coming up with these methods and such, then perhaps. But as I said before, having the "here's what you do" is not entirely useful until you know why it's done this way.
|
# ? Mar 17, 2024 13:04 |
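The point above, that calculation methods are algorithms people developed and then implemented, holds up well with a concrete case: Heron's square-root method predates computers by two millennia, and writing it as code changes nothing about the method. A minimal sketch (the function name is just illustrative):

```python
# A calculation method people worked out long before computers existed
# (Heron's method for square roots), written down as code. The machine
# runs the same deterministic steps a person with pencil and paper would.

def heron_sqrt(n: float, iterations: int = 20) -> float:
    guess = max(n, 1.0)  # any positive starting guess converges
    for _ in range(iterations):
        guess = (guess + n / guess) / 2  # average the guess with n/guess
    return guess

print(heron_sqrt(2.0))  # 1.414213..., the same answer by hand or by machine
```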
|
Well personally I don't normally do maths in base2.
|
# ? Mar 17, 2024 15:06 |
|
Boris Galerkin posted:You made a comment on semantics and it's obviously clear that they matter because it seems like you're conflating LLM with "computer" or "software" in general. When folks type a question into ChatGPT, the system that responds is what I’m talking about. Boris Galerkin posted:Control logic in programming has been a thing since the first programming languages were invented. Fortran programmed with punch cards were capable of "knowing" whether to do multiply two vectors or multiply a vector with a matrix or whatever. The LLM that takes your natural language input and delegates smaller bite sized commands to other systems does the same thing but differently. Boris Galerkin posted:Also you keep talking about "it" doing things and it's sounding awfully close to you thinking that the system is acting like a person with agency instead of just following code.
|
# ? Mar 17, 2024 15:16 |
|
Lucid Dream posted:It's really not very complicated. The LLM itself is not very good at math, but it can be trained to utilize external tools such as a calculator. Whether you think that counts as the LLM "learning to do math" or whether it counts as the LLM "solving the problem" is obviously subjective. I think it counts, but we can agree to disagree. It obviously doesn't. If someone said "this 5-year-old human knows how to do division", you would expect that human to be able to solve "20 / 5" without a calculator. If their math demonstration consisted solely of just entering the problem into a calculator and repeating the result, and if they were completely unable to solve the problem without the help of a calculator, then I think most people would get annoyed at the parent for wildly exaggerating their child's abilities. "The LLM can do math" and "a system including both a LLM and a math-solver can do math" are very different statements. I don't see why so many people are so incredibly eager to treat them as synonymous. quote:Well, they could attempt to calculate the answer and they'd probably do a hell of a lot better than me if you gave me the same problems See, this is what I mean with people losing track of the limitations of LLMs. LLMs are fundamentally incapable of doing this sort of mathematical calculation. They can't attempt to calculate arithmetic, because they're highly specialized tools that are capable of doing exactly one thing: statistically calculating what word a human would use next. That's basically the only thing they can do. That's an enormously powerful thing, of course, because it turns out that drat near everything we do can be expressed in words. But despite how broad their abilities seem to be, they are not general-purpose AIs. Word-prediction is a technique that can be enormously versatile, but there are fields where it just doesn't hold up, and one of them is arithmetic. 
If you ask ChatGPT to solve "two plus two", it might be able to answer "four", but only because "two plus two" shows up fairly frequently in human writings and is typically followed by the words "four" or "equals four". It can't actually do numerical addition.
|
# ? Mar 17, 2024 17:00 |
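The "two plus two" point above can be caricatured in a few lines: a lookup over memorized continuations answers phrases it has seen and nothing else. This is a deliberate oversimplification of how real LLMs work (they generalize far beyond exact lookup), meant only to separate pattern recall from arithmetic:

```python
# A crude caricature of the word-prediction point: a "model" that has only
# memorized which word tends to follow which phrase in its training text.
# It can parrot "two plus two" -> "four", but it contains no arithmetic.
from collections import Counter

training_text = [
    ("two plus two equals", "four"),
    ("two plus two is", "four"),
    ("two plus two makes", "four"),
    ("three plus three equals", "six"),
]

continuations = {}
for prompt, next_word in training_text:
    continuations.setdefault(prompt, Counter())[next_word] += 1

def predict(prompt: str) -> str:
    # Return the statistically most common continuation, if one was seen.
    if prompt in continuations:
        return continuations[prompt].most_common(1)[0][0]
    return "(no idea: this phrase never appeared in training)"

print(predict("two plus two equals"))   # "four" - pattern recall, not math
print(predict("847 plus 362 equals"))   # fails: the pattern is unseen
```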
|
Main Paineframe posted:It obviously doesn't. If someone said "this 5-year-old human knows how to do division", you would expect that human to be able to solve "20 / 5" without a calculator. If their math demonstration consisted solely of just entering the problem into a calculator and repeating the result, and if they were completely unable to solve the problem without the help of a calculator, then I think most people would get annoyed at the parent for wildly exaggerating their child's abilities. Lucid Dream fucked around with this message at 17:36 on Mar 17, 2024 |
# ? Mar 17, 2024 17:34 |
|
Ok it can do math. E: I think it's impressive that the models can take natural language and make calls to other models to write a simple math program etc but I still don't consider this to be the LLM "doing" math. Boris Galerkin fucked around with this message at 17:43 on Mar 17, 2024 |
# ? Mar 17, 2024 17:40 |
|
Boris Galerkin posted:
I said one API call; ChatGPT is using code analysis and multi-stage prompts. Retry it and tell it to only use inference. When I did that it skipped the code analysis, broke the problem down into words, and gave me the answer with only inference. Edit: I tried it a few more times, and telling it *not* to use code analysis makes it more likely to skip that part. You can tell when it's not doing it because it doesn't give you a link to let you expand the python part. Edit2 (on the playground, so just the bare model here): Lucid Dream fucked around with this message at 18:10 on Mar 17, 2024
# ? Mar 17, 2024 17:45 |
what if we build an A.I. that has a chronic upscaling issue for over 9 years? how much will that cost us?
|
|
# ? Apr 17, 2024 17:56 |
|
a lot of the AI we can see and use is real and practical, but then we have the promises that border on "hard AI", and those are ridiculous and will not deliver. but like in other areas, the louder guy gets more eyeballs for his things, and more money in his pockets
|
# ? Apr 17, 2024 23:03 |
|
Rappaport posted:Can we make a Bob Ross AI? Not like the digital ghouls Disney keeps conjuring up, just train an AI to chat soothing small nothings and making nice paintings on the user's screen. Maybe throw in some Mister Rogers for the terminally online doom-scrolling 4-year-olds, too. Happy little decision trees
|
# ? Apr 20, 2024 14:56 |
.
|
|
# ? May 6, 2024 18:49 |
|
Mega Comrade posted:They aren't, or at least they haven't said they are. For what it's worth, I'm pretty sure robotics has been part of the OpenAI core mission since their founding. The company started with four explicitly stated goals, and goal 2 is "build a robot". Goal 3 was natural language parsing, the one people commonly associate with the company, but the robot one actually came first!
|
# ? May 7, 2024 21:23 |
|
GlyphGryph posted:For what it's worth, I'm pretty sure robotics has been part of the OpenAI core mission since their founding. The company started with four explicitly stated goals, and goal 2 is "build a robot". Goal 3 was natural language parsing, the one people commonly associate with the company, but the robot one actually came first!
|
# ? May 7, 2024 23:40 |
|
They did a chef robot demo a couple years ago, didn't they? Though I don't think anyone was impressed.
|
# ? May 8, 2024 03:43 |
rig the thing so that i can sleep all day
|
|
# ? May 10, 2024 16:57 |
the artificial intelligence that's still off limits
|
|
# ? May 13, 2024 14:35 |
|
OpenAI has announced GPT-4o ("o" for omni) which seems to give near realtime chat capabilities through voice and video. The videos they posted to showcase it are starting to look like the movie Her. It's actually pretty impressive to be honest.
|
# ? May 13, 2024 23:11 |
|
They made a big deal about how this will be "free for everyone" during their presentation, but in reality the number of free messages you get is very limited and if you want to get more, you have to pay.
|
# ? May 14, 2024 02:15 |
|
Boris Galerkin posted:OpenAI has announced GPT-4o ("o" for omni) which seems to give near realtime chat capabilities through voice and video. The videos they posted to showcase it are starting to look like the movie Her. It's actually pretty impressive to be honest. The response time is like talking to a human in those videos and the voice sounds pretty natural. I can see a real life Her happening with this. I couldn't figure out how to try it though, the link on the GPT-4o page just went to GPT 3.5.
|
# ? May 14, 2024 15:16 |
|
gurragadon posted:I can see a real life Her happening with this Dorks are already falling in love with chat bots, it's not that challenging.
|
# ? May 14, 2024 16:23 |
|
she's not a bot!! she's a lovely lady, man!
|
# ? May 14, 2024 18:36 |
|
Kevin Bacon posted:she's not a bot!! she's a lovely lady, man! She's intrigued by my unwashed bedsheets and other dirty laundry, it's love I tells ya. Intrigued!
|
# ? May 15, 2024 06:16 |
|
Boris Galerkin posted:OpenAI has announced GPT-4o ("o" for omni) which seems to give near realtime chat capabilities through voice and video. The videos they posted to showcase it are starting to look like the movie Her. It's actually pretty impressive to be honest. OpenAI is pulling the default voice for their AI because it sounded too similar to Scarlett Johansson in Her. https://www.bloomberg.com/news/articles/2024-05-20/openai-to-pull-johansson-soundalike-sky-s-voice-from-chatgpt quote:OpenAI is working to pause the use of the Sky voice from an audible version of ChatGPT after users said that it sounded too much like actress Scarlett Johansson.
|
# ? May 20, 2024 15:05 |
|
|
|
Bug Squash posted:Dorks are already falling in love with chat bots, it's not that challenging. https://www.youtube.com/watch?v=RYQUsp-jxDQ
|
# ? May 21, 2024 00:04 |
|
Reading up on how bad humanity is at AI alignment, are there any of the big AI leaders that seem to have a good culture and implementation of AI safety?
|
# ? May 24, 2024 06:09 |
|
Zudgemud posted:Reading up on how bad humanity is at AI alignment, are there any of the big AI leaders that seem to have a good culture and implementation of AI safety? It seems like you're treating "AI safety" and "AI alignment" as synonyms, but that doesn't really make sense. AI safety is serious work concerned with the real harms that AI actually poses. AI alignment is science-fiction speculation from people who have a vested interest in not only wildly exaggerating the capabilities of LLMs but also minimizing the concept of human responsibility for using these tools safely. In other words, it's mostly just a PR effort for the AI companies.
|
# ? May 24, 2024 06:43 |
|
While I agree it is used to minimize the concept of human responsibility for using these tools safely, isn't AI alignment basically just long term safety for when AI research actually moves away from traditional LLMs? It feels like something that is worth figuring out while we regulate the field and before we accidentally create stockmarketbot 5000 that figures out that humans are not technically needed for number going up.
|
# ? May 24, 2024 08:13 |
|
Zudgemud posted:While I agree it is used to minimize the concept of human responsibility for using these tools safely, isn't AI alignment basically just long term safety for when AI research actually moves away from traditional LLMs? Like I said, science-fiction speculation. Nobody has any real idea what AGI is going to look like - or, for that matter, whether AGI is even possible in the foreseeable future. It's like writing traffic regulations for flying cars - you don't know exactly what capabilities flying cars are even going to have, nor do you have any guarantee that flying cars are ever actually going to be a thing, so it's usually just intellectual masturbation. If someone wants to do it for fun, then feel free, but they shouldn't be taken seriously. The way to keep AGI Stockmarketbot 5000 from killing a bunch of people is exactly the same way that we've kept non-AGI Stockmarketbots 0001 through 4999 from killing a bunch of people - don't hook up your flash trading algorithm to anything capable of loving killing people. Besides, a far bigger problem is the large number of humans willing to kill people or get people killed for the sake of the stock price. It's downright silly to worry about hypothetical future stock market maximizer AIs when we've already got the likes of Blackwater and Raytheon running wild. If we invent AGI in the future and it starts killing people for capitalism, it's going to do it using pretty much the same means that humans and non-AGI algorithms are already killing people for capitalism, so the logical conclusion would be to address the ways in which capitalism kills people. AI isn't special, but it's to the advantage of these companies to encourage the perception that it is.
|
# ? May 24, 2024 15:16 |
|
Zudgemud posted:While I agree it is used to minimize the concept of human responsibility for using these tools safely, isn't AI alignment basically just long term safety for when AI research actually moves away from traditional LLMs? I haven't really been keeping up with developments, but OpenAI talks some about AI alignment as something they are working on. https://openai.com/index/our-approach-to-alignment-research/ quote:There is currently no known indefinitely scalable solution to the alignment problem. As AI progress continues, we expect to encounter a number of new alignment problems that we don’t observe yet in current systems. Some of these problems we anticipate now and some of them will be entirely new. I also saw a document about "superalignment" on their website which appears to be attempting to solve the problem of aligning AGI systems that are smarter than a human. https://openai.com/index/introducing-superalignment/ If you are looking for more information on the general idea of AI alignment, KillHour posted a bunch of good videos by Rob Miles that talk about it in a pretty easy to digest way. Edit: Those videos may have been in the old thread. This is his youtube page. https://www.youtube.com/@RobertMilesAI gurragadon fucked around with this message at 15:44 on May 24, 2024
# ? May 24, 2024 15:39 |
|
lol, we aren't getting an AI with superhuman intelligence anytime remotely soon. OpenAI would like you to think that's in the cards, though.
|
# ? May 24, 2024 15:48 |
|
cat botherer posted:lol, we aren't getting an AI with superhuman intelligence anytime remotely soon. OpenAI would like you to think that's in the cards, though. I know that's true, but Zudgemud was asking about AI alignment discussion at AI companies, not whether it is reasonable to be talking about it.
|
# ? May 24, 2024 15:53 |
|
Alignment isn't really an AI concept - it's a modeling concept. Misalignment happens any time a model doesn't perfectly represent reality (which is always - models are necessarily simplistic). Any situation where a model gives a different answer than the real world is a "misalignment." Misalignment can be negligible or it can be a problem. Your smart watch thinking you burned an extra 100 calories on your run is a minor issue. An engineering model giving a safety factor of 3 when it's really 0.8 is a much larger issue. This is because alignment only really matters when decisions are made based on the model. This happens in the real world all the time. Buying a bunch of stock that is about to crash based on an algorithm that makes an incorrect prediction is an alignment issue, but so is a CEO intentionally running a company into the ground because it made their bonuses better. Another common alignment issue that's a real problem is cops seizing cars and your life savings because you had an ounce of weed in it during a move across the country. Ostensibly, the point is to prevent drug trafficking, but the actual incentive is to make money for the police department, which is a clear conflict of interest. The reasoning of the law that lets them do this is misaligned with the reality of how that law is used, with predictable results. You could write forever about major economic and humanitarian disasters that were caused by this kind of misalignment. The AI thing comes in when people realized (like 100 years ago) that a machine only ever cares about the model - it's a literal sociopath. As far as anyone knows, this issue is intractable - you can't write a finite list of rules that would keep an all-powerful God from destroying the universe. As you pointed out, the obvious solution is "don't make an all powerful God under the delusion that you can control it." But like, have you met people? Flying too close to the sun is what we do. 
TL;DR: We're probably hosed, but I actually came here to show everyone this sweet deepfake: https://www.youtube.com/watch?v=Tw7uij5CK40 gurragadon posted:If you are looking for more information just into the general idea of AI alignment KillHour posted a bunch of good videos by Rob Miles that talk about it in a pretty easy to digest way. He now publishes videos about the subject under the channel Rational Animations, which I'm aware sounds like a cringey libertarian fart huffing thing, but I promise it isn't. It IS trying to convince you that you should care about misalignment though, so it definitely has an agenda that it's trying to sell. https://www.youtube.com/@RationalAnimations Edit: They're pretty lefty, as far as I can tell: https://www.youtube.com/watch?v=2DUlYQTrsOs Double Edit: cat botherer posted:lol, we aren't getting an AI with superhuman intelligence anytime remotely soon. OpenAI would like you to think that's in the cards, though. I mean, I hope you're right, but we also hoped we wouldn't have to worry about the industrial revolution making the planet literally uninhabitable and oops now we have to scramble to make it not do exactly that. Even if it's a low-probability thing, it's good people are thinking about it and not just going "nah it'll probably be fine." KillHour fucked around with this message at 16:07 on May 24, 2024 |
# ? May 24, 2024 15:55 |
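The model-versus-reality gap described above can be shown with a toy optimizer: give it a proxy objective that diverges from the true one past some point, and optimizing the proxy harder makes the real outcome worse. Every function below is a made-up illustration, not a real training setup:

```python
# Toy illustration of misalignment as a modeling problem: the optimizer
# only ever sees a proxy model of the objective. Where proxy and reality
# diverge, pushing the proxy harder makes the real outcome worse.

def true_value(x: float) -> float:
    # Reality: payoff peaks at x = 2 and collapses beyond it.
    return x if x <= 2 else 4 - x

def proxy_value(x: float) -> float:
    # The model handed to the optimizer: "more x is always better."
    return x

candidates = [0.5 * i for i in range(13)]          # 0.0 .. 6.0
best_by_proxy = max(candidates, key=proxy_value)   # picks x = 6.0
best_in_reality = max(candidates, key=true_value)  # picks x = 2.0

print(best_by_proxy, true_value(best_by_proxy))      # 6.0 -2.0: harmful
print(best_in_reality, true_value(best_in_reality))  # 2.0 2.0: optimal
```

The proxy is "misaligned" in exactly the sense described: it agrees with reality on the range it was built for and silently diverges outside it, and only decisions made from it make that divergence matter.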
|
And the reverse isn't true. If we program Stockbot 5000 to preserve the sanctity of human life and it decides to purchase index funds and distribute shares to every person in America, MicroOpenAnthropMeta isn't going to throw up their hands and say it's doing what it was programmed to do; they'll try and claw that money back.
|
# ? May 24, 2024 16:10 |
|
|
# ? Jun 3, 2024 07:02 |
|
The two faces of AI https://www.404media.co/google-is-paying-reddit-60-million-for-fucksmith-to-tell-its-users-to-eat-glue/ https://www.nbcnews.com/tech/tech-news/scarlett-johansson-shocked-angered-openai-voice-rcna153180
|
# ? May 24, 2024 17:00 |