KillHour
Oct 28, 2007


Tei posted:

Hope corporations lose control of the AI.

One way I could see this fitting into corporate strategy: somebody rents you the server to run ChatGPT, somebody else sells you the trained dataset, and everything else is open source. So corpos make money from the dataset and from renting the server, and everything else is just a commodity.

In practice, stacks are quickly becoming far more complicated than just standing up an LLM and pointing a chat box at it.

You're probably going to see the industry fragment into a few different areas: SaaS offerings that are full-stack and designed to do a specific thing; "AIaaS" ("AI as a service", or maybe they'll call it "Model as a Service") offerings that are basically just an AI model on a stick with an API, like the current ChatGPT stuff; and something more like a platform that can be hosted or possibly on-prem, and requires full-time specialists or consultants to properly build and maintain.

I expect that to also be in order from most-common to least-common, in terms of market share.

There will probably always be open source models that fall into the third category, but like everything else, on the consumer side they'll mostly be a toy for nerds, because the secret sauce will be in the stack.

KillHour fucked around with this message at 00:29 on Nov 21, 2023


Tei
Feb 19, 2011
Probation
Can't post for 4 days!
The whole Altman / OpenAI kerfuffle is weird.

So now Microsoft will have Altman and a big group of people under one wing, and OpenAI under the other.

Somehow I think this will also slow down OpenAI's research, since people will get distracted by the drama or pause work while moving to a new job. Things stall completely if lawyers get involved.

Lucid Dream
Feb 4, 2003

That boy ain't right.
I think OpenAI figured out good memory retrieval as part of doing the GPTs thing, and they used that to create autonomous agents that can complete a new class of tasks with no human intervention. Then, Sam started demoing it to potential investors without board approval. Ilya saw Sam hawking a proto-AGI without even consulting the board, got spooked, and pulled the ripcord.

KillHour
Oct 28, 2007


Lucid Dream posted:

I think OpenAI figured out good memory retrieval as part of doing the GPTs thing, and they used that to create autonomous agents that can complete a new class of tasks with no human intervention. Then, Sam started demoing it to potential investors without board approval. Ilya saw Sam hawking a proto-AGI without even consulting the board, got spooked, and pulled the ripcord.

Do you have evidence for any of this or is it just speculation?

Lucid Dream
Feb 4, 2003

That boy ain't right.

KillHour posted:

Do you have evidence for any of this or is it just speculation?

Nah, pure speculation. I used the GPTs thing enough to see that it can re-write its own system prompt, and that it can retrieve arbitrary data from uploaded files for use in the output. Right after the GPTs thing was announced, I remember using an early UI of it (or something like it) that had a log on the side showing it doing agent stuff, so I'm just sort of putting the pieces together and extrapolating. I'm guessing they figured out a way for it to write its own "memories" and then retrieve them in a way that's scalable enough to tackle more complex problems.
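Purely as an illustration of the pattern being guessed at here, a minimal "write your own memories, retrieve them later" loop might look like the sketch below. Everything in it is hypothetical: real systems would score relevance with embeddings rather than the naive keyword overlap used here.

```python
class MemoryStore:
    """Toy agent memory: append free-text notes, retrieve the most relevant ones."""

    def __init__(self):
        self.notes = []

    def write(self, note):
        """The agent records a note about something it learned."""
        self.notes.append(note)

    def retrieve(self, query, k=2):
        """Rank notes by naive keyword overlap with the query, return the top k."""
        q = set(query.lower().split())
        scored = sorted(self.notes,
                        key=lambda n: len(q & set(n.lower().split())),
                        reverse=True)
        return scored[:k]

mem = MemoryStore()
mem.write("user prefers JSON output")
mem.write("the deploy script lives in tools/deploy.sh")
mem.write("API rate limit is 60 requests per minute")
print(mem.retrieve("what output format does the user prefer?", k=1))
# ['user prefers JSON output']
```

The scalability question is exactly the hard part this toy skips: keyword overlap falls apart long before a memory store gets big enough to support complex multi-step tasks.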

Lucid Dream fucked around with this message at 03:40 on Nov 21, 2023

Tei
Feb 19, 2011
Probation
Can't post for 4 days!
I am a way better programmer than ChatGPT. Sometimes I get frustrated by how obtusely mediocre ChatGPT is.

At the same time, when I need something mediocre like "give me a curl request sending photo1 and photo2 to /face_recognition", I get the resulting code in about 3 seconds. Way faster than I could even begin.

But there's something worse, much worse.

I can say something horribly poorly, with spelling horrors, using the wrong word, mistaking a few parts... and I still get a good answer. ChatGPT is better at receiving broken information than me. ChatGPT would understand a customer much better than I can. In that sense, ChatGPT is a much better programmer than I can possibly be. My problem is that when I receive conflicting information, I get super frustrated and angry and don't know what to do. And people naturally produce this type of feedback all the time: "We want all buttons on our webpage to be permanently disabled when the user double-clicks." <-me-> "Also the logout button?" "No, no, not that one." "And the search button?" "No, that one's excluded." "And the filter buttons?" "No, not those buttons."
In user speak, all is not all. First is not first. Left is not left. Right is not right. Yes is not yes. No is not no. They start talking about what they want, jump to how they want it, and describe a broken mechanic as desirable. They just want things to work.
All of this frustrates me to no end. But ChatGPT gets this poo poo and produces correct code, in 3 seconds.

Maybe I am too much like a robot, and ChatGPT too much like an empathic human being.

If they ever get ChatGPT to write good programs (not just "code"), the hilarious thing is that it will be better at deciphering users' broken feedback than many of us programmers.

Tei fucked around with this message at 01:49 on Nov 23, 2023

Fautzo
Jan 3, 2012

u can read this i guess idc

Lucid Dream posted:

I think OpenAI figured out good memory retrieval as part of doing the GPTs thing, and they used that to create autonomous agents that can complete a new class of tasks with no human intervention. Then, Sam started demoing it to potential investors without board approval. Ilya saw Sam hawking a proto-AGI without even consulting the board, got spooked, and pulled the ripcord.

I know you were speculating, and this article is pretty speculative as well, but this may be closer to the truth than I was willing to believe a few days ago.
https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

Seems like Ilya might have been scared by the progress and wanted to proceed with more caution...though this is all based on hearsay so take it with a grain of salt.

Heran Bago
Aug 18, 2006



Someone please explain to me how Q* solving math for 10-year-olds is dangerous, while Wolfram Alpha acing "what is six over four?" isn't.

Tei
Feb 19, 2011
Probation
Can't post for 4 days!

Heran Bago posted:

Someone please explain to me how Q* solving math for 10-year-olds is dangerous, while Wolfram Alpha acing "what is six over four?" isn't.

The G in AGI is for "general". Wolfram Alpha is not general; it is centered on math.

Wolfram Alpha is not going to turn the entire solar system into paperclips to maximize shareholder value. Whatever Q* is maybe might, if somebody stupid enough asks it to.

AGI is a human classification.

Maybe between the intelligence of a calculator and the intelligence of AGI there are many stages and systems. We may not invent an AGI, but we may invent one of the intermediate stages that leads to AGI, with a fraction of AGI's destructive potential, but still enough to kill us all.

Tei fucked around with this message at 13:44 on Nov 23, 2023

Bwee
Jul 1, 2005

Heran Bago posted:

Someone please explain to me how Q* solving math for 10-year-olds is dangerous, while Wolfram Alpha acing "what is six over four?" isn't.


It wasn't trained on math for ten year olds

Lucid Dream
Feb 4, 2003

That boy ain't right.

Heran Bago posted:

Someone please explain to me how Q* solving math for 10-year-olds is dangerous, while Wolfram Alpha acing "what is six over four?" isn't.

OpenAI apparently has an incredibly accurate method of predicting the capabilities of their models before they are fully trained. Better math on a smaller model using a new architecture might suggest a fully trained model would have dramatically more capabilities than GPT-4.

KillHour
Oct 28, 2007


It sure is a good thing that OpenAI lives up to its name and publishes their research and findings so external experts can weigh in and validate their findings / interpret the importance and implications of the results. What's that? We're all reading tea leaves and have no idea what's going on, except that it might be SHODAN or complete bunk or anything in between? Sweet. I love that.

SCheeseman
Apr 23, 2003

We've been poking at it for about a week in the GBS AI art thread, but I figure those who've been avoiding going there might want to know about something that's slipped under the radar this last month: AI-generated music has made a fairly significant leap forward. Suno.ai takes song-style suggestions and either generated or BYO lyrics, and has (basic) extension and stitching capabilities for creating full songs. The quality is absurdly high compared to what came before; a lot of people are comparing it to Dall-E, but for music.

Self serving example generated by me:
https://voca.ro/1hZS3KPqg6k3

This thread has more. Originally the generator didn't let you use your own lyrics and couldn't extend songs (that was only added recently), so most of what's in the thread is fairly short, with pretty wonky AI-generated lyrics. You could suggest a sentence's worth, but it was just a starting prompt. Now that that's changed, it's become a lot more useful, to the point where people in the music industry might start to notice. I'm curious how they'll react.

SCheeseman fucked around with this message at 15:50 on Nov 23, 2023

Lucid Dream
Feb 4, 2003

That boy ain't right.

SCheeseman posted:

Self serving example generated by me:
https://voca.ro/1hZS3KPqg6k3

Ahaha that's fantastic. Still has some minor acoustic artifacts, but goddrat all the same.

Tei
Feb 19, 2011
Probation
Can't post for 4 days!
Oh my god, this thing is addictive. I foresee millions of weird songs everywhere.

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

Tei posted:

I am a way better programmer than ChatGPT. Sometimes I get frustrated by how obtusely mediocre ChatGPT is.

At the same time, when I need something mediocre like "give me a curl request sending photo1 and photo2 to /face_recognition", I get the resulting code in about 3 seconds. Way faster than I could even begin.

But there's something worse, much worse.

I can say something horribly poorly, with spelling horrors, using the wrong word, mistaking a few parts... and I still get a good answer. ChatGPT is better at receiving broken information than me. ChatGPT would understand a customer much better than I can. In that sense, ChatGPT is a much better programmer than I can possibly be. My problem is that when I receive conflicting information, I get super frustrated and angry and don't know what to do. And people naturally produce this type of feedback all the time: "We want all buttons on our webpage to be permanently disabled when the user double-clicks." <-me-> "Also the logout button?" "No, no, not that one." "And the search button?" "No, that one's excluded." "And the filter buttons?" "No, not those buttons."
In user speak, all is not all. First is not first. Left is not left. Right is not right. Yes is not yes. No is not no. They start talking about what they want, jump to how they want it, and describe a broken mechanic as desirable. They just want things to work.
All of this frustrates me to no end. But ChatGPT gets this poo poo and produces correct code, in 3 seconds.

Maybe I am too much like a robot, and ChatGPT too much like an empathic human being.

If they ever get ChatGPT to write good programs (not just "code"), the hilarious thing is that it will be better at deciphering users' broken feedback than many of us programmers.

That's only obvious. A generative AI can never be better than the average of its inputs, and there's a ton of lovely inputs.

SaTaMaS
Apr 18, 2003

Liquid Communism posted:

That's only obvious. A generative AI can never be better than the average of its inputs, and there's a ton of lovely inputs.

Not necessarily true; a knowledgeable person can look at 9 garbage outputs and 1 great output and just use the great output.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
What if you get 10 lovely outputs, like I did yesterday because I couldn't remember the syntax for a datapager?

I had to go to Stack Overflow! What is this, 2021!?

Smiling Demon
Jun 16, 2013

SaTaMaS posted:

Not necessarily true, a knowledgeable person can look at 9 garbage outputs and 1 great output and just use the great output.

If I recall correctly, in a machine learning contest about a decade ago, run by (I think?) Netflix for video recommendation, the winning entry just recommended the most popular videos overall and ignored trying to tailor recommendations to individuals by their viewing history.
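Whatever the details of that contest were, the popularity baseline itself is about five lines: rank items by total view count and hand everyone the same list.

```python
from collections import Counter

def popularity_baseline(view_log, k=3):
    """Recommend the k most-watched items overall, ignoring the individual user.

    view_log: iterable of (user, item) pairs.
    """
    counts = Counter(item for _user, item in view_log)
    return [item for item, _n in counts.most_common(k)]

log = [("u1", "A"), ("u2", "A"), ("u3", "B"),
       ("u1", "C"), ("u2", "B"), ("u4", "A")]
# Everyone gets the same list, most popular first.
print(popularity_baseline(log))  # ['A', 'B', 'C']
```

Baselines like this are why recommendation papers report lift over popularity rather than raw accuracy: it's embarrassingly hard to beat for casual viewers.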

Tei
Feb 19, 2011
Probation
Can't post for 4 days!
I think algorithms and sites like Netflix should make a better effort at presenting viewers with artists, to have viewers engage with the creators of the movies / TV series.
Like sex is better with the people you love than with a prostitute, a movie is better when you understand the authors.
The idea of presenting TV series and movies like products in a supermarket has this flaw, imho: it does nothing to entice people to understand the artist behind them.
That word, "content", is everywhere, and people are deluded into thinking about movies as "products".
We are here because the people in power treat people like numbers, art like content, and so on. But it is wrong.
I think old TV kind of did this better, with shows where before or after the movie you would have a debate with people talking about the movie.

A movie recommendation algorithm should be more like the guy who has watched all the black-and-white Polish movies.
Currently, I think these recommendation algorithms answer simple questions like:
"Of these 20 shows in promotion, if the viewer were to make a choice between them, which show will they likely watch first?"
"Which show or movie is most likely to be watched to the end by this viewer?"

Better questions to suggest to the algorithm would be:
"Taking into account this list of movies the viewer enjoyed, the type of viewer they are, and the time they have / their current attitude, which movie that they don't yet know about will be an incredible pleasure for them, or will open this viewer up to other interesting movies?"

Netflix could turn people into movie experts who want to watch all types of movies and just love cinema. Create a need, instead of just fulfilling it. Teach people, not just entertain them.

Tei fucked around with this message at 13:52 on Nov 30, 2023

Bug Squash
Mar 18, 2009

I'm not intimately familiar with any of Netflix's algorithms, but "WriterName" and "DirectorName" seem like they would be incredibly easy things to add into pretty much any machine learning model, so I'd place some solid money on them already trying that.
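As a sketch of how trivially those fields slot in: most recommender models consume categorical features as one-hot columns, so "DirectorName" is just one more entry in the vocabulary. The field names and vocabulary below are made up for illustration:

```python
def one_hot_features(item, vocab):
    """Encode an item's categorical fields (genre, director, writer, ...)
    as a 0/1 vector against a fixed feature vocabulary."""
    active = {f"{field}={value}" for field, value in item.items()}
    return [1 if feat in active else 0 for feat in vocab]

# Hypothetical vocabulary: adding director/writer is just more columns.
vocab = ["genre=scifi", "genre=drama", "director=Villeneuve", "writer=Spaihts"]
movie = {"genre": "scifi", "director": "Villeneuve", "writer": "Spaihts"}
print(one_hot_features(movie, vocab))  # [1, 0, 1, 1]
```

Any model that already eats features like this picks up director affinity for free, which is why it's a safe bet the streaming services have tried it.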

Heran Bago
Aug 18, 2006



This is the kind of AI implementation I'm really interested in and afraid of: interpreting and dynamically manipulating objects in 3D space.

https://arstechnica.com/information-technology/2023/11/mother-plucker-steel-fingers-guided-by-ai-pluck-weeds-rapidly-and-autonomously/

quote:

Mother plucker: Steel fingers guided by AI pluck weeds rapidly and autonomously
Robot that uses AI to pull weeds may reduce poisonous herbicide use by 70% for some crops.

Anybody who has pulled weeds in a garden knows that it's a tedious task. Scale it up to farm-sized jobs, and it becomes a nightmare. The most efficient industrial alternative, herbicides, have potentially devastating side effects for people, animals, and the environment. So a Swedish company named Ekobot AB has introduced a wheeled robot that can autonomously recognize and pluck weeds from the ground rapidly using metal fingers.

The four-wheeled Ekobot WEAI robot is battery-powered and can operate 10–12 hours a day on one charge. It weighs 600 kg (about 1,322 pounds) and has a top speed of 5 km/h (2.5 mph). It's tuned for weeding fields full of onions, beetroots, carrots, or similar vegetables, and it can cover about 10 hectares (about 24.7 acres) in a day. It navigates using GPS RTK and contains safety sensors and vision systems to prevent it from unintentionally bumping into objects or people.

To pinpoint plants it needs to pluck, the Ekobot uses an AI-powered machine vision system trained to identify weeds as it rolls above the farm field. Once the weeds are within its sights, the robot uses a series of metal fingers to quickly dig up and push weeds out of the dirt. Ekobot claims that in trials, its weed-plucking robot allowed farmers to grow onions with 70 percent fewer herbicides. The weed recognition system is key because it keeps the robot from accidentally digging up crops by mistake.

Two years ago, Ekobot announced a collaboration with Swedish telecom company Telia that led to the integration of 5G mobile technology into the robot, which lets it communicate remotely with a central server to share collected learning data from anywhere in a farm field. This development was part of a pilot project for onion cultivation, and just recently, the company announced that the first "5G onions" grown using this weeding method are now available.

"The 5G onion has proven to have an extended shelf life, something that contributes to a reduction in wastage," reads a press release from Telia. "The 5G onion is not only more sustainable—it also tastes better. This is because efficient weeding and reduced use of pesticides enables onion shoots to grow more freely and for longer, enabling the onions to receive more sunlight and nutrients, making them more hardy and tasty."

Telia says that a limited number of 5G onions are available in Telia stores now (grab us one if you go) and that Ekobot's weeding technology will soon be used on carrots and beets.

Aside from Sweden, the tech is available in the Netherlands and is about to come to Denmark and Norway. Telia expects that the Ekobot system will become available "in 9 EU countries, the United Kingdom, and the United States" by 2030. When coupled with research on lasers that zap pests in flight, AI may help pave the way for a more sustainable and environmentally friendly farming system if widely adopted.

I assume it's not just a different implementation of the classic suction robot arms arranging chips on a conveyor belt. Reducing pesticide use and number of humans subjected to spraying them is a pretty good thing. There's stuff in this article to be genuinely happy and excited for.

But when AI can make basic crop-vs-weed judgements and get handsy about it, I worry. Stocking shelves and flipping burgers probably aren't that far off.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
We've had these sorts of technologies for ages; there are examples that exist in the wild:

https://youtu.be/ZNVuIU6UUiM?feature=shared

The issue is the mechanical side more than the AI. They are big and expensive to run and maintain, and often have only one use. An underpaid worker might not be as efficient, but they can do multiple tasks given to them.

And it's not like machinery hasn't been reducing the need for humans for farmwork for the last few centuries.


There has also been a burger-flipping robot for ages: https://youtu.be/KJVOfqunm5E?feature=shared

Apparently it breaks down constantly and is a loss leader.

Mega Comrade fucked around with this message at 15:31 on Nov 30, 2023

Tree Reformat
Apr 2, 2022

by Fluffdaddy
It's still cheaper to mass-produce and replace humans than hand-bots for a lot of modern manual labor jobs.

It's why they keep fretting about the birth rate. And even then, I expect a sudden interest in reviving human cloning among the wealthy class to solve that problem before anyone invests in a bunch of strawberry-picking Terminators.

Lord Of Texas
Dec 26, 2006

Liquid Communism posted:

That's only obvious. A generative AI can never be better than the average of its inputs, and there's a ton of lovely inputs.

This is not remotely true, and it's such a blatantly false/low-effort take that I don't feel it's even worth refuting. Just google any number of LLM benchmark papers on semantic scholar.

Bug Squash
Mar 18, 2009

Tree Reformat posted:

It's still cheaper to mass-produce and replace humans than hand-bots for a lot of modern manual labor jobs.

It's why they keep fretting about the birth rate. And even then, I expect a sudden interest in reviving human cloning among the wealthy class to solve that problem before anyone invests in a bunch of strawberry-picking Terminators.

There's no way anyone is going to opt for cloning, as you're left with an unproductive, expensive child for over a decade. It's just going to be new and innovative ways to re-invent slavery.

Freakazoid_
Jul 5, 2013


Buglord

Tree Reformat posted:

It's still cheaper to mass-produce and replace humans than hand-bots for a lot of modern manual labor jobs.

It's why they keep fretting about the birth rate. And even then, I expect a sudden interest in reviving human cloning among the wealthy class to solve that problem before anyone invests in a bunch of strawberry-picking Terminators.

It's the same story about every individual piece of technology for the past several decades. Robots never replace workers overnight. It takes years for the tech to mature.

Rogue AI Goddess
May 10, 2012

I enjoy the sight of humans on their knees.
That was a joke... unless..?
The part that no one talks about is that you could replace 80% of the executives and 95% of management consultants with AIs today and they'd do a better job.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!


This could do a better job

Gynovore
Jun 17, 2009

Forget your RoboCoX or your StickyCoX or your EvilCoX, MY CoX has Blinking Bewbs!

WHY IS THIS GAME DEAD?!

Rogue AI Goddess posted:

The part that no one talks about is that you could replace 80% of the executives and 95% of management consultants with AIs today and they'd do a better job.

This is true, but by the same token, you could drop 90% of executives and 100% of management consultants into a composter, and they would serve humanity better as fertilizer.

Tei
Feb 19, 2011
Probation
Can't post for 4 days!

Bug Squash posted:

I'm not intimately familiar with any of Netflix's algorithms, but "WriterName" and "DirectorName" seem like they would be incredibly easy things to add into pretty much any machine learning model, so I'd place some solid money on them already trying that.

The point is not to "mirror" the desires of the public with the perfect selection, but to teach the public to appreciate the artist in the background, so they appreciate the art in the foreground more.

This can be done by changing the question the algorithm answers.

Tei fucked around with this message at 09:03 on Dec 1, 2023

Tree Reformat
Apr 2, 2022

by Fluffdaddy
Streaming services basically killing off DVD extras like creator commentary and behind-the-scenes features, while the likes of Disney and the other media conglomerates completely clamp down on post-hoc tell-alls about productions, certainly hasn't helped either.

These days, if anyone hears anything about the making of their favorite TV show or movie, it's usually just a whistleblower telling them what awful sexpests the director and bosses were, not the joy of the creative process. Which is an indictment of how exploitative traditional mass media production is more than anything, but still.

Bug Squash
Mar 18, 2009

Tei posted:

The point is not to "mirror" the desires of the public with the perfect selection, but to teach the public to appreciate the artist in the background, so they appreciate the art in the foreground more.

This can be done by changing the question the algorithm answers.

It sounds like this would be better done by YouTube essays than by an algorithm.

Tree Reformat
Apr 2, 2022

by Fluffdaddy
Anyway, in actual AI chat, this tweet has been making the rounds:

https://twitter.com/SashaMTL/status/1730552781323317508

It feels a bit fuzzy to me. I'd prefer to see these measurements in watts, and maybe a direct comparison to common computing tasks. The idea that a single generation on my RTX 3060, which takes less than half a minute (although I haven't tested SDXL yet), somehow consumes more electricity than an entire hour of playing, say, Horizon Zero Dawn with all the settings cranked up on that same machine seems absurd. I feel like if that were the case I'd be tripping my circuit breaker with every generation.

I don't have a Kill-A-Watt to actually test this myself, unfortunately.

esquilax
Jan 3, 2003

Tree Reformat posted:

Anyway, in actual AI chat, this tweet has been making the rounds:

https://twitter.com/SashaMTL/status/1730552781323317508

It feels a bit fuzzy to me. I'd prefer to see these measurements in watts, and maybe a direct comparison to common computing tasks. The idea that a single generation on my RTX 3060, which takes less than half a minute (although I haven't tested SDXL yet), somehow consumes more electricity than an entire hour of playing, say, Horizon Zero Dawn with all the settings cranked up on that same machine seems absurd. I feel like if that were the case I'd be tripping my circuit breaker with every generation.

I don't have a Kill-A-Watt to actually test this myself, unfortunately.

The paper says that's per 1,000 inferences: 2.9 kWh per 1,000 works out to roughly 10,000 joules each, or about 350 W over 30 seconds.

The paper doesn't say specifically how many inferences make up one generation, or how those are defined.
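Spelling out that back-of-the-envelope conversion, using the 2.9 kWh per 1,000 inferences figure as quoted:

```python
KWH_TO_JOULES = 3.6e6  # 1 kWh = 3.6 million joules

kwh_per_1000 = 2.9                           # reported energy per 1,000 inferences
joules_each = kwh_per_1000 * KWH_TO_JOULES / 1000
watts_at_30s = joules_each / 30              # average draw if one inference takes 30 s

print(round(joules_each))   # 10440 -> ~10 kJ per inference
print(round(watts_at_30s))  # 348   -> ~350 W sustained over 30 seconds
```

So the per-inference figure is at least physically plausible for a single GPU; the open question is how many "inferences" one image generation counts as.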

esquilax fucked around with this message at 20:27 on Dec 1, 2023

Liquid Communism
Mar 9, 2004

коммунизм хранится в яичках

Lord Of Texas posted:

This is not remotely true, and it's such a blatantly false/low-effort take that I don't feel it's even worth refuting. Just google any number of LLM benchmark papers on semantic scholar.

My dude, you're trusting an AI-based research tool to provide you with accurate summaries of scientific literature, when the technology is well known for outright making stuff up that is trivially wrong?

Continually yelling 'nuh uh' is not a refutation.

SaTaMaS
Apr 18, 2003
https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set/

quote:

The researchers started by sketching out the problem they wanted to solve in Python, a popular programming language. But they left out the lines in the program that would specify how to solve it. That is where FunSearch comes in. It gets Codey to fill in the blanks—in effect, to suggest code that will solve the problem.

A second algorithm then checks and scores what Codey comes up with. The best suggestions—even if not yet correct—are saved and given back to Codey, which tries to complete the program again. “Many will be nonsensical, some will be sensible, and a few will be truly inspired,” says Kohli. “You take those truly inspired ones and you say, ‘Okay, take these ones and repeat.’”

After a couple of million suggestions and a few dozen repetitions of the overall process—which took a few days—FunSearch was able to come up with code that produced a correct and previously unknown solution to the cap set problem, which involves finding the largest size of a certain type of set. Imagine plotting dots on graph paper. The cap set problem is like trying to figure out how many dots you can put down without three of them ever forming a straight line.

The people who dismiss the entire concept of LLMs because they hallucinate when asked about obscure or recent facts are missing the point. As the test benchmarks have shown, they do very well on established facts, and even hallucinations are useful when looking for creative solutions as opposed to facts.
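The generate-score-keep-the-best loop the article describes can be sketched in a few lines. This is only a caricature of FunSearch, with `propose` standing in for Codey and `evaluate` for the checking algorithm; the toy "programs" here are just numbers:

```python
def funsearch_loop(propose, evaluate, seed_programs, rounds=10, keep=2):
    """Evolutionary search: score candidates, keep the best suggestions
    (even if not yet correct), and feed them back to the generator."""
    pool = list(seed_programs)
    for _ in range(rounds):
        candidates = pool + [propose(parent) for parent in pool]
        pool = sorted(candidates, key=evaluate, reverse=True)[:keep]
    return pool[0]

# Toy stand-in: the proposer nudges a number upward and the evaluator
# prefers values close to a target of 10. The loop climbs to the target.
best = funsearch_loop(
    propose=lambda p: p + 1,
    evaluate=lambda p: -abs(10 - p),
    seed_programs=[0],
    rounds=10,
    keep=1,
)
print(best)  # 10
```

The real system's novelty is in the two pluggable parts: an LLM as `propose` (creative, sometimes hallucinating) and a deterministic checker as `evaluate` (never fooled), which is exactly the division of labor that makes hallucinations useful rather than fatal.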

KillHour
Oct 28, 2007


Nobody is suggesting that these systems don't have practical uses. This isn't the first time this has happened either - AI improved the best known matrix multiplication algorithm a couple years ago. The problem is that people are taking news like this and extrapolating to an idea that these systems will produce only good code and be usable by people with no development knowledge to replace developers. It's like claiming that calculators will replace mathematicians.

SaTaMaS
Apr 18, 2003

KillHour posted:

Nobody is suggesting that these systems don't have practical uses. This isn't the first time this has happened either - AI improved the best known matrix multiplication algorithm a couple years ago. The problem is that people are taking news like this and extrapolating to an idea that these systems will produce only good code and be usable by people with no development knowledge to replace developers. It's like claiming that calculators will replace mathematicians.

It's more that people point to hallucinations as a reason for why LLMs can't be considered intelligent.


Bar Ran Dun
Jan 22, 2006




SaTaMaS posted:

It's more that people point to hallucinations as a reason for why LLMs can't be considered intelligent.

I don't think they're intelligent.

But that specific objection doesn't hold up.
