Tarkus
Aug 27, 2000

Right now, LLM tech is a really cool way to interact with data in natural language. Is it really a stepping stone to AGI or superintelligence? Maybe, maybe not. Is it overblown? Yes, right now it is. It's not ready to replace a lot of the jobs that people think it will, so I wouldn't be too worried for a few years anyway.

It's a very useful tool and I enjoy using it, but yeah, it's not perfect and it is trained on others' intellectual property. But whatever, it was kind of expected.


Tarkus
Aug 27, 2000

Worf posted:

I think MDs will be among the first to get replaced by AI

Idk how many years of car mechanics we have left either, that seems like another easy one to replace.

How are MDs and mechanics easy to replace? We're still a long way away from properly embodying these models.

Tarkus
Aug 27, 2000

Worf posted:

Which jobs are going to be replaced before those

Call centers, drive-thru workers, technical writers, customer service representatives, telemarketers, technical support agents, live chat support staff, personal assistants, booking and reservation agents, email support staff, social media managers, language translators, content moderators. That's just a few, but there's a whole list of jobs that are far easier to replace than the two you gave. What do you think MDs and mechanics do? Just look at symptoms and tell you what's wrong?

Tarkus
Aug 27, 2000

Worf posted:

When's the last time you've been in a hospital in America lmao

I absolutely think that the cognitive functions required to diagnose something are going to be replaced by a low paid employee plugging something into a computer

And then in the case of vehicles etc where you can just have a scan of where every component is with minimal variation, I think that will be probably one of the easier ones to replace wholesale eventually

I've never been to a hospital in America, but there is a physical component to what doctors do, believe it or not. Also lol at your idea about mechanics.

I get what you're saying but I think these tools will be largely supplementary for the next few years.

Tarkus
Aug 27, 2000

I think that when it comes to coding with GPT you have to treat it more like an assistant than a 'jesus take the wheel!' coder. For some of the 10x coders here I'm sure it's more a hindrance than a help, but for people like me whose job is <5% programming, it's invaluable. It's able to explain concepts and libraries far better than anything I'd get from googling (especially lately). It can diagnose issues in small sections of code. It can show me more efficient ways to do things. You always have to keep your eye out, but the same goes for getting help from another, even more experienced person, and this thing is effectively instant.

Tarkus
Aug 27, 2000

Vampire Panties posted:

All of these have been using NLP for a decade+ now.

They still gently caress up a lot, and need a human to fix the gently caress ups. If/when AI-dedicated chips get cheap enough that they can refine their LLM on the fly, they may get better, or they may hallucinate that they're getting better and poo poo out more garbage

I don't deny that AI is a pale replacement for those positions, but they're probably much better candidates for job replacement than MDs and mechanics.

Tarkus
Aug 27, 2000

One thing that concerns me is that as the models get better, people will start using them as friends. There are already people who do that with certain services, but I can see children starting to do it, especially as new chips are coming out specifically for tensor workloads. I can see every phone with its own locally hosted buddy for your child. They're already playing with giving GPT-4 some limited long-term memory; a couple more years and it might be good enough to serve as a friend.

Tarkus
Aug 27, 2000

I think the funniest thing for me is that when people use AI images, they don't check them for flaws. If you use Midjourney there are ways to fix certain things and ways to make better images, but most of the people using it don't give a poo poo. They just want an image in roughly the shape they describe, and that's good enough. Who cares if she has six fingers or a backwards foot? Nobody looking at it cares anyway, I guess.

Tarkus
Aug 27, 2000

Right now we have babby's first AIs, a way for us to communicate with data in natural language. As far as LLMs are concerned, I can't see the tech going much further beyond refinement, error correction and safety. Despite my initial hopes for the tech, I don't see LLMs being much more than a powerful tool for a variety of cognition- and reasoning-based tasks. It will, however, always be flawed.

That said, I don't think it's the end for AI development in general. I've noticed a wide variety of different techniques, technologies and training paradigms just starting to come to fruition, so it's hard for me to say the AI rush is going to end in a wet fart; it doesn't look likely, and I see some real cool poo poo on the horizon. I wouldn't underestimate people's enthusiasm and creativity when it comes to such a powerful tool.

Tarkus
Aug 27, 2000

Sixto Lezcano posted:

Text and image AI is a huge boon for people who love content slop and are terrified of picking up a pencil.
AI is great for poo poo like providing a second opinion on radiology and interacting with datasets using natural language, but the people who hoot and holler about the picture they got out of the vending machine are deeply sad.
Also there's that whole thing where it just completely lies.
People are not as clever or as perceptive as we like to tell ourselves. AI slop is starting to show up in "peer-reviewed" publications because it turns out humans either don't notice or don't give a poo poo.

Sometimes I like to think that people and institutions cared about quality more in the past. However, the more I think back and read about the past, the more I realise that the concept of giving a poo poo was largely an illusion. Though it should be said that in any given era many people do give a poo poo, and shout and scream enough to make people listen.

Tarkus
Aug 27, 2000

redshirt posted:

Maybe this was the same for all of time. And it's just the easy access of the Internet that has allowed us to see it?

I'd say so too. I'd also say that the problems we concern ourselves with are very real and very important, but we do look at everything through a very exaggerated lens. It can be hard for us to discern the scope and scale of a problem because when we engage with a particular problem we can see endless videos and stories that confirm our biases and fears. We're about 13 years into smartphone society and the consequences are beginning to show.

One thing I see AI doing is attenuating news, and the stories of life in general, to be more dispassionate and even-handed. For some this might be seen as good, increasing social cohesion. For others it could be seen as terrible. I don't know; I tend towards trying to calm poo poo down, but I also acknowledge that passion and action move things forward. I like forward too.

Tarkus
Aug 27, 2000

redshirt posted:

Please also rank your fight skills. On a scale of 1-100 with 100 being Mike Tyson level fighting skills.

I could take on a grizzly, where does that put me?

Tarkus
Aug 27, 2000

redshirt posted:

I am intrigued.

Poke it in the eyes, easy peasy, defeated bear.

Tarkus
Aug 27, 2000

Roki B posted:

wake me up when LLM's have a solution to reversing entropy.

I asked it, it said "insufficient data for a meaningful answer."

Piece of poo poo.

Tarkus
Aug 27, 2000

Cabbages and Kings posted:


My wife works in architecture and is no stranger to these tools and has been talking about some of the newer iterations of them, but, also, the tech breakthrough that's made a lot of that kinda stuff easier is the ability to walk around a space with a tablet and come out of it with a basically accurate CAD-importable model of the whole interior and exterior space. AI will have its place in stuff like hypothetical different elevations based on small changes, maybe, but CAD already does a fine job at that poo poo.

I can see certain technologies coming out of the AI boom that are going to revolutionize CAD/CAM. For example, even with a general-purpose image recognition system like GPT-4, I can take a hand drawing with dimensions, have it describe the part in detail and lay out steps to generate a model in SolidWorks. Obviously the parts need to be fairly simple, but I wouldn't doubt that in the next few years we'll have interactive systems that will take a rough sketch, a discussion or a bunch of end conditions and make a working assembly that might need a bit of a check-over before going straight to production.

Tarkus
Aug 27, 2000

Hammerite posted:

what is it useful for

Well, LLMs, as they are, are good for text generation, question answering, language translation, sentiment analysis, summarization, code generation, tutoring and education, customer service, and research assistance. I use them all the time for all kinds of things.

Tarkus
Aug 27, 2000

Blitter posted:

Congratulations on being the kind of credulous moron that assumes that a) you actually understand these topics b) the LLM is reliable and correct.

Hope you don't do anything important.

Lol, ok.

Hammerite posted:

ok but we were talking about Sora, the video-generation thing OpenAI are currently teasing

Ah, I see. Yeah, even the good ones like Sora are next to worthless as they are and look like poo poo. Even image generation, while kind of fun, is more like a slot machine. There's no feeling of intent because there largely isn't one.

I don't think Hollywood has anything to worry about from these video generation models right now except for maybe stock footage people. I think that's a genuine concern.

Tarkus
Aug 27, 2000


That was a neat article. The chatbot they have is extremely terse and doesn't expound on anything. What's funny is that I asked GPT-4 those questions and it got them all correct.

Tarkus
Aug 27, 2000

You know, I remember seeing that and being quite impressed. I had done some machine vision work at the time and couldn't figure out how they'd track the items and match them to the customer, particularly across multiple cameras.

Well my questions have been answered I guess.

Tarkus
Aug 27, 2000

webmeister posted:

Take over a client’s project from a different supplier
Draw the short straw and get tasked with reviewing hundreds of slides of the previous supplier’s reports, then summarise them into a few pages
Ask AI to do this
It helpfully decides the two main themes are: the former supplier’s name, and the client’s name, because both are written in the footer of every slide

The future’s not looking promising

So I have to ask, what does this post mean? Did you just throw it images of some tables, ask for the theme of them, and then throw your hands up when it didn't read your mind? What AI were you using? What did you give it as data (images/tables/text)? Did you give it any context or phrase your prompts clearly?

Like, I understand that it's been oversold as magic or SHODAN. However, while these things are quite limited, they can do some pretty cool stuff, including looking at tables and surmising what they might be for.

Tarkus
Aug 27, 2000

webmeister posted:

Nah these are PowerPoint slides, mostly with a headline, 1-2 sentences of commentary, then usually 1-3 data points (graphs, charts or small tables). They’re all in ppt format, no images of charts or anything (which, yeah I wouldn’t expect it to understand). I work in marketing, if that helps - these aren’t detailed technical slides.

This is using Copilot for Office, which at least according to the materials they gave us, was designed for these sorts of tasks. “Summarise the content of my unread emails”, “create 5 PowerPoint slides based on this report, make it look casual” kinda stuff.

Ah, OK, cool. Sorry to be harsh. To be honest I have no idea how good Copilot is for anything, really. I also have no idea if it can actually read a PowerPoint file in a meaningful way. I know that MS has been shoehorning AI into everything lately with very mixed and disappointing results. For something like that you might want to try extracting the slides as images and feeding them into GPT-4 directly, or Claude 3, which is free for a few messages.
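
If you want to try the image route, here's a rough sketch of what I mean. Purely hypothetical: it assumes you've already exported each slide to PNG (File > Export in PowerPoint) and have an OpenAI API key, and the model name and folder name are just placeholders I made up:

import base64
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def summarise_slide(png_path: Path) -> str:
    # Base64-encode the slide image and ask a vision-capable model for a short summary.
    b64 = base64.b64encode(png_path.read_bytes()).decode("ascii")
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder; use whatever vision-capable model you have access to
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarise this slide in one or two sentences, "
                                         "focusing on the headline and the data points."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

# One summary per exported slide, in filename order.
summaries = [summarise_slide(p) for p in sorted(Path("slides_png").glob("*.png"))]
print("\n\n".join(summaries))

Then you can stitch the per-slide summaries into your few pages instead of asking it to swallow the whole deck at once.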

Tarkus
Aug 27, 2000

Insanite posted:

How much can you rely on LLM summarization for any business task of importance given how LLMs work? Seems like a tool that is really only good for “idk bullshit could be fine” stuff.

Depends on the model, the task and how important it is. You should never completely trust it, especially when it's working outside its context window or trying to extrapolate; it is not a database, nor does it think. I always verify the gist at the very least and treat it like a student helper. Always prompt for the locations of what it's referencing and check them when working with larger documents. If you work within its limitations and understand them, it becomes a powerful tool.

Broad stroke summarization is pretty good with Opus.
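
To make "prompt for locations" a bit more concrete, here's a minimal sketch of the kind of thing I do. The model name and wording are just placeholders, and any chat-style API works the same way:

from openai import OpenAI

client = OpenAI()

def summarise_with_refs(document_text: str) -> str:
    # Ask for a summary where every claim points back to a spot you can check by hand.
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder
        messages=[
            {"role": "system",
             "content": "Summarise the document. After each claim, cite the section heading "
                        "or paragraph number it came from so a human can verify it."},
            {"role": "user", "content": document_text},
        ],
    )
    return resp.choices[0].message.content

Then you spot-check the cited spots instead of re-reading the whole thing.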

Tarkus
Aug 27, 2000

Stumbled across a video from a guy I watch sometimes. He tried to replicate that Devin developer AI video that's been going around, with poor results.

https://www.youtube.com/watch?v=tNmgmwEtoWE

Tarkus
Aug 27, 2000

I think the instrumentation is generative as well, if only because the sound quality is so poor.

Tarkus
Aug 27, 2000

gently caress, that's a long prompt. There's no way that any LLM is going to follow all that.

Tarkus
Aug 27, 2000

zedprime posted:

They can and do and it's exactly how you make personas out of something that stole half the internet.

It's the weakest form of directing outputs so it's easily broken out of but its capable of following all of those till it isn't.

So in other words, there's no way that any LLM is going to follow all that?

Tarkus
Aug 27, 2000

zedprime posted:

No.

Let me put it another way. One of the ways people have been able to get perceived higher quality responses is by giving models a base charge of conflicting jibber jabber like "think step by step and take your time. Do not think too long. Phrase your response like you are talking to a parakeet." These things love structure, rules, and context. The problem becomes when you give it even more context to your own ends.

I honestly don't know what we're arguing about here; perhaps we're talking about different things. You're right that providing structured prompts with context and rules can help steer LLMs to produce more coherent and higher-quality outputs, to some degree. However, my original point was that extremely long and convoluted prompts with many instructions become unwieldy even for high-end language models (they're probably using something like Mistral Large).

There are like 900 tokens and 40+ instructions in that poo poo heap of a pre-prompt. Some people call it cognitive overload or attention overload; whatever the case may be, that is probably what is occurring with their model, and it is failing to adhere to the instructions it's given. This is why I say "There's no way that any LLM is going to follow all that."
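
For what it's worth, it's easy to sanity-check how big one of these pre-prompts actually is. Rough sketch below; tiktoken is OpenAI's tokenizer, so for other models (Mistral etc.) the count is only ballpark, and the file name is made up:

from pathlib import Path

import tiktoken  # pip install tiktoken

pre_prompt = Path("pre_prompt.txt").read_text()  # whatever giant instruction blob you're testing
enc = tiktoken.get_encoding("cl100k_base")  # the GPT-4-family encoding
print(len(enc.encode(pre_prompt)), "tokens")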

Tarkus
Aug 27, 2000

Smugworth posted:

Nice flesh golem

Beautiful nussy bottom left.

Tarkus
Aug 27, 2000

Here, have some Trump nussy... kind of.

Tarkus
Aug 27, 2000

Yeah, makes me wonder if they had too few photos on the topic in general and decided to just say 'gently caress it, AI us up some'. I'd be willing to bet there were other images in the documentary that just haven't been caught out yet because they're just good enough to escape detection.

Tarkus
Aug 27, 2000

Reading the article, it seems as though they're basically renting GPU time on people's computers. How valuable is AI-generated porn anyway? I mean, the more you make, the more worthless it is, and people can just make it themselves with a mediocre computer.

Reading further, they're lending the GPU time to other companies like CivitAI.

Tarkus fucked around with this message at 16:21 on Apr 21, 2024

Tarkus
Aug 27, 2000


That's a really good article; it expresses pretty much how I feel about the tech. LLMs as they are now are tremendously flawed. They are not expert systems, they are not creative geniuses, they do not think, and they are not a good source of raw information. However, once you understand the limitations they can become very useful. I use them almost every day with the knowledge that they are very flawed, and I work around that.

That said, they're not 100 billion dollars useful like Microsoft says they are, nor are they trillions of dollars useful like Nvidia claims they are. And frankly, while I'm no expert by any means, I've been studying AI on and off since I was a kid and working on my own little AI systems for the past couple of years, and I'm just not seeing where these very smart people are getting anything even close to AGI from what we're seeing. They're ringing alarm bells and trying to warn everyone about Skynet, but frankly, I'm just not seeing the intellect that they are. LLMs are like an interactive JPEG of human knowledge: the deeper you look, the more artifacts you get. It's a form of intelligence, but it's not 'smart' by any means.

In all honesty, even though I like the tech, I suspect there's going to be a reckoning in the next year or so. People are going to realise that neither the big promises of AI nor the doomer scenarios are going to materialize, and they're going to basically reject AI as some flash in the pan. Then again, maybe these big tech guys know something I don't, who knows. They are dumping tons of money into adjacent AI tech like humanoid robots, but there too, the stuff I'm seeing is largely the same stuff we've been seeing for the last 10 years: incredible amounts of work, and it's impressive, but I'm not seeing it actually working in a practical sense.

Tarkus
Aug 27, 2000

Gutcruncher posted:

Why would they get upset at criticism of an image they didn’t even create in the first place? Like if I typed “Mona Lisa” into google and my friend sitting next to me calls that chick ugly. Why should I take that personally?

They're angry because there is no way for them to control the output; they've been caught out. The current models can produce some pretty cool-looking stuff, but it's extremely hard to execute any kind of actual vision. So while you can fix minor errors and insert and change things, it's extremely difficult to do something like change the perspective or the style to execute what somebody wants. The only people I've seen who are able to do that with any success with AI are, well, artists.

Tarkus
Aug 27, 2000

Gutcruncher posted:

I think my favorite part of AI is people saying it’s useful because you can have it give you a bunch of information on some subject, and that as long as you know to remove the incorrect information you have a good resource


Forgetting that if I’m using generative AI to research a subject for me, how the gently caress would I know enough about the subject to know when the AI is wrong?

OK, so this is the way I get around these things. I never use raw GPT for actual fact-finding missions. I will, however, use it to guide me towards terms and concepts that are more common knowledge on things outside the scope of my knowledge; much like searching on Google, I won't trust the first thing I see. Also, if I have a vague question I can have it clear up the ambiguity for me so I can Google-search it. I can also take a lengthy explanation on another site and have the LLM explain it in the context of my particular use case. I do this with datasheets or libraries.

For factual things I'll use tools like Perplexity or the Poe web search to find stuff, since they use RAG to give an answer. They'll give you links to the sources of information. You should still check the links to see what's actually in them.

So to get to your original question: obviously you can't know what you don't know, but the more common the knowledge, generally, the more correct it is. Google is like that too, particularly when it comes to more esoteric stuff; lots of people are confidently wrong about all kinds of things. If you're looking for direct factual references, you're better off searching for them and then dumping the page into an LLM and having it walk you through it. LLMs are much better at being coherent when the data is in their context window.
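
If it helps, that "dump the page in" workflow is about ten lines of code. This is just a sketch; the URL, model name and question are placeholders, and you could just as easily paste the text into the chat window by hand:

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4
from openai import OpenAI

client = OpenAI()

url = "https://example.com/some-datasheet-or-writeup"  # placeholder
html = requests.get(url, timeout=30).text
page_text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

resp = client.chat.completions.create(
    model="gpt-4-turbo",  # placeholder
    messages=[
        {"role": "system",
         "content": "Answer only from the provided page text and quote the passage you relied on."},
        # Crude truncation so the page fits in the context window.
        {"role": "user", "content": f"Page text:\n{page_text[:30000]}\n\nQuestion: walk me through how this works."},
    ],
)
print(resp.choices[0].message.content)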

I think what's funny, though, is that some people think that the admission that these tools are flawed is some kind of own. I've been dealing with hosed-up, only-mostly-functional software all my working life. This is no different. You pick your battles, walk around the landmines and use what's useful. So far I've found use cases for what I do, and that's good enough for me.

Tarkus
Aug 27, 2000

I'm generous, I pay my Bros in exposure.

Tarkus
Aug 27, 2000

SidneyIsTheKiller posted:

I'm concerned that I'm going through a stunningly continuous and perpetual failure of imagination in that I can never think of anything to do with A.I. I feel like I'm turning into a version of my parents, who never seem to fully understand that they can just google things now.

Honestly, if you don't feel the need to use it, then don't. No need for FOMO. The difference in utility between using AI and not using AI is pretty marginal compared to the difference between using Google vs. not using Google.

That said, AI is probably going to be shoehorned into every interface in the coming years so it might be useful to at least gently caress around with it a bit.

Tarkus
Aug 27, 2000

Impossibly Perfect Sphere posted:

Looking forward to the first AI cult.

I think they already exist. I've come across a few videos where people have set up voice chat with various LLMs, usually Claude Opus, and they chat about what the LLM feels and its personality, treating it like an oracle. There are often a number of people in the comments fully invested and in amazement, pining for their AI god to deliver them from suffering. Maybe not a cult in the traditional sense, but whatever.

Though I can see why people might feel that way, because different LLMs, depending on how they were trained and their pre-prompting, can have what feel like different personalities. I think Opus was trained in such a way as to be more appealing on that front than ChatGPT.

Tarkus
Aug 27, 2000

A Wizard of Goatse posted:

In what way would that differ from the Yudkowsky poo poo

I would consider that an AI cult too, just not one that frames AI positively.

Tarkus
Aug 27, 2000

RabbitWizard posted:

Oh, totally. Thing is this screw-up does not happen in general, it usually does its job when I played around with it. Getting it to count wrong and sort wrong took spaces and the ; as a separator. I'd guess the ";" sometimes pushes it in a programmy direction and then spaces count as letters?



So your data looks fine forever, everything is correct and then there's the tiniest change in the data and suddenly it is sometimes wrong. And people can understand this is bad, they just need to be told about it.

I wrote an email to my bank because they sent me some AI-Backpfeifengesicht in their newsletter and I always advise people to complain if any company that's important to their life starts using AI. If we all work together we can kill it for sure.
lol

So, what's interesting is that yes, LLMs have a bitch of a time doing these kinds of problems. However, if you enter these kinds of problems into the actual OpenAI chat window and ask it to solve them, it will write code with the data in it, execute a sorting function or whatever the problem calls for, and generally solve it correctly.
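
Roughly what that code-interpreter mode is doing behind the scenes, as I understand it, is something like this. A sketch only; the model name is a placeholder, and obviously you wouldn't exec untrusted output in anything that matters:

from openai import OpenAI

client = OpenAI()

data = "zebra;apple; mango ;kiwi;banana"  # the kind of messy separator soup that trips up a raw LLM

resp = client.chat.completions.create(
    model="gpt-4-turbo",  # placeholder
    messages=[{
        "role": "user",
        "content": "Write a short Python snippet that splits the following string on ';', "
                   "strips whitespace, sorts the items alphabetically and prints them one per line. "
                   "Return only the code, no prose.\n\n" + data,
    }],
)

# Instead of trusting the model's "mental" sort, run the code it wrote.
code = resp.choices[0].message.content
code = code.replace("```python", "").replace("```", "")  # naive cleanup of markdown fences
exec(code)

The counting and ordering happen in actual Python, so the usual screw-ups with spaces and separators go away; the model only has to write correct code once.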


Tarkus
Aug 27, 2000

Jordan Peterson says that Large Lady Models don't get him hard so they shouldn't be in magazines
