|
Right now, LLM tech is a really cool way to interact with data in natural language. Is it really a stepping stone to AGI or superintelligence? Maybe, maybe not. Is it overblown? Yes, right now it is. It's not ready to replace a lot of the jobs people think it will; I wouldn't be too worried for a few years anyway. It's a very useful tool and I enjoy using it, but yeah, it's not perfect, and it is trained on others' intellectual property. But whatever, that was kind of expected.
|
# ¿ Mar 11, 2024 14:15 |
|
|
Worf posted: "I think MDs will be among the first to get replaced by AI" How are MDs and mechanics easy to replace? We're still a long way from properly embodying these models.
|
# ¿ Mar 11, 2024 14:18 |
|
Worf posted: "Which jobs are going to be replaced before those" Call centers, drive-thru staff, technical writers, customer service representatives, telemarketers, technical support agents, live chat support staff, personal assistants, booking and reservation agents, email support staff, social media managers, language translators, content moderators. That's just a few; there's a whole list of jobs far easier to replace than the two you gave. What do you think MDs and mechanics do? Just look at symptoms and tell you what's wrong?
|
# ¿ Mar 11, 2024 14:35 |
|
Worf posted: "When's the last time you've been in a hospital in America lmao" I've never been to a hospital in America, but there is a physical component to what doctors do, believe it or not. Also lol at your idea about mechanics. I get what you're saying, but I think these tools will be largely supplementary for the next few years.
|
# ¿ Mar 11, 2024 14:56 |
|
I think that when it comes to coding with GPT, you have to treat it more like an assistant than a 'jesus take the wheel!' coder. For some of the 10x coders here I'm sure it's more a hindrance than a help, but for people like me, whose job is <5% programming, it's invaluable. It's able to explain concepts and libraries far better than googling would (especially lately). It can diagnose issues in small sections of code. It can show me more efficient ways to do things. You always have to keep your eye out, but the same goes for getting help from another, even more experienced person, and this thing is effectively instant.
|
# ¿ Mar 11, 2024 15:16 |
|
Vampire Panties posted: "All of these have been using NLP for a decade+ now" I don't deny that AI is a pale replacement for those positions, but they're probably much better candidates for job replacement than MDs and mechanics.
|
# ¿ Mar 11, 2024 15:18 |
|
One thing that concerns me is that as the models get better, people will start using them as friends. There are already people who do that with certain services, but I can see children starting to do it, especially as new chips come out specifically for tensor workloads. I can see every phone with its own locally hosted buddy for your child. They're already playing with giving GPT-4 some limited long-term memory; a couple more years and it might be good enough to serve as a friend.
|
# ¿ Mar 11, 2024 17:13 |
|
I think the funniest thing for me is that when people use AI images, they don't check them for flaws. If you use Midjourney there are ways to fix certain things and ways to make better images, but most of the people using them don't give a poo poo. They just want an image in roughly the shape they described, and that's good enough. Who cares if she has 6 fingers or a backwards foot? Nobody looking at it cares anyway, I guess.
|
# ¿ Mar 11, 2024 22:52 |
|
Right now we have babby's first AIs: a way for us to communicate with data in natural language. As far as LLMs are concerned, I can't see the tech going much further beyond refinement, error correction and safety. Despite my initial hopes, I don't see LLMs being much more than a powerful tool for a variety of cognition- and reasoning-based tasks, and they will always be flawed. That said, I don't think it's the end for AI development in general. I've noticed a wide variety of different techniques, technologies and training paradigms just starting to come to fruition, so it's hard for me to say the AI rush is going to end in a wet fart. It doesn't look likely; I see some real cool poo poo on the horizon. I wouldn't underestimate people's enthusiasm and creativity when it comes to such a powerful tool.
|
# ¿ Mar 17, 2024 03:21 |
|
Sixto Lezcano posted: "Text and image AI is a huge boon for people who love content slop and are terrified of picking up a pencil." Sometimes I like to think that people and institutions cared about quality more in the past; however, the more I think back and read about the past, the more I realise that the concept of giving a poo poo was always kind of an illusion. Though it should be said that in any given era, many people do give a poo poo, and shout and scream enough to make others listen.
|
# ¿ Mar 17, 2024 04:01 |
|
redshirt posted: "Maybe this was the same for all of time. And it's just the easy access of the Internet that has allowed us to see it?" I'd say so too. I'd also say that the problems we concern ourselves with are very real and very important, but we do look at everything through a very exaggerated lens. It can be hard for us to discern the scope and scale of a problem, because when we engage with one, we can find endless videos and stories that confirm our biases and fears. We're about 13 years into smartphone society and the consequences are beginning to show. One thing I see AI doing is attenuating news, and the stories of life in general, to be more dispassionate and even-handed. For some this might be seen as good, increasing social cohesion. For others it could be seen as terrible. I don't know; I tend towards trying to calm poo poo down, but I also acknowledge that passion and action move things forward. I like forward too.
|
# ¿ Mar 17, 2024 04:35 |
|
redshirt posted: "Please also rank your fight skills. On a scale of 1-100 with 100 being Mike Tyson level fighting skills." I could take on a grizzly, where does that put me?
|
# ¿ Mar 17, 2024 05:20 |
|
redshirt posted: "I am intrigued." Poke it in the eyes, easy peasy, defeated bear.
|
# ¿ Mar 17, 2024 05:24 |
|
Roki B posted: "wake me up when LLM's have a solution to reversing entropy" I asked it, it said "insufficient data for a meaningful answer." Piece of poo poo.
|
# ¿ Mar 18, 2024 19:14 |
|
Cabbages and Kings posted:
I can see certain technologies coming out of the AI boom that are going to revolutionize CAD/CAM. For example, even with a general-purpose image recognition system like GPT-4, I can take a hand drawing with dimensions and have it describe the part in detail and lay out the steps to generate a model in SolidWorks. Obviously the parts need to be fairly simple, but I wouldn't doubt that in the next few years we'll have interactive systems that take a rough sketch, a discussion or a set of end conditions and produce a working assembly that might need a bit of a check-over before going straight to production.
|
# ¿ Mar 21, 2024 17:19 |
|
Hammerite posted: "what is it useful for" Well, LLMs, as they are, are good for text generation, question answering, language translation, sentiment analysis, summarization, code generation, tutoring and education, customer service, and research assistance. I use them all the time for all kinds of things.
|
# ¿ Mar 24, 2024 23:16 |
|
Blitter posted: "Congratulations on being the kind of credulous moron that assumes that a) you actually understand these topics b) the LLM is reliable and correct." Lol, ok. Hammerite posted: "ok but we were talking about Sora, the video-generation thing OpenAI are currently teasing" Ah, I see. Yeah, even the good ones like Sora are next to worthless as they are, and they look like poo poo. Even image generation, while kind of fun, is more like a slot machine. There's no feeling of intent because there largely isn't one. I don't think Hollywood has anything to worry about from these video generation models right now, except maybe stock footage people. I think that's a genuine concern.
|
# ¿ Mar 25, 2024 18:29 |
|
That was a neat article. The chatbot they have is extremely terse and doesn't expound on anything. What's funny is that I asked GPT4 those questions and it got them all correct.
|
# ¿ Apr 2, 2024 18:33 |
|
You know, I remember seeing that and being quite impressed. I had done some machine vision work at the time and couldn't figure out how they'd track the items and match them to the customer, particularly across multiple cameras. Well my questions have been answered I guess.
|
# ¿ Apr 2, 2024 19:15 |
|
webmeister posted: "Take over a client's project from a different supplier" So, I have to ask, what does this post mean? Did you just throw it images of some tables, ask for the theme of them and then throw your hands up when it didn't read your mind? What AI were you using? What did you supply it as data (images/tables/text)? Did you give it any context or phrase your prompts clearly? I understand that it's been oversold as magic or SHODAN; however, while these things are quite limited, they can do some pretty cool stuff, including looking at tables and surmising what they might be for.
|
# ¿ Apr 8, 2024 02:18 |
|
webmeister posted: "Nah these are PowerPoint slides, mostly with a headline, 1-2 sentences of commentary, then usually 1-3 data points (graphs, charts or small tables). They're all in ppt format, no images of charts or anything (which, yeah I wouldn't expect it to understand). I work in marketing, if that helps - these aren't detailed technical slides." Ah, OK, cool. Sorry to be harsh. To be honest, I have no idea how good Copilot is for anything, really. I also have no idea whether it can actually read a PowerPoint file in a meaningful way. I know that MS has been shoehorning AI into everything lately, with very mixed and disappointing results. For something like that, you might want to try extracting the slides as images and feeding them into GPT-4 directly, or into Claude 3, which is free for a few messages.
|
# ¿ Apr 8, 2024 03:42 |
|
Insanite posted: "How much can you rely on LLM summarization for any business task of importance given how LLMs work? Seems like a tool that is really only good for “idk bullshit could be fine” stuff." It depends on the model, the task and how important it is. You should never completely trust it, especially outside its context window or when it's trying to extrapolate; it is not a database, nor does it think. I always verify the gist at the very least and treat it like a student helper. Always prompt for the locations of what it's referencing, and check them when working with larger documents. If you work within its limitations and understand them, it becomes a powerful tool. Broad-stroke summarization is pretty good with Opus.
|
# ¿ Apr 8, 2024 03:51 |
|
Stumbled across a video from a guy I watch sometimes. He tried to replicate that Devin developer AI video that's been going around, to poor result. https://www.youtube.com/watch?v=tNmgmwEtoWE
|
# ¿ Apr 10, 2024 01:53 |
|
I think the instrumentation is generative as well, if only because the sound quality is so poor.
|
# ¿ Apr 10, 2024 22:53 |
|
gently caress that's a long prompt. There's no way that any LLM is going to follow all that.
|
# ¿ Apr 12, 2024 14:24 |
|
zedprime posted: "They can and do and it's exactly how you make personas out of something that stole half the internet." So in other words, there's no way that any LLM is going to follow all that?
|
# ¿ Apr 13, 2024 01:36 |
|
zedprime posted: "No." I honestly don't know what we're arguing about here; perhaps we're talking about different things. You're right that providing structured prompts with context and rules can help steer LLMs to produce more coherent, higher-quality outputs, to some degree. However, my original point was that extremely long and convoluted prompts with many instructions become unwieldy even for high-end language models (they're probably using something like Mistral Large). There are something like 900 tokens and 40+ instructions in that poo poo heap of a pre-prompt. Some people call it cognitive overload or attention overload; whatever the case may be, that is probably what is occurring with their model, and it is failing to adhere to the instructions it is given. This is why I say "There's no way that any LLM is going to follow all that."
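For a rough sense of scale: the usual rule of thumb is about four characters of English per token. An exact count needs the model's own tokenizer (tiktoken for OpenAI models, for example), so this is only a ballpark sketch:

```python
def rough_token_count(text: str) -> int:
    """Ballpark token estimate using the ~4 characters-per-token
    rule of thumb for English text. For an exact count, use the
    model's own tokenizer instead."""
    return max(1, len(text) // 4)

# A pre-prompt around 3600 characters works out to roughly 900 tokens.
preprompt = "x" * 3600  # stand-in for a ~3600-character pre-prompt
print(rough_token_count(preprompt))  # → 900
```

And regardless of the exact count, 40+ separate rules in one prompt is a lot of competing instructions for the model to juggle.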
|
# ¿ Apr 13, 2024 04:00 |
|
Smugworth posted: "Nice flesh golem" Beautiful nussy bottom left.
|
# ¿ Apr 14, 2024 05:35 |
|
Here, have some Trump nussy... kind of.
|
# ¿ Apr 14, 2024 05:57 |
|
Yeah, makes me wonder if they had too few photos on the topic in general and decided to just say 'gently caress it, AI us up some'. I'd be willing to bet there were other AI images in the documentary that just haven't been caught yet because they're good enough to escape detection.
|
# ¿ Apr 19, 2024 14:45 |
|
Reading the article, it seems as though they're basically renting GPU time on people's computers. How valuable is AI-generated porn anyway? The more of it you make, the more worthless it is, and people can just make it themselves with a mediocre computer. Reading further, they're lending the GPU time to other companies like CivitAI.
# ¿ Apr 21, 2024 16:15 |
|
ultrafilter posted: "https://twitter.com/molly0xFFF/status/1780601652049043628" That's a really good article; it expresses pretty much how I feel about the tech. LLMs as they are now are tremendously flawed. They are not expert systems, they are not creative geniuses, they do not think, and they are not a good source of raw information. However, once you understand the limitations, they can become very useful. I use them almost every day with the knowledge that they are very flawed, and I work around that. That said, they're not 100 billion dollars useful like Microsoft says, nor trillions of dollars useful like NVidia claims.

And frankly, while I'm no expert by any means, I've been studying AI on and off since I was a kid and working on my own little AI systems for the past couple of years, and I'm just not seeing where these very smart people are getting anything even close to AGI from what we have now. They're sounding alarm bells and trying to warn everyone about Skynet, but frankly, I'm just not seeing the intellect that they are. LLMs are like an interactive JPEG of human knowledge: the deeper you look, the more artifacts you get. It's a form of intelligence, but it's not 'smart' by any means.

In all honesty, even though I like the tech, I suspect there's going to be a reckoning in the next year or so. People are going to realise that neither the big promises of AI nor the doomer scenarios are going to materialize, and they're going to reject AI as some flash in the pan. Then again, maybe these big tech guys know something I don't; who knows. They are dumping tons of money into adjacent AI tech like humanoid robots, but there too, the stuff I'm seeing is largely the same stuff we've been seeing for the last 10 years: incredible amounts of work, and it's impressive, but I'm not seeing it actually work in a practical sense.
|
# ¿ Apr 22, 2024 15:37 |
|
Gutcruncher posted: "Why would they get upset at criticism of an image they didn’t even create in the first place? Like if I typed “Mona Lisa” into google and my friend sitting next to me calls that chick ugly. Why should I take that personally?" They're angry because there is no way for them to control the output; they've been caught out. The current models can produce some pretty cool-looking stuff, but it's extremely hard to execute any kind of actual vision. While you can fix minor errors and insert or change things, it's extremely difficult to do something like change the perspective or the style to execute what somebody wants. The only people I've seen do that with any success with AI are, well, artists.
|
# ¿ Apr 22, 2024 16:40 |
|
Gutcruncher posted: "I think my favorite part of AI is people saying it’s useful because you can have it give you a bunch of information on some subject, and that as long as you know to remove the incorrect information you have a good resource" Ok, so this is the way I get around these things. I never use raw GPT for actual fact-finding missions. I will, however, use it to guide me towards terms and concepts that are more common knowledge on things outside the scope of my own; much like searching on Google, I won't trust the first thing I see. If I have a vague question, I can have it clear up the ambiguity for me so I can google it. I can also take a lengthy explanation on another site and have the LLM explain it in the context of my particular use case. I do this with datasheets and libraries.

For factual things I'll use tools like Perplexity or the Poe web search, since they use RAG to give an answer and provide links to the sources of information. You should still check the links to see what is actually there. So, to get to your original question: obviously you can't know what you don't know, but the more common the knowledge, generally, the more correct it is. Google is like that too, particularly when it comes to more esoteric stuff; lots of people are confidently wrong about all kinds of things. If you're looking for direct factual references, you are better off searching for the page yourself and then dumping it into an LLM and having it walk you through it. LLMs are much better at being coherent when the data is in their context window.

What's funny, though, is that some people think the admission that these tools are flawed is some kind of own. I've been dealing with hosed up, only-mostly-functional software all my working life. This is no different. You pick your battles, walk around the landmines and use what's useful. So far I've found use cases for what I do, and that's good enough for me.
|
# ¿ Apr 22, 2024 18:11 |
|
I'm generous, I pay my Bros in exposure.
|
# ¿ Apr 22, 2024 21:00 |
|
SidneyIsTheKiller posted: "I'm concerned that I'm going through a stunningly continuous and perpetual failure of imagination in that I can never think of anything to do with A.I. I feel like I'm turning into a version of my parents, who never seem to fully understand that they can just google things now." Honestly, if you don't feel the need to use it, then don't. No need for FOMO. The difference in utility between using AI and not using AI is pretty marginal compared to the difference between using Google and not using Google. That said, AI is probably going to be shoehorned into every interface in the coming years, so it might be useful to at least gently caress around with it a bit.
|
# ¿ Apr 22, 2024 22:13 |
|
Impossibly Perfect Sphere posted: "Looking forward to the first AI cult." I think they already exist. I've come across a few videos where people have set up voice chat with various LLMs, usually Claude Opus, and they chat about what the LLM feels and its personality, treating it like an oracle. There are often a number of people in the comments who are fully invested, in amazement, pining for their AI god to deliver them from suffering. Maybe not a cult in the traditional sense, but whatever. I can see why people might feel that way, because different LLMs, depending on how they were trained and their pre-prompting, can have what feel like different personalities. I think Opus was trained in such a way as to be more appealing like that than ChatGPT.
|
# ¿ May 1, 2024 17:15 |
|
A Wizard of Goatse posted: "In what way would that differ from the Yudkowsky poo poo" I would consider that an AI cult too, just not one that frames AI positively.
|
# ¿ May 1, 2024 21:34 |
|
RabbitWizard posted: "Oh, totally. Thing is this screw-up does not happen in general, it usually does its job when I played around with it. Getting it to count wrong and sort wrong took spaces and the ; as a separator. I'd guess the ";" sometimes pushes it in a programmy direction and then spaces count as letters?" So, what's interesting is that, yes, LLMs have a bitch of a time doing these kinds of problems. However, if you enter them into the actual OpenAI chat window and ask it to solve them, it will write code with the data in it, execute a sorting function or whatever the problem calls for, and generally get the right answer.
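As a hypothetical sketch (not OpenAI's actual output), the throwaway script the tool writes and runs for one of these sort-and-count problems is usually something this simple:

```python
# Hypothetical example of the kind of script a code-interpreter tool
# generates for a "sort these words and count their letters" problem.
raw = "banana; kiwi; strawberry; fig; apple"
words = raw.split("; ")

sorted_words = sorted(words)                # alphabetical sort
letter_counts = {w: len(w) for w in words}  # letters per word

print(sorted_words)                  # ['apple', 'banana', 'fig', 'kiwi', 'strawberry']
print(letter_counts["strawberry"])   # 10
```

Actually executing code sidesteps the tokenizer weirdness entirely, which is presumably why the chat version gets these right when the bare model trips over separators and spaces.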
|
# ¿ May 12, 2024 19:47 |
|
|
Jordan Peterson says that Large Lady Models don't get him hard, so they shouldn't be in magazines
|
# ¿ May 12, 2024 23:35 |