|
Tei posted:Hope corporations lose control of the AI. In practice, stacks are quickly becoming far more complicated than just standing up an LLM and pointing a chat box at it. You're probably going to see the industry fragment into a few different areas: SaaS offerings that are full stack and designed to do a specific thing; "AIaaS" ("AI as a service", or maybe they'll call it "Model as a Service") offerings that are basically just an AI model on a stick with an API like the current ChatGPT stuff; and something more like a platform that can be hosted or possibly on prem, and require full time specialists or consultants to properly build and maintain. I expect that to also be in order from most-common to least-common, in terms of market share. There will probably always be open source models that fall into the third category, but like everything else they'll be useful for consumer use mostly as a toy for nerds because the secret sauce will be in the stack. KillHour fucked around with this message at 00:29 on Nov 21, 2023 |
# ? Nov 21, 2023 00:25 |
|
|
The whole Altman / OpenAI kerfuffle is weird. So now Microsoft will have Altman and a big group of people under one wing, and OpenAI under the other. Somehow I think this will also slow down OpenAI's research, since people will get distracted by the drama or pause while moving to a new job. Things completely stall if lawyers get involved.
|
# ? Nov 21, 2023 02:20 |
|
I think OpenAI figured out good memory retrieval as part of doing the GPTs thing, and they used that to create autonomous agents that can complete a new class of tasks with no human intervention. Then, Sam started demoing it to potential investors without board approval. Ilya saw Sam hocking a proto-agi without even consulting the board, got spooked, and pulled the ripcord.
|
# ? Nov 21, 2023 03:09 |
|
Lucid Dream posted:I think OpenAI figured out good memory retrieval as part of doing the GPTs thing, and they used that to create autonomous agents that can complete a new class of tasks with no human intervention. Then, Sam started demoing it to potential investors without board approval. Ilya saw Sam hocking a proto-agi without even consulting the board, got spooked, and pulled the ripcord. Do you have evidence for any of this or is it just speculation?
|
# ? Nov 21, 2023 03:18 |
|
KillHour posted:Do you have evidence for any of this or is it just speculation? Nah, pure speculation. I used the GPTs thing enough to see that it has the ability to re-write its own system prompt, and the ability to retrieve arbitrary data from uploaded files for use in the output. Right after the GPTs thing was announced I remember using an early UI of it or something where it had a log on the side showing it doing agent stuff, so I'm just sorta putting the pieces together and extrapolating. I'm guessing they figured out a way to write their own "memories" and then retrieve them in a way that's scalable enough to tackle more complex problems. Lucid Dream fucked around with this message at 03:40 on Nov 21, 2023 |
# ? Nov 21, 2023 03:35 |
|
I am a way better programmer than ChatGPT. Sometimes I get frustrated by how obtusely mediocre ChatGPT is. At the same time, when I need something mediocre like "give me a curl request sending photo1 and photo2 to /face_recognition", I get the resulting code in about 3 seconds. Way faster than I could even begin. But there's something worse, much worse. I could phrase something horribly, with spelling horrors, using the wrong word, mistaking a few parts... and I still get a good answer. ChatGPT is better at receiving broken information than me. ChatGPT would understand a customer much better than I can. In that sense ChatGPT is a much better programmer than I can possibly be. My problem is that when I receive conflicting information, I get super frustrated and angry and don't know what to do. And people naturally produce this type of feedback all the time: "We want all buttons in our webpage to be permanently disabled when the user double clicks." <-me-> "Also the logout button?" "No, not that one." "And the search button?" "No, that one's excluded." "And the filter buttons?" "No, not those buttons." In user speak, all is not all. First is not first. Left is not left. Right is not right. Yes is not yes. No is not no. They start talking about what they want, jump to how they want it, describe a broken mechanic as desirable. Want things to work. All of this frustrates me to no end. But ChatGPT gets this poo poo and produces correct code, in 3 seconds. Maybe I am too much like a robot, and ChatGPT too much like an empathic human being. If they ever get a ChatGPT to write good programs (not just "code"), the hilarious thing is that it will be better at deciphering users' broken feedback than many of us programmers. Tei fucked around with this message at 01:49 on Nov 23, 2023 |
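For reference, the sort of boilerplate answer that prompt asks for looks roughly like this. The endpoint, field names, and filenames are hypothetical, taken straight from the prompt; this just builds the curl command rather than sending anything:

```python
# Build the curl command the prompt describes: a multipart POST sending
# two photos to a (hypothetical) /face_recognition endpoint.
files = ["photo1.jpg", "photo2.jpg"]
endpoint = "https://example.com/face_recognition"

cmd = ["curl", "-X", "POST", endpoint]
for i, name in enumerate(files, start=1):
    # -F attaches a file as a multipart/form-data field; @ means "read from this file".
    cmd += ["-F", f"photo{i}=@{name}"]

print(" ".join(cmd))
```

The mediocre-but-instant part is exactly this kind of mechanical assembly, which is why the 3-second turnaround is plausible.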
# ? Nov 23, 2023 01:46 |
|
Lucid Dream posted:I think OpenAI figured out good memory retrieval as part of doing the GPTs thing, and they used that to create autonomous agents that can complete a new class of tasks with no human intervention. Then, Sam started demoing it to potential investors without board approval. Ilya saw Sam hocking a proto-agi without even consulting the board, got spooked, and pulled the ripcord. I know you were speculating, and this article is pretty speculative as well, but this may be closer to the truth than I was willing to believe a few days ago. https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/ Seems like Ilya might have been scared by the progress and wanted to proceed with more caution...though this is all based on hearsay so take it with a grain of salt.
|
# ? Nov 23, 2023 02:54 |
|
Someone please explain to me how Q* solving math for 10-year-olds is dangerous, where Wolfram Alpha acing "what is six over four?" isn't.
|
# ? Nov 23, 2023 13:17 |
|
Heran Bago posted:Someone please explain to me how Q* solving math for 10-year-olds is dangerous, where Wolfram Alpha acing "what is six over four?" isn't. The G in AGI is for general. Wolfram Alpha is not general; it is centered on math. Wolfram Alpha is not going to turn the entire solar system into paperclips to maximize shareholder value, whereas whatever Q* is maybe might, if somebody stupid enough asks it to. AGI is a human classification. Maybe between the intelligence of a calculator and the intelligence of AGI there are many stages and systems. We may not invent an AGI, but we may invent one of the intermediate stages that lead to AGI, with a destructive potential a fraction of AGI's, but still enough to kill us all. Tei fucked around with this message at 13:44 on Nov 23, 2023 |
# ? Nov 23, 2023 13:42 |
|
Heran Bago posted:Someone please explain to me how Q* solving math for 10-year-olds is dangerous, where Wolfram Alpha acing "what is six over four?" isn't. It wasn't trained on math for ten-year-olds.
|
# ? Nov 23, 2023 14:25 |
|
Heran Bago posted:Someone please explain to me how Q* solving math for 10-year olds is dangerous, where Wolfram Alpha aceing "what is six over four?" isn't. OpenAI have an apparently incredibly accurate method of predicting capabilities of their models before they are fully trained. Better math on a smaller model using a new architecture might suggest a fully trained model would have dramatically more capabilities than GPT4.
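One plausible mechanism behind "predicting capabilities before full training" is a scaling law: fit a power law to loss measured on cheap small runs, then extrapolate to the full compute budget. OpenAI hasn't published what they actually do, so this is a toy sketch with invented numbers:

```python
import math

# Hypothetical (compute, loss) pairs from small training runs. A power law
# loss = a * compute**(-b) is a straight line in log-log space, so fit one
# by ordinary least squares on the logs.
runs = [(1e18, 3.2), (1e19, 2.6), (1e20, 2.1)]
xs = [math.log(c) for c, _ in runs]
ys = [math.log(l) for _, l in runs]
n = len(runs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Extrapolate to a (hypothetical) full-scale run at 1e22 FLOPs.
pred_loss = math.exp(intercept + slope * math.log(1e22))
```

If a new architecture bends that curve downward on small models, the extrapolated full-scale point moves a lot, which is the kind of signal the post is describing.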
|
# ? Nov 23, 2023 15:16 |
|
It sure is a good thing that OpenAI lives up to its name and publishes its research and findings so external experts can weigh in and validate them / interpret the importance and implications of the results. What's that? We're all reading tea leaves and have no idea what is going on, except it might be SHODAN or completely bunk or anything in between? Sweet. I love that.
|
# ? Nov 23, 2023 15:29 |
|
We've been poking at it for about a week in the GBS AI art thread, but I figure those who've been avoiding going there might want to know something that's slipped under the radar this last month. AI-generated music has made a fairly significant leap forward. Suno.ai takes song style suggestions and either generated or BYO lyrics, and has (basic) extension and stitching capabilities for creating full songs. The quality is absurdly high compared to what came before; a lot of people are comparing it to DALL-E, but for music. Self-serving example generated by me: https://voca.ro/1hZS3KPqg6k3 This thread has more. Originally the generator didn't let you use your own lyrics and couldn't extend songs; that was added just recently, so most of what is in the thread is fairly short and has pretty wonky AI-generated lyrics. You could suggest a sentence's worth, but it was just a starting prompt. Now that that's changed it's become a lot more useful, to the point where people in the music industry might start to notice. I'm curious how they'll react. SCheeseman fucked around with this message at 15:50 on Nov 23, 2023 |
# ? Nov 23, 2023 15:46 |
|
SCheeseman posted:Self serving example generated by me: Ahaha that's fantastic. Still has some minor acoustic artifacts, but goddrat all the same.
|
# ? Nov 23, 2023 16:17 |
|
Oh my god, this thing is addictive. I foresee millions of weird songs everywhere.
|
# ? Nov 23, 2023 22:18 |
|
Tei posted:I am way better programmer than ChatGPT. Sometimes I get frustrated how obtuse mediocre is ChatGPT. That's only obvious. A generative AI can never be better than the average of its inputs, and there's a ton of lovely inputs.
|
# ? Nov 28, 2023 04:52 |
|
Liquid Communism posted:That's only obvious. A generative AI can never be better than the average of its inputs, and there's a ton of lovely inputs. Not necessarily true, a knowledgeable person can look at 9 garbage outputs and 1 great output and just use the great output.
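The selection point is easy to see in a toy simulation: the maximum of n samples always beats their mean, so curating outputs shifts quality above "the average of the inputs". The scores here are simulated stand-ins for model outputs, not real data:

```python
import random

random.seed(0)

def generate():
    # Stand-in for one model sample: a quality score centered at 0.5.
    return random.gauss(0.5, 0.2)

n = 10
samples = [generate() for _ in range(n)]
average = sum(samples) / n   # what "average of its inputs" would give you
best = max(samples)          # what the knowledgeable curator keeps
```

Best-of-n sampling with a reward model is basically this idea at scale.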
|
# ? Nov 28, 2023 18:27 |
|
What if you get 10 lovely outputs, like I did yesterday because I couldn't remember the syntax on a datapager? I had to go to Stack Overflow! What is this, 2021!?
|
# ? Nov 30, 2023 09:12 |
|
SaTaMaS posted:Not necessarily true, a knowledgeable person can look at 9 garbage outputs and 1 great output and just use the great output. If I recall correctly, in a machine learning contest about a decade ago run by (I think?) Netflix for video recommendation, the winning entry just recommended the most popular videos overall and ignored trying to tailor recommendations to individuals by their viewing history.
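The baseline being recalled is trivially simple, which is the point. A sketch with made-up watch data:

```python
from collections import Counter

# Toy (user, video) watch events; a real system would have millions.
history = [("ann", "v1"), ("ann", "v2"), ("bob", "v1"),
           ("bob", "v2"), ("cat", "v1"), ("cat", "v3")]
popularity = Counter(video for _, video in history)

def recommend(user, k=2):
    # Popularity baseline: ignore the user's own history entirely and
    # return the k globally most-watched videos.
    return [video for video, _ in popularity.most_common(k)]
```

Every user gets the same list, yet baselines like this are notoriously hard to beat on aggregate accuracy metrics.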
|
# ? Nov 30, 2023 11:13 |
|
I think algorithms and sites like Netflix should make a better effort to present viewers with the artists, to have viewers engage with the creators of the movies / TV series. Like sex is better with people you love than with a prostitute, a movie is better when you understand its authors. The idea of presenting TV series and movies like products in a supermarket has this flaw, imho: it does nothing to entice people to understand the artist behind them. That word, "content", is everywhere, and people are deluded into thinking of movies as "products". We are here because the people in power treat people like numbers, art like content, and so on. But it's wrong. I think old TV kinda did this better, with shows where before or after the movie you would have a debate with people talking about the movie. A movie recommendation algorithm should be more like the guy who has watched all the black-and-white Polish movies. Currently, I think these recommendation algorithms answer simple questions like: "Of these 20 shows in promotion, if the viewer were to choose between them, which show would they likely watch first?" "Which show or movie is most likely to be watched to the end by this viewer?" Better questions to pose to the algorithm would be: "Taking into account this list of movies the viewer enjoyed, the type of viewer, and the time they have / their current attitude, what movie that they don't yet know will be an incredible pleasure for them, or will open this viewer up to other interesting movies?" Netflix can turn people into movie experts who want to watch all types of movies and just love cinema. Create a need, instead of just fulfilling it. Teach people, not just entertain them. Tei fucked around with this message at 13:52 on Nov 30, 2023 |
# ? Nov 30, 2023 13:50 |
|
I'm not intimately familiar with any of Netflix's algorithms, but "WriterName" and "DirectorName" seem like they would be incredibly easy things to add into pretty much any machine learning model, so I'd place some solid money on them already trying that.
|
# ? Nov 30, 2023 15:09 |
|
This is the kind of AI implementation I'm really interested in and afraid of: interpreting and dynamically manipulating objects in 3D space. https://arstechnica.com/information-technology/2023/11/mother-plucker-steel-fingers-guided-by-ai-pluck-weeds-rapidly-and-autonomously/ quote:Mother plucker: Steel fingers guided by AI pluck weeds rapidly and autonomously I assume it's not just a different implementation of the classic suction robot arms arranging chips on a conveyor belt. Reducing pesticide use and the number of humans subjected to spraying is a pretty good thing. There's stuff in this article to be genuinely happy and excited about. But when AI can make basic crop-vs-weed judgements and get handsy about it, I worry. Stocking shelves and flipping burgers probably aren't that far off.
|
# ? Nov 30, 2023 15:19 |
|
We've had these sorts of technologies for ages; there are examples that exist in the wild: https://youtu.be/ZNVuIU6UUiM?feature=shared The issue is the mechanical side more than the AI. They are big and expensive to run and maintain, and often have only one use. An underpaid worker might not be as efficient, but they can do multiple tasks given to them. And it's not like machinery hasn't been reducing the need for humans in farm work for the last few centuries. There has also been a burger-flipping robot for ages: https://youtu.be/KJVOfqunm5E?feature=shared Apparently it breaks down constantly and is a loss leader. Mega Comrade fucked around with this message at 15:31 on Nov 30, 2023 |
# ? Nov 30, 2023 15:28 |
|
It's still cheaper to mass-produce and replace humans for a lot of modern manual labor jobs than robots with hands. It's why they keep fretting about the birth rate. And even then, I expect a sudden interest in reviving human cloning among the wealthy class to solve that problem before investing in a bunch of strawberry-picking Terminators.
|
# ? Nov 30, 2023 17:42 |
|
Liquid Communism posted:That's only obvious. A generative AI can never be better than the average of its inputs, and there's a ton of lovely inputs. This is not remotely true, and it's such a blatantly false/low-effort take that I don't feel it's even worth refuting. Just google any number of LLM benchmark papers on semantic scholar.
|
# ? Nov 30, 2023 18:29 |
|
Tree Reformat posted:It's still cheaper to mass produce and replace humans for a lot of modern manual labor jobs than hand bots. It's the same story about every individual piece of technology for the past several decades. Robots never replace workers overnight. It takes years for the tech to mature.
|
# ? Nov 30, 2023 19:14 |
|
The part that no one talks about is that you could replace 80% of the executives and 95% of management consultants with AIs today and they'd do a better job.
|
# ? Nov 30, 2023 19:28 |
|
This could do a better job
|
# ? Nov 30, 2023 19:38 |
|
Rogue AI Goddess posted:The part that no one talks about is that you could replace 80% of the executives and 95% of management consultants with AIs today and they'd do a better job. This is true, but by the same token, you could drop 90% of executives and 100% of management consultants into a composter, and they would serve humanity better as fertilizer.
|
# ? Dec 1, 2023 01:38 |
|
Bug Squash posted:I'm not intimately familiar with any of Netflix's algorithms, but "WriterName" and "DirectorName" seem like they would be incredibly easy things to add into pretty much any machine learning model, so I'd place some solid money on them already trying that. The point is not to "mirror" the desires of the public with the perfect selection, but to teach the public to appreciate the artist in the background, so they appreciate the art in the foreground more. This can be done by changing the question the algorithm answers. Tei fucked around with this message at 09:03 on Dec 1, 2023 |
# ? Dec 1, 2023 09:00 |
|
Streaming services basically killing the extra features of DVDs like creator commentary and Behind the Scenes while the likes of Disney and the other media conglomerates completely clamping down on post-hoc tell-alls about productions certainly hasn't helped either. These days, if anyone hears anything about the making of their favorite TV show or movie, it's usually just a whistleblower telling them what awful sexpests the director and bosses were, not the joy of the creative process. Which is an indictment of how exploitative traditional mass media production is more than anything, but still.
|
# ? Dec 1, 2023 15:41 |
|
Tei posted:the point is not to "mirror" the desires of the public with the perfect selection, but teaching the public to appreciate the artist in background, so they apreciate more the art in foreground It sounds like this would be better done by YouTube essays rather than an algorithm
|
# ? Dec 1, 2023 16:55 |
|
Anyway, in actual AI chat, this tweet has been making the rounds: https://twitter.com/SashaMTL/status/1730552781323317508 It feels a bit fuzzy to me. I'd prefer to see these measurements in watts, and maybe a direct comparison to common computing tasks. The idea that a single generation on my RTX 3060, which takes less than half a minute (although I haven't tested SDXL yet), somehow consumes more electricity in that time than an entire hour of playing, say, Horizon Zero Dawn with all the settings cranked up on that same machine seems absurd. I feel like if that were the case I'd be tripping my circuit breaker with every generation. I don't have a Kill-A-Watt to actually test this myself, unfortunately.
|
# ? Dec 1, 2023 19:46 |
|
Tree Reformat posted:Anyway, in actual AI chat, this tweet has been making the rounds: The paper says that's per 1,000 inferences. 2.9 kWh per 1,000 works out to about 10,000 joules each, or roughly 350 W sustained over 30 seconds. The paper doesn't say specifically how many inferences go into one generation or how those are defined. esquilax fucked around with this message at 20:27 on Dec 1, 2023 |
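That back-of-envelope math checks out. Spelled out, using the paper's per-1,000 figure and the (assumed, from the previous post) 30-second generation time:

```python
kwh_per_1000 = 2.9            # reported energy per 1,000 inferences
joules_per_kwh = 3.6e6        # 1 kWh = 3.6 million joules
seconds_per_generation = 30   # assumption from the post, not from the paper

joules_each = kwh_per_1000 / 1000 * joules_per_kwh   # energy per inference
avg_watts = joules_each / seconds_per_generation      # average draw over 30 s
```

That lands around 10,440 J per inference and roughly 348 W of sustained draw, which is in the ballpark of a single consumer GPU at full tilt rather than anything breaker-tripping.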
# ? Dec 1, 2023 20:20 |
|
Lord Of Texas posted:This is not remotely true, and it's such a blatantly false/low-effort take that I don't feel it's even worth refuting. Just google any number of LLM benchmark papers on semantic scholar. My dude, you're trusting an AI-based research tool to provide you with accurate summaries of scientific literature, when the technology is well known for outright making stuff up that is trivially wrong? Continually yelling 'nuh uh' is not a refutation.
|
# ? Dec 6, 2023 08:29 |
|
https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set/ quote:The researchers started by sketching out the problem they wanted to solve in Python, a popular programming language. But they left out the lines in the program that would specify how to solve it. That is where FunSearch comes in. It gets Codey to fill in the blanks—in effect, to suggest code that will solve the problem. The people who dismiss the entire concept of LLMs because they hallucinate when asked about obscure or recent facts are missing the point. As the test benchmarks have shown, they do very well on established facts, and even hallucinations are useful when looking for creative solutions as opposed to facts.
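The loop the article describes can be caricatured in a few lines. This is not FunSearch's actual code: the hand-written lambdas stand in for candidate bodies an LLM like Codey would propose for the hole in the program, and the evaluator stands in for the human-written scoring code:

```python
def propose_candidates():
    # Stand-ins for LLM-suggested fill-ins for the missing function body.
    return [lambda x: x, lambda x: x * x, lambda x: -x]

def evaluate(fn):
    # The human-written part: run each candidate on test inputs and score it.
    # Even a mostly-hallucinating proposer is useful if the scorer is sound.
    return sum(fn(x) for x in range(5))

# Keep only the highest-scoring suggestion; the real system iterates this,
# feeding good candidates back into the prompt to evolve better ones.
best = max(propose_candidates(), key=evaluate)
```

The key design point is that the verifier, not the generator, guarantees correctness, which is why hallucination stops being fatal in this setup.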
|
# ? Dec 16, 2023 19:09 |
|
Nobody is suggesting that these systems don't have practical uses. This isn't the first time this has happened either - AI improved the best known matrix multiplication algorithm a couple years ago. The problem is that people are taking news like this and extrapolating to an idea that these systems will produce only good code and be usable by people with no development knowledge to replace developers. It's like claiming that calculators will replace mathematicians.
|
# ? Dec 16, 2023 20:19 |
|
KillHour posted:Nobody is suggesting that these systems don't have practical uses. This isn't the first time this has happened either - AI improved the best known matrix multiplication algorithm a couple years ago. The problem is that people are taking news like this and extrapolating to an idea that these systems will produce only good code and be usable by people with no development knowledge to replace developers. It's like claiming that calculators will replace mathematicians. It's more that people point to hallucinations as a reason for why LLMs can't be considered intelligent.
|
# ? Dec 16, 2023 21:07 |
|
|
SaTaMaS posted:It's more that people point to hallucinations as a reason for why LLMs can't be considered intelligent. I don't think they're intelligent, but that specific objection doesn't hold up.
|
# ? Dec 16, 2023 21:55 |