|
Jon posted:I don't understand the implication you're making Michio Kaku is a quack science communicator that has no tether to actual reality. He will gladly spout some woo bullshit for cash. We could just as easily ask Avi Loeb if LLMs are aliens
|
# ? Sep 10, 2023 16:11 |
|
|
BRJurgis posted:I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane? Those people aren’t talking about AI OP. They’re talking about a deity. They are completely ignorant of what AI is (to be fair everyone has a different definition of what AI is) and its limitations and think instead that a magical fairy in the form of a voice in a machine will manifest itself and guide us to a Better Tomorrow. Only, this magical fairy is gonna be totally based off of like science and math, so like it’s not religious. E: And you’re right, we already know what we should and need to do to create that better tomorrow but we aren’t doing it and it’s not because an AI isn’t telling us what to do. Literally Jesus Christ himself could descend down to earth tomorrow and say something like “we will solve the climate crisis if we all stop drinking soda in the US, like literally that’s all you have to do, nothing else, just stop manufacturing and drinking soda in America, that’s all. The rest of the world can keep drinking soda.” and you would have half of the US rioting about fake news culture wars. Changing this directive from literally Jesus Christ to a Jarvis like machine isn’t going to change that. Boris Galerkin fucked around with this message at 16:20 on Sep 10, 2023 |
# ? Sep 10, 2023 16:12 |
|
Heck Yes! Loam! posted:Michio Kaku is a quack science communicator that has no tether to actual reality. He will gladly spout some woo bullshit for cash. Yann LeCun is quoted in that article saying the same thing
|
# ? Sep 10, 2023 16:16 |
|
BRJurgis posted:I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane? lol you should see if these coworkers think buttcoins or meme stocks are good or bad.
|
# ? Sep 10, 2023 16:50 |
|
BRJurgis posted:I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane? The core problem here is a particular strain of tech-loving amateur philosophers and sci-fi fans who've spent years theorizing about super-technological breakthroughs creating fundamental changes in human life and leading to a utopian super-society. In their theories, which are often just sci-fi fanfictions cloaked in a veneer of seriousness, one of the most fundamental breakthroughs is an AI smarter than humans. The idea is that not only could this AI invent better technology than humans can, but if humans can build an AI smarter than they are, then surely that AI would be able to build another AI smarter than it is. And then that even smarter AI would be able to build an AI even smarter than it is, and so on and so forth. Any material or social issues preventing this endless march of improvement would naturally be solved by these hyper-intelligent computers, who would eventually rule us as technological genius-kings. This endless loop of hyper-improvement would eventually lead to the creation of a near-omniscient godlike megaintelligence far surpassing anything we know, overcoming all material limitations and ushering in a whole new era of humanity. If that sounds almost religious, that's because it often is. Self-proclaimed technologists who've taken this line of thinking too seriously have reinvented a fair bit of religion. For example, Roko's Basilisk, which is just a small variant of Pascal's Wager, and involves this godlike super-AI spending some of its infinite computing power on running perfect simulations of the world before its invention so that it can subject perfect simulations of AI skeptics to endless torment. Anyway, most of these people aren't half as technologically literate as they claim to be, so they don't understand that LLMs aren't strong AI.
Also, they're extremely excited at any hint of AI advances, because for them it's another step toward inventing the machine god who'll bring us all salvation.
|
# ? Sep 10, 2023 18:22 |
|
People unironically quoting Michio Kaku is the real tech nightmare for me
|
# ? Sep 10, 2023 18:35 |
|
It's a way to feel better about participating in capitalism without entertaining those icky leftist ideas. It's okay to endlessly pursue treats because the robot god will make all the consequences disappear. It's extremely funny that we all but abandoned research into decision-making expert systems in favor of text completion. I think someone said in this thread that because humans use language to reason, it's way too easy to assume reasoning from language.
|
# ? Sep 10, 2023 18:50 |
|
Main Paineframe posted:Anyway, most of these people aren't half as technologically literate as they claim to be, so they don't understand that LLMs aren't strong AI. Also, they're extremely excited at any hint of AI advances, because for them it's another step toward inventing the machine god who'll bring us all salvation. Ultimately it doesn't matter what random people think. We have seen all this bullshit revolution hype before and it always ends up very different from what the idealists imagine: The internet will set information free and everybody can be a broadcaster! Open source will end big tech monopolies and all software will be free! Wikipedia will record the totality of all human knowledge! Crypto will wrest power from governments and banks! I don't know where AI is going to land. I don't need it for my job but I know plenty who do use it. It seems useful like spell or grammar check or excel functions. It's fine. It's not going away but it's also not going to upend society. In a year it's just another thing and no one will care.
|
# ? Sep 10, 2023 20:25 |
|
Main Paineframe posted:The idea is that not only could this AI invent better technology than humans can, but if humans can build an AI smarter than they are, then surely that AI would be able to build another AI smarter than it is. And then that even smarter AI would be able to build an AI even smarter than it is, and so on and so forth.
|
# ? Sep 10, 2023 21:16 |
|
I thought it was a Stanisław Lem story where Ijon Tichy (or one of his other picaresque protagonists) encounters a series of machines in a field and some genius, who says he made the first machine to help him solve a problem he couldn't, but that machine simply made an even better machine, and so forth, and none of them ended up solving the problem he started with.
|
# ? Sep 10, 2023 21:21 |
|
SaTaMaS posted:It seems like after ChatGPT N has been trained on questionable content, it should be able to flag a lot of that content when training ChatGPT N+1? We could call that kind of content a "ChatGPT N-word".
|
# ? Sep 10, 2023 21:56 |
|
Agents are GO! posted:We could call that kind of content a "ChatGPT N-word". hard or soft n
|
# ? Sep 10, 2023 21:58 |
|
Vegetable posted:People unironically quoting Michio Kaku is the real tech nightmare for me
|
# ? Sep 10, 2023 21:58 |
|
Absurd Alhazred posted:I thought it was a Stanisław Lem story where Ijon Tichy (or one of his other picaresque protagonists) encounters a series of machines in a field and some genius, who says he made the first machine to help him solve a problem he couldn't, but that machine simply made an even better machine, and so forth, and none of them ended up solving the problem he started with. Lem's In Hot Pursuit of Happiness has Trurl spin up a virtual clone of himself to solve a problem, only for the clone to start a whole virtual scientific institution and make his role merely relaying their findings (which included setting the optimal number of genders at 24). It's later revealed the machine used had the capacity to simulate just one person and, because the interface was purely voice-based, the virtual Trurl was feeding the original bullshit to delay both having to work and his erasure. Could have used a better system prompt
|
# ? Sep 10, 2023 22:27 |
|
people don't listen to "experts" now, who are "smarter" than them about a subject, why would people listen to an AI spitting out solutions they don't understand or like the vibes of?
|
# ? Sep 10, 2023 22:42 |
|
Agents are GO! posted:We could call that kind of content a "ChatGPT N-word". The GPT stands for Gamer Profanities/Terms
|
# ? Sep 10, 2023 22:47 |
|
I’m not going to believe anything AI tells me until someone can make autocorrect work reliably.
|
# ? Sep 10, 2023 22:49 |
|
starkebn posted:people don't listen to "experts" now, who are "smarter" than them about a subject, why would people listen to an AI spitting out solutions they don't understand or like the vibes of? Experts are people and therefore are fallible, whereas
|
# ? Sep 10, 2023 23:20 |
|
Mister Facetious posted:The GPT stands for Gamer Profanities/Terms Gamer-Preferred Terminology?
|
# ? Sep 11, 2023 00:38 |
|
withak posted:I’m not going to believe anything AI tells me until someone can make autocorrect work reliably. this is literally what apple is using ai for first and i've got to be honest it seems like the first useful thing i've seen anyone come up with
|
# ? Sep 11, 2023 00:42 |
|
ive found chatgpt saves me time on writing vba, which i do once every like six months so i forgot a lot and have to google stuff that is kind of like what i wanna do and then try to apply it Whereas with chatgpt it gives me an answer thats close to what i want without having to go to like 3 or 4 webpages
|
# ? Sep 11, 2023 00:47 |
|
OctaMurk posted:ive found chatgpt saves me time on writing vba, which i do once every like six months so i forgot a lot and have to google stuff that is kind of like what i wanna do and then try to apply it okay i will also accept "ai helps me not kill myself when i have to write vba in loving 2023 like a goddamn caveman who hasn't figured out fire yet"
|
# ? Sep 11, 2023 00:53 |
|
BRJurgis posted:I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane? At the end of the day 'AI' is just curve fitting. 'Learn' and 'Train' are misleading words. A more accurate one would be 'Fit'. You have some human-made equation that maps input=>output with a bunch of unknown coefficient placeholder slots, a bunch of (input=>output) data points, and then you find the coefficients that make the equation output best match the example output. Having GPT-3 generate some text or DALL-E generate a picture is just plugging the new prompt input into that equation+coefficients and seeing what comes out. All of machine learning is essentially the same thing as high school science class "For a line y=mx+b, and some data points, find the m and b that minimize the sum-of-squared-error," except: - The equation being fit is more complicated than a line - There are many more coefficients to fit instead of just 2 - The inputs and outputs are multidimensional instead of just single numbers. Like for DALL-E, the input is the sequence of words in the prompt and the output is a rectangular array of pixel colors. Nonlinear optimization problems like this don't have closed-form solutions. Unlike the line fitting, there's no direct way to get the best coefficients. Instead, you do an iterative process where you go through each coefficient and calculate the partial derivative of the error with respect to that coefficient (essentially "If I held everything else constant and just nudged this coefficient a tiny bit, how much would the error change?"). Then you apply all those changes and repeat until error stops getting better. That gets you to some error minimum that is hopefully close to the global minimum. Neural net machine learning is a more specific subcategory where the model equation you're finding coefficients for has a particular structure that makes it easy to do the iteration step in fitting.
The partial derivatives work out so that you can reuse work from previous ones instead of starting each one from scratch. That lets you do each iteration much faster than you would otherwise, which lets you run more fitting cycles or have more parameters in the same amount of computation time. Recent neural net AI improvements all come from progressive work on the details of how exactly you design your model equation, how exactly you do the fitting iterations, and cheap GPU computation hardware to fit faster.
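The fitting loop described above is short enough to show whole. This is a toy sketch, not anyone's real training code: the data points, learning rate, and iteration count are all made up for illustration, and real models fit billions of coefficients instead of two.

```python
# Fit y = m*x + b to data by gradient descent on sum-of-squared-error.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # points on y = 2x + 1

m, b = 0.0, 0.0   # the unknown coefficient placeholder slots, start anywhere
lr = 0.01         # how big a nudge each iteration applies

for _ in range(5000):
    # Partial derivative of the error with respect to each coefficient:
    # "if I nudged just this coefficient a tiny bit, how would error change?"
    grad_m = sum(2 * (m * x + b - y) * x for x, y in data)
    grad_b = sum(2 * (m * x + b - y) for x, y in data)
    m -= lr * grad_m  # apply all those changes...
    b -= lr * grad_b  # ...and repeat until error stops getting better

print(round(m, 2), round(b, 2))  # converges to roughly m=2, b=1
```

Everything from here to GPT is the same loop with a vastly more complicated equation in the middle.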
|
# ? Sep 11, 2023 01:57 |
|
The most important takeaway is that none of this is "AI". However, having a computer spit out grammatically correct natural language lets people anthropomorphise the machine much more easily even while its internals are conceptually just a souped-up Markov Chain.
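For comparison, the plain non-souped-up version fits in a dozen lines: an order-1 word chain that picks each next word from the words that followed the current one in its training text. The corpus here is invented for illustration.

```python
import random

# Tiny made-up training corpus; real chains are built from far larger text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Transition table: word -> every word that ever followed it
chain = {}
for cur, nxt in zip(corpus, corpus[1:]):
    chain.setdefault(cur, []).append(nxt)

word, out = "the", ["the"]
for _ in range(6):
    # fall back to the whole corpus if a word was only ever seen last
    word = random.choice(chain.get(word, corpus))
    out.append(word)
print(" ".join(out))  # locally plausible word chains, zero understanding
```

An LLM replaces the lookup table with a learned function over the entire preceding context, which is why its output is so much more coherent, but the "predict the next token" framing is the same.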
|
# ? Sep 11, 2023 11:28 |
And, despite the core algorithm being really simple, if you throw enough data and computing power at it you can make a program that can approximate understanding of human language. Like, the fact I can tell a program "Write me a rendition of Gangsta's Paradise in the style of Hamlet" and it will respond with text that actually matches my prompt is kinda insane, even when it's clear there's not a lick of intelligence behind it.
|
|
# ? Sep 11, 2023 11:29 |
|
Antigravitas posted:However, having a computer spit out grammatically correct natural language lets people anthropomorphise the machine much more easily even while its internals are conceptually just a souped-up Markov Chain. Like that guy from Google who made a big deal about how their AI was sentient and then either quit or got fired. Unless of course making noise was the whole point, since that story even got a bit of runtime in prominent news media.
|
# ? Sep 11, 2023 11:59 |
|
Nothingtoseehere posted:Like, the fact I can tell a program "Write me a rendition of Gangsta's Paradise in the style of Hamlet" and it will respond with text that actually matches my prompt is kinda insane, even when it's clear there's not a lick of intelligence behind it. I'm not sure how things like that actually work, considering the limitations discussed here. How does this work if it's just fancy word chains? Is it just stealing an example of somebody doing this exact thing before, or actually creating something new?
|
# ? Sep 11, 2023 13:22 |
|
Rand Brittain posted:Is there a good neutral source for figuring out what kind of solar installation is worth it on your own home? I've been trying to figure out lately whether I should spend money on solar panels, and if so, how much, but it's hard to get information from sources that aren't also selling it (admittedly it's also hard because you're basically asking people to tell you the future). RETScreen https://natural-resources.canada.ca/maps-tools-and-publications/tools/modelling-tools/retscreen/7465 you put in all your info and it will tell you the roi and payback period. works for all energy projects and mixes.
|
# ? Sep 11, 2023 13:34 |
|
MonikaTSarn posted:I'm not sure how things like that actually work, considering the limitations discussed here. How does this work if it's just fancy word chains? Is it just stealing an example of somebody doing this exact thing before, or actually creating something new? Well, imagine infinite monkeys on infinite typewriters. Except instead of the typewriters having letters, they have buttons that reproduce common "bits" of language -- words in some cases, character clusters in others. Now you need a way to decide "is something a rendition of Gangsta's Paradise" and also "is something in the style of Hamlet?" (which would be odd, because I don't believe Hamlet was known for his writing) and some time. With enough computing power, you've reduced infinite monkeys on infinite typewriters to something finite enough to appear roughly at the speed of normal human typing or so. I've probably misunderstood large parts of it, but I think this is close enough to how it works to approximate the idea that there's no inherent intelligence or creativity to the process.
|
# ? Sep 11, 2023 13:44 |
|
MonikaTSarn posted:I'm not sure how things like that actually work, considering the limitations discussed here. How does this work if it's just fancy word chains? Is it just stealing an example of somebody doing this exact thing before, or actually creating something new? The technology you want to read up on/search for is “transformer model” (the “T” in (Chat)GPT stands for transformer). I don’t know nearly enough to explain it but long story short, the defining feature of the transformer model is that it uses an “attention algorithm” that… does some stuff to give you better results? Idk. Another way to think of it is that stylizing a thing in another written style isn’t really any different than translating from one language to another. The original transformer model was developed to translate from English into German or vice versa. Features of the German language include verb conjugations, changing the endings of adjectives and adverbs (declension), and very specific positioning of verbs including moving a verb allllllll the way to the end of a sentence. So whatever language translator you’re developing needs to be able to see the endings of a word and infer what it’s talking about, or see a random verb at the end of a super long sentence and know what to refer back to.
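The "attention algorithm" mentioned above boils down to a small matrix computation. Here is a rough sketch with made-up toy numbers; real models use learned projection matrices, many attention heads, and thousands of dimensions, so treat this as the shape of the idea rather than an actual model.

```python
import numpy as np

def attention(Q, K, V):
    # Score how relevant every position is to every other position...
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # ...turn each row of scores into weights that sum to 1 (softmax)...
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    # ...and output a weighted blend of the value vectors.
    return w @ V

# 3 tokens, each represented as a 4-dimensional vector (toy random values)
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (3, 4): one context-blended vector per token
```

This is what lets a verb at the end of a long German sentence refer back to words much earlier in it: every token gets to weigh every other token directly instead of passing information along a chain.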
|
# ? Sep 11, 2023 14:06 |
|
MonikaTSarn posted:I'm not sure how things like that actually work, considering the limitations discussed here. How does this work if it's just fancy word chains? Is it just stealing an example of somebody doing this exact thing before, or actually creating something new? It knows to identify parts of a sentence, and using its dataset find the parts that match. Then it uses statistics and the rules of language to create output. Humans are good at pattern recognition (i.e. seeing faces in clouds) so we're very good at taking the output and making it fit the prompt and believing it's magic.
|
# ? Sep 11, 2023 14:13 |
|
I agree with everything here and also my job where I’m supposed to write highly technical stuff about our cutting edge products needs me to write 1200 words on “Developer Experience” for an SEO page and you can bet your buttons I’m farming that out to ChatGPT. Yes I’m part of the problem, don’t @ me.
|
# ? Sep 11, 2023 15:01 |
|
Riven posted:I agree with everything here and also my job where I’m supposed to write highly technical stuff about our cutting edge products needs me to write 1200 words on “Developer Experience” for an SEO page and you can bet your buttons I’m farming that out to ChatGPT. Yes I’m part of the problem, don’t @ me. Lmao. Getting one model to produce material to be picked up by the second. Both ruining the services they rely on to function. Love it.
|
# ? Sep 11, 2023 15:41 |
|
there are hundreds of billions of dollars being invested into what will end up being, at best, a decent technical writing and homework assistant that still requires a human to validate the output
|
# ? Sep 11, 2023 15:46 |
|
nachos posted:there are hundreds of billions of dollars being invested into what will end up being, at best, a decent technical writing and homework assistant that still requires a human to validate the output Eh, it's already delivered on taking most of the grunt work out of language translation. That and automated gore content filtering (as pointed out by another poster) seems well worth the investment of billions.
|
# ? Sep 11, 2023 16:02 |
|
HootTheOwl posted:Lmao. The AI Centipede
|
# ? Sep 11, 2023 16:22 |
|
nachos posted:there are hundreds of billions of dollars being invested into what will end up being, at best, a decent technical writing and homework assistant that still requires a human to validate the output I think you're largely right when it comes to text output—more or less a "brainstorming assistant" is where this text generation is heading, and where it's already useful. Image and video generators have the ability to be more disruptive, but even then it will mostly affect the lives of graphics professionals. Audio, it will put a small group of people out of a job, but mostly it will just be used by people who are already creating video and audio content to raise the bar of mid-quality. It isn't and won't be a "push button get finished art" for 90% of uses, especially commercial uses, but simply another tool used by people to get a desired result in less time with less effort. The barrier to entry of making something acceptable will be lower, so the need for good content that rises above auto-generated output will become more and more significant. It will also have significant use in some automation based on pattern recognition, especially around large volumes of data that would take humans an unreasonable amount of manpower to get through manually. For code it will remain a sidekick for devs to be more efficient, and will get better but still require human guidance. For everyday people, these will be fun novelty toys that certain people enjoy playing around with, and most people touch once or twice. For people who want to make fanart and fan fiction, they will be able to live out their wildest Harry Potter/Peppa Pig slashfic dreams with ease. For young creatives looking to break into, say, filmmaking or video game design, it will make it that much easier for a single individual to create something that feels polished.
It's already doing that today—I'm using it to make far more impressive visual effects than I've ever had the time or capacity to do before as we speak. It still involves a ton of manual work using traditional skills and methods, though. Also, porn. It will be used a whole lot for porn, unless it gets regulated out of existence from a Protect Are Children perspective. And scams. e: The thing is, it is already all of these things, for those who want to put in the effort to utilize them. The future of these tools is just a more user-friendly, efficient, tactical version of what exists today. AI isn't going to replace commercial artists, but commercial artists who use AI tools will largely replace those who don't. For artists who don't want to use the tools, they can continue not to. But it's no different than deciding that you didn't want to use Photoshop in the 90s. Some people still make money in commercial art not using it! But the majority of folks use it when it's the best tool for the job. feedmyleg fucked around with this message at 18:27 on Sep 11, 2023 |
# ? Sep 11, 2023 18:07 |
|
Antigravitas posted:The most important takeaway is that none of this is "AI". However, having a computer spit out grammatically correct natural language lets people anthropomorphise the machine much more easily even while its internals are conceptually just a souped-up Markov Chain.
|
# ? Sep 11, 2023 18:24 |
|
I feel like artificial intelligence was always expected to be surprisingly emergent from deceptively simple rules, it's not like our neurons are that hot poo poo individually. At this stage it's massively overhyped and misunderstood tho
|
# ? Sep 11, 2023 18:32 |
|
|
LLMs are pretty good at giving draft translations. They suck at tone and subtleties and domain specific stuff, but they do well with boilerplate.
|
# ? Sep 11, 2023 19:37 |