Heck Yes! Loam!
Nov 15, 2004

a rich, friable soil containing a relatively equal mixture of sand and silt and a somewhat smaller proportion of clay.

Jon posted:

I don't understand the implication you're making

Michio Kaku is a quack science communicator that has no tether to actual reality. He will gladly spout some woo bullshit for cash.

We could just as easily ask Avi Loeb if LLMs are aliens


Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

BRJurgis posted:

I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane?

Like some younger guys at work (and credulous tech journal readers of any age) literally use the word like magic. AI gonna change everything! Solve all problems! Anything you can imagine and also all the things you can't!

Was talking about sustainability and limits of growth with one guy and he's like "AI is gonna give us the answer!"

"But we know the answer, it's stop doing what we're doing yesterday and we're still not stopping. AI can't "solve" physical limitations and thermodynamics and such".

"We don't know that it can't! AI doesn't have limitations like that!"

Those people aren’t talking about AI OP. They’re talking about a deity. They are completely ignorant of what AI is (to be fair everyone has a different definition of what AI is) and its limitations, and think instead that a magical fairy in the form of a voice in a machine will manifest itself and guide us to a Better Tomorrow.

Only, this magical fairy is gonna be totally based off of like science and math, so like it’s not religious.

E: And you’re right, we already know what we should and need to do to create that better tomorrow but we aren’t doing it and it’s not because an AI isn’t telling us what to do. Literally Jesus Christ himself could descend down to earth tomorrow and say something like “we will solve the climate crisis if we all stop drinking soda in the US, like literally that’s all you have to do, nothing else, just stop manufacturing and drinking soda in America, that’s all. The rest of the world can keep drinking soda.” and you would have half of the US rioting about fake news culture wars. Changing this directive from literally Jesus Christ to a Jarvis like machine isn’t going to change that.

Boris Galerkin fucked around with this message at 16:20 on Sep 10, 2023

Jon
Nov 30, 2004

Heck Yes! Loam! posted:

Michio Kaku is a quack science communicator that has no tether to actual reality. He will gladly spout some woo bullshit for cash.

We could just as easily ask Avi Loeb if LLMs are aliens

Yann LeCun is quoted in that article saying the same thing :shrug:

PhazonLink
Jul 17, 2010

BRJurgis posted:

I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane?

Like some younger guys at work (and credulous tech journal readers of any age) literally use the word like magic. AI gonna change everything! Solve all problems! Anything you can imagine and also all the things you can't!

Was talking about sustainability and limits of growth with one guy and he's like "AI is gonna give us the answer!"

"But we know the answer, it's stop doing what we're doing yesterday and we're still not stopping. AI can't "solve" physical limitations and thermodynamics and such".

"We don't know that it can't! AI doesn't have limitations like that!"

lol you should see if these coworkers think buttcoins or meme stocks are good or bad.

Main Paineframe
Oct 27, 2010

BRJurgis posted:

I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane?

Like some younger guys at work (and credulous tech journal readers of any age) literally use the word like magic. AI gonna change everything! Solve all problems! Anything you can imagine and also all the things you can't!

Was talking about sustainability and limits of growth with one guy and he's like "AI is gonna give us the answer!"

"But we know the answer, it's stop doing what we're doing yesterday and we're still not stopping. AI can't "solve" physical limitations and thermodynamics and such".

"We don't know that it can't! AI doesn't have limitations like that!"

The core problem here is a particular strain of tech-loving amateur philosophers and sci-fi fans who've spent years theorizing about super-technological breakthroughs creating fundamental changes in human life and leading to a utopian super-society.

In their theories, which are often just sci-fi fanfictions cloaked in a veneer of seriousness, one of the most fundamental breakthroughs is an AI smarter than humans. The idea is that not only could this AI invent better technology than humans can, but if humans can build an AI smarter than they are, then surely that AI would be able to build another AI smarter than it is. And then that even smarter AI would be able to build an AI even smarter than it is, and so on and so forth. Any material or social issues preventing this endless march of improvement would naturally be solved by these hyper-intelligent computers, who would eventually rule us as technological genius-kings. This endless loop of hyper-improvement would eventually lead to the creation of a near-omniscient godlike megaintelligence far surpassing anything we know, overcoming all material limitations and ushering in a whole new era of humanity.

If that sounds almost religious, that's because it often is. Self-proclaimed technologists who've taken this line of thinking too seriously have reinvented a fair bit of religion. For example, Roko's Basilisk, which is just a small variant of Pascal's Wager, involves this godlike super-AI spending some of its infinite computing power on running perfect simulations of the world before its invention so that it can subject perfect simulations of AI skeptics to endless torment.

Anyway, most of these people aren't half as technologically literate as they claim to be, so they don't understand that LLMs aren't strong AI. Also, they're extremely excited at any hint of AI advances, because for them it's another step toward inventing the machine god who'll bring us all salvation.

Vegetable
Oct 22, 2010

People unironically quoting Michio Kaku is the real tech nightmare for me

Ruffian Price
Sep 17, 2016

It's a way to feel better about participating in capitalism without entertaining those icky leftist ideas. It's okay to endlessly pursue treats because the robot god will make all the consequences disappear.

It's extremely funny that we all but abandoned research into decision-making expert systems in favor of text completion. I think someone said in this thread that because humans use language to reason, it's way too easy to assume reasoning from language

Owling Howl
Jul 17, 2019

Main Paineframe posted:

Anyway, most of these people aren't half as technologically literate as they claim to be, so they don't understand that LLMs aren't strong AI. Also, they're extremely excited at any hint of AI advances, because for them it's another step toward inventing the machine god who'll bring us all salvation.

Ultimately it doesn't matter what random people think. We have seen all this bullshit revolution hype before and it always ends up very different from what the idealists imagine. The internet will set information free and everybody can be a broadcaster! Open source will end big tech monopolies and all software will be free! Wikipedia will record the totality of all human knowledge! Crypto will wrest power from governments and banks!

I don't know where AI is going to land. I don't need it for my job but I know plenty who do use it. It seems useful like spell or grammar check or excel functions. It's fine. It's not going away but it's also not going to upend society. In a year it's just another thing and no one will care.

Arsenic Lupin
Apr 12, 2012

This particularly rapid💨 unintelligible 😖patter💁 isn't generally heard🧏‍♂️, and if it is🤔, it doesn't matter💁.


Main Paineframe posted:

The idea is that not only could this AI invent better technology than humans can, but if humans can build an AI smarter than they are, then surely that AI would be able to build another AI smarter than it is. And then that even smarter AI would be able to build an AI even smarter than it is, and so on and so forth.

As was predicted by the well-respected expert, Douglas Adams.

Absurd Alhazred
Mar 27, 2010

by Athanatos
I thought it was a Stanisław Lem story where Ijon Tichy (or one of his other picaresque protagonists) encounters a series of machines in a field and some genius, who says he made the first machine to help him solve a problem he couldn't, but that machine simply made an even better machine, and so forth, and none of them ended up solving the problem he started with.

Agents are GO!
Dec 29, 2004

SaTaMaS posted:

It seems like after ChatGPT N has been trained on questionable content, it should be able to flag a lot of that content when training ChatGPT N+1?

We could call that kind of content a "ChatGPT N-word".

Arivia
Mar 17, 2011

Agents are GO! posted:

We could call that kind of content a "ChatGPT N-word".

hard or soft n

Heck Yes! Loam!
Nov 15, 2004

a rich, friable soil containing a relatively equal mixture of sand and silt and a somewhat smaller proportion of clay.

Vegetable posted:

People unironically quoting Michio Kaku is the real tech nightmare for me

Ruffian Price
Sep 17, 2016

Absurd Alhazred posted:

I thought it was a Stanisław Lem story where Ijon Tichy (or one of his other picaresque protagonists) encounters a series of machines in a field and some genius, who says he made the first machine to help him solve a problem he couldn't, but that machine simply made an even better machine, and so forth, and none of them ended up solving the problem he started with.

Lem's In Hot Pursuit of Happiness has Trurl spin up a virtual clone of himself to solve a problem, only for the clone to start a whole virtual scientific institution and reduce his role to merely relaying their findings (which included setting the optimal number of genders at 24). It's later revealed the machine used had the capacity to simulate just one person and, because the interface was purely voice-based, the virtual Trurl was feeding the original bullshit to delay both having to work and his own erasure.
could have used a better system prompt :v:

starkebn
May 18, 2004

"Oooh, got a little too serious. You okay there, little buddy?"
people don't listen to "experts" now who are "smarter" than them about a subject, so why would they listen to an AI spitting out solutions they don't understand or don't like the vibes of?

Mister Facetious
Apr 21, 2007

I think I died and woke up in L.A.,
I don't know how I wound up in this place...

:canada:

Agents are GO! posted:

We could call that kind of content a "ChatGPT N-word".

The GPT stands for Gamer Profanities/Terms

withak
Jan 15, 2003


Fun Shoe
I’m not going to believe anything AI tells me until someone can make autocorrect work reliably.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

starkebn posted:

people don't listen to "experts" now who are "smarter" than them about a subject, so why would they listen to an AI spitting out solutions they don't understand or don't like the vibes of?

Experts are people and therefore are fallible, whereas godAI is all knowing and all seeing in ways you can’t even comprehend, and therefore is incapable of making mistakes. You’d be a fool not to listen to godAI.

Agents are GO!
Dec 29, 2004

Mister Facetious posted:

The GPT stands for Gamer Profanities/Terms

Gamer-Preferred Terminology?

Arivia
Mar 17, 2011

withak posted:

I’m not going to believe anything AI tells me until someone can make autocorrect work reliably.

this is literally what apple is using ai for first and i've got to be honest it seems like the first useful thing i've seen anyone come up with

OctaMurk
Jun 21, 2013
ive found chatgpt saves me time on writing vba, which i do once every like six months so i forgot a lot and have to google stuff that is kind of like what i wanna do and then try to apply it

Whereas with chatgpt it gives me an answer thats close to what i want without having to go to like 3 or 4 webpages

Arivia
Mar 17, 2011

OctaMurk posted:

ive found chatgpt saves me time on writing vba, which i do once every like six months so i forgot a lot and have to google stuff that is kind of like what i wanna do and then try to apply it

Whereas with chatgpt it gives me an answer thats close to what i want without having to go to like 3 or 4 webpages

okay i will also accept "ai helps me not kill myself when i have to write vba in loving 2023 like a goddamn caveman who hasn't figured out fire yet"

Foxfire_
Nov 8, 2010

BRJurgis posted:

I don't usually read this thread, but am I correct that usage of the word AI and the ideas people have about it are insane?

At the end of the day 'AI' is just curve fitting. 'Learn' and 'Train' are misleading words. A more accurate one would be 'Fit'.

You have some human-made equation that maps input=>output with a bunch of unknown coefficient placeholder slots, a bunch of (input=>output) data points, and then you find the coefficients that make the equation output best match the example output. Having GPT-3 generate some text or DALL-E generate a picture is just plugging the new prompt input into that equation+coefficients and seeing what comes out.

All of machine learning is essentially the same thing as the high school science class exercise "For a line y=mx+b, and some data points, find the m and b that minimize the sum-of-squared-error", except:

- The equation being fit is more complicated than a line
- There are many more coefficients to fit instead of just 2
- The inputs and outputs are multidimensional instead of just single numbers. Like for DALL-E, the input is the sequence of words in the prompt and the output is a rectangular array of pixel colors.

Nonlinear optimization problems like this don't have closed-form solutions. Unlike the line fitting, there's no direct way to get the best coefficients. Instead, you do an iterative process where you go through each coefficient and calculate the partial derivative of the error with respect to that coefficient (essentially "If I held everything else constant and just nudged this coefficient a tiny bit, how much would the error change?"). Then you apply all those changes and repeat until the error stops getting better. That gets you to some error minimum that is hopefully close to the global minimum.

Neural net machine learning is a more specific subcategory where the model equation you're finding coefficients for has a particular structure that makes it easy to do the iteration step in fitting. The partial derivatives work out so that you can reuse work from previous ones instead of starting each one from scratch. That lets you do each iteration much faster than you would otherwise, which lets you run more fitting cycles or have more parameters in the same amount of computation time.

Recent neural net AI improvements are all from progressive work on the details of how exactly you design your model equation, how exactly you do the fitting iterations, and cheaper GPU hardware that lets you fit faster.
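
To make that concrete, here's a toy version of the whole "fitting" loop in Python. Purely illustrative, with made-up numbers; real frameworks do the same thing with millions of coefficients and automatic differentiation instead of hand-written derivatives:

code:
# Fit y = m*x + b to some data points by gradient descent.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 2.9, 5.1, 7.0, 9.1]   # roughly y = 2x + 1 with a bit of noise

m, b = 0.0, 0.0                  # start with arbitrary coefficients
lr = 0.01                        # how big a nudge to apply each step

for step in range(5000):
    # Partial derivatives of the sum-of-squared-error w.r.t. m and b:
    # "if I nudged this coefficient a tiny bit, how would the error change?"
    dm = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys))
    db = sum(2 * (m * x + b - y) for x, y in zip(xs, ys))
    # Nudge each coefficient a little against its gradient and repeat
    m -= lr * dm
    b -= lr * db

print(m, b)  # ends up near 2 and 1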

Antigravitas
Dec 8, 2019

Die Rettung fuer die Landwirte:
The most important takeaway is that none of this is "AI". However, having a computer spit out grammatically correct natural language lets people anthropomorphise the machine much more easily even while its internals are conceptually just a souped-up Markov Chain.
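
For anyone who hasn't seen one, a bare-bones word-level Markov chain is only a few lines. This is a toy sketch for illustration; actual LLMs condition on far more context and use learned weights rather than raw counts, hence "souped-up":

code:
import random
from collections import defaultdict

text = "the cat sat on the mat so the cat ate the rat".split()

# Record which words have been seen following each word
follows = defaultdict(list)
for current, nxt in zip(text, text[1:]):
    follows[current].append(nxt)

# Generate by repeatedly picking a statistically plausible next word
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows.get(word, text))  # fall back to any word
    output.append(word)

print(" ".join(output))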

Nothingtoseehere
Nov 11, 2010


And, despite the core algorithm being really simple, if you throw enough data and computing power at it you can make a program that can approximate understanding of human language.

Like, the fact I can tell a program "Write me a rendition of Gangsta's Paradise in the style of Hamlet" and it will respond with text that actually matches my prompt is kinda insane, even when it's clear there's not a lick of intelligence behind it.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Antigravitas posted:

However, having a computer spit out grammatically correct natural language lets people anthropomorphise the machine much more easily even while its internals are conceptually just a souped-up Markov Chain.

Like that guy from Google who made a big deal about how their AI was sentient and then either quit or got fired. Unless of course making noise was the whole point, since that story even got a bit of runtime in prominent news media.

MonikaTSarn
May 23, 2005

Nothingtoseehere posted:

Like, the fact I can tell a program "Write me a rendition of Gangsta's Paradise in the style of Hamlet" and it will respond with text that actually matches my prompt is kinda insane, even when it's clear there's not a lick of intelligence behind it.

I'm not sure how things like that actually work, considering the limitations discussed here. How does this work if it's just fancy word chains? Is it just stealing an example of somebody doing this exact thing before, or actually creating something new?

Philman
Jan 20, 2004

Rand Brittain posted:

Is there a good neutral source for figuring out what kind of solar installation is worth it on your own home? I've been trying to figure out lately whether I should spend money on solar panels, and if so, how much, but it's hard to get information from sources that aren't also selling it (admittedly it's also hard because you're basically asking people to tell you the future).

RETScreen

https://natural-resources.canada.ca/maps-tools-and-publications/tools/modelling-tools/retscreen/7465

you put in all your info and it will tell you the ROI and payback period.

works for all energy projects and mixes.

PT6A
Jan 5, 2006

Public school teachers are callous dictators who won't lift a finger to stop children from peeing in my plane

MonikaTSarn posted:

I'm not sure how things like that actually work, considering the limitations discussed here. How does this work if it's just fancy word chains? Is it just stealing an example of somebody doing this exact thing before, or actually creating something new?

Well, imagine infinite monkeys on infinite typewriters. Except instead of the typewriters having letters, they have buttons that reproduce common "bits" of language -- words in some cases, character clusters in others. Now you need a way to decide "is something a rendition of Gangsta's Paradise" and also "is something in the style of Hamlet?" (which would be odd, because I don't believe Hamlet was known for his writing ;) ) and some time. With enough computing power, you've reduced infinite monkeys on infinite typewriters to something finite enough to appear at roughly the speed of normal human typing.

I've probably misunderstood large parts of it, but I think this is close enough to how it works to approximate the idea that there's no inherent intelligence or creativity to the process.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

MonikaTSarn posted:

I'm not sure how things like that actually work, considering the limitations discussed here. How does this work if it's just fancy word chains? Is it just stealing an example of somebody doing this exact thing before, or actually creating something new?

The technology you want to read up on/search for is “transformer model” (the “T” in (Chat)GPT stands for transformer).

I don’t know nearly enough to explain it but long story short, the defining feature of the transformer model is that it uses an “attention algorithm” that… does some stuff to give you better results? Idk.

Another way to think of it is that stylizing a thing in another written style isn’t really any different than translating from one language to another. The original transformer model was developed to translate from English into German or vice versa. Features of the German language include verb conjugations, changing the endings of adjectives and adverbs (declension), and very specific positioning of verbs including moving a verb allllllll the way to the end of a sentence.

So whatever language translator you’re developing needs to be able to see the endings of a word and infer what it’s talking about, or see a random verb at the end of a super long sentence and know what to refer back to.
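
If anyone wants the one-screen version of what that attention step does, it's roughly this. A simplified sketch of scaled dot-product attention in NumPy with made-up inputs; real transformers derive Q, K and V from learned projection matrices and stack many such layers with a lot of extra machinery around them:

code:
import numpy as np

def attention(Q, K, V):
    # Each position builds its output as a weighted mix of every position's
    # value vector, weighted by how well its query matches the others' keys.
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                # blend the values

# 4 tokens, 8-dimensional vectors (random numbers just to run the function)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(attention(x, x, x).shape)  # self-attention over the sequence: (4, 8)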

HootTheOwl
May 13, 2012

Hootin and shootin

MonikaTSarn posted:

I'm not sure how things like that actually work, considering the limitations discussed here. How does this work if it's just fancy word chains? Is it just stealing an example of somebody doing this exact thing before, or actually creating something new?

It knows how to identify the parts of a sentence and, using its dataset, find the parts that match.
Then it uses statistics and the rules of language to create output.
Humans are good at pattern recognition (i.e. seeing faces in clouds) so we're very good at taking the output, making it fit the prompt, and believing it's magic.

Riven
Apr 22, 2002
I agree with everything here and also my job where I’m supposed to write highly technical stuff about our cutting edge products needs me to write 1200 words on “Developer Experience” for an SEO page and you can bet your buttons I’m farming that out to ChatGPT. Yes I’m part of the problem, don’t @ me.

HootTheOwl
May 13, 2012

Hootin and shootin

Riven posted:

I agree with everything here and also my job where I’m supposed to write highly technical stuff about our cutting edge products needs me to write 1200 words on “Developer Experience” for an SEO page and you can bet your buttons I’m farming that out to ChatGPT. Yes I’m part of the problem, don’t @ me.

Lmao.
Getting one model to produce material to be picked up by the second. Both ruining the services they rely on to function.
Love it.

nachos
Jun 27, 2004

Wario Chalmers! WAAAAAAAAAAAAA!
there are hundreds of billions of dollars being invested into what will end up being, at best, a decent technical writing and homework assistant that still requires a human to validate the output

Electric Wrigglies
Feb 6, 2015

nachos posted:

there are hundreds of billions of dollars being invested into what will end up being, at best, a decent technical writing and homework assistant that still requires a human to validate the output

Eh, it's already delivered on taking most of the grunt work out of language translation. That and automated gore content filtering (as pointed out by another poster) seem well worth the investment of billions.

Xand_Man
Mar 2, 2004

If what you say is true
Wutang might be dangerous


HootTheOwl posted:

Lmao.
Getting one model to produce material to be picked up by the second. Both ruining the services they rely on to function.
Love it.

The AI Centipede

feedmyleg
Dec 25, 2004

nachos posted:

there are hundreds of billions of dollars being invested into what will end up being, at best, a decent technical writing and homework assistant that still requires a human to validate the output

I think you're largely right when it comes to text output—more or less a "brainstorming assistant" is where this text generation is heading, and where it's already useful. Image and video generators have the ability to be more disruptive, but even then they will mostly affect the lives of graphics professionals. Audio generation will put a small group of people out of a job, but mostly it will just be used by people who are already creating video and audio content to raise the bar of mid-quality. It isn't and won't be a "push button get finished art" for 90% of uses, especially commercial uses, but simply another tool used by people to get a desired result in less time with less effort. The barrier to entry of making something acceptable will be lower, so the need for good content that rises above auto-generated output will become more and more significant.

It will also have significant use in some automation based on pattern recognition, especially around large volumes of data that would take humans an unreasonable amount of manpower to get through manually. For code it will remain a sidekick for devs to be more efficient, and will get better but still require human guidance.

For everyday people, these will be fun novelty toys that certain people enjoy playing around with, and most people touch once or twice. For people who want to make fanart and fan fiction, they will be able to live out their wildest Harry Potter/Peppa Pig slashfic dreams with ease. For young creatives looking to break into, say, filmmaking or video game design, it will make it that much easier for a single individual to create something that feels polished. It's already doing that today—I'm using it to make far more impressive visual effects than I've ever had the time or capacity to do before as we speak. It still involves a ton of manual work using traditional skills and methods, though.

Also, porn. It will be used a whole lot for porn, unless it gets regulated out of existence from a Protect Are Children perspective. And scams.

e: The thing is, it is already all of these things, for those who want to put in the effort to utilize them. The future of these tools is just a more user-friendly, efficient, tactical version of what exists today.

AI isn't going to replace commercial artists, but commercial artists who use AI tools will largely replace those who don't. Artists who don't want to use the tools can continue not to. But it's no different than deciding that you didn't want to use Photoshop in the 90s. Some people still make money in commercial art not using it! But the majority of folks use it when it's the best tool for the job.

feedmyleg fucked around with this message at 18:27 on Sep 11, 2023

Arsenic Lupin
Apr 12, 2012

This particularly rapid💨 unintelligible 😖patter💁 isn't generally heard🧏‍♂️, and if it is🤔, it doesn't matter💁.


Antigravitas posted:

The most important takeaway is that none of this is "AI". However, having a computer spit out grammatically correct natural language lets people anthropomorphise the machine much more easily even while its internals are conceptually just a souped-up Markov Chain.

This is what I've been saying for years, but somehow "souped-up Markov chain" doesn't get you the big startup bucks.

niethan
Nov 22, 2005

Don't be scared, homie!
I feel like artificial intelligence was always expected to be surprisingly emergent from deceptively simple rules, it's not like our neurons are that hot poo poo individually

At this stage it's massively overhyped and misunderstood tho


Antigravitas
Dec 8, 2019

Die Rettung fuer die Landwirte:
LLMs are pretty good at giving draft translations. They suck at tone and subtleties and domain specific stuff, but they do well with boilerplate.
