shoeberto
Jun 13, 2020

which way to the MACHINES?

cinci zoo sniper posted:

GPT-3’s primary improvement came from it being more than 10 times larger than GPT-2. Model architecture has not changed significantly since “GPT-1”, in fact - OpenAI “just” keeps getting better at training and fine-tuning the same model, and keeps getting more money to spend on GPUs.

Yeah, this is my best understanding of what's considered the quantum leap here. The algorithms are not fundamentally all that different from what we have seen for a decade+. More refined maybe, but I haven't seen anything to suggest they have stumbled on to the language model equivalent of fusion energy. It's primarily improved due to the scale and scope of the data being used to train the model.

Probabilistic language models aren't the AI that people think they are. I don't think we'll get there until you can tie it in with some sort of meaningful knowledge graph that can keep the responses on rails, which is a very hard problem to solve. Not impossible, but my personal feeling is that we're a decade out from this being reliable enough to match the expectations being set for the tech today.

Evil Fluffy
Jul 13, 2009

Scholars are some of the most pompous and pedantic people I've ever had the joy of meeting.

cat botherer posted:

Self-driving is nowhere near as good as human drivers. It still only works in ideal conditions and even then fails in all sorts of unpredictable cases. It turns out that driving requires reasoning, which is something no ML system on the horizon can do.

Tesla's self-driving system is also significantly worse, with much higher failure rates, than Waymo and a bunch of others who don't get as much attention because they aren't headed by the biggest conman in the world (and because their failures are less frequent and less catastrophic).

cinci zoo sniper
Mar 15, 2013




shoeberto posted:

Yeah, this is my best understanding of what's considered the quantum leap here. The algorithms are not fundamentally all that different from what we have seen for a decade+. More refined maybe, but I haven't seen anything to suggest they have stumbled on to the language model equivalent of fusion energy. It's primarily improved due to the scale and scope of the data being used to train the model.

Probabilistic language models aren't the AI that people think they are. I don't think we'll get there until you can tie it in with some sort of meaningful knowledge graph that can keep the responses on rails, which is a very hard problem to solve. Not impossible, but my personal feeling is that we're a decade out from this being reliable enough to match the expectations being set for the tech today.

To be fully clear, there’s some truly fascinating work happening on the fronts of preparing the training data, fine-tuning the models (a “raw”, merely pretrained GPT is a general-purpose model and not at all that magical, consequently), and some other “stuffing the turkey” parts of it, but at the end of the day, right now you’re full because the turkey is a Ford F-150 made out of meat.

If you go to the GPT-3 paper*, you’ll basically see that with enough sarcasm applied you can TL;DR it to “large models are better at zero-shot learning than small models because the more poo poo you cram into a model, the less probable it is for it to truly have no context whatsoever for something you would ask, especially if you’re a normal person”. Zero-shot learning is the technical jargon for doing things not represented in the training examples, and it is the mechanism that accounts for much of the wow factor for the general population.

* https://papers.nips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
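For a concrete sense of the distinction, here’s roughly what the paper’s zero-shot vs few-shot prompt formats look like - a minimal sketch paraphrasing the paper’s translation examples, just strings, not any real API call:

```python
# Zero-shot: an instruction only, no solved examples in the prompt.
zero_shot = "Translate English to French:\ncheese =>"

# Few-shot: the same instruction plus a handful of solved examples.
few_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "plush giraffe => girafe en peluche\n"
    "cheese =>"
)
```

Sarcasm aside, the headline result is that the bigger the model, the better it does on the zero-shot version, i.e. the less it needs the worked examples.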

cinci zoo sniper fucked around with this message at 18:42 on Jan 27, 2023

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Watermelon Daiquiri posted:

Like, ffs, my brain just put that solipsism in there organically and I had to stop and think about how that actually worked in the context. My brain could pull that information together because it had the training for it. GPT models are far from true sentience, but they do have some intelligence.

Most definitions of intelligence require some kind of "understanding". GPT does not understand anything it says.

Mega Comrade fucked around with this message at 18:54 on Jan 27, 2023

Mister Facetious
Apr 21, 2007

I think I died and woke up in L.A.,
I don't know how I wound up in this place...

:canada:
No self awareness

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Mega Comrade posted:

Almost every definition of intelligence requires some kind of "understanding". GPT does not understand anything it says.

Yeah, I'm a data scientist who has used GPT, among other large language models, to augment/engineer new features for more specialized data for use in downstream models. They are useful in some situations, but there is nothing particularly amazing about them and they give stupid results all the time. All they really do is find some projective subspace where semantically "close" things are close together (again, this is often wildly off). I think non-specialists tend to fill in the gaps of what they don't understand about ML with things that are more optimistic/cooler than reality.
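To make "close things are close together" concrete, here's a toy sketch - made-up 3-dimensional vectors, whereas real embedding spaces have hundreds or thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard similarity measure in an embedding space: 1.0 = same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" - invented numbers purely for illustration.
cat = np.array([0.90, 0.10, 0.05])
kitten = np.array([0.85, 0.20, 0.05])
invoice = np.array([0.05, 0.10, 0.95])

print(cosine_similarity(cat, kitten))   # ~0.99: semantically "close"
print(cosine_similarity(cat, invoice))  # ~0.12: semantically distant
```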

For the vast majority of things in industry, which revolve around structured data, gradient-boosted decision trees are an empirically better approach than the hot sexy big NN models. While still far away from "understanding," they come closer than things like GPT-3.
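For anyone who hasn't touched them, the gradient-boosted workflow is deliberately unglamorous. A sketch with synthetic data via scikit-learn - any real problem would obviously need real features and tuning:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the structured/tabular data most of industry runs on.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # held-out accuracy
```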

Main Paineframe
Oct 27, 2010

Electric Wrigglies posted:

eh, it is a bit of a trolley problem, because development of automation in driving could well reduce harm to people from driving (not just crashes but energy efficiency, time saving, etc.) over the medium to long term. Cruise control is still killing people to this day even though it started out as a lovely PID loop, and the current versions are quite refined.

A lot of attacks on automation / AI / machine learning consist of comparing the worst result of the technological solution with the best human outcome. e.g.
Supv: Hey joe, how about using the expert system to run the plant in stable operation?
Joe: What? I can beat that stupid thing every day, it can't even handle fault x without intervention
Supv: ok, what about last shift where feeder two was underfed for seven hours.
Joe: We are busy and tired, it's loving bullshit that you expect us to watch this thing all day long, you loving sit there for hours monitoring everything and not miss something.
Supv: that's what the expert system is for, to tirelessly watch and optim...
Joe: You're a gently caress-knuckle. I can beat that stupid thing every day.

Development of automation in driving could theoretically reduce harm to people from driving, but that doesn't mean that any specific given implementation will. Tesla's cameras-only implementation, which totally rejects non-visual sensors such as radar, is particularly risky. And more importantly, it doesn't mean that automation companies should be given free rein to do whatever they please based on the idea that they will someday eventually develop a self-driving car that's safer than human drivers - the tales of potential future progress are of little comfort to the multiple people who've been decapitated because their Tesla drove under a truck, let alone the pedestrians and emergency vehicles who've been endangered by Autopilot's notorious inability to handle non-standard situations.

And the entire overall concept of using self-driving to reduce the harm caused by automobiles also runs into the same problem as proposals for reducing harm by improving automobiles: the most effective way to reduce the harm caused by cars is to get cars off the road. In the US, it won't ever be practical to eradicate car traffic altogether, of course. But McKinsey thinks that more than $200 billion has been invested into the development of partially or fully automated automobiles (as of October 2020), and that pays for plenty of buses or (under ideal circumstances, assuming no cost overruns) at least a thousand miles of light rail. How many vehicle-miles could be moved out of personal automobiles with that kind of investment?

There's also the problem that if a self-driving software has issues with a particular situation, most or all vehicles running that software will likely have the same issue. That means that a particular set of circumstances or stretch of road can become insanely dangerous overnight for no apparent reason as an entire fleet of cars suddenly becomes unable to handle it safely. And sure, there's intersections and situations that human drivers tend to have a high likelihood of having issues with. But the concentration of automated vehicles in the Bay Area has already provided some demonstrations of how frequent and incomprehensible these issues could become with a large fleet of vehicles on the road running the same software:
https://www.youtube.com/watch?v=2sZnGdBm_fs

And lastly, the worst result of the technological system is what you have to compare against, because the technological system running physical processes in the real world can't usefully self-adjust or self-adapt. If it gets stuck, the only resolution is human intervention.

shoeberto
Jun 13, 2020

which way to the MACHINES?

cat botherer posted:

Yeah, I'm a data scientist who has used GPT, among other large language models, to augment/engineer new features for more specialized data for use in downstream models. They are useful in some situations, but there is nothing particularly amazing about them and they give stupid results all the time. All they really do is find some projective subspace where semantically "close" things are close together (again, this is often wildly off). I think non-specialists tend to fill in the gaps of what they don't understand about ML with things that are more optimistic/cooler than reality.

Not a data scientist but I am working in a similar context, and that's been my prevailing experience as well.

That's why it feels so much like blockchain to me. Hype because of what people imagine it can do, when it's really only useful in highly specific situations (which are super technical and nitty-gritty). But a lot of people are going to try to get very rich off of it, and may even achieve that, until they have to actually start generating revenue.

cinci zoo sniper
Mar 15, 2013




shoeberto posted:

But a lot of people are going to try to get very rich off of it, and may even achieve that, until they have to actually start generating revenue.

Yeah. Just the other day, I received an email from a popular text-to-image generation service - they were looking for a consultant who could reduce their operating costs.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
I love how OpenAI used to be a non-profit lol

cinci zoo sniper
Mar 15, 2013




Watermelon Daiquiri posted:

I love how OpenAI used to be a non-profit lol

Nominally it’s still supposed to be that, since the for-profit OpenAI LP is supposed to transfer profits to OpenAI, the original non-profit, which still exists. They just switched their language a few years ago, using “OpenAI” to refer to OpenAI LP.

Evil Fluffy
Jul 13, 2009

Scholars are some of the most pompous and pedantic people I've ever had the joy of meeting.

Main Paineframe posted:

Development of automation in driving could theoretically reduce harm to people from driving, but that doesn't mean that any specific given implementation will. Tesla's cameras-only implementation, which totally rejects non-visual sensors such as radar, is particularly risky. And more importantly, it doesn't mean that automation companies should be given free rein to do whatever they please based on the idea that they will someday eventually develop a self-driving car that's safer than human drivers - the tales of potential future progress are of little comfort to the multiple people who've been decapitated because their Tesla drove under a truck, let alone the pedestrians and emergency vehicles who've been endangered by Autopilot's notorious inability to handle non-standard situations.

And the entire overall concept of using self-driving to reduce the harm caused by automobiles also runs into the same problem as proposals for reducing harm by improving automobiles: the most effective way to reduce the harm caused by cars is to get cars off the road. In the US, it won't ever be practical to eradicate car traffic altogether, of course. But McKinsey thinks that more than $200 billion has been invested into the development of partially or fully automated automobiles (as of October 2020), and that pays for plenty of buses or (under ideal circumstances, assuming no cost overruns) at least a thousand miles of light rail. How many vehicle-miles could be moved out of personal automobiles with that kind of investment?

There's also the problem that if a self-driving software has issues with a particular situation, most or all vehicles running that software will likely have the same issue. That means that a particular set of circumstances or stretch of road can become insanely dangerous overnight for no apparent reason as an entire fleet of cars suddenly becomes unable to handle it safely. And sure, there's intersections and situations that human drivers tend to have a high likelihood of having issues with. But the concentration of automated vehicles in the Bay Area has already provided some demonstrations of how frequent and incomprehensible these issues could become with a large fleet of vehicles on the road running the same software:
https://www.youtube.com/watch?v=2sZnGdBm_fs

And lastly, the worst result of the technological system is what you have to compare against, because the technological system running physical processes in the real world can't usefully self-adjust or self-adapt. If it gets stuck, the only resolution is human intervention.

Self-driving software is also a huge target for a state-level actor. If one day you have tens of millions of self-driving cars on the road, then you're one high-level hack away from potentially millions of (near-)simultaneous accidents that cause more injury and death than most wars, and a country absolutely paralyzed in ways never seen before.

BiggerBoat
Sep 26, 2007

Don't you tell me my business again.

Antigravitas posted:

It's really no different from licensing any other copyrighted material. If you want to use a certain song in a game or movie you have to license it, and these licenses are typically time-limited. The same is often true of fonts used in your publication.

And don't underestimate font maintenance. If you want a font that looks consistent with different scripts used around the world, supports ligatures, right to left writing, proper keming, hinting… you usually have to pay. Fonts are hilariously complex nowadays.

They always have been.

In my line of work, I'd estimate that 95% of the time something didn't reproduce properly, it was a font issue. There's also that lovely habit "designers" have where they don't SEND the loving fonts they used on their production files. Made even worse by colleges teaching graphic design students to use Photoshop for everything so I get entire typeset layouts done with it that the client ALWAYS wants to make a change on and many of them don't understand that I simply can't do that without the fonts.

"But it looks fine on my screen?!"

Hell, half of them don't even know how to collect the fonts, where they're stored, or how to create outlines of them so it's not an issue. A lot of amateurs use MS Word and Publisher for layout and, hell, that software just substitutes whatever the hell loving font it wants without telling you it changed it.

I might have a different version of the same font by a different manufacturer and even that can gently caress up a layout.

God I'm glad to be out of that poo poo.

PT6A
Jan 5, 2006

Public school teachers are callous dictators who won't lift a finger to stop children from peeing in my plane

BiggerBoat posted:

They always have been.

In my line of work, I'd estimate that 95% of the time something didn't reproduce properly, it was a font issue. There's also that lovely habit "designers" have where they don't SEND the loving fonts they used on their production files. Made even worse by colleges teaching graphic design students to use Photoshop for everything so I get entire typeset layouts done with it that the client ALWAYS wants to make a change on and many of them don't understand that I simply can't do that without the fonts.

"But it looks fine on my screen?!"

Hell, half of them don't even know how to collect the fonts, where they're stored, or how to create outlines of them so it's not an issue. A lot of amateurs use MS Word and Publisher for layout and, hell, that software just substitutes whatever the hell loving font it wants without telling you it changed it.

I might have a different version of the same font by a different manufacturer and even that can gently caress up a layout.

God I'm glad to be out of that poo poo.

Yeah, the technical side of it is a massive pain in the rear end. But the licensing/economic side of it? Who fuckin' cares? Either pay for the font you really, really want to use, and pay for everyone else who's involved to be able to use it as well, or pick a different font and quit bitching. There are tons of free, permissively-licensed fonts in this world; the designers with the super nice fonts are not holding typography hostage just because you have to make do with a slightly different font if you don't want to pay them.

dphi
Jul 9, 2001
It sounds like the answer here is AI-generated fonts

shrike82
Jun 11, 2005

Technically, ChatGPT's not just the GPT-3 architecture - they've added a reward and reinforcement learning component that improves responses by training the overall model to rank potential responses. The interesting thing about these generative DL models (that's a bit more obvious on the Stable Diffusion side) is that the models are actually sampling responses, so there's a lot of room for improvement on that end.

The big jump from Markov to RNN to transformer models is how much state they can maintain, i.e. the latest architectures can maintain a "memory" across a sequence of thousands of words.
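The "sampling responses" bit is less exotic than it sounds: the model scores every token in its vocabulary and you draw the next token from that distribution. A toy sketch with a made-up four-token vocabulary and made-up scores:

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Softmax the model's raw scores and draw one token id from the result."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

vocab = ["the", "cat", "sat", "mat"]      # pretend vocabulary
logits = np.array([2.0, 1.0, 0.5, 0.1])  # pretend model output
print(vocab[sample_next_token(logits, temperature=0.7)])
```

Lower temperature sharpens the distribution toward the top-scoring token; higher temperature flattens it, which is one of the knobs with room for improvement on that end.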

Ghost Leviathan
Mar 2, 2017

Exploration is ill-advised.

Mega Comrade posted:

Yeah. These AIs are excellent at appearing intelligent, but they actually have zero understanding of what they are spitting out and can't even understand the most basic of concepts.



The trick is that the programs produce one step above word salad, which looks 'smart' to people who don't actually read it, and the kind of people who think 'lots of words = smart and correct'. It's like trying to read narcissist screeds and rejectedparents posts where there's basically nothing but constant dancing around the actual topic and point, except without even the trace of a point to find between the lines.

Of course it doesn't help that as far as most of our society and definitely our politics are concerned, having the trappings of a thing is considered just as good as, and even preferable to, actually having the thing itself.

OddObserver
Apr 3, 2009
It kinda makes me think of a student trying really hard to make sure they get at least some partial credit.

ponzicar
Mar 17, 2008

OddObserver posted:

It kinda makes me think of a student trying really hard to make sure they get at least some partial credit.

"The teacher said three pages, I've run out of content at one and a half. She's already wise to me loving with the margins and font size. Guess it's time to add some redundant padding and pointless bullshit filler."

-Me in high school

SpeakSlow
May 17, 2004

by Fluffdaddy
Funny. I've never used fonts beyond the standard business set and always pictured them like an overlay for an underlying relational character set, like the old ANSI and now Unicode formats.

So in my head, it's like: why worry about the shape of the squiggle when the code underneath it is the same 1:1 relation?

Probably not that easy, but I'm a basic font bitch, so what do I know?

Antigravitas
Dec 8, 2019

Die Rettung fuer die Landwirte:

BiggerBoat posted:

God I'm glad to be out of that poo poo.

:shepface: :same:


SpeakSlow posted:

Funny. I've never used fonts beyond the standard business set and always pictured them like an overlay for an underlying relational character set, like the old ANSI and now Unicode formats.

So in my head, it's like: why worry about the shape of the squiggle when the code underneath it is the same 1:1 relation?

Probably not that easy, but I'm a basic font bitch, so what do I know?

Ligatures and composite glyphs are out to ruin your day. If you want to really feel lost and destitute, read about "unicode normalisation".
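A two-minute taste of the destitution, using Python's actual stdlib: the same rendered character can be one code point or two, and they don't compare equal until you normalise.

```python
import unicodedata

single = "\u00e9"      # "é" as one precomposed code point
combined = "e\u0301"   # "e" followed by a combining acute accent

print(single == combined)                                # False
print(unicodedata.normalize("NFC", combined) == single)  # True
```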

Then there's serious wizardry that goes into getting the widths of each character just right, and to also adjust their spacing so the result looks even. Consider a lower case "fe". In many fonts the e will intrude into the f's bounding box. These spacing adjustments for individual letter combinations are usually hand tuned.
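As a toy model of that hand-tuning (invented font units and adjustment values, not from any real font):

```python
# Advance widths per glyph and hand-tuned kerning pairs, in made-up font units.
ADVANCE = {"f": 520, "e": 480, "A": 640, "V": 620}
KERN = {("f", "e"): -40, ("A", "V"): -80}  # negative = pull glyphs together

def line_width(text: str) -> int:
    """Sum advance widths, applying the pair adjustment between neighbours."""
    width = sum(ADVANCE[c] for c in text)
    width += sum(KERN.get(pair, 0) for pair in zip(text, text[1:]))
    return width

print(line_width("fe"))  # 960, not 1000: the "e" tucks under the "f"
```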

And that's not even going into how a font must morph to be readable at low sizes.

SniHjen
Oct 22, 2010

Family Values posted:

If the law bot is allowed by the courts, how long until Louisiana or some other regressive poo poo hole decides that actually, the constitutional right to an attorney can be satisfied by a contract with DoNotPay.com, and shuts down all their public defender offices?

Then poors will get the legal equivalent of:



I'm looking at that, and what I am seeing is that it would consider New York to be the capital of the USA.
I'm interested in whether it would use the same definition of "capital" here.

PT6A
Jan 5, 2006

Public school teachers are callous dictators who won't lift a finger to stop children from peeing in my plane

SpeakSlow posted:

Funny. I've never used fonts beyond the standard business set and always pictured them like an overlay for an underlying relational character set, like the old ANSI and now Unicode formats.

So in my head, it's like: why worry about the shape of the squiggle when the code underneath it is the same 1:1 relation?

Probably not that easy, but I'm a basic font bitch, so what do I know?

As others have said, producing a really well-designed font that looks "right" across a variety of styles, character sets, character combinations, sizes, etc., in addition to simply looking good, is quite an involved process.

It's unreasonable to expect everyone to do that for free. Frankly, it's amazing that a truly significant number of people have done this work for permissively-licensed fonts. So you either restrict yourself to fonts that are widely available and licensed, or accept a free font with any of its imperfections, or pay and quit whining.

Designers are not owed the ability to use whichever fonts they like for free any more than those designers' customers are owed the designers' work for free.

Family Values
Jun 26, 2007


SniHjen posted:

I'm looking at that, and what I am seeing is that it would consider New York to be the capital of the USA.
I'm interested in whether it would use the same definition of "capital" here.

It doesn't 'consider' anything, nor does it have a definition of 'capital'. Istanbul was referred to as Turkey's capital enough times (or its 'cultural capital' or 'financial capital' or whatever) in the training data, and Ankara doesn't get as much coverage, so it just reproduces that error. We interpret its confidence as correctness.

These AIs are just 'many people are saying…' algorithms. They're only fit for the purpose of being a Fox News pundit.
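The 'many people are saying…' failure mode is easy to reproduce in miniature. A deliberately dumb toy - nothing like a real LM's internals, but the same frequency-over-truth dynamic:

```python
from collections import Counter

# Toy "training data": Istanbul gets more capital-adjacent coverage than Ankara.
corpus = [
    "istanbul is the cultural capital of turkey",
    "istanbul is the financial capital of turkey",
    "ankara is the capital of turkey",
]

# "What is the capital of turkey?" answered by raw co-occurrence counts.
counts = Counter(line.split()[0] for line in corpus if "capital" in line)
print(counts.most_common(1))  # [('istanbul', 2)] - frequency wins, not truth
```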

Family Values fucked around with this message at 17:37 on Jan 28, 2023

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
I'm not sure how these obviously cherry-picked examples of ChatGPT being hilariously wrong are the owns that people seem to think they are. I'm 100% positive that for every cherry-picked bad example there's a "holy poo poo this is actually genius" good example out there.

Mister Facetious
Apr 21, 2007

I think I died and woke up in L.A.,
I don't know how I wound up in this place...

:canada:

Boris Galerkin posted:

I'm not sure how these obviously cherry-picked examples of ChatGPT being hilariously wrong are the owns that people seem to think they are. I'm 100% positive that for every cherry-picked bad example there's a "holy poo poo this is actually genius" good example out there.

And what happens when someone uses it for advice on a subject they know nothing about? How is a layman supposed to tell good information from bad?
The fact that it CAN be so hilariously wrong is an indication that it needs far, far more work in order to be ready for the general population without problems arising in the future.

PT6A
Jan 5, 2006

Public school teachers are callous dictators who won't lift a finger to stop children from peeing in my plane
With no way to know the difference between a genius answer and a hilariously wrong answer, the utility is marginal at best unless you already have some idea yourself of what the answer is. And, reasonably speaking, it's going to be less accurate at giving genius answers to things which are difficult or unknown to humans, because those things won't be reflected in the training data as often.

Can AI generate some insights, or even automate certain aspects of writing? Yes, but at this point, it’s not to be trusted. Consider that a key “worry” about these systems is academic dishonesty — an area where there’s already a more knowledgeable person tasked with evaluating the result whether it’s generated by a person or an AI.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Mister Facetious posted:

And what happens when someone uses it for advice on a subject they know nothing about? How is a layman supposed to tell good information from bad?
The fact that it CAN be so hilariously wrong is an indication that it needs far, far more work in order to be ready for the general population without problems arising in the future.

How is this any different to a layperson googling about x and finding wrong information about x on some random idiot conspiracy theorist's blog?

Family Values
Jun 26, 2007


Boris Galerkin posted:

How is this any different to a layperson googling about x and finding wrong information about x on some random idiot conspiracy theorist's blog?

Google isn't trying to sell anyone the service of 'just google a legal defense and then regurgitate it to the judge in an actual loving court case'.

Mzbundifund
Nov 5, 2011

I'm afraid so.
Because the information you read on the conspiracy theorist’s blog has a known source: the blog. Information you get from ChatGPT comes from ???

Perestroika
Apr 8, 2010

Boris Galerkin posted:

I'm not sure how these obviously cherry-picked examples of ChatGPT being hilariously wrong are the owns that people seem to think they are. I'm 100% positive that for every cherry-picked bad example there's a "holy poo poo this is actually genius" good example out there.

They illustrate the core issue/misunderstanding people have with ChatGPT: It doesn't have any understanding of the things it outputs, it is purely an association machine. Yes, if you feed it with enough data and sufficiently refine it, it can be correct an impressive amount of the time by sheer weight of numbers. But there's not even the most low-level sanity check that it's not completely wrong about even the most basic of facts it puts out, and the way it dresses up the answers in fairly verbose texts tends to obscure that facet.

You can already see the first idiots going "Well I asked the Most Advanced AI Of All Time about the thing we're arguing, and it totally said I was correct, so there!". Yes, it can point you in the right direction a lot of the time. But as it stands, it cannot be considered reliable to the same degree as a manually curated knowledge base.

Harold Fjord
Jan 3, 2004
Probation
Can't post for 42 hours!

Mzbundifund posted:

Because the information you read on the conspiracy theorist’s blog has a known source: the blog. Information you get from ChatGPT comes from ???

Very soon it will come from BuzzFeed articles it wrote.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

shrike82 posted:

Technically, chat-gpt's not just the GPT-3 architecture - they've added a reward and reinforcement learning component that improves responses by training the overall model to rank potential responses. The interesting thing about these generative DL models (that's a bit more obvious on the stable diffusion side) is that the models are actually sampling responses so there's a lot of room for improvement on that end.

The big jump from markov to rnn to transformer models is how much state they can maintain i.e., the latest architectures can maintain a "memory" across a sequence of thousands of words

Sampling is nothing special, and is a simple step beyond embedding - you are just walking around in the derived embedding space. Sampling has been the basis of many/most Bayesian algorithms for decades. It's not that hard to maintain whatever arbitrary state you want (aside from spatial location) during a sampling procedure.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Family Values posted:

Google isn't trying to sell anyone the service of 'just google a legal defense and then regurgitate it to the judge in an actual loving court case'.

My post had nothing to do with that court case. It was entirely about how people are coming in here posting these "lol at this hilariously wrong cherry-picked example, therefore everything ChatGPT outputs is also hilariously bad" takes.

Perestroika posted:

They illustrate the core issue/misunderstanding people have with ChatGPT: It doesn't have any understanding of the things it outputs, it is purely an association machine. Yes, if you feed it with enough data and sufficiently refine it, it can be correct an impressive amount of the time by sheer weight of numbers. But there's not even the most low-level sanity check that it's not completely wrong about even the most basic of facts it puts out, and the way it dresses up the answers in fairly verbose texts tends to obscure that facet.

You can already see the first idiots going "Well I asked the Most Advanced AI Of All Time about the thing we're arguing, and it totally said I was correct, so there!". Yes, it can point you in the right direction a lot of the time. But as it stands, it cannot be considered reliable to the same degree as a manually curated knowledge base.

When I was younger, we were told not to trust everything we read on the internet. When I was older it was "well, Wikipedia has good information, but be careful about its sources."

Aren't you all roughly the same age as me? If we survived the times Before Wikipedia, when information was on random people's GeoCities pages, and the times After Wikipedia, when we were told to verify sources before taking them as fact, then what's the big loving deal?

Mzbundifund
Nov 5, 2011

I'm afraid so.
Nobody trusted the random GeoCities pages in the Before Wikipedia times; if you wanted a respected source, you got it from the library.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Mzbundifund posted:

Nobody trusted the random GeoCities pages in the Before Wikipedia times; if you wanted a respected source, you got it from the library.

That's what I said? In the Before Wikipedia times we had information on random GeoCities pages, and the world didn't end, because people knew/were taught not to trust said information?

Family Values
Jun 26, 2007


Boris Galerkin posted:

My post had nothing to do with that court case. It was entirely about how people are coming in here posting these "lol at this hilariously wrong cherry-picked example, therefore everything ChatGPT outputs is also hilariously bad" takes.

If you can't see why getting basic facts correct matters, I don't know what to tell you. Unscrupulous people are rushing to apply this technology to problems that it's wildly unsuited for, hence the "tech nightmare".

Main Paineframe
Oct 27, 2010

Boris Galerkin posted:

My post had nothing to do with that court case. It was entirely about how people are coming in here posting these "lol at this hilariously wrong cherry-picked example, therefore everything ChatGPT outputs is also hilariously bad" takes.

Cherry-picked or not, they're examples of how ChatGPT is unreliable. And more than that, it shows how they're unreliable: because ChatGPT does not actually understand the question and is just mashing together sentences based on how often words are used next to each other on the internet. It's not really much different from Googling stuff and just taking the top result, except that it's billed as AI and rephrases everything into natural language, so people think it's actually useful.

hobbesmaster
Jan 28, 2008

Some people at my work are excited about using ChatGPT to spit out rough drafts of marketing copy. All I can think of with that is "ok, fair, that's a reasonable thing it can do right now".

Mister Facetious
Apr 21, 2007

I think I died and woke up in L.A.,
I don't know how I wound up in this place...

:canada:

Boris Galerkin posted:

How is this any different to a layperson googling about x and finding wrong information about x on some random idiot conspiracy theorist's blog?

Convenience. Why look poo poo up yourself when you can ask the Voice in the Box to do it for you?
