Mzbundifund
Nov 5, 2011

I'm afraid so.

Boris Galerkin posted:

That's what I said? In the Before Wikipedia times we had information on random geocities pages and the world didn't end because people knew/were taught not to trust said information?

Yeah, exactly, people knew not to trust that. I’m sure there were exceptions, but there wasn’t this loud vocal cadre of hucksters promoting geocities as the new primary definitive source of knowledge. I hope people know not to trust ChatGPT but I’m not so optimistic. The way it encrusts its answers with jargon and long, grammatically correct but low-density sentences makes them sound authoritative to a non-expert in the topic, and there are so many people talking about how amazing this stupid robot is, I’m concerned it’s going to achieve an undeserved sense of legitimacy.


Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

hobbesmaster posted:

Some people at my work are excited about using ChatGPT to spit out rough drafts of marketing copy. All I can think of with that is “ok fair that’s a reasonable thing it can do right now”

They should probably cool their excitement because the next step is some exec thinking "why do I need 10 people on this marketing team when I can just have 1 person to make ChatGPT spit out rough drafts and 4 people to proofread and edit them?"

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Main Paineframe posted:

Cherry-picked or not, they're examples of how ChatGPT is unreliable. And more than that, it shows how they're unreliable: because ChatGPT does not actually understand the question and is just mashing together sentences based on how often words are used next to each other on the internet. It's not really much different from Googling stuff and just taking the top result, except that it's billed as AI and rephrases everything into natural language so people think it's actually useful for anything.

Mzbundifund posted:

Yeah, exactly, people knew not to trust that. I’m sure there were exceptions, but there wasn’t this loud vocal cadre of hucksters promoting geocities as the new primary definitive source of knowledge. I hope people know not to trust ChatGPT but I’m not so optimistic. The way it encrusts its answers with jargon and long, grammatically correct but low-density sentences makes them sound authoritative to a non-expert in the topic, and there are so many people talking about how amazing this stupid robot is, I’m concerned it’s going to achieve an undeserved sense of legitimacy.

This sounds like a people problem, not a tech nightmare problem.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!

Boris Galerkin posted:

I'm not sure how these obviously cherry-picked examples of ChatGPT being hilariously wrong are the owns that people seem to think they are. I'm 100% positive that for every cherry-picked bad example there's a "holy poo poo this is actually genius" good example out there.

I cherry-picked that one because I think it succinctly demonstrates ChatGPT producing a very well-worded explanation of a concept (how even numbers work) while not actually being able to understand or apply the reasoning it just gave.

ChatGPT is very impressive, and I can see it being an invaluable tool in the future, but it's not intelligent and it gets stuff wrong often. When it launched, Stack Overflow had to ban its use because the site got flooded with seemingly intelligent and convincing but completely incorrect answers. That ChatGPT almost certainly uses Stack Overflow as a source also raises the question: what happens when it starts using itself as the source?

Mega Comrade fucked around with this message at 19:40 on Jan 28, 2023

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

hobbesmaster posted:

Some people at my work are excited about using ChatGPT to spit out rough drafts of marketing copy. All I can think of with that is “ok fair that’s a reasonable thing it can do right now”

ChatGPT absolutely has the potential to be a game changer in how we shovel out BS. The difference is that those things are stupid already.

Main Paineframe
Oct 27, 2010

Boris Galerkin posted:

This sounds like a people problem, not a tech nightmare problem.

All tech nightmare problems are people problems, because people are the ones making these tech nightmares, and people are the ones using tech for nightmarish things.

Nenonen
Oct 22, 2009

Mulla on aina kolkyt donaa taskussa
I love asking ChatGPT for songs about any subject. It knows so many songs that I have never heard!

For example, let's ask for a song about... Seattle?

quote:

One popular song about Seattle is "Seattle" by Macklemore & Ryan Lewis featuring Eric Nally. The song was released in 2015 as part of their album "This Unruly Mess I've Made." The lyrics talk about the city of Seattle and its history, including references to the Space Needle, the city's music scene, and the grunge era of the 1990s.

That's a new one for me. What are the lyrics like then?

quote:

I'm sorry, I don't have the full lyrics to the song "Seattle" by Macklemore & Ryan Lewis featuring Eric Nally. However, I can give you some lines of the lyrics that might give you an idea of what the song is about.

Verse 1:
This is the city of the Space Needle
Where the skies are so blue and so clear
Where the streets are so wide and so narrow
The people are so different, but they're all here

Chorus:
Seattle, Seattle
Where the streets are so clean
Seattle, Seattle
Where the skies are so blue and so green

Verse 2:
This is the city where the music began
Where the grunge and the punk rock collided
Where the streets were so gritty and so grand
Where the youth were so restless, but so alive

Chorus:
Seattle, Seattle
Where the streets are so clean
Seattle, Seattle
Where the skies are so blue and so green

Bridge:
This is the city where the future is born
Where the past and the present are one
Where the people are so strong and so torn
Where the dreams and the reality run

Chorus:
Seattle, Seattle
Where the streets are so clean
Seattle, Seattle
Where the skies are so blue and so green

Outro:
This is the city of the Space Needle
Where the skies are so blue and so clear
Where the streets are so wide and so narrow
The people are so different, but they're all here
Seattle, Seattle

Note that this is not the full lyrics and it's just a part of the song.

Now that's a banger!

Agents are GO!
Dec 29, 2004

Riven posted:

Sure but the real answer is “the mathematical likelihood of the next word that should come after this one.” It’s not actually intelligently synthesizing ideas.

Neither are a lot of actual humans.
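
The "likelihood of the next word" framing in the quote above can be made concrete with a toy bigram sampler. This is only an illustration of the statistical idea, not how a real LLM works (real models use learned representations, not raw counts); the corpus and seed are invented.

```python
import random
from collections import defaultdict

# Count how often each word follows each other word in a tiny made-up corpus.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Sample a successor in proportion to how often it followed `word`."""
    successors = counts[word]
    words = list(successors)
    weights = [successors[w] for w in words]
    return random.choices(words, weights=weights)[0]

random.seed(0)
out = ["the"]
for _ in range(6):
    out.append(next_word(out[-1]))
print(" ".join(out))
```

The generated string is always locally plausible (every adjacent pair occurred in the corpus) without the program "understanding" anything, which is the crux of the argument being made.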

Nenonen
Oct 22, 2009

Mulla on aina kolkyt donaa taskussa
Also the AI doesn't want to touch sensitive subjects... except sometimes it changes its mind. And apparently language and specific wording matters a lot, I can ask for something in English and it's a no go, but once I switch to Finnish it immediately spouts out something. It's always stupid, but sometimes interesting stupid.

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Agents are GO! posted:

Neither are a lot of actual humans.
As stupid as the average person is, they are a lot smarter than current bots. With humans, it’s usually more willful ignorance and laziness than fundamental inability.

Agents are GO!
Dec 29, 2004

cat botherer posted:

As stupid as the average person is, they are a lot smarter than current bots. With humans, it’s usually more willful ignorance and laziness than fundamental inability.

Not me, I am a Chinese Room.

Evil Fluffy
Jul 13, 2009

Scholars are some of the most pompous and pedantic people I've ever had the joy of meeting.

Boris Galerkin posted:

I'm not sure how these obviously cherry-picked examples of ChatGPT being hilariously wrong are the owns that people seem to think they are. I'm 100% positive that for every cherry-picked bad example there's a "holy poo poo this is actually genius" good example out there.

I'm sure there are examples of the Tesla self-drive function working flawlessly too but that doesn't make a pedestrian any less maimed or dead when it fucks up and runs someone over.

At the end of the day, it's unreliable tech and in some situations they're selling a lie.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Nenonen posted:

I love asking ChatGPT for songs about any subject. It knows so many songs that I have never heard!

For example, let's ask for a song about... Seattle?

That's a new one for me. What are the lyrics like then?

Now that's a banger!

Now just feed those lyrics and a style descriptor to Google’s new music generation model MusicLM and we’ve got ourselves a guaranteed Billboard Top 100 hit!

cat botherer
Jan 6, 2022

I am interested in most phases of data processing.

Boris Galerkin posted:

Now just feed those lyrics and a style descriptor to Google’s new music generation model MusicLM and we’ve got ourselves a guaranteed Billboard Top 100 hit!
https://www.youtube.com/watch?v=rqkUISJej2o

You probably haven’t seen one of these, they cost almost 1000 pounds.

ErIog
Jul 11, 2001

:nsacloud:

Boris Galerkin posted:

I guess the AI Lawyer guy isn’t going through with it anymore according to NPR:

Anyway, I was under the impression—based entirely on TV and movies—that one could choose to represent themselves even if they were not a bar certified lawyer. Is this not true? And if it is true what’s the argument for not letting a rando “practice” law with an AI chatbot?

Late reply, but the way representing yourself works isn't the way you've seen in TV or movies. The judge has to accept it. If the judge doesn't think you have the requisite skills to represent yourself adequately (e.g. not gently caress up or delay proceedings) they will tell you to gently caress off. You have the right to adequate representation*. You do not have the right to insist that you are your own adequate representation. They'll force a court-appointed lawyer on you.

This isn't tax filing. You can't just say it's self-prepared even if TurboTax did it for you. Judges are most concerned with people wasting time on their docket. They're not gonna let you mess with them when they have a slate of willing underpaid people who can represent you instead.

The fact that the original person thought pro se worked like that makes me think they probably shouldn't be anywhere near a courtroom or clients. It literally does not make sense that they thought this could be an end-around with the way getting clearance for pro se works. Maybe all this works differently in the UK, but it's incredibly suspect.

*as defined within a very very narrow band of not stabbing your client and not wasting the court's time, all other malfeasance other than wire fraud will probably be tolerated. It's extraordinarily hard to raise ineffective assistance of counsel out the gate and it's like doubly hard if it involves a court-appointed attorney due to SCOTUS jurisprudence on this.

ErIog fucked around with this message at 06:12 on Jan 29, 2023

Ruffian Price
Sep 17, 2016

Nenonen posted:

Also the AI doesn't want to touch sensitive subjects... except sometimes it changes its mind. And apparently language and specific wording matters a lot, I can ask for something in English and it's a no go, but once I switch to Finnish it immediately spouts out something. It's always stupid, but sometimes interesting stupid.

Interesting how "do 20 generations and pick the one that didn't poo poo the bed" used to be standard procedure, is integrated into the ChatGPT interface, and yet nobody does this. Perhaps it breaks the immersion
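
The "do 20 generations and pick the best one" procedure mentioned above can be sketched as a best-of-n loop. Everything here is a stand-in: `generate` fakes a model call with canned answers and `score` fakes whatever quality check you would actually use (a fact-checker, a unit test, a human skim).

```python
import random

def generate(prompt, rng):
    # Stand-in for one model sample: a random canned answer.
    answers = ["7", "9", "the answer is 9", "I cannot answer that"]
    return rng.choice(answers)

def score(answer):
    # Stand-in quality check: in this toy, a good answer contains "9".
    return 1.0 if "9" in answer else 0.0

def best_of_n(prompt, n=20, seed=0):
    """Generate n candidates and keep the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("What is 4 + 5?"))
```

The hard part in practice is exactly the point being debated in the thread: writing a `score` function that reliably detects when the model "poo poo the bed".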

SniHjen
Oct 22, 2010

Edited from the original:

Main Paineframe posted:

Cherry-picked or not, they're examples of how people are unreliable. And more than that, it shows how they're unreliable: because people do not actually understand the question and are just mashing together sentences based on how often words are used next to each other on the internet. It's not really much different from Googling stuff and just taking the top result, except that it's billed as intelligence and rephrases everything into natural language so people think they are actually useful for anything.

This is the problem I have with this discussion: asking ChatGPT a question is the same as asking anyone a question.
It's not even a question of accurate, good, or perfect. Why are you trusting random people on a dead forum?

shoeberto
Jun 13, 2020

which way to the MACHINES?

SniHjen posted:

Edited from the original:

This is the problem I have with this discussion, asking ChatGPT a question, is the same as asking anyone a question.
It's not even a question of accurate, good, or perfect. why are you trusting random people on a dead forum?

The problem with this comparison is that it's not controlling for the perception of expertise. It's realistic to assume humans who are otherwise discerning about human expertise will perceive that a very advanced generative AI also inherently has expertise. A big part of the hype cycle right now seems to be built around the assumption that expert systems can trivially be built with this tech, which just isn't the case.

Sage Grimm
Feb 18, 2013

Let's go explorin' little dude!

DoNotPay reached out again and man it is wild; it throws accusations at her, evades answering any of her questions, and changes the TOS to specifically make it a violation to test the products before actual use. And then blocks her again.
https://twitter.com/KathrynTewson/status/1618859985118859264

shoeberto
Jun 13, 2020

which way to the MACHINES?

Main Paineframe posted:

Cherry-picked or not, they're examples of how ChatGPT is unreliable. And more than that, it shows how they're unreliable: because ChatGPT does not actually understand the question and is just mashing together sentences based on how often words are used next to each other on the internet. It's not really much different from Googling stuff and just taking the top result, except that it's billed as AI and rephrases everything into natural language so people think it's actually useful for anything.

Following up on the expert system thing, the really loving hard problem is how do you vet anything that it says for accuracy at scale?

We have a hard enough time moderating human generated content at scale. A nondeterministic algorithm that spits out convincing-enough misinformation could be an absolute trainwreck. I'm not sure if it's dangerous per se, but a lot of companies betting the farm on this are going to learn some hard lessons very quickly. See: CNET

Barrel Cactaur
Oct 6, 2021

Nenonen posted:

Also the AI doesn't want to touch sensitive subjects... except sometimes it changes its mind. And apparently language and specific wording matters a lot, I can ask for something in English and it's a no go, but once I switch to Finnish it immediately spouts out something. It's always stupid, but sometimes interesting stupid.

It's running a brain-dead filter system, so of course the other languages pass; the censor only reads English.
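
The kind of naive moderation filter described above can be sketched in a few lines: it only knows English keywords, so the same request phrased in another language sails straight through. The keyword list and example prompts are invented for illustration.

```python
# A deliberately naive English-only keyword filter (invented keywords).
BLOCKED_KEYWORDS = {"explosive", "weapon"}

def is_blocked(prompt: str) -> bool:
    """Block a prompt only if it contains a blocked English keyword."""
    words = prompt.lower().split()
    return any(w in BLOCKED_KEYWORDS for w in words)

print(is_blocked("how to build a weapon"))  # English request: caught
print(is_blocked("miten rakennetaan ase"))  # Same request in Finnish: not caught
```

Real systems are more sophisticated than literal keyword matching, but the failure mode is the same whenever the filter's training or vocabulary is concentrated in English.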

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

shoeberto posted:

The problem with this comparison is that it's not controlling for the perception of expertise. It's realistic to assume humans who are otherwise discerning about human expertise will perceive that a very advanced generative AI also inherently has expertise. A big part of the hype cycle right now seems to be built around the assumption that expert systems can trivially be built with this tech, which just isn't the case.

Like I already said, we survived the pre- and post-Wikipedia eras. Today in 2023 people will pull up blog articles written by conspiracy theorists as 100% unironic facts and ignore actually truthful Wikipedia articles with citations to factual and reliable sources as 100% unironic conspiracy theories.

I really don't see how ChatGPT is any different.

The stuff people seem to say about how ChatGPT and other "AI"-Driven resources are bad are already happening today in 2023 without ChatGPT.

e: Not to mention that people will flat-out refuse to believe subject matter experts who have dedicated their lives to their subject matter, and instead choose to believe their uncle Jim who heard from "many people" that those subject matter experts are all wrong. Whatever problems you guys are imagining ChatGPT will bring already exist today.

Boris Galerkin fucked around with this message at 16:26 on Jan 29, 2023

Jose Valasquez
Apr 8, 2005

Boris Galerkin posted:

Like I already said, we survived the pre- and post-Wikipedia eras. Today in 2023 people will pull up blog articles written by conspiracy theorists as 100% unironic facts and ignore actually truthful Wikipedia articles with citations to factual and reliable sources as 100% unironic conspiracy theories.

I really don't see how ChatGPT is any different.

The stuff people seem to say about how ChatGPT and other "AI"-Driven resources are bad are already happening today in 2023 without ChatGPT.

e: Not to mention that people will flatout refuse to believe subject matter experts who have dedicated their lives in their subject matter and instead choose to believe their uncle Jim who heard from "many people" that those subject matter experts are all wrong. Whatever problem you guys are imagining ChatGPT will bring already exist today.

If crazy conspiracy website is the bar we're setting for ChatGPT then what's the point? We don't need an AI to come up with completely untrue garbage.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Jose Valasquez posted:

If crazy conspiracy website is the bar we're setting for ChatGPT then what's the point? We don't need an AI to come up with completely untrue garbage.

Because it's a useful tool for some people out there who know how to use it and have a reason to use it? See: Photoshop, AutoCAD, Solidworks, etc, etc, etc. All tools which are generally useless for the vast majority of people, but extremely useful for some.

e: Maya, Blender, Final Cut Pro, Illustrator, etc

Boris Galerkin fucked around with this message at 16:36 on Jan 29, 2023

silence_kit
Jul 14, 2011

by the sex ghost

Boris Galerkin posted:

Like I already said, we survived the pre- and post-Wikipedia eras. Today in 2023 people will pull up blog articles written by conspiracy theorists as 100% unironic facts and ignore actually truthful Wikipedia articles with citations to factual and reliable sources as 100% unironic conspiracy theories.

I really don't see how ChatGPT is any different.

The stuff people seem to say about how ChatGPT and other "AI"-Driven resources are bad are already happening today in 2023 without ChatGPT.

e: Not to mention that people will flatout refuse to believe subject matter experts who have dedicated their lives in their subject matter and instead choose to believe their uncle Jim who heard from "many people" that those subject matter experts are all wrong. Whatever problem you guys are imagining ChatGPT will bring already exist today.

Yeah, I feel like whenever a new technology comes out there is all of this hyperbolic stuff about how it introduces a completely new form of evil into the world. The drawbacks to a new technology are quite often not really that new.

E.g. the people who get reactionary about video games/social media sound a lot like the people who back in the day opposed radio, television, the printed newspaper, and so on. Quite often they have good points, and quite often it IS true that the new technology amplifies/exacerbates already existing social problems to a degree that the older technologies did not. However, the common hyperbolic narrative here, that new technologies are bringing a TOTALLY NEW EVIL into the world, is almost always false.

silence_kit fucked around with this message at 16:42 on Jan 29, 2023

Precambrian Video Games
Aug 19, 2002



SniHjen posted:

This is the problem I have with this discussion, asking ChatGPT a question, is the same as asking anyone a question.

No, asking a machine that is incapable of understanding anything is not the same as asking a human that is actually intelligent and capable of critical thought.

Boris Galerkin posted:

Like I already said, we survived the pre- and post-Wikipedia eras. Today in 2023 people will pull up blog articles written by conspiracy theorists as 100% unironic facts and ignore actually truthful Wikipedia articles with citations to factual and reliable sources as 100% unironic conspiracy theories.

I really don't see how ChatGPT is any different.

The difference is the sheer volume of bullshit that machines can generate near-instantaneously. I'll quote this section of an Ezra Klein interview with Gary Marcus again (originally posted in the ChatGPT thread):

quote:

EZRA KLEIN: Let’s sit on that word truthful for a minute because it gets to, I think, my motivation in the conversation. I’ve been interested — I’m not an A.I. professional the way you are, but I’ve been interested for a long time. I’ve had Sam on the show, had Brian Christian on the show. And I was surprised by my mix of sort of wonder and revulsion when I started using ChatGPT because it is a very, very cool program. And in many ways, I find that its answers are much better than Google for a lot of what I would ask it.

But I know enough about how it works to know that, as you were saying, truthfulness is not one of the dimensions of it. It’s synthesizing. It’s sort of copying. It’s pastiching. And I was trying to understand why I was so unnerved by it. And it got me thinking, have you ever read this great philosophy paper by Harry Frankfurt called “On Bullshit”?

GARY MARCUS: I know the paper.

EZRA KLEIN: So this is a — welcome to the podcast, everybody — this is a philosophy paper about what is bullshit. And he writes, quote, “The essence of bullshit is not that it is false but that it is phony. In order to appreciate this distinction, one must recognize that a fake or a phony need not be in any respect, apart from authenticity itself, inferior to the real thing. What is not genuine may not also be defective in some other way. It may be, after all, an exact copy. What is wrong with a counterfeit is not what it is like, but how it was made.”

And his point is that what’s different between bullshit and a lie is that a lie knows what the truth is and has had to move in the other direction. He has this great line where he says that people telling the truth and people telling lies are playing the same game but on different teams. But bullshit just has no relationship, really, to the truth.

And what unnerved me a bit about ChatGPT was the sense that we are going to drive the cost of bullshit to zero when we have not driven the cost of truthful or accurate or knowledge advancing information lower at all. And I’m curious how you see that concern.

GARY MARCUS: It’s exactly right. These systems have no conception of truth. Sometimes they land on it and sometimes they don’t, but they’re all fundamentally bullshitting in the sense that they’re just saying stuff that other people have said and trying to maximize the probability of that. It’s just auto complete, and auto complete just gives you bullshit.

And it is a very serious problem. I just wrote an essay called something like “The Jurassic Park Moment for A.I.” And that Jurassic Park moment is exactly that. It’s when the price of bullshit reaches zero and people who want to spread misinformation, either politically or maybe just to make a buck, start doing that so prolifically that we can’t tell the difference anymore in what we see between truth and bullshit.

EZRA KLEIN: You write in that piece, “It is no exaggeration to say that systems like these pose a real and imminent threat to the fabric of society.” Why? Walk me through what that world could look like.

GARY MARCUS: Let’s say if somebody wants to make up misinformation about Covid. You can take a system like Galactica, which is similar to ChatGPT, or you can take GPT-3. ChatGPT itself probably won’t let you do this. And you say to it, make up some misinformation about Covid and vaccines. And it will write a whole story for you, including sentences like, “A study in JAMA” — that’s one of the leading medical journals — “found that only 2 percent of people who took the vaccines were helped by it.”

You have a news story that looks like, for all intents and purposes, like it was written by a human being. It’ll have all the style and form and so forth, making up its sources and making up the data. And humans might catch one of these, but what if there are 10 of these or 100 of these or 1,000 or 10,000 of these? Then it becomes very difficult to monitor them.

We might be able to build new kinds of A.I., and I’m personally interested in doing that, to try to detect them. But we have no existing technology that really protects us from the onslaught, the incredible tidal wave of potential misinformation like this.

And I’ve been having this argument with Yann LeCun, who’s the chief A.I. scientist at Meta, and he’s saying, well, this isn’t really a problem. But already we’ve seen that this kind of thing is a problem. So it was something that really blew my mind around Dec. 4. This was right after ChatGPT came out. People used ChatGPT to make up answers to programming questions in the style of a website called Stack Overflow.

Now everybody in the programming field uses Stack Overflow all the time. It’s like a cherished resource for everybody. It’s a place to swap information. And so many people put fake answers on this thing, where humans ask questions and humans give answers, that Stack Overflow had to ban people putting computer-generated answers there. It was literally existential for that website. If enough people put answers that seemed plausible but were not actually true, no one would go to the website anymore.

And imagine that on a much bigger scale, the scale where you can’t trust anything on Twitter or anything on Facebook or anything that you get from a web search because you don’t know which parts are true and which parts are not. And there’s a lot of talk about using ChatGPT and its ilk to do web searches. And it’s true that, some of the time, it’s super fantastic. You come back with a paragraph rather than 10 websites, and that’s great.

But the trouble is the paragraph might be wrong. So it might, for example, have medical information that’s dangerous. And there might be lawsuits around this kind of thing. So unless we come up with some kinds of social policies and some technical solutions, I think we wind up very fast in a world where we just don’t know what to trust anymore. I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.

EZRA KLEIN: But isn’t it the case that search can be wrong now? Not just search — people can be wrong. People spread a lot of misinformation — that there’s a dimension of this critique that is holding artificial intelligence systems to a standard the society itself does not currently meet?

GARY MARCUS: Well, there’s a couple of different things there. So one is I think it’s a problem in difference in scale. So it’s actually problematic to write misleading content right now. Russian trolls spent something like a million dollars a month, over a million dollars a month during the 2016 election. That’s a significant amount of money. What they did then, they can now buy their own version of GPT-3 to do it all the time. They pay less than $500,000, and they can do it in limitless quantity instead of bound by the human hours.

That’s got to make a difference. I mean, it’s like saying, we had knives before. So what’s the difference if we have a submachine gun? Well, submachine gun is just more efficient at what it does. And we’re talking about having submachine guns of misinformation.

So I think that the scale is going to make a real difference in how much this happens. And then the sheer plausibility of it, it’s just different from what happened before. I mean, nobody could make computer-generated misinformation before in a way that was convincing.

In terms of the search engines, it’s true that you get misleading information. But we have at least some practice — I wish people had more — at looking at a website and seeing if the website itself is legit. And we do that in different kinds of ways. We try to judge the sources and the quality. Does this come from The New York Times, or does it look like somebody did it in their spare time in their office and maybe it doesn’t look as careful? Some of those cues are good and some are bad. We’re not perfect at it. But we do discriminate, like does it look like a fake site? Does it look legit and so forth.

And if everything comes back in the form of a paragraph that always looks essentially like a Wikipedia page and always feels authoritative, people aren’t going to even know how to judge it. And I think they’re going to judge it as all being true, default true, or kind of flip a switch and decide it’s all false and take none of it seriously, in which case that actually threatens the websites themselves, the search engines themselves.

Boris Galerkin posted:

Because it's a useful tool for some people out there who know how to use it and have a reason to use it? See: Photoshop, AutoCAD, Solidworks, etc, etc, etc. All tools which are generally useless for the vast majority of people, but extremely useful for some.

e: Maya, Blender, Final Cut Pro, Illustrator, etc

I don't follow what the relevance of the comparisons here is supposed to be. But note that the discussion of ChatGPT can be separated from that of image generators like DALL-E, because the former can be and often is used to answer questions with an actual verifiable correct response whereas image generators are not. Granted, you can ask ChatGPT to generate poetry or other creative writing too, but I don't think anyone is bothering to ask DALL-E to solve a math problem photorealistically in the style of Picasso.

Precambrian Video Games fucked around with this message at 16:47 on Jan 29, 2023

shoeberto
Jun 13, 2020

which way to the MACHINES?

Boris Galerkin posted:

Like I already said, we survived the pre- and post-Wikipedia eras. Today in 2023 people will pull up blog articles written by conspiracy theorists as 100% unironic facts and ignore actually truthful Wikipedia articles with citations to factual and reliable sources as 100% unironic conspiracy theories.

I had a follow up post I made separately:

shoeberto posted:

Following up on the expert system thing, the really loving hard problem is how do you vet anything that it says for accuracy at scale?

We have a hard enough time moderating human generated content at scale. A nondeterministic algorithm that spits out convincing-enough misinformation could be an absolute trainwreck. I'm not sure if it's dangerous per se, but a lot of companies betting the farm on this are going to learn some hard lessons very quickly. See: CNET

I think we agree on some level. I don't think this is the end of the world, but I think a lot of money and effort is going to be wasted on this, just like blockchain.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

eXXon posted:

I don't follow what the relevance of the comparisons here is supposed to be. But note that the discussion of ChatGPT can be separated from that of image generators like DALL-E, because the former can be, and often is, used to answer questions with an actual true answer whereas image generators are not. Granted, you can ask ChatGPT to generate poetry or other creative writing too, but I don't think anyone is bothering to ask DALL-E to solve a math problem photorealistically in the style of Picasso.

The comparison is that ChatGPT and other "AI"-driven software are all just tools, no different than Photoshop and Blender. The vast majority of people don't need Photoshop and Maya and the like, but they can download/purchase them and use them to draw a dick and balls in 2D with Photoshop or in 3D with Maya. On the other hand, professionals can use them to create art and/or render award-worthy films.

ChatGPT can be used to generate more conspiracy theories by the vast majority of people sure. But a small minority of people could also use it to do much, much more.

The underlying point is that it's a tool, no different from anything else. People believing conspiracy theorists' blogs over subject matter experts is a people problem, not a tech problem (even though tech is what enables their reach). People believing ChatGPT's wrong answers is also a people problem, not a tech problem (even though tech is what enables their reach).

Boris Galerkin fucked around with this message at 16:53 on Jan 29, 2023

Evil Fluffy
Jul 13, 2009

Scholars are some of the most pompous and pedantic people I've ever had the joy of meeting.

Sage Grimm posted:

DoNotPay reached out again and man it is wild; throws accusations at her, evades answering any of her questions and changes the TOS to specifically make it a violation to test the products before actual use. And then blocks her again.
https://twitter.com/KathrynTewson/status/1618859985118859264

The level of dipshittery from this guy is honestly impressive even by "disruptor" techbro standards.

Kwyndig
Sep 23, 2006

Heeeeeey


We're talking about the same guy who couldn't do basic research to realize there's no way he can get an AI into the Supreme Court because they don't allow electronic devices.

Precambrian Video Games
Aug 19, 2002



Boris Galerkin posted:

ChatGPT can be used to generate more conspiracy theories by the vast majority of people sure. But a small minority of people could also use it to do much, much more.

Such as? And regardless of your answer, "it can maybe be used for unspecified good" is not a coherent rebuttal to the point that it is fundamentally incapable of reasoning or of distinguishing truth from fiction.

Boris Galerkin posted:

The underlying point is that it's a tool, no different from anything else. People believing conspiracy theorist's blogs over subject matter expert is a people problem, not a tech problem (even though tech is what enables their reach).

Some tools are harmful, you know? So give an example of a similar preexisting tool that gives you pages of bullshit with minimal effort. And "Google search" is not a good answer, because while it certainly is a tech nightmare of its own, at least it directly links to sources and doesn't usually try to give a (potentially completely wrong) definitive answer. Hell, it's integrated with a calculator that usually does give the right answer.

Mega Comrade
Apr 22, 2004

Listen buddy, we all got problems!
The Stack Overflow example is a good one. People have always put wrong answers on there, either intentionally or not, but it was never a problem at this scale. ChatGPT comes out and within days they have to ban ChatGPT-generated answers because of the sheer volume of bullshit being added, which the mods couldn't cope with.

And that was caused by well-meaning nerds trying to help people (and pad their Stack Overflow stats).

Boris Galerkin posted:

a people problem, not a tech problem

I do not understand what you are trying to convey with this point. All tech problems are people problems.

Mega Comrade fucked around with this message at 17:19 on Jan 29, 2023

pumpinglemma
Apr 28, 2009

DD: Fondly regard abomination.

The greatest new evil ChatGPT brings into the world will be totally automated astroturfing, spamming, and shitposting of all public spaces. You know how you can currently append site:reddit.com to a search for product reviews to get something that isn't SEO'd to hell and back? Kiss that goodbye. You know how making GBS threads up a newspaper comment section with comments about how Putin is good, actually, currently requires Putin to hire several rooms full of physical people? Kiss that goodbye as well. You know how sometimes, far rarer than in the past but sometimes, you'll search for something on Google and get a result that isn't random clickbait garbage churned out in five minutes? Guess what happens when the random clickbait doesn't require a human to write it.

Epic High Five
Jun 5, 2004



Boris Galerkin posted:

The comparison is that ChatGPT and other "AI"-driven software are all just tools, no different than Photoshop and Blender. The vast majority of people don't need Photoshop and Maya and the like, but they can download/purchase them and use them to draw a dick and balls in 2D with Photoshop or in 3D with Maya. On the other hand, professionals can use them to create art and/or render award-worthy films.

ChatGPT can be used to generate more conspiracy theories by the vast majority of people sure. But a small minority of people could also use it to do much, much more.

The underlying point is that it's a tool, no different from anything else. People believing conspiracy theorists' blogs over subject matter experts is a people problem, not a tech problem (even though tech is what enables their reach). People believing ChatGPT's wrong answers is also a people problem, not a tech problem (even though tech is what enables their reach).

An assault rifle is just a tool, too. Same with bombs and rat poison.

The thing about "reactionaries" like the Luddites is that they were absolutely vindicated and proven correct by history. Those new technologies absolutely ushered in an incredible amount of forced displacement, poverty, and massive escalations in the lethality of war. AI will absolutely 100% cause a lot of harm and misery; this is something that is being used as a selling point, even. And that's before we even think about the propaganda and consensus-manufacturing uses it will be put to.

Has anybody crunched the numbers yet on the carbon released into the atmosphere by having huge computing clusters replace labor that normally takes nothing more than a pencil, or an hour or two of a laptop PSU?

StumblyWumbly
Sep 12, 2007

Batmanticore!
ChatGPT is like a guy with a C+ in literally everything, but with absolutely unshakeable confidence, and that confidence ends up making the tool very risky to use. I've used it for technical questions and it's great for getting simple answers that pull in common issues from a variety of places. Wrong answers on a new tool would be fine if it could say "I'm 45% sure the capital of Turkey is Istanbul, 30% sure it's Ankara, 25% it's something else". Something like that could let it replace Stack Overflow, because it would give a clear indication of when you need to dig deeper for the right answer.

E: I'm talking here about using it for answering questions, which is a different use case from using it to deliberately generate bullshit, although both use cases inherently point to the same tool

StumblyWumbly fucked around with this message at 17:30 on Jan 29, 2023
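The confidence read-out StumblyWumbly describes could be roughly approximated today by sampling a model several times and tallying how often each answer comes back. A minimal sketch in Python, assuming you already have a list of sampled answers (the samples below are made up for illustration):

```python
from collections import Counter

def answer_distribution(samples):
    """Tally repeated model samples into a rough confidence distribution.

    `samples` is a list of answer strings from repeated queries; the
    returned dict maps each answer to the fraction of samples giving it.
    """
    counts = Counter(samples)
    total = len(samples)
    return {answer: count / total for answer, count in counts.items()}

# Example: 20 hypothetical samples for "What is the capital of Turkey?"
samples = ["Ankara"] * 12 + ["Istanbul"] * 6 + ["Izmir"] * 2
print(answer_distribution(samples))
# {'Ankara': 0.6, 'Istanbul': 0.3, 'Izmir': 0.1}
```

Note this only measures self-consistency, not correctness: a model that is confidently wrong the same way every time would still score 100%.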

Rent-A-Cop
Oct 15, 2004

I posted my food for USPOL Thanksgiving!

pumpinglemma posted:

The greatest new evil ChatGPT brings into the world will be totally automated astroturfing, spamming, and shitposting of all public spaces. You know how you can currently append site:reddit.com to a search for product reviews to get something that isn't SEO'd to hell and back? Kiss that goodbye. You know how making GBS threads up a newspaper comment section with comments about how Putin is good, actually, currently requires Putin to hire several rooms full of physical people? Kiss that goodbye as well. You know how sometimes, far rarer than in the past but sometimes, you'll search for something on Google and get a result that isn't random clickbait garbage churned out in five minutes? Guess what happens when the random clickbait doesn't require a human to write it.

I mean, nothing is stopping you from writing it once and having a bot spam it without any "AI" involved.

ChatGPT is still reliably detectable with simple regex-based word filters if you actually care to moderate your content.
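The word-filter idea above can be sketched as a single compiled regex over a list of stock phrases. The phrase list here is purely illustrative, not a real moderation ruleset, and in practice filters like this are easy to evade:

```python
import re

# Hypothetical stock phrases that boilerplate LLM output often contains;
# a real moderation filter would maintain and tune its own list.
STOCK_PHRASES = [
    r"as an ai language model",
    r"i cannot provide",
    r"it is important to note that",
]
PATTERN = re.compile("|".join(STOCK_PHRASES), re.IGNORECASE)

def looks_like_llm_boilerplate(text):
    """Return True if the text contains any flagged stock phrase."""
    return PATTERN.search(text) is not None

print(looks_like_llm_boilerplate("As an AI language model, I cannot say."))  # True
print(looks_like_llm_boilerplate("lol nice post"))  # False
```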

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Sage Grimm posted:

DoNotPay reached out again and man it is wild; throws accusations at her, evades answering any of her questions and changes the TOS to specifically make it a violation to test the products before actual use. And then blocks her again.
https://twitter.com/KathrynTewson/status/1618859985118859264

Cackling at "Who is James Joyce?"

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

eXXon posted:

Such as? And regardless of your answer, "it can maybe be used for unspecified good" is not a coherent rebuttal to the point that it is fundamentally incapable of reasoning or distinguishing truth and fiction.

Well, I'm not a professional or subject matter expert who could benefit from using ChatGPT in its current or future forms (not that I can think of, anyway), so I can't tell you. Regardless of whether I could tell you, why would I waste my time when you preface your response to me by saying that nothing I could tell you would matter?

Epic High Five posted:

An assault rifle is just a tool, too. Same with bombs and rat poison.

Yes, I agree. An assault rifle is a tool that the vast, vast majority of people do not need and have no business using. However, for the people who do need them, they are an invaluable or even existential tool, such as in the hands of the Ukrainians trying to defend their homeland from an aggressor.

The existence of assault rifles is not a bad thing in and of itself, just like the existence of ChatGPT and its current or future forms.

The fact that people who have no business owning or using assault rifles can walk into a store in some places and buy one just like that is a people problem. The fact that anybody could use ChatGPT right now to "do bad" with it is a people problem.

Mega Comrade posted:

I do not understand what you are trying to convey with this point. All tech problems are people problems.

No, these are people problems involving education, legislation, outreach, etc. (or the lack thereof). The tech itself is not a problem.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.

Rent-A-Cop posted:

I mean, nothing is stopping you from writing it once and having a bot spam it without any "AI" involved.

ChatGPT is still reliably detectable with simple regex-based word filters if you actually care to moderate your content.

The problem is ordinary people without regex filters not being able to tell that it's spam. And believing that since a lot of smart-sounding people seem to be saying something, it must have a kernel of truth. Which is just how psychology works.

Epic High Five
Jun 5, 2004



Rent-A-Cop posted:

I mean, nothing is stopping you from writing it once and having a bot spam it without any "AI" involved.

ChatGPT is still reliably detectable with simple regex-based word filters if you actually care to moderate your content.

Current agitprop and marketing along these lines is also easily detectable, but can it even be argued that this has lessened the impact it's had? What about when they complete the project of making it not detectable at all while still having its fundamental issue of, ultimately, being a bullshit generator? People seek this stuff out to confirm current biases and consumption patterns at least as often as they do for any other reason, and automating the process will fill that space overnight like an ocean filling a bathtub.


Boris Galerkin posted:

Well, I'm not a professional or subject matter expert that could benefit from using ChatGPT in its current or future forms, that I can think of, so I can't tell you. Regardless of whether I could tell you, why would I waste my time if you preface your response to me by saying that nothing I could tell you would matter?

Yes, I agree. An assault rifle is a tool that the vast, vast majority of people do not need and have no business using. However, for the people who do need them, they are an invaluable or even existential tool, such as in the hands of the Ukrainians trying to defend their homeland from an aggressor.

The existence of assault rifles is not a bad thing in and of itself, just like the existence of ChatGPT and other forms of it current or future.

The fact that people who have no business owning or using assault rifles can walk into a store in some places and buy one just like that is a people problem. The fact that anybody could use ChatGPT right now to "do bad" with it is a people problem.

No, these are people problems involving (or the lack thereof) education, legislation, outreach, etc. The tech itself is not a problem.

Tech problems and people problems are the same problem; they cannot just be separated rhetorically and that be the end of it, because even now this isn't specialist tech. It's free for anybody now, and soon it'll be a low monthly subscription for the casually malevolent or bored, and a higher monthly payment to have the brakes taken off for the professionally malevolent. Separating the two is just an exercise in frog-boiling oneself, along the same lines as people working on drones believing they're merely solving calculations and refining navigation and flight code. If the drone blows up a toddler, is it a toddler problem? Would you support every country on Earth getting 1,000 nukes? Those are also, as you say, just tools.
