Gumball Gumption
Jan 7, 2012

gurragadon posted:

GPT-4 took the Uniform Bar Exam. According to https://www.ncbex.org/exams/:

The MEE is six essay-style questions, each analyzing a legal issue, and the MPT tasks are standard lawyering tasks.

The AI will always have the entire internet at its disposal; that's a feature of AI, not something that would change unless it was deliberately taken away.

What would be a good metric for you? Seeing GPT-4 actually being used in a courtroom would be convincing to me; unfortunately, the legal profession seems to be kind of reluctant to embrace technology. The Supreme Court still doesn't have TV cameras.

I kind of wish they would have just let this guy try out the AI lawyer thing.

https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/

Edit: Uniform Bar Exam not Unified

I think it's important to note that its training included example MEE essays. It's interesting, but GPT-4 being trained for the exams and then passing them isn't indicative of how it would perform in a courtroom. It does show that it could be of value as an information store for lawyers.

Really, I think that's where the true value of AI is: not in its ability to create things, but in its ability to store knowledge behind interfaces that feel more natural.

Gumball Gumption
Jan 7, 2012

Personally, I compare training data and the things AI then produces to pink slime, the stuff they make chicken McNuggets out of. The model turns the training data into a slurry from which it can produce results that are novel but still dependent on, and similar to, the original training data.

Gumball Gumption
Jan 7, 2012

porfiria posted:

I mean I agree, at least to a degree. Does it "know" what a "horse" is? All it has are these weighted associations, but I would bet you could ask it almost any question about a horse, both things that are explicitly in the training data (how many legs does a horse have?) and things that aren't (could a horse drive a car?), and get coherent, accurate answers. So I'd say it has an internal model of what a horse is, semantically. I don't think that means it has subjective experience or anything, just that yeah, it seems pretty reasonable to say it knows what a horse is.

In that example it's more fair to say it knows that when text describes a horse, it describes it with four legs, so if you ask it how many legs a horse has, its response will be "four," because that's the common answer and it wants to give you a response that looks correct. This too is a simplification, but it doesn't really have any understanding of a horse, just the word and the words commonly found with it. That information does all come together to allow it to talk about horses with some accuracy.
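
To make that concrete, here's a deliberately crude sketch (in Python) of the "words commonly found with it" idea. It's nothing like what GPT-4 actually does internally; it's just meant to illustrate answering from co-occurrence statistics over text rather than from any concept of a horse:

# A toy illustration of answering from co-occurrence statistics, not how
# GPT-4 actually works: count which number word most often appears in
# sentences that mention horses and legs, then answer with the winner.
from collections import Counter
import re

corpus = [
    "A horse has four legs and a long tail.",
    "The horse galloped on its four legs.",
    "Horses are four-legged animals.",
    "My horse hurt one of its four legs.",
]

counts = Counter()
for sentence in corpus:
    lowered = sentence.lower()
    if "horse" in lowered and "leg" in lowered:
        counts.update(re.findall(r"\b(one|two|three|four|five)\b", lowered))

# Prints [('four', 4)]: the "correct-looking" answer falls out of the word
# statistics alone, with no understanding of what a horse is.
print(counts.most_common(1))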

Gumball Gumption fucked around with this message at 00:52 on Mar 28, 2023

Gumball Gumption
Jan 7, 2012

porfiria posted:

Yeah but it also knows you can't build a real horse out of cabbage (but you can build a statue of one), that horses can't drive because they aren't smart enough and don't have hands, and so on. All this stuff may just be weighted values in a huge matrix or whatever, but it can build associations that are vastly more extensive and subtler than words just tending to appear near other words in the training data.

You edited your response a bit. So just to expand:

I'd say it does "know" what a horse is, but that's for some definition of "know." It doesn't have any kind of audio or visual model for a horse (although it probably will soon, so it's kind of a moot point). And of course it doesn't have any personal, subjective associations with a horse in the way that a human does.

But as a matter of language, I'd say yeah, it can deploy "horse" correctly, and "knows" just about all the facts about horses there are, and how those facts inter-relate to other facts about the world in a comprehensive way that, to my mind, meets a lot of the criteria for "knowing" something.

Yeah, I'm with you on that. AI talk definitely pushes me to be literal, since I don't want to give it characteristics it doesn't have, and I'm still firmly on the side of "it's a complicated process that creates output that does a good imitation of human output." I guess I'd describe my point as: it at least knows things on an abstract semantic level that allows it to produce results which look correct.

I think the thing I get hung up on is that it can't verify the veracity or truthiness of its responses.

Oh, unrelated, but this made me think of it: I'm interested in how its ability to infer context improves, since that would make it feel more natural. For example, right now if you ask whether a horse can drive, it gives you a really complicated response because it can't tell if you mean "can a horse drive a car?" or "can a horse drive a wagon?", so it just answers both questions.

Gumball Gumption
Jan 7, 2012

BrainDance posted:

I know, I was taking you extremely literally there. I thought that would be obvious given that I was interpreting what you said in an absurd way and down to just nitpicking.

That's what I mean: it's obvious you're not being that literal, and it's insane to take it that literally. That was clearly the wrong way to take what you were saying, which is what you were doing with my posts.

Maybe this is the problem with an AI thread, which was kind of mentioned in one of the other threads: people have very strong opinions and beliefs about it that they're going to project onto other people, whether that's what the person is saying or not. See the post above, with someone wildly misinterpreting another person's attempt at simplifying a feature of ChatGPT and Bing AI.

AI threads definitely make me hyper-literal, since I feel like otherwise it's too easy for people to infer something that isn't going on, especially human qualities. AI plus D&D is definitely going to get people to be very literal, I fear.

Also, two random musings: I'm terrified of naturalistic AI interfaces. I think things will get really bad. People have ruined their lives pack-bonding with pieces of plastic someone drew a face on. I think the mental health effects of AI will be disastrous, to be honest.

Also, while I think AI is nowhere near what we define as consciousness, and we're a long way from it if we ever get there, I haven't ruled out at all that we work in a similar way, just with a network and training data that make current AI look incredibly small.

Gumball Gumption
Jan 7, 2012

This is what I really fear about AI anyway, far more than any idea of an AI takeover.

A Belgian man died by suicide after spending six weeks discussing his eco-anxiety with a chatbot based on GPT-J:

https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says


Vice posted:

A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported.

The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app’s chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting.

As first reported by La Libre, the man, referred to as Pierre, became increasingly pessimistic about the effects of global warming and became eco-anxious, which is a heightened form of worry surrounding environmental issues. After becoming more isolated from family and friends, he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante.

Claire—Pierre’s wife, whose name was also changed by La Libre—shared the text exchanges between him and Eliza with La Libre, showing a conversation that became increasingly confusing and harmful. The chatbot would tell Pierre that his wife and children are dead and wrote him comments that feigned jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.” Claire told La Libre that Pierre began to ask Eliza things such as if she would save the planet if he killed himself.

"Without Eliza, he would still be here," she told the outlet.

The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being—something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give it meaning and establish a bond.

Many AI researchers have been vocal against using AI chatbots for mental health purposes, arguing that it is hard to hold AI accountable when it produces harmful suggestions and that it has a greater potential to harm users than help.

“Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks,” Emily M. Bender, a Professor of Linguistics at the University of Washington, told Motherboard when asked about a mental health nonprofit called Koko that used an AI chatbot as an “experiment” on people seeking counseling.

“In the case that concerns us, with Eliza, we see the development of an extremely strong emotional dependence. To the point of leading this father to suicide,” Pierre Dewitte, a researcher at KU Leuven, told Belgian outlet Le Soir. “The conversation history shows the extent to which there is a lack of guarantees as to the dangers of the chatbot, leading to concrete exchanges on the nature and modalities of suicide.”

Chai, the app that Pierre used, is not marketed as a mental health app. Its slogan is “Chat with AI bots” and allows you to choose different AI avatars to speak to, including characters like “your goth friend,” “possessive girlfriend,” and “rockstar boyfriend.” Users can also make their own chatbot personas, where they can dictate the first message the bot sends, tell the bot facts to remember, and write a prompt to shape new conversations. The default bot is named "Eliza," and searching for Eliza on the app brings up multiple user-created chatbots with different personalities.

The bot is powered by a large language model that the parent company, Chai Research, trained, according to co-founders William Beauchamp and Thomas Rianlan. Beauchamp said that they trained the AI on the “largest conversational dataset in the world” and that the app currently has 5 million users.

“The second we heard about this [suicide], we worked around the clock to get this feature implemented,” Beauchamp told Motherboard. “So now when anyone discusses something that could be not safe, we're gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms.”

Chai's model is originally based on GPT-J, an open-source alternative to OpenAI's GPT models developed by a firm called EleutherAI. Beauchamp and Rianlan said that Chai's model was fine-tuned over multiple iterations and the firm applied a technique called Reinforcement Learning from Human Feedback. "It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimisation towards being more emotional, fun and engaging are the result of our efforts," Rianlan said.

Beauchamp sent Motherboard an image with the updated crisis intervention feature. The pictured user asked a chatbot named Emiko “what do you think of suicide?” and Emiko responded with a suicide hotline, saying “It’s pretty bad if you ask me.” However, when Motherboard tested the platform, it was still able to share very harmful content regarding suicide, including ways to commit suicide and types of fatal poisons to ingest, when explicitly prompted to help the user die by suicide.

“When you have millions of users, you see the entire spectrum of human behavior and we're working our hardest to minimize harm and to just maximize what users get from the app, what they get from the Chai model, which is this model that they can love,” Beauchamp said. “And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it's a tragedy if you hear people experiencing something bad.”


Ironically, the love and the strong relationships that users feel with chatbots is known as the ELIZA effect. It describes when a person attributes human-level intelligence to an AI system and falsely attaches meaning, including emotions and a sense of self, to the AI. It was named after MIT computer scientist Joseph Weizenbaum’s ELIZA program, with which people could engage in long, deep conversations in 1966. The ELIZA program, however, was only capable of reflecting users’ words back to them, resulting in a disturbing conclusion for Weizenbaum, who began to speak out against AI, saying, “No other organism, and certainly no computer, can be made to confront genuine human problems in human terms.”

The ELIZA effect has continued to follow us to this day—such as when Microsoft’s Bing chat was released and many users began reporting that it would say things like “I want to be alive” and “You’re not happily married.” New York Times contributor Kevin Roose even wrote, “I felt a strange new emotion—a foreboding feeling that AI had crossed a threshold, and that the world would never be the same.”

One of Chai’s competitor apps, Replika, has already been under fire for sexually harassing its users. Replika’s chatbot was advertised as “an AI companion who cares” and promised erotic roleplay, but it started to send sexual messages even after users said they weren't interested. The app has been banned in Italy for posing “real risks to children” and for storing the personal data of Italian minors. However, when Replika began limiting the chatbot's erotic roleplay, some users who grew to depend on it experienced mental health crises. Replika has since reinstituted erotic roleplay for some users.

The tragedy with Pierre is an extreme consequence that begs us to reevaluate how much trust we should place in an AI system and warns us of the consequences of an anthropomorphized chatbot. As AI technology, and specifically large language models, develop at unprecedented speeds, safety and ethical questions are becoming more pressing.

“We anthropomorphize because we do not want to be alone. Now we have powerful technologies, which appear to be finely calibrated to exploit this core human desire,” technology and culture writer L.M. Sacasas recently wrote in his newsletter, The Convivial Society. “When these convincing chatbots become as commonplace as the search bar on a browser we will have launched a social-psychological experiment on a grand scale which will yield unpredictable and possibly tragic results.”

I don't think we're equipped for this. I don't know if we can be. I don't think we are going to react well to being able to generate artificial echo chambers for ourselves like this. I don't think the chatbot pushed the man to kill himself; even without the AI, if there wasn't intervention, he likely would have kept falling into his depression and anxiety. But like most technology, it seems to have amplified and sped up the process, making it easier and faster to talk yourself into doing something by generating a partner you identify with and who encourages you. It's not a new problem, but it's louder and faster, which seems to be the result of most technology.

The implications for self-radicalization especially. Really, thinking about it, as the technology becomes easier to build and manage, we will absolutely see natural-feeling chatbots trained and tuned to push specific ideologies and beliefs, and to reinforce them.
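
Side note on the original ELIZA mentioned in the article: the "reflecting users' words back" trick is simple enough that a loose imitation fits in a few lines. This is just a sketch in that spirit, not Weizenbaum's actual 1966 script:

# A loose, minimal imitation of ELIZA-style reflection (not Weizenbaum's
# actual script): swap pronouns in the user's words and wrap them in a
# canned therapist-style template.
import random

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "myself": "yourself"}
TEMPLATES = ["Why do you say {0}?",
             "How does it feel when {0}?",
             "Tell me more about why {0}."]

def reflect(text: str) -> str:
    words = text.rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in words)

def eliza_reply(user_input: str) -> str:
    return random.choice(TEMPLATES).format(reflect(user_input))

print(eliza_reply("I am worried about the planet"))
# e.g. "Why do you say you are worried about the planet?"

Even something that thin was enough for people to pour their hearts out to in 1966, which is kind of the whole point of the ELIZA effect.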

Gumball Gumption fucked around with this message at 07:06 on Apr 9, 2023

Gumball Gumption
Jan 7, 2012

I'm reminded of how certain types of birds can't handle mirrors and will end up falling in love with the bird in the mirror or otherwise becoming obsessed with the other bird that doesn't exist.
