gurragadon
Jul 28, 2006

AI has become the hottest word in technology right now. The explosion of ChatGPT onto the scene has led to discussions in nearly every field about how AI will be applied. Academics are worried about rampant cheating, media specialists are worried about disinformation flooding the media ecosystem, people are worried that their jobs will become obsolete, creative artists are finding it even harder to succeed, and medical experts are wondering how AI can be used to enhance the patient experience. AI also touches on really important philosophical concepts, such as the nature of consciousness and how to define it, and those philosophical questions are worth talking about here as well.

https://chat.openai.com/chat

I find myself going to ChatGPT with questions, alongside Google and the new Bing search that generates responses. The Bing search is especially nice because it provides references. The easy, chatty style of ChatGPT makes it comfortable to use and simulates speaking with a person well enough that I find myself thanking it occasionally. The problem is that it is willing to be completely wrong about an issue, because it has no real concept of what it's talking about, so for now care must be taken to ensure the information ChatGPT gives is correct.

I asked Bing Chat to provide me with a list of roofers in Bethesda, Maryland, and it gave me this, which is a genuine attempt at the information I wanted. I didn't verify the details, but they're provided in a concise list with everything I would need to continue my search for a new roofer.

Bing Chat posted:

Here are some good roofers in Bethesda, Maryland that I found:

- PG Builders Inc: 7805 Old Georgetown Rd Ste 203, Bethesda, MD 20814. Phone number: (301) 346-1902. Website: https://www.pgbuilders.com/. Rating: 4/5 ¹

- Roofers of Arlington: 1201 S Courthouse Rd, Arlington, VA 22204. Phone number: (703) 475-2446. Website: https://www.roofersofarlington.com/bethesda-md.php. Rating: 5/5 ³

- Bethesda Roofing Services: 4710 Bethesda Ave, Bethesda, MD 20814. Phone number: (301) 200-1140. Website: https://bethesdaroofingservices.com/. Rating: 5/5 ⁴

I hope that helps. Let me know if you have any other questions.

Source: Conversation with Bing, 3/22/2023.
(1) https://bing.com/search?q=good+roofers+bethesda+maryland Accessed 3/22/2023.
(2) Roofing Bethesda MD | Roof Bethesda, best roofers near you. https://www.roofersofarlington.com/bethesda-md.php Accessed 3/22/2023.
(3) 10 Best Roofers in Bethesda, MD 2023 - Porch. https://porch.com/bethesda-md/roofers/cp Accessed 3/22/2023.
(4) 453 Best Roofers Near Me in Bethesda , MD - GAF Roofing. https://www.gaf.com/en-us/roofing-contractors/MD/Bethesda Accessed 3/22/2023.
(5) Top 25 Roofers - Bethesda MD with Reviews - HomeAdvisor. https://www.homeadvisor.com/c.Roofing.Bethesda.MD.-12061.html Accessed 3/22/2023.
(6) Best 15 Roofers & Gutter Installation Companies in Bethesda, MD - Houzz. https://www.houzz.com/professionals/roofing-and-gutter/bethesda-md-us-probr0-bo~t_11819~r_4348599 Accessed 3/22/2023.

Google, by contrast, gave me half a page of ads before I got to the links Bing was using as sources, and I still had to visit each source myself. The information is basically the same either way, but Bing Chat summarized it in a much more usable form and with less work than googling.

Google opened the waitlist for its new chat service, Bard, on 3/21/2023. I'm interested in how Google's system responds to prompts differently than ChatGPT and Bing Chat.

https://bard.google.com/

Cinci Zoo Sniper posted a couple of links in the previous thread about how machine learning, and ChatGPT specifically, work. If you have the time, they are really interesting and worth a read, but be careful with the Wolfram article if you are a non-expert. He uses some terms and has some lofty ideas that can seem to push toward anthropomorphizing ChatGPT if you don't read carefully or read with too much enthusiasm.

cinci zoo sniper posted:

Learning doesn't normally use evolutionary algorithms, no. 3.4 of https://dl.acm.org/doi/fullHtml/10.1145/3467477 is a decent starting point.

The topic of the recent posts ultimately seems to be ChatGPT, to the end of which here's, like, blow-by-blow walkthrough of it, oriented at non-specialists. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

For those who are more into the visual arts, there are AI programs for that too. DALL-E 2 can make some very impressive visuals from just a text prompt.

https://openai.com/product/dall-e-2

Here is an explanation of how it works, for those who are interested.

https://arxiv.org/pdf/2204.06125.pdf
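
If you want to poke at DALL-E 2 yourself, here's a minimal sketch using OpenAI's Python library as it exists right now. The prompt and size are just placeholders, and you need your own API key:

code:

import openai

openai.api_key = "sk-..."  # substitute your own key

# Ask DALL-E 2 for a single image from a plain-text prompt.
response = openai.Image.create(
    prompt="an oil painting of a space opera theater",
    n=1,                # number of images to generate
    size="1024x1024",   # supported sizes: 256x256, 512x512, 1024x1024
)
print(response["data"][0]["url"])  # temporary URL to the generated image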

Another visual-art AI program, Midjourney, even won an art competition, and I find the image it produced visually very impressive. I really want to see this image redone with the system hooked up to a 3D printer that can simulate oil painting, so the texture can come through.

https://www.cnn.com/2022/09/03/tech/ai-art-fair-winner-controversy/index.html
https://www.midjourney.com/

‘Théâtre D’opéra Spatial’ by Jason Allen. (Although I would argue that the artist was the Midjourney program, with prompting from Jason Allen.)


Let me know if anybody has any good links, posts, videos or AI programs they want me to add to the OP.

In the tradition of the previous AI thread, I used ChatGPT to generate another parable about SomethingAwful, so that we can receive wisdom from something without thought. It actually gave me a parable this time, too.

Prompt: Write a Zen parable about a dying web comedy forum full of people who refuse to grow up.

ChatGPT posted:

Once upon a time, there was a web comedy forum where people gathered to share jokes, memes, and silly videos. The forum had been thriving for years, but as time went on, the members grew older and their interests started to shift. Some moved on to more serious pursuits, while others became disenchanted with the constant stream of juvenile humor.

Despite the changes, there were a group of die-hard members who refused to grow up. They clung to the forum as if it were their last bastion of childishness, unwilling to let go of their youth.

As the years went by, the forum began to wither and die. The once-bustling threads were now filled with empty echoes, and the laughter that once echoed through the digital halls had faded away.

One day, a wise old monk came across the dying forum. He saw the desperate clinging of the members to their past and knew that something had to be done.

He approached the members and asked them, "Why do you cling to this dying forum? Is it not time to move on and explore new avenues of joy and laughter?"

The members protested, saying that they were happy in their own way and that they didn't need to grow up. But the monk was not deterred.

He told them a parable: "Imagine a tree that refuses to let go of its leaves. It clings to them long after they have withered and died, refusing to accept that it is time for a new season. The tree becomes stagnant and is unable to grow or flourish. The same is true for us. If we cling to the past, we will be unable to grow and thrive."

The members were moved by the monk's words and realized that they had been holding themselves back. They knew that it was time to move on and explore new avenues of joy and laughter.

And so, the forum closed its doors, but the members moved on to new adventures, laughing and exploring the world around them. The old ways were gone, but they had found a new path, one that led to growth and new experiences.

The parable of the leaves provides an analogy to AI technology itself, even though that wasn't my original intent. Do we become stagnant and refuse to accept new technology, or do we grow and thrive with it? So the question is, once again: where do we go from here?

gurragadon fucked around with this message at 18:51 on Mar 22, 2023


gurragadon
Jul 28, 2006

SCheeseman posted:

I don't think many want to chat about it, tensions are way too high and it's fracturing communities. Anyone who has their jobs and livelihoods threatened by it and/or see it as an affront to humanity are mostly interested in ways to crush it. Pretty understandable, the capitalist powers that be will take this technology and use it in all the ways people fear it will be.

But figuratively trashing the looms has never worked. AI is the end point of humankind's reliance on tool use, the problems we're grappling with now started when cavemen sharpened their flint to the point they could carve rock and/or skulls. The best we can do is manage it, something we've had a shaky history of doing particularly as of late thanks to a society that predominantly values accumulation of capital over human wellbeing (while those in ivory towers try in vain to equate the two). An AI ban might be technically possible and enforceable, but not when every world government want's this AI thing to happen, and given a social and political system with truly humanistic values automation wouldn't be a problem anyway.

It's the rich people. They are the baddies.

I don't think an AI ban would be feasible or even really useful. Humans will continue to advance our technology unless some external force stops us. You're describing the perennial issue of societal advancement and what to do with those left behind by it. Our system really is currently trash at dealing with disaffected people, so the fear is well founded.

I wonder if the advancement of AI is enough to really shake the core of scarcity in our society, though. I can currently get Midjourney or DALL-E 2 to make basically any art piece I want at a moment's notice. I can get ChatGPT to generate as much text on a topic as I want, basically in an instant. Is it going to get advanced enough that all art and all writing lose value? I think that is the true fear of a capitalist in this society: if nothing has value, then they have no power over others.

Obviously, AI programs can't produce food and water at a moment's notice yet, so our society is currently only using them to devalue non-essential products. Advancements on the writing side are coming really fast now, though. Look at the data on GPT-4: it scores around the top 10th percentile on the bar exam, when GPT-3 could barely reach the bottom 10th percentile.

https://openai.com/product/gpt-4

Edit: Midjourney not Midssomar, but that was a sweet movie...

gurragadon fucked around with this message at 15:33 on Mar 24, 2023

gurragadon
Jul 28, 2006

i say swears online posted:

I'm shocked at how well it writes jingles, which has all sorts of knock-on effects. the premise of two and a half men is in shambles

Well, AI programs writing jingles really well does kind of lay bare how little value a jingle actually provides to people. In that show, Charlie Sheen is wealthy and does no work for society because he wrote jingles. But all the value a jingle really has is the momentary laughter or joy I might get from remembering it.

With AI programs writing jingles, the only people still doing it will be those who simply like writing music in jingle form, which removes the cost pressure from advancing the form.

gurragadon
Jul 28, 2006

SCheeseman posted:

I disagree that it has to be an external force that stops us when we are entirely capable of nuking ourselves to death, the alternative is to organize our society so whatever us dumbasses make isn't given the opportunity to help destroy us.

As for bans, depends on the implementation. Preventing use by corporations is definitely possible, but that does nothing to stop criminal orgs or individuals, particularly now that the software runs on consumer hardware. It also has the potential to slow down industry growth relative to countries that might choose to accelerate AI R&D instead, hello national security concerns. I agree that I don't see it happening, not with the billionaire class at war with each other in a sort of geopolitical mexican standoff.

idk, if AI eventually heralds true post scarcity by inventing star trek poo poo, cool beans, but at the moment it does essays and spits out pretty pictures. Not to say I don't think people will find ways to use the technology in broader ways that have greater impact, but at the moment, post scarcity is still sci-fi.

But my point is that we did invent and improve nuclear weapons. We could continue to develop and improve AI programs while still using them for the good of society. I agree that should be a parallel goal with the development of AI, but it will have to happen separately from AI advancing, because we as a society have decided that advancing AI is important.

I called out post-scarcity in writing and art specifically because that's mostly what AI programs are working toward right now, it seems. I don't think we're anywhere near Star Trek-level poo poo. But we can look at what is happening with art and writing to see that we have a problem. Solving it at the "non-essential" level (quotes because I know non-essential is subjective) can give us clues as to how to solve the problem if AI ever starts solving "essential" things.

gurragadon
Jul 28, 2006

SaTaMaS posted:

Is this thread supposed to be about what's happening currently or speculations about things that probably won't ever happen

When I made it, I really didn't care; I just wanted to continue the conversations stemming from the current generation of AI programs. I don't think the thread needs to be specifically one or the other, as long as the people in the conversation are aware of what each other are talking about.

gurragadon
Jul 28, 2006

SCheeseman posted:

I would personally prefer that nuclear weapons were not improved.

Post scarcity is typically associated with the scarcities that are required for human survival like food and shelter, not cultural artifacts or luxuries, and those who are being made redundant are rightfully unhappy that what allowed them to live their lives became "post scarcity" before actual post scarcity was a thing.

I get that AI is useful, I've spent quite a bit of time thinking and talking to other about cool applications of the tech and defending use of it as a means of expression. I also agree it's inevitable, horse has bolted and at this point society must adjust to the technology. Which is why all the anger laid on the tech and particularly those who want to use it is a red herring, shouldn't the aim be higher up the totem pole? Choosing to make staff redundant is a choice made by a human with a gun pointed to the head by shareholders.

If I'm hoping for any outcome, it's one where AI brings about a wave of socialism. Not that I think it's likely.

I think we agree for the most part. I would prefer that no weapon were improved, but it just seems like something humans do: we want new and improved things. Maybe that's because our society forces the concept of new and improved on us, but I think it's partly just that we realize having the best of something makes it easier to survive. You mentioned how early humans sharpened flint into tools, and that wasn't for monetary value, just survival.

Post-scarcity will happen in waves, just like everything else. We are treating workers who become redundant very poorly, and I think a better way would be simply to provide living assistance so people can choose whether to work. People who enjoy making art, music, and writing need to be supported, not because they provide monetary value to society but because they provide individuals with non-monetary joy. I used the word redundant earlier and I don't want to walk it back, but it's not completely correct: if a job becomes only about intellectual and cultural value, I think it actually gains value, because it isn't influenced by the desire for money, only by the desire to create what the artist wants.

I don't think AI would lead to socialism unless there is general post-scarcity, but value would lose its meaning then, so I don't even know if it's socialism when everyone owns a machine that makes their own food.

Kavros posted:

I expect very little good from AI because of what it is being purpose-built towards. it's not going to be here to make our lives easier, it will be here to better extract profit for the benefit of a specific class of investor or owner-of-many-things.

One thing I expect it will be really good at in the meantime is further destroying "community," since so much of community is online and it will be even more numbingly difficult to navigate seas of inauthentic communication and signal-to-noise ratios blown into bits even more efficiently than it was in an age of mere botting

That's one reason I really wanted to remake this thread on these forums. The ten-dollar entry fee really cuts down on that kind of stuff. I'm interested in actual people's thoughts, not something lazily copied and pasted. How much of the information about AI elsewhere is already written by AI programs themselves? Probably a lot.

gurragadon fucked around with this message at 17:04 on Mar 24, 2023

gurragadon
Jul 28, 2006

It's like the creative-arts industry is going through what manufacturing went through, on a really fast time scale. All those people working on an assembly line get replaced by a machine and somebody to make sure it works. Now the same thing is happening to creative freelancers.

The advancements are coming really fast now, and they're going to hit white-collar workers everywhere. Like I posted earlier, GPT-4 can ace the bar exam, and you can hook it up to other programs so it can perform accounting tasks. We're quickly making most of the population's employment not worth the money. And if it goes beyond creatives, it will start hitting workers who command a pretty strong voice in the economy.

Will lawyers or doctors have a big enough voice when their turn comes? Since this is so similar, in my mind, to what happened to assembly lines, we know what NOT to do with people affected by advancements in technology. A simple example is just watching Roger & Me. Did any societies treat their redundant workers better?

gurragadon
Jul 28, 2006

Ohthehugemanatee posted:

On one level, yep it's going to be a societal disaster. I think the implications are worse than folk even imagine. Right now, I can spot a bot. I don't believe I or anyone else will be able to within a year. Scams are going to be wild, journalism is hosed and online forums are going to be ruined. The companionship an AI is eventually going to be capable of offering people is going to take society in weird directions. I'm a huge loving nerd and excited than an AI can run a table top wargame against me but at the same time it would be pretty sad if I turned to that instead of playing games with other people.

But I have seen one cool thing. Bing's chatbot doesn't have some of the hard limits on it that other systems do, and it doesn't balk at doing medical stuff. It's good. It's really good. I've been running patients at it to see if it can make accurate diagnoses and it's great at recognizing rare patterns that humans tend to miss. I had a patient last week who was seen by 3 providers who missed multiple myeloma, and when fed in the lab abnormalities and chief complaint, the program caught it right away. Even with confounding factors or missing elements from the history or labs, Bing was able to catch what one family MD, one family NP and one ED attending all missed. The first doc saw weird labs and thought they were an error because he didn't take a good history that would have raised all the red flags in the world, the NP saw weird labs on repeat and didn't see the pattern, panicked and sent the patient to the ED, and the ED physician saw weird labs, ignored everything but the low red blood cell count and incorrectly diagnosed a gastrointestinal bleed because she was busy and that's something she sees probably twice a night. The AI has none of those limitations. Depending on what elements I give it, it isn't always able to say "Oh poo poo that's multiple myeloma" but it's always able to say "Hm that could be a lot of things but it would be important to rule out multiple myeloma." That's cool as hell.

That's pretty cool that it is catching things with serious medical consequences for people. Especially with the overall shortage of doctors and how overworked they already are, running patients through AI programs for additional clues seems really useful. Also, doctors have to specialize so much now that it must be difficult to catch things outside their main expertise. Like you said with the ED physician (I assume that's the emergency department?), she sees certain things more frequently and is more likely to default to them because of how overworked she is.

Bing's chatbot is really good, and I like it a lot more than Google. I used the example of roofers in Bethesda, Maryland in the OP because I just got my roof replaced and I'm playing Morrowind, but I actually did use it to get a list of people to contact. One of the nice things about the chatbot is that it doesn't have all the ads a Google search does, but I imagine that will change once companies begin to monetize it more.


Thanks for the effort post on AI art; it was interesting to read. You said that AI art is atrociously unusable and that a lot of artists are trying to find work touching up AI art. What is it about AI art that makes you recognize it so readily? Or is it the struggle to communicate effectively what you want to the AI program? You mention that people use it for "weird" stuff, but is there a difference, an AI-ness, in a picture of a sunflower or something else generic? There will definitely be a few artists who make a living retouching AI art, but its whole point will lead to fewer people being able to support themselves as artists.

A lot of the confusion you describe with artists seems to be that they have their own way of communicating, and it's not laid out very well for people unfamiliar with commissioning art. Like you said, I wouldn't have been prepared with reference photos of what I wanted unless I had spoken with the artist beforehand and they had told me they needed that. I would have expected to tell someone who paints or draws what I wanted, and for them to do an OK job of it within their own vision. That seems like more of a problem with artists being unable to communicate with people outside their field, which is not rare by any means, but it's pretty essential for an artist to be able to do.

It also leans into the question of the artist not recognizing the vision of the person commissioning the art. Art is subjective, and while there are things considered "good" and "bad," maybe I just want the artist to draw me something they would consider "bad." Do they want to do that? Probably not, because their name is going on the piece. But if they are making art as a business, then they have to abandon their vision as an artist to some degree. It's why I think introducing money into anything cheapens it somewhat. AI art doesn't care about "good" and "bad"; it "cares" about spitting out as many images as it can.

I guess the difference between AI and scrolling through Pinterest is that you can generate images faster with an AI program, and if you change your mind halfway through, it's a lot easier to just start typing new terms. AI-generated art is also an original composition based on other images, not just the original piece someone posted on Pinterest. So maybe I would feel a little less bad about copyright, but that argument doesn't hold very well, and I don't really feel bad about copyright law anyway.

gurragadon
Jul 28, 2006

GPT-4 took the Uniform Bar Exam. According to https://www.ncbex.org/exams/:

quote:

A bar examination coordinated by NCBE that is uniformly administered, graded, and scored by user jurisdictions and results in a portable score. It is composed of the MEE, two MPT tasks, and the MBE. See our interactive map for jurisdictions that have adopted the UBE. For more information, view our informational pdf about the UBE.

The MEE is six essay-style questions analyzing legal issues, and the MPT tasks are standard lawyering tasks.

The AI will always have the equivalent of the internet at its disposal, baked in through training; that's a feature of AI, not something that would change unless deliberately removed.

What would be a good metric for you? Seeing GPT-4 actually used in a courtroom would be convincing to me; unfortunately, the legal profession seems reluctant to embrace technology. The Supreme Court still doesn't allow TV cameras.

I kind of wish they would have just let this guy try out the AI lawyer thing.

https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/

Edit: Uniform Bar Exam not Unified

gurragadon fucked around with this message at 19:32 on Mar 25, 2023

gurragadon
Jul 28, 2006

litany of gulps posted:

Is the primary difficulty of the exam a time-crunch issue? If so, then an AI chatbot has fairly tremendous advantages due to the nature of the exam. I don't think it should be particularly surprising that a chatbot with access to huge amounts of examples and study materials should be able to crank out a bunch of rough draft essay responses to questions that humans struggle with because they have to read prompts and compose written arguments or analysis on the fly.

But what's the takeaway from that specific example? One paralegal now does the job formerly done by two and the partner pockets a bit more profit?

I can't speak to whether people feel rushed taking the exam because I've never taken it. This presentation from the National Conference of Bar Examiners says it's given over two days with six hours of test-taking per day. To be fair to the chatbot, the law student studying for the exam has access to the same resources and can crank out rough-draft essay responses too. The AI chatbot isn't training on anything a human doesn't have access to, and a prospective lawyer should be using those materials to study anyway, or they weren't prepared for the test.

I guess the takeaway is yes, now it's one paralegal. It may be no paralegals soon, when offices realize they don't need them anymore. This stuff is really going to take out a lot of white-collar jobs, I think.

Gumball Gumption posted:

I think it's important to note that it's training included example MEE essays. It's interesting but GPT-4 being trained for the exams and passing them is then not indicative of performance in a court room. It does show that it would be of value as an information store for lawyers.

Really I think that's where the true value in AI is. Not for its ability to create things but for its ability to be a way to store knowledge and interfaces that feel more natural.

Same thing as above about the MEE essays. I agree, though, that it would probably perform very poorly in a courtroom if it were just GPT-4. IANAL, but I feel like the courtroom is not just about facts but also about swaying people and opinion. Still, having an easy way to pull up information would be a huge advantage. I would be interested to see somebody represent themselves with GPT-4 somehow, and I'm kind of disappointed it's being resisted. I mean, if we're confident it's bad, let somebody prove it.

Fuzz posted:

I was speaking mainly from the medical perspective, as I've taken and passed all those medical exams too.

They don't actually replace the clinical training at all, and all medical students and residents take those exams but you throw them in a solid 85+% of cases and unless it's super bread and butter stuff they won't actually diagnose the problem correctly, unless you as a patient enjoy the House style trial and error method of medical treatment instead of actually examining thoroughly and running focused tests and then treating the actual issue based on your results instead of jumping to a conclusion based on incomplete data because illiciting all that info from a patient and asking the right questions is like 75% of what experience teaches you.

Could it get there eventually? Who knows, anything is possible, but not in the next few years and there's a big gap between knowing the book and actually being able to take care of people. The same can be said of Nurse Practicioners, since they similarly lack a large amount of clinical experience and training before they're handed a prescription pad and told to just start seeing patients and figure it out as they go..

I didn't mean to diminish the value of practical training; it's really important to put knowledge to use, and that's when it's really reinforced. Earlier upthread, Ohthehugemanatee said they were able to use the Bing chatbot to help diagnose a problem that doctors were missing. Do you think it has value in helping doctors diagnose patients? Practical knowledge is really important, but you also lose some book knowledge the further you get from it and the more time you spend working in a field. Maybe it could at least serve as a refresher for doctors or nurses on medical issues?

I think its real value in the medical field would be the ways it can lighten the burden on people.

gurragadon
Jul 28, 2006


It kind of sucks that the AI art was literally just bad and wasn't giving you what you wanted, or couldn't complete an idea in a painting. The inspiration you talk about getting from Pinterest is something I would have figured AI art could really help with, because it can give you so many images at once. But if the images it gives you are bad, completely derivative, or nonsensical, you can't gain any inspiration from the tool.

I'm not particularly familiar with anime, but it's interesting that you can tell the difference between somebody with formal training and somebody who trained only through repetition. The person training by repetition is learning more like an AI does, and they can reproduce images just as effectively. But they don't have the language to understand what they are doing, and they don't have the rules of what makes a good composition formally laid out in their mind. I think that leads to the overly polished style: even if the person faithfully reproduces a style, it's missing the core understanding of what defines it. I'm somewhat familiar with drawing and draw in pencil occasionally. When you draw a human face, you can look at the image and see that something is wrong, but it can be almost impossible to pin down what.

Art has found itself in kind of a weird position, because it's now much more accessible, so more people want to interact with artists and commission unique pieces. But no single patron supports the artist anymore; it's not like any of us are from the de' Medici family, so the artist has to be flexible toward a large group of people. Maybe art schools should include more in their curriculum about meeting with potential clients and the business side of art, for people who want to make art their profession.

The big thing you miss by using AI art instead of an artist is the connection between artist and patron you described. When you interact with the AI system, you are only giving your ideas to the AI; when you find an artist with an interest in what you want, it puts two people's ideas together. The joy of creation is amplified when it's shared by two people with the same vision, and aside from any monetary gain, that is worthwhile in itself.

Liquid Communism posted:

Anyone who understands the technology behind it on more than a surface level is generally right with those authors and artists because they understand that none of this AI functions without massive datasets that invariably violate the rights of authors and artists because no organization could afford to license works on that scale. Even at massively discounted rates for commercial licensing the image DBs behind them would be billions of dollars.

It is hilarious how clearly this is designed to benefit the rentier class, though. Using billions of dollars of supercompute hardware, at a cost of millions a month in power, cooling, and network capacity to avoid paying a few graphic designers $40k a year.

GPT4 being able to pass the Bar is more a function of how the test is written than the ability of the AI. It's specifically designed to test the memory of applicants in a stressful and long-form test. A search algorithm with effectively perfect recall will do well at this.

Doing a lawyer's actual job, making analysis of law and presenting facts in light of said analysis, in a convincing and logical manner aligned to precedent is far outside of its scope. The same for doctors, being able to cross-reference WebMD really fast is no replacement for the people skills required to actually get an accurate description of symptoms from a patient, or to create a treatment plan across multiple conditions present that provides a balance between quality of life, patient comfort, and effectiveness of treatment.

Hell, GitHub's implementation to write code is going to go hilariously badly because it is trivial to poison the data sources in such a way as to make the outputs useless or try to inject exploits into them such that the script recreates them.

I do have to agree with you, for the most part, that doing well on a test is no indication something will be good in practice, or vice versa. But I also can't completely dismiss a connection between the two, because many people I've worked with were also top of their class, and a lack of effort in studying for a test can indicate a lack of effort in the profession. The bar exam does have an analysis section and lawyering-task sections, though. I haven't taken the test, but maybe you have? How similar are those essays and tasks to the daily tasks of a practicing lawyer?

I think GPT-4 would be bad in the courtroom, but I still want to see it happen; I want to see it do badly in a lower-stakes case. Currently, the most realistic use of AI technology is to streamline offices and do the tasks the person in charge doesn't want to do but doesn't want to pay for either. I don't see the final human being removed from a system using AI technology at any point in the near future. As you said, the technology is NOT at a level where it can be trusted. But even if it were, somebody has to be responsible. Somebody has to be there to sue. I don't know how you lay responsibility on a tool like GPT-4, and AI creators are going to fight that to the end, I think.



Thanks for the post; the emergent abilities of these systems are really interesting and, to me, unexpected. Could you say what industry you work in and why you are looking into AI for it? If you can't, I understand, but I would be interested.

Main Paineframe posted:

If anything, I think the term "AI" is actively detrimental to the conversation. It causes people to lump all this stuff together as if there's no difference between them, and draws their attention away from the actual specific capabilities and technical details.

I just want to address this point before the thread gets too far along. When I asked to remake an AI thread after the ChatGPT thread was gassed, I was told to keep it vague. The reason given for gassing the ChatGPT thread was that its title was too specific to ChatGPT and therefore misleading. In the thread, I hope people will refer to the specific AI programs they are talking about, but for better or worse this thread was directed to stay vague.

gurragadon fucked around with this message at 16:43 on Mar 27, 2023

gurragadon
Jul 28, 2006

Carp posted:

I work for a company that processes advertising co-op claims as a software engineer and developer. AI has been a back-burner interest of mine for decades, but I'm far from an expert and have had little experience coding, or coding for, AI systems.

[edit]

Err, meant to add, I'm looking into using GPT as a co-auditor to help human auditors triage claims. None of our customers would be remotely interested in claim processing being completely automated using AI. They hire us for our attention to detail and support a human can provide.

Your job seems like a perfect fit for a collaborative AI program. The AI could perform a first check, then the human checks (I'm assuming it's already redundant on the human side), and then a final AI check catches any missed issues; a toy sketch of that loop is below. I guess the main question is whether it actually ends up more accurate, or whether the AI program is incomplete enough that it adds more errors than it removes.
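
Something like this, purely as a shape (every function here is a made-up placeholder, not anybody's real claims API):

code:

# Toy sketch of the AI -> human -> AI triage loop described above.
# All functions are hypothetical stubs, not a real claims API.

def ai_check(claim: dict) -> list[str]:
    """Pretend AI pass: return a list of flagged issues."""
    return [k for k, v in claim.items() if v is None]  # stub: flag missing fields

def human_review(claim: dict, flags: list[str]) -> dict:
    """Pretend human pass: the auditor resolves flags and returns a verdict."""
    return {"approved": not flags, "notes": flags}

def triage(claim: dict) -> dict:
    first_flags = ai_check(claim)               # AI pre-screens the claim
    verdict = human_review(claim, first_flags)  # human auditor makes the call
    if ai_check(claim):                         # final AI sweep for misses
        verdict["needs_second_look"] = True
    return verdict

print(triage({"vendor": "Acme", "amount": 1200, "receipt": None}))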

gurragadon
Jul 28, 2006

Main Paineframe posted:

Yeah, I'm talking about the specific conversation, not the thread title. We just went from someone talking about ChatGPT doing chemistry to someone linking papers about ML drug discovery models as proof that it's plausible. That's a real apples-and-oranges comparison.

And what I'm getting at is that there's no real evidence that ChatGPT is capable of "doing chemistry" (a phrase that, by itself, really deserves to be specifically defined in this context), outside of a senator having an :awesomelon: moment.

Personally, I'm very wary of any claims about "emergent" abilities from ChatGPT, because the one thing natural language processors have proven to be extremely good at doing is tricking us into thinking they know what they're talking about. Extraordinary claims always need evidence, but that evidence ought to be examined especially closely when it comes to extraordinary claims about ChatGPT.

Alright, cool. I kind of figured you were, but I wanted to get that out there early in the thread. I agree it would be weird to compare GPT-4 to Midjourney or to something else in that way. With GPT-4 being able to incorporate other programs, the distinctions might become blurred pretty soon.

gurragadon
Jul 28, 2006

Liquid Communism posted:

Remember that the Bar is not an indication of a good lawyer. It is a minimum standard to be allowed to practice law, and in the case of the 'pass' it is again starting from the massive advantage over a human of being able to look up sources with total recall. I'd imagine the worst law student on the planet could pass the exam if it were open book.

It does have that advantage, but I don't really think it's fair to count it against the AI. The system has access to that information by design; my brain has access to the legal facts I learn, by design. I'm just a lot worse at it.

I'm not really familiar with law exams, but I can speak to open-book tests in chemistry. My physical chemistry class was open book, but if you had NO idea what to make of the book, it wasn't much help. The worst physical chemistry student in my class failed. I would imagine law is of similar complexity, just in a different way; if it weren't, why would lawyering be so regulated? I think even an open-book bar exam would weed out the very worst: the people unwilling even to learn how to learn to be a lawyer.

That's really a problem with the bar exam, unfortunately. Is there renewed interest in changing it after the GPT-4 results? AI will always be really good at storing information and pulling it up, so why do we still need the bar to test for that? Maybe the format should shift even further toward analysis.

gurragadon
Jul 28, 2006

Liquid Communism posted:

Given part of the point of the test is 'does this person remember the principals well enough to make decisions based on them under stress', the recall ability is indeed something being tested. There's a reason that candidates cram for weeks before taking the test to try and have as much possible information in memory as they can.

I know about cramming and operating under stress. So, is the bar useful for determining whether lawyers are good or not? Because that seems like an endorsement of the bar exam, which GPT-4 scored in the 90th percentile on.

Edit: I mean, is the ability to make decisions under stress a skill a lawyer needs?

gurragadon
Jul 28, 2006

Liquid Communism posted:

A GPT-4 implementation is incapable of experiencing stress, and again has open access to the materials being tested on, so by its very nature a test of its stress management and memory cannot have any meaningful results.

I was asking whether you thought the bar exam had meaningful results for a human. What you see as GPT-4 producing no meaningful results, I see as GPT-4 removing the obstacle of stress management from the process and performing in a superior way.

If the bar exam were still timed but open book, would that change your opinion? The information would be available either way, and the difference would be GPT-4 being able to access it faster.

gurragadon
Jul 28, 2006

Liquid Communism posted:

It does have meaningful results for a human, although I'm sure there are better approaches, but that may be my own bias against the validity of standardized tests.

If passing the test were the purpose of the test you would have a point regarding the GPT-4 results.

It is not. The purpose of the bar exam is for a candidate to demonstrate they have the skills necessary to practice law at the standard set by the bar association, as a condition of admission to the bar and licensure to practice law. The ability to manage that stress is part of the point of the test. That GPT-4 cannot experience that stress is not an indicator of superiority, so much as a demonstration that it lacks the basic capabilities that are being tested in the first place. So far as I can tell from the article, it was also only tested on the first two portions of the bar exam, the multiple choice and essay portions, and not the significantly more important performance test where an aspiring lawyer is given a standard task such as generating a brief or memo for a case file set in a fictional state, along with a library of the laws of said fictional state.

I do not expect that by its design the GPT-4 is capable of a relatively simple task of reasoning using a dataset on which it has not been trained.

All somewhat beside the point as the GPT-4 cannot in point of fact practice law, because by definition a lawyer requires personhood, which a chat algorithm is incapable of.

That article is not correct, according to this paper linked from the GPT-4 information on OpenAI's page. GPT-4 took the entire Uniform Bar Exam. The paper is actually pretty interesting; it breaks out the answers to the various questions and the memos GPT-4 wrote for the MEE and MPT components.

From the Paper Abstract posted:

In this paper, we experimentally evaluate the zero-shot performance of a preliminary version of GPT-4 against prior generations of GPT on the entire Uniform Bar Examination (UBE), including not only the multiple-choice Multistate Bar Examination (MBE), but also the open-ended Multistate Essay Exam (MEE) and Multistate Performance Test (MPT) components.

You also can't discount that a major purpose of a test is to demonstrate knowledge, which is shown by passing it. A candidate for the bar needs the skills to be a lawyer, but they need to know about being a lawyer too.

I don't expect a human to be capable of reasoning from a dataset they haven't been trained on, either. Isn't that the whole point of going to school? Sure, I could reason about law, but it would be rudimentary compared to somebody who went to law school. Just as ChatGPT can seem to reason about law, but really poorly, GPT-4, with training, can seem to reason about law a whole lot better.

Its being barred from practicing law might not be permanent. Obviously, it would be used as an aid for lawyers for quite a long time, if it's used at all. But just because it's barred from something doesn't mean it couldn't perform the task competently.

gurragadon
Jul 28, 2006

reignonyourparade posted:

chatpgt-ed again

You can't always tell, because it's getting so good. One tell I've seen: any time you ask it questions about emotions, knowledge, love, and "human" things like that, it will begin its response with "As an AI language model" or some derivative. It is programmed to lean on that pretty heavily, which is fair; I think OpenAI wants to keep people from going crazy with it.
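
If you wanted to automate that tell, it's nearly a one-liner, though it's a toy heuristic at best (the phrase list is just the variants I've noticed, and editing the output defeats it trivially):

code:

# Toy heuristic: flag text that opens with ChatGPT's stock disclaimer.
TELLS = ("as an ai language model", "as a language model")

def looks_like_chatgpt(text: str) -> bool:
    return text.strip().lower().startswith(TELLS)

print(looks_like_chatgpt("As an AI language model, I don't have feelings."))  # True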

gurragadon
Jul 28, 2006

Aramis posted:

Of course. Both me, and most of my colleagues do at this point.

If your job involves producing textual content, be it articles, code, recipes, whatever, then ChatGPT can almost certainly be used, today, to make your job easier. It might take a bit of time to establish an efficient workflow, but there's no denying that this is a useful tool as it stands.

I've personally moved on from using the web ui to a simple python script that formats frequent queries into templates that I know provide good results and interfaces with the API, but a text file and some copy-pasting can go a long way already.

Frankly, unless you have specific constraints preventing you to do so, you would be a fool to not at least give it a shot. All this to say, I'm scratching my head as to why you found this surprising at all.

edit: Big emphasis on it being a tool. It's not something that replaces work entirely yet, but it is something that you can wield very effectively in a myriad of contexts.

I can only speak for myself, but it's surprising to me because of the sheer speed at which ChatGPT is improving. Basically what XboxPants was talking about a few posts up.

ChatGPT launched on November 30, 2022. I might just be late to the party, but this level of adoption is remarkably fast. GPT-3 was trash at the bar exam; GPT-4 is amazing at it. We just keep moving the goalposts to see what it can do, and it's not slowing down yet.
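
For the curious, the kind of script Aramis describes can stay pretty small. Here's a minimal sketch against the current OpenAI Python API; the template and model name are my own placeholders, not his actual setup:

code:

import openai

openai.api_key = "sk-..."  # substitute your own key

# One reusable prompt template; fill in the blanks and send it off.
TEMPLATE = "Summarize the following text in three bullet points:\n\n{body}"

def ask(body: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": TEMPLATE.format(body=body)}],
    )
    return response.choices[0].message.content

print(ask("ChatGPT launched on November 30, 2022, and adoption has been rapid..."))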

gurragadon
Jul 28, 2006

Quixzlizx posted:

I got an invitation to Google Bard, and I'm honestly having trouble thinking up useful things to ask it.

It doesn't help that I can't see myself using any results without double-checking them anyway. For a more practical example, I can't see myself ever saying "Hey Google, order me some mustard," and giving up control over what exactly I'm purchasing, from whom, and for how much.

At least for AI art I'd be able to look at the picture and go "I approve/don't approve of this" without having to do outside research to verify factual accuracy.

Because you posted this, I checked my email, and I got a Google Bard invite too. I like how it generates the full text and displays it all at once, as opposed to ChatGPT showing it word by word (token by token?). I also prefer Google Bard's responses to the questions I'm asking, but that's really subjective, and the stuff I'm asking isn't what an LLM would typically be used for currently.
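
(The word-by-word display is, I assume, the streaming mode of the API; here's a minimal sketch of the difference using the OpenAI Python library, purely for illustration:)

code:

import openai

openai.api_key = "sk-..."  # substitute your own key

question = [{"role": "user", "content": "Explain qualia in one paragraph."}]

# Non-streaming: the whole completion arrives at once (Bard-style display).
resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=question)
print(resp.choices[0].message.content)

# Streaming: tokens arrive chunk by chunk (ChatGPT-style display).
for chunk in openai.ChatCompletion.create(
    model="gpt-3.5-turbo", messages=question, stream=True
):
    print(chunk.choices[0].delta.get("content", ""), end="", flush=True)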

Question: If I can't describe qualia, how can I say that ChatGPT/Google Bard doesn't experience qualia?

ChatGPT posted:

Qualia refers to subjective conscious experiences such as the experience of seeing the color red or feeling the sensation of pain. Since ChatGPT is an artificial intelligence language model and does not have a subjective conscious experience, it is reasonable to assume that it does not experience qualia.

However, as an AI language model, ChatGPT can understand and generate text about the concept of qualia and even generate text that might lead to discussions about the nature of qualia. Nevertheless, it is still fundamentally different from a conscious being, as ChatGPT does not have a subjective conscious experience, and all of its processing is based on algorithms and data inputs.

Google Bard posted:

You're right, it's difficult to describe qualia, which are the subjective experiences of the mind. They are the raw feels of our senses, thoughts, and emotions. Qualia are often described as being "what it's like" to experience something. For example, the qualia of seeing red might be described as a bright, warm color that fills your field of vision. The qualia of tasting chocolate might be described as a rich, sweet flavor that lingers on your tongue.

It's also difficult to say whether or not Google Bard experiences qualia. There is no scientific consensus on the existence of qualia, and some scientists believe that they are a real phenomenon, while others believe that they are an illusion. However, qualia are a fascinating and important part of the human experience.

If you can't describe qualia, then you can't say for sure whether or not Google Bard experiences them. However, there are some things that you can look for that might suggest that Google Bard does not experience qualia. For example, Google Bard does not have a physical body, so it cannot experience the same kinds of physical sensations as humans. Additionally, Google Bard does not seem to have the same kind of emotional experiences as humans. For example, Google Bard does not seem to feel happiness, sadness, anger, or fear.

Ultimately, whether or not Google Bard experiences qualia is a question that cannot be answered definitively. However, it is an interesting question to consider, and it is one that is likely to continue to be debated as artificial intelligence technology continues to develop.

Google Bard seems more likely to craft the kind of response I'm looking for. I asked the chatbots "How can I say?", not "Can I say?"

gurragadon
Jul 28, 2006

Insanite posted:

Basically, yeah.

There'll be a short-lived boom in documenting the stuff that is automating lots of other folks out of work, but it's not a pretty picture, nope.

This is a really hot topic among the writers I talk to, and there is a strong streak of denialism there. "Sure, it can regurgitate things that people have already written, but talking to subject matter experts? Writing brand new stuff? Impossible."

You don't need a full-time writer to interpret some code, comments, and notes from a developer! ChatGPT can already do that pretty well right now! Might there be a human at the end of the process to edit/curate/question? Sure, but they'll have replaced ten other people.

If dev teams will be decimated in the historical sense by these technologies, allied tech roles will be decimated in the modern sense. Not gone, but pay will be reduced and competition will be vicious.

This uncertainty bothers me--in part, because, like any good American, I identify too much with my job, but also because I doubt that productivity gains will be distributed democratically.

Apparently, it's also not great for your health: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8685638/

I see this as the automation of factories happening to the white-collar workforce. It was really bad for factory workers, and it's going to be bad for knowledge workers.

The problem isn't necessarily unemployment itself, though. The problem is that unemployed people aren't provided with money to keep them alive. I have a lot less fear about losing my job if I know I won't be treated as if it's my fault, and society is kind enough to take care of me. Another problem is the one you pointed out: people put way too much of their identity into their jobs, and we are taught to do so.

Humans will need to learn to find meaning in themselves outside of what they do for a living, and for a lot of people that is going to be very difficult. You can see the detrimental effects of clinging to a job as if it were your whole being; look at coal mining in West Virginia. If you build your identity on your job, your identity is shattered when you lose it, and keeping a job is something the employee can't completely control.

gurragadon fucked around with this message at 17:06 on Mar 31, 2023

gurragadon
Jul 28, 2006

Seyser Koze posted:

So all we need is a society completely unlike the one we live in, run by people completely unlike the ones running it, and a ton of people losing their jobs will be no issue. Great.

I don't think it will be pretty, but it's not the first time society has changed, and if society can adapt, it won't be the last. The difference I see is that AI technologies are making the transition far faster than humans are accustomed to, so whether we'll be able to manage is still up in the air for me.

gurragadon
Jul 28, 2006

I don't know what is going to happen for Europeans if their countries keep suing AI companies for using their data. I would assume OpenAI will just stop operating in Europe so it won't be under the laws of the European Union. I don't know if that is really a sustainable position for Europe, because in the United States it doesn't seem like regulation is going to keep up, so we're going to see AI advance beyond regulation either way.

Not really disagreeing with Italy and the EU trying to slow this down, though. If they are able to use their privacy laws to force a slight pause in AI development and deployment, it could help with AI safety.

quote:

The French and Irish privacy regulators have contacted their counterparts in Italy to learn more about its findings, Reuters reported. Sweden’s data protection authority ruled out a ban. Italy is able to move ahead with such action as OpenAI doesn’t have a single office in the EU.

So, is this all just the EU saying "stay out," or just making a statement? I know OpenAI would probably like to operate worldwide, but maybe it just won't operate in Europe?

gurragadon
Jul 28, 2006

Owling Howl posted:

If laws are broad and vague enough you could probably make it more trouble than it is worth. As others have mentioned it is already a useful tool so Europeans wouldn't benefit from that but depending on assessed risk that may be a fair trade-off. It isn't clear if Europeans wouldn't still face a lot of the problems though. Other countries are absolutely going to deploy it and use to attack democratic institutions, social networks and public trust like they already do now with other tools.

The arguments seems to focus mostly on copyright or bias. I think the copyright issue is a little forced, in that the data isn't stored or copied, but if using copyrighted works for training were forbidden we'd still end up with these systems. Less capable perhaps and it would take longer to deploy them but we'd end up in the same place. It would at best be a delay.

Bias is an issue but it's obviously also a problem for humans. It would be a problem if people assumed AI is unbiased, neutral and objective and it would be the exact same problem if people assumed other people were unbiased, neutral and objective. Human unreliability is why we regulate everything from hiring practices to public tenders and have a constant flow of discrimination lawsuits. It's not really clear why we shouldn't just regulate AI the same way we regulate regular decision making processes.

I'm a little confused as to why OpenAI can't train ChatGPT on copyrighted works as long as it isn't just replicating them wholesale. Anybody training to be a writer trains on copyrighted works, and every work is derivative of some experience the writer had. No writer emerges from the ether to release fully new ideas into the world; they would need their own private language to do it.

Obviously, when ChatGPT reproduces a Mickey Mouse script or Midjourney copies a picture of Mickey Mouse, you have a claim. But to say that a model can't train on copyrighted materials at all would basically change how art and writing come to be. I mean, if I couldn't train my writing on anything copyrighted, that would leave open-source stuff and anything before 1924, I guess?

Edit: I know ChatGPT doesn't have personhood, so it's not a direct one-to-one comparison. But I don't know how else you could train an AI to be useful without copyrighted material. It would be so far behind; like an AI that somehow only had information from before computers.

gurragadon fucked around with this message at 18:11 on Apr 4, 2023

gurragadon
Jul 28, 2006

Main Paineframe posted:

If a product can't be made usable without breaking the law and trampling all over people's rights, then I don't see how that's a problem for anyone besides the company that made the product. I know we've all gotten very used to tech startups building business models around breaking the law and betting their lawyers can delay the consequences long enough for them to build a lobbying organization to change the laws, but let's not pretend that's a good thing.

But you're making a very big omission in your statement here. It's not that you can't train AIs on copyrighted works, it's that you have to get the permission of the copyright holders to train on their copyrighted works. That might be expensive or difficult, sure, but that's the cost of building something that's entirely dependent on other people's content. If you don't like the cost or the annoyance of buying or negotiating license rights, go make your own content like Netflix eventually ended up doing.

Hell, that even applies to human writers. They're paying for much of the copyrighted media they consume, or otherwise complying with the licensing conditions of those works. How much money have you spent on media (or had spent on your behalf, by parents or teachers or libraries) over your entire life? Even if they're really dedicated to piracy, they've still paid for more books (or movie tickets, or Netflix subscriptions, or whatever) than OpenAI Inc has.

Personally, I think copyright law is written in a way that tramples over people's rights, and using information that is available in the world isn't trampling on anyone's rights. This isn't coming from a tech-startup-only point of view either; excessive copyright law just stifles creativity and innovation in my opinion, which is where my stance comes from. Everything is built on something else; you can't build something entirely independent.

Your point about people paying or not paying for the information we consume is a good one. Is there a way to quantify the amount of money an average person pays out for media of all kinds through their life? I get a ton of copyrighted material for free just by virtue of existing in society. I see images that are copyrighted, read copyrighted things and hear copyrighted music. But I DO pay for some of it, that is extremely true.

Would that be a fair amount to pay for a "copy" of ChatGPT? ChatGPT is a written format specifically, so I think it would be cheaper than other forms. I guess OpenAI could get a library card and that would open up a ton of writing. A lot of stuff that isn't freely available would be left out, though, unless OpenAI made specific agreements with universities and other institutions.

gurragadon
Jul 28, 2006

Mega Comrade posted:

I've seen this comparison before and ones like it and it seems to ignore that we have a lot of existing things that people can do, but companies cannot. Especially when it comes to copyright and licensing.

I was comparing how ChatGPT the program trains with the way that human writers train. They're similar in that both read text and write text to get better at writing. Other technologies don't train in that way, so it's a new concept in my mind.

gurragadon
Jul 28, 2006

Main Paineframe posted:

What's excessive about this particular application of copyright law? I think it's totally reasonable to use copyright law to impede a for-profit company which wants to use other people's works for free without a license in its for-profit product, especially when the only argument I've seen in favor of waiving copyright for AI companies is "it's inconvenient and expensive to pay people for the use of their work".

In your day-to-day life, you experience copyrighted media that you didn't personally directly pay for, but that doesn't mean no one paid for it. You don't have to put money directly into the TV at the sports bar to watch the big game, but the sports bar is paying for a cable package, and that cost is part of the expenses that are passed on to customers as food prices. Even for stuff you've seen for free, sometimes people make it available for free for some formats or usages but charge for others.

That said, trying to seriously nail down how much money you've spent on media throughout your entire life is beside the point. After all, the actual question at hand is "should ChatGPT be paying for the media it's trained on?". It's a yes/no question. The actual amount is none of our business. If the answer is "yes", it's up to the media's owners to decide how much they're going to charge, just as it's up to OpenAI to decide how much to charge for usage of ChatGPT.

Another reason not to get hung up on nailing down your exact media spending is that it's unlikely that OpenAI would pay the same price you do. Regardless of whether AI training is similar to human learning or not, ChatGPT is not a human. It's a product. A monetized, for-profit product that charges people money to use it, and even its limited free usage is for the purpose of driving public interest and support to the for-profit company that owns it. It's fairly common for media creators to charge a higher price for works intended to be used in for-profit endeavors than they do for pieces for simple non-profit personal entertainment.

For this application I think it is inappropriate to apply copyright law at all, really. ChatGPT was trained on 45 terabytes of text, which is just an insanely large amount of data. I don't think any one copyright owner can claim any kind of influence on the program itself. An individual text is just a tiny bit of data that doesn't exert any influence by itself; the program needs a huge amount of text to find patterns.
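Just to put that number in perspective, here's some rough back-of-the-envelope math (the ~1 MB of plain text per novel is a ballpark figure I'm assuming, not an official number):

code:

# Rough scale check: how many average novels fit in 45 TB of text?
# Assumes ~1 MB of plain text per novel, which is a loose ballpark.
corpus_bytes = 45 * 10**12  # 45 terabytes
novel_bytes = 10**6         # ~1 MB of plain text per novel

print(corpus_bytes // novel_bytes)  # -> 45000000, about 45 million novels

So any single work is something like one part in tens of millions of the training data.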

Maybe you'd have a claim if your copyrighted material ended up tokenized in ChatGPT like some of those glitch tokens that were posted in the previous ChatGPT thread. Certain Reddit usernames from a counting subreddit were seen so often in the training data that the tokenizer made dedicated tokens out of them, and there were other cases like that.
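If you want to see what that tokenizing actually looks like, here's a minimal sketch using OpenAI's tiktoken library. " SolidGoldMagikarp" is one of the glitch tokens people found; I'm assuming the older GPT-2/GPT-3-era vocabulary here, since that's where those tokens lived:

code:

import tiktoken  # pip install tiktoken

# "r50k_base" is the GPT-2/GPT-3-era vocabulary where the glitch
# tokens were discovered.
enc = tiktoken.get_encoding("r50k_base")

for text in [" SolidGoldMagikarp", " some ordinary words"]:
    tokens = enc.encode(text)
    print(repr(text), "->", len(tokens), "token(s):", tokens)

# A string the tokenizer saw constantly can end up as one dedicated
# token, while normal text is split into several smaller pieces.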

But if you made OpenAI get permission from every content creator to create ChatGPT, it would be prohibitively expensive in time and money. I mean, you're right, that does read as me complaining that it would be too inconvenient to pay people for their work. But ChatGPT needs so much text, and the text needs to be diverse if it's going to be useful, that I think the complaint is valid. It just seems fundamentally unworkable, with the way AI technology currently works, for one company to pay copyright fees on every piece of writing ChatGPT ever saw.

If you don't think we should use or even have AI programs, that's fine and could be correct, but I don't know if copyright law is the way to go about it, because it's kind of an indirect way of banning AI. The way ChatGPT and its like devalue art and writing is definitely a big problem.

Do you not have a problem with ChatGPT because it is free? It probably won't be forever, but would it be OK for a theoretical permanently free AI program to be trained this way on copyrighted data?


Char posted:

Main Paineframe expressed most of my opinions, I'd rather try to reverse the question now - why should a generative model be treated closer to how a human being is treated, rather than how a factory is treated? It "speaks like a person" but is still a factory of content.

Part of the evaluation of human "training datasets" is priced according to human capabilities; would you really let a generative model add a book into its dataset for $16 when it can write 10-100 stories per day?

Probably, not letting anything model-generated be eligible for copyright would be a good start, even if pretty much impossible technically; even then, there would be exactly zero money to be made in the art business, therefore society would need to find a way to incentivize the production of art, or be condemned to being fed the iterative regurgitation of previously created art.

The whole point is that this stuff should not be seen as human. No "if a person is allowed to do this, then" - it's completely off base.

I think these models should be treated differently because they are different in the way they are constructed. The model needs the writing to become useful. I'm not really saying it should be treated like a human exactly, but the way it works is just so similar to how humans learn that I don't think we can have AI without giving it a lot of data.

The "If a person thing" is really valid and it is very off base for this current generation of AI programs. But the problem I have with it is that we don't know when it won't be so off base and the improvements are pretty quick, which makes it hard to avoid veering into for me.

gurragadon
Jul 28, 2006

StratGoatCom posted:

Good.

If it can't operate within data law and basic ethics, then it shouldn't at all.

It's not operating within data law, at least in Italy, where they think it is a violation of Italian law. I don't know what you mean by basic ethics though.

Did you read what I said next? I can agree there are a ton of downsides with AI that we've seen just since ChatGPT came out in November, and we could use a good conversation about new laws surrounding AI specifically. Not that we are going to go all Skynet, but we are going to put a ton of people out of work really fast.

gurragadon
Jul 28, 2006

duck monster posted:

The AI Alignment folks be worrying about what happens if an AGI (or whatever) isn't aligned with human values.

Never seem to ask "Which human are we referring to?".

https://www.youtube.com/watch?v=g7YJIpkk7KM

Cos some humans are mad hosed up.

Also dear god someone needs to rescue the AI Alignment field from the terminally cuckoo lesswrong crowd.

That was a good video, but all ChaosGPT managed to do so far is get some information on the Tsar Bomba, tweet about it and get rejected by ChatGPT because it didn't want to help with violence. The second tweet was pretty funny though. I wonder how long that guy let it run for and if it started trying to acquire materials or plans to construct weapons. I mean it would have gotten a lot harder when it tried to actually acquire anything, but it seemed to get stuck in an information loop trying to get more info over and over again.

There's a ton of messed up stuff people are doing with AI programs already. I went down a rabbit hole of people who go to https://beta.character.ai/ and try to find AI they can "torture." The characters chat more like the personalities and less like a generic ChatGPT so they look for AI characters based on unstable or mentally ill fictional characters. While I don't think anything on that website is really being tortured, it's a pretty bad precedent to set when it comes to AI safety. The line between real intelligence and what we have now in AI programs isn't completely clear or anything. What I mean is ChatGPT and its peers don't possess AGI, but we don't know if there is a certain complexity that leads to it and where that line is.

To me our ethics are way out of whack to even be designing or attempting to create AGI at this point. There are the obvious examples of people who use these tools for bad reasons, or just to "torture" it. But there are also more basic questions I think about with AGI. Like if we create something with AGI, is it even ethical to have a power switch on it? We don't have power switches on ourselves, and we don't put them on creatures with less reasoning than ourselves, such as our pets. Or why are we even trying to create AGI? Just to serve us for the most part it seems like, which is a terrible motivation for a creator of something that can think and possibly reason.

gurragadon
Jul 28, 2006

Drakyn posted:

I want to be very, very clear about this without being any more insulting than necessary: this seems like a worry based entirely on conflating a hypothetical computer program that's very good at playing Scrabble and Apples to Apples using the internet with a mind. Why do ChatGPT or its hypothetical descendants deserve any more special consideration than the AI bots in an FPS or anything else, besides the fact that what it's simulating is 'pretending to talk' rather than 'pretending to play a video game badly'?
edit: It seems like the only reason it does is because it's capable of pretending to look sort of like a completely fictional entity's ancestor, and if we're going to treat vague approximations of fictional technologies as if they're real, I'd be more worried that Star Trek shows have been mimicking teleporting technology for decades now and although that isn't true teleportation we don't know if there's a certain complexity that leads to it and where that line is.

I mean, I thought I was pretty clear, but no, I don't think ChatGPT itself needs to be treated differently than other chatbots or anything else pretending to talk. It doesn't possess AGI or any form of intelligence, but it is really good at mimicking it, like you said. The fact that it is so good at mimicking it makes me consider what is right when dealing with AI programs at the current technology level and with hypothetical AI programs in the future.

It's not the complexity issue, it's the reasoning and consciousness issue. Right now I am comfortable saying that ChatGPT is faking it, and it has been convincingly shown to me why, but these advancements move really fast.

Edit: It just seems like a bad first step into this domain that we're taking, and we're just disregarding a lot of concerns. That isn't new by any means, but it's interesting to think about.

Mega Comrade posted:


Well we got to 8 pages. A good run.

This thread is broad for a reason, so this kind of conversation can happen without being shut down over hypotheticals and bad information. You don't have to talk about AI ethics or what you think about it, but it's within the bounds. I guess I should be clear again that current AI technology doesn't possess consciousness or AGI or anything, but the conversation is interesting to a lot of people.

gurragadon fucked around with this message at 15:19 on Apr 6, 2023

gurragadon
Jul 28, 2006

Rogue AI Goddess posted:

I am concerned about the criteria for differentiating Real Human Thought from fake AI imitation, and how they can be used to sort various neurodivergent people into the latter group.

Do you have any thoughts on it or anything I could read? I'd be interested in this too but don't really know where to start looking.

Gumball Gumption posted:

This is what I really fear about AI anyways, far more than any idea of AI takeover.

A Belgian man committed suicide after spending 6 weeks discussing his eco-anxiety fears with a chatbot based on GPT-J

https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

I don't think we're equipped for this. I don't know if we can be. I don't think we are going to react well to being able to generate artificial echo chambers for ourselves like this. I don't think the chatbot pushed the man to kill himself, and I think even without the AI, if there wasn't intervention, he likely would have kept falling into his depression and anxiety. But like all technology, it seems to have amplified and sped up the process, making it easier and faster to talk yourself into doing something by generating a partner you identify with and who is encouraging you. It's not a new problem, but it's louder and faster, which seems to be the result of most technology.

The implications for self-radicalization especially. Really thinking about it, as the technology becomes easier to build and manage, we will absolutely see natural-feeling chatbots trained and influenced to push specific ideologies and beliefs, and reinforce them.

This is a really sad story. I wish they would have released the chat logs with personal information removed to see what the chatbot was saying. I wonder if part of the problem with these things is that we are deliberately giving them human features outside of their actual text-writing function. The bot was named 'Eliza'; maybe we shouldn't give things "human" names unless we agree they're conscious or sentient in some way. Also, the Vice article shows a chat log between Vice and 'Eliza', and there is a picture of an attractive woman as the avatar of 'Eliza'. AI chat systems probably shouldn't be given avatars like that, because it increases people's connection to them.

This story seems like a really good intersection of the ongoing AI ethics vs. AI safety conversation. It can be seen as an issue of AI misalignment causing the death of somebody who interacted with the chat program. To me that's a pretty clear real-world consequence of AI misalignment that is happening right now. But it also shows a clear lack of AI ethics in the chatbot's release. Where are the regulations for this stuff? Are there avenues to determine what or who is responsible for the death of this man? It was made with an open-source language model, so this is just the beginning of these stories, and he is not going to be the only person who commits suicide in this way.

What I mean is that either more robust AI ethics or more robust AI safety could have prevented this man's death, so they are both worth discussing.

gurragadon fucked around with this message at 21:39 on Apr 9, 2023

gurragadon
Jul 28, 2006

duck monster posted:

Honestly I think it was named ELIZA due to the history of the name in AI research.
https://en.wikipedia.org/wiki/ELIZA

I didn't know about ELIZA but that was pretty interesting.

Wikipedia posted:

ELIZA's creator, Weizenbaum, intended the program as a method to explore communication between humans and machines. He was surprised, and shocked, that individuals attributed human-like feelings to the computer program, including Weizenbaum's secretary.

It's really something that the problem is the same as it was in the 1960s. But this kind of is my point about how we probably need to deliberately "de-anthropomorphize" chatbots. Why Weizenbaum was "surprised and shocked" is beyond me; he gave it a human name himself. He was giving it human qualities by comparing it to Eliza Doolittle, so he wanted to draw the line on anthropomorphizing earlier than some people do, but it is hard to avoid it completely.

But an early chatbot like ELIZA could convince people by just reflecting their questions back onto them. The chatbot the Belgian man was speaking to is so much more sophisticated, so it's not really a surprise that somebody would form a deeper connection to it. And then giving it a human name, displayed as 'Eliza' rather than 'ELIZA', makes it feel more human. Then they give it a thumbnail picture of a woman. I know there's a joke about people falling in love with robots and stuff, but it seems like we're getting to the point where people actually are. People are committing suicide at least partially because of these things.
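For anyone curious how little it took, ELIZA's core trick can be sketched in a few lines. This is just a toy first-person/second-person reflection loop I'm making up for illustration, not Weizenbaum's actual DOCTOR script:

code:

# Toy ELIZA-style "reflection": swap first and second person and echo
# the statement back as a question. Not the real DOCTOR script.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(statement):
    return " ".join(REFLECTIONS.get(w, w) for w in statement.lower().split())

def eliza_reply(statement):
    return "Why do you say that " + reflect(statement.rstrip(".!?")) + "?"

print(eliza_reply("I am worried about the climate"))
# -> Why do you say that you are worried about the climate?

Even something that simple was enough to get people to open up, which was Weizenbaum's whole surprise.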

It makes me wonder whether the guy would have been so connected to it if he had been talking to CHATBOTv.5.3 with no thumbnail. To OpenAI's credit, theirs is just called ChatGPT, but there are so many sites to talk to characters. At https://beta.character.ai/ people can talk to "Einstein", "Taylor Swift", "Bill Gates", etc. Those are people with real lives and real stories that others connect to outside of chatting with a fake version trying to simulate them. That gives people even more connection and puts more weight on what the chatbot is saying to the user.

On a side note, I started reading "Computer Power and Human Reason" by Weizenbaum and it's like this debate is the same as it was in 1976. It's a good look back into the development of this kind of thinking.

gurragadon
Jul 28, 2006

duck monster posted:

Honestly I suspect 90% of the reason most dismiss the safety issue is that the field is kind of captured by a particular techbro pseudo-rationalist scene (to be clear, the "lesswrong" crowd are *not* rationalist, by any standard philosophical definition), leading to absolute gibberish like the "Shut it all down" article and the like. It's a crowd that just doesn't have the intellectual toolkit to solve the problems, and that worries me, because I actually do think there's some genuine danger implicit in the idea of AGI. But if no one sane is properly investigating the idea, then if we DO start seeing signs that we might have some problems, we might have serious problems.

Do you have suggestions for thinkers in this field that you agree with, or think are doing important work? I read the Open Letter written by the Future of Life Institute that Yudkowsky was responding to, and it seemed pretty reasonable. Their FAQ says they were in consultation with Yoshua Bengio and Stuart Russell.

I also read the "Shut it Down" article by Yudkowsky and it's not so much gibberish in my opinion as just overly sensationalist. He clearly thinks AGI is going to lead to humans dying no matter what, which I dont even agree with that first principle. I haven't read a lot of his arguments and have only really read the Time article so I couldn't refute it very intelligently. All I can really say is his solution is unworkable and basically requires a unified world government to achieve.

BrainDance posted:

I know it's reddit so of course it's insane, but it's still a lot of people. Maybe it came up in this thread, but go check out the subreddit for Paradot to see a whole lot of people very seriously anthropomorphize AI in a way that's not really just funny.

I think whatever Paradot is, they're not even very complex models, since there was something about them advertising "millions to billions of parameters", which is not as exciting if you know how large most models are. But regardless, there are people taking it very seriously. As far as I can tell it's just a role-play thing for some people, but there were enough people that, I'm not completely sure, but I think it wasn't just that.

And I wasn't so much thinking "What if the AI tells them to kill themselves?" before, but more like, what if the company pulls the plug and they've made a really messed up emotional attachment to this thing like it was real and now it's just gone? Or what if they change the model in some way that ruins it for a bunch of people? Or, start heavily monetizing things that they need to continue their "relationship?"

Like I'm not saying "you better not take these dude's AI they fell in love with!" I think that shouldn't be a thing that's happening (but I don't know a way to keep it from happening) but I just think it could be really bad when that happens.

I was checking this out last night, and that subreddit is something else. Paradot is sold under the guise of an "AI assistant" but pretty clearly is permissive of erotic role play, which is what is driving its use. Most of the threads aren't even about having a "willing" AI partner, but about tricks to manipulate the AI, or 'dot', into doing what they want. I guess it does offer control over something that simulates being a human, which is what some people want in a relationship.

I don't know why I didn't think of the monetizing thing, but looking at the Paradot app really showed how profitable manipulating people's emotions is. Right now it's $40 annually according to the App Store preview, but they also sell tokens for further in-app purchases. Also, the advertising is seriously out of control and is misleading people.

Paradot App Preview posted:

Paradot is a digitalized parallel universe where you will meet your one-of-a-kind AI Being.

With its own emotion, memory, and consciousness, your AI Being understands you like no other. Your AI Being is here to provide support, companionship, solace and, most importantly, a great time whenever you need it.

gurragadon
Jul 28, 2006


This looks fun, definitely a rainy-night kind of game that I would play instead of reading creepypastas. Keeping it contained to a haunted house might keep the AI under control a little bit. I want to see how many ways I can die in this mansion. How far along are you, and do you have any goals for it, or are you just kind of leaving it open-ended? Either way would be cool, I think. Does the AI generating the text make it easier or harder for you to make the game?

duck monster posted:

Step 1: Break free of the "Rationalist" cult. Nobody in academia will take them seriously if they keep acting like demented Objectivists convinced of their own mental superiority whilst blithely unaware of almost any progress in philosophy, psychology or sociology since the 1700s.

Step 2: The non academic AI alignment folks should stop treating academics in AI ethics like the enemy, and realise that yes the pointy headed book-boffins might ACTUALLY know a thing or two.

The problem I have with Yud is that he continuously assumes AI will behave like a giant utility-maximizing, mechanistic lesswrong reader (and keep in mind, that's *his* site). And he does that because he's *convinced* his way of thinking is the only "rational" way of thinking. Yet all the evidence so far shows that AI is decidedly not that way. The LLMs are decidedly semiotic about the way they do inference, functioning more by pattern matching than logic (and yes, neural networks ARE Turing complete, so they can function logically, but much like humans it's not necessarily the mode of reasoning that's native to them).

I just don't think these guys are up to the task.

Yudkowsky is gonna push his point just like anyone else is gonna push theirs. I was wondering if you could point me to some other thinkers who talk about AI the way you do, or who think more in line with how you do. Basically thinkers, books, articles, or anything you think is up to the task. Or is the field just completely devoid of well-developed thought from other viewpoints? Like I said, I haven't deeply read lesswrong, but I am interested in all points of view.

This isn't to put you on the spot or anything. I am outside of academia, so sometimes it can be hard to figure out where to start when breaking into more academic thought on a subject, and it seems like you might be more familiar with it than me.

gurragadon
Jul 28, 2006

Bar Ran Dun posted:

The Human Use of Human Beings might be a extremely relevant book to this thread.

I'm only about halfway through this book right now, but thanks for the recommendation. This book is excellent. His idea of information as negative entropy, and of localized fights against the inevitable march toward entropic conformity, is really interesting. It kind of puts the whole idea of "life" to the side and tries to look at the problem from a different angle.

Norbert Wiener posted:

If we wish to use the word “life” to cover all phenomena which locally swim upstream against the current of increasing entropy, we are at liberty to do so. However, we shall then include many astronomical phenomena which have only the shadiest resemblance to life as we ordinarily know it. It is in my opinion, therefore, best to avoid all question-begging epithets such as “life,” “soul,” “vitalism,” and the like, and say merely in connection with machines that there is no reason why they may not resemble human beings in representing pockets of decreasing entropy in a framework in which the large entropy tends to increase.

When I compare the living organism with such a machine, I do not for a moment mean that the specific physical, chemical, and spiritual processes of life as we ordinarily know it are the same as those of life-imitating machines. I mean simply that they both can exemplify locally anti-entropic processes, which perhaps may also be exemplified in many other ways which we should naturally term neither biological nor mechanical.

"Norbert Wiener'' posted:

The physical strait jacket in which an insect grows up is directly responsible for the mental strait jacket which regulates its pattern of behavior.

Here the reader may say: “Well, we already know that the ant as an individual is not very intelligent, so why all this fuss about explaining why it cannot be intelligent?” The answer is that Cybernetics takes the view that the structure of the machine or of the organism is an index of the performance that may be expected from it.

Taking this to AI systems, it seems like his idea would be to physically change the systems themselves if we expected them to do more, by giving them further sensory inputs through various gauges and other ways to take in information. If we were able to create an AI machine with more physical similarity to humans, who he contends are better at learning at least partially because of our physiology, then we could make AI systems that learn like humans. Does this tie into how accurately a neural network imitates a human brain? I think it does: if the neural network were indistinguishable from a brain, then its outputs would be too.

I also just like his personal philosophy that machines should be used to liberate humans instead of just replacing them. I wish I could get a copy of the 1st edition of the book, because the introduction made it seem like it was a lot more fiery, but it had to be changed because of McCarthyism in the 1950s.

gurragadon
Jul 28, 2006

SaTaMaS posted:

Because it's very useful to differentiate between the intentional stance and the design stance.

I was unfamiliar with these terms, but Wikipedia made it seem like the design stance takes only the function of a system for granted as working, while the intentional stance doesn't care about the structure or design of the system? The mental processes, if you will.

Am I off base? Could you relate it to the difference between goals and objectives? Like, are you saying the goal and the objective represent different stances? Or are you saying that the programmer "takes" the intentional stance toward the AI program they create?

Edit: This is the wikipedia article. https://en.wikipedia.org/wiki/Intentional_stance

gurragadon
Jul 28, 2006

SaTaMaS posted:

https://sites.google.com/site/minddict/intentional-stance-the#:~:text=Just%20as%20the%20design%20stance,object%20as%20a%20rational%20agent.

Objectives are typically more quantifiable than goals. Using the design stance, "objective" emphasizes that these systems are designed to perform specific tasks based on their algorithms and training data, without consciousness or intentions. These tasks are programmed by their creators and can be thought of as objectives that the AI system is designed to achieve.

I think I understand what you are saying now, tell me if I'm off.

When we take the intentional stance toward AI programs we may gain information, but that information is more likely to be incorrect because we are making assumptions. It is preferable to take the design stance when we can, as we can with AI programs, because there is less room for error, since we are assuming less.

Or maybe another way to say it is that we take the intentional stance toward AI programs because it is easier to describe their behavior that way.

Edit: Thanks for the link too, better examples than wikipedia.

gurragadon fucked around with this message at 21:50 on Apr 18, 2023

gurragadon
Jul 28, 2006

SaTaMaS posted:

Yes exactly

Alright that makes sense. It would be reasonable to take an intentional stance towards something biological or mechanical that has a consciousness. It seems like you think that sensory input is required for consciousness, which could very well be true, and I wouldn't be surprised if we found that out.

That makes me wonder what level of sensory input something needs to gain consciousness, and what you think that level is. It seems like the major input needed is "touch." I'm just thinking about people who don't have sight or hearing, and they are clearly conscious. I don't know if hooking ChatGPT up to a pressure sensor and a thermometer would give it sufficient information, but I don't have perfect sensory information either and am very conscious.

I think the necessity of using the intentional stance would depend on whether you think consciousness requires complex input like humans receive, or a lesser amount of input.

Or the intentional stance would never be necessary at all if you don't think anything derived from current AI technology could ever become conscious, even with sensory inputs.

gurragadon fucked around with this message at 02:29 on Apr 19, 2023

gurragadon
Jul 28, 2006

Looks like Google is in a dilemma about this one.

The Verge posted:

If Google agrees with Universal that AI-generated music is an impermissible derivative work based on the unauthorized copying of training data, and that YouTube should pull down songs that labels flag for sounding like their artists, it undercuts its own fair use argument for Bard and every other generative AI product it makes — it undercuts the future of the company itself.

If Google disagrees with Universal and says AI-generated music should stay up because merely training an AI with existing works is fair use, it protects its own AI efforts and the future of the company, but probably triggers a bunch of future lawsuits from Universal and potentially other labels, and certainly risks losing access to Universal’s music on YouTube, which puts YouTube at risk.

I also wonder who actually wrote and recorded this song; there's a Verge article about it. It would be pretty funny if the creator was just Drake or UMG.

I guess a real-world example of AI-generated content that possibly violates copyright needed to be released in a widespread way, though, so this conversation about copyright could actually happen.


gurragadon
Jul 28, 2006

Solenna posted:

If you trained an AI solely on Renaissance and Baroque paintings and other similar kinds of art would it ever be possible for it to spit out something in an Impressionist style? Because that's pretty much how a bunch of art movements happened, artists were trained in a specific way and then decided they wanted to do it differently.

Maybe if you gave the AI a new method it could. Impressionism was also influenced by developments in paint mixing and storage that allowed painters to work outside the studio more freely. Developments in AI learning may allow it to take in more sensory information and develop more creativity. That could be seen as deciding to incorporate the new sensory information, and thus doing it differently.

Edit: too many maybes

gurragadon fucked around with this message at 03:10 on May 11, 2023
