gurragadon
Jul 28, 2006

AI has become the hottest word in technology right now. The explosion of ChatGPT onto the scene has led to discussions in nearly every field about how AI will be applied: academics are worried about rampant cheating, media specialists are worried about disinformation flooding the media ecosystem, people are worried that their jobs will become obsolete, creative artists are finding it even more difficult to have success, and medical experts are wondering how AI can be used to enhance the patient experience. AI also touches on really important philosophical concepts, such as the nature of consciousness and how to define it, and the philosophical questions surrounding AI are worth talking about as well.

https://chat.openai.com/chat

I find myself going to ChatGPT on topics I have questions about, along with Google and the new Bing search that generates responses. The Bing search is especially nice because it provides references. However, the easy, chatty style of ChatGPT does make it more comfortable to use, and it simulates speaking with a person on the subject well enough that I find myself thanking it occasionally. The problem is that it is willing to be completely wrong about an issue, because it has no real concept of what it's talking about, so care must currently be taken to ensure the information ChatGPT gives is correct.

I asked Bing Chat to provide me with a list of roofers in Bethesda, Maryland, and it gave me this, which is a genuine attempt at the information I wanted. I didn't verify the data, but it's presented in a concise list with all the relevant information I would need to continue my search for a new roofer.

Bing Chat posted:

Here are some good roofers in Bethesda, Maryland that I found:

- PG Builders Inc: 7805 Old Georgetown Rd Ste 203, Bethesda, MD 20814. Phone number: (301) 346-1902. Website: https://www.pgbuilders.com/. Rating: 4/5 ¹

- Roofers of Arlington: 1201 S Courthouse Rd, Arlington, VA 22204. Phone number: (703) 475-2446. Website: https://www.roofersofarlington.com/bethesda-md.php. Rating: 5/5 ³

- Bethesda Roofing Services: 4710 Bethesda Ave, Bethesda, MD 20814. Phone number: (301) 200-1140. Website: https://bethesdaroofingservices.com/. Rating: 5/5 ⁴

I hope that helps. Let me know if you have any other questions.

Source: Conversation with Bing, 3/22/2023(1) . https://bing.com/search?q=good+roofers+bethesda+maryland Accessed 3/22/2023.
(2) Roofing Bethesda MD | Roof Bethesda, best roofers near you. https://www.roofersofarlington.com/bethesda-md.php Accessed 3/22/2023.
(3) 10 Best Roofers in Bethesda, MD 2023 - Porch. https://porch.com/bethesda-md/roofers/cp Accessed 3/22/2023.
(4) 453 Best Roofers Near Me in Bethesda , MD - GAF Roofing. https://www.gaf.com/en-us/roofing-contractors/MD/Bethesda Accessed 3/22/2023.
(5) Top 25 Roofers - Bethesda MD with Reviews - HomeAdvisor. https://www.homeadvisor.com/c.Roofing.Bethesda.MD.-12061.html Accessed 3/22/2023.
(6) Best 15 Roofers & Gutter Installation Companies in Bethesda, MD - Houzz. https://www.houzz.com/professionals/roofing-and-gutter/bethesda-md-us-probr0-bo~t_11819~r_4348599 Accessed 3/22/2023.

Whereas Google gave me half a page of ads until I got to the links that Bing is using as a source, and I still had to go to the source myself. The information is basically the same either way, but Bing Chat summarized it in a much more usable form, and it was less work than googling.

Google just opened a waitlist sign-up for its new chat service, Bard, on 3/21/2023. I'm interested in how the Google system responds to prompts differently than ChatGPT and Bing Chat.

https://bard.google.com/

Cinci Zoo Sniper posted a couple of links in the previous thread about how machine learning, and ChatGPT in particular, work. If you have the time, they are really interesting and worth a read, but be careful with the Wolfram article if you are a non-expert. He uses some terms and has some lofty ideas that can seem to push towards anthropomorphizing ChatGPT if you don't read carefully, or read with too much enthusiasm.

cinci zoo sniper posted:

Learning doesn't normally use evolutionary algorithms, no. 3.4 of https://dl.acm.org/doi/fullHtml/10.1145/3467477 is a decent starting point.

The topic of the recent posts ultimately seems to be ChatGPT, to the end of which here's, like, blow-by-blow walkthrough of it, oriented at non-specialists. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
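For anyone who wants the one-paragraph version before diving into the Wolfram piece: the core loop of a model like ChatGPT is just "score every candidate next token, turn the scores into probabilities, sample one, repeat." Here's a toy sketch of that loop in Python. This is my own illustration, not code from the article; the hand-written bigram table stands in for the neural network that computes the scores in a real model.

```python
import math
import random

# Hand-written bigram "logits": a score for each candidate next token.
# A real model computes these scores with a neural network over the
# whole preceding context, not just the previous word.
LOGITS = {
    "the": {"cat": 2.0, "dog": 1.5, "forum": 0.5},
    "cat": {"sat": 2.0, "ran": 1.0},
    "dog": {"ran": 2.0, "sat": 0.5},
    "sat": {"down": 2.0},
    "ran": {"away": 2.0},
}

def sample_next(token, temperature=1.0, rng=random):
    """Softmax the scores for `token`'s successors, then sample one."""
    logits = LOGITS[token]
    # Divide by temperature: low T sharpens the distribution, high T flattens it.
    scaled = {t: s / temperature for t, s in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {t: math.exp(s - m) for t, s in scaled.items()}
    total = sum(exps.values())
    r = rng.random()
    acc = 0.0
    for t, e in exps.items():
        acc += e / total
        if r < acc:
            return t
    return t  # guard against floating-point round-off

def generate(start, n=4, temperature=1.0):
    """Repeat the sampling step n times -- the whole 'generation' loop."""
    out = [start]
    for _ in range(n):
        if out[-1] not in LOGITS:
            break
        out.append(sample_next(out[-1], temperature))
    return out
```

At low temperature the sampler almost always picks the highest-scoring token; at higher temperature it gets more adventurous. That one knob accounts for a surprising amount of the "creative" versus "robotic" feel of the output.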

For those who are more into the visual arts there are AI programs for that too. DALL-E 2 can make some very impressive visuals with just a text prompt.

https://openai.com/product/dall-e-2

Here is an explanation of how it works for those who are interested in that.

https://arxiv.org/pdf/2204.06125.pdf

Another visual art AI program, Midjourney, even won an art competition, and I find the image it produced visually very impressive. I really want to see this image done with the system hooked up to a 3D printer that can simulate oil painting, so texture can be shown.

https://www.cnn.com/2022/09/03/tech/ai-art-fair-winner-controversy/index.html
https://www.midjourney.com/

‘Théâtre D’opéra Spatial’ by Jason Allen. (Although I would argue that the artist was the Midjourney program, with prompting from Jason Allen.)


Let me know if anybody has any good links, posts, videos or AI programs they want me to add to the OP.

In the tradition of the previous AI thread, I used ChatGPT to generate another parable about SomethingAwful so that we can receive wisdom from something without thought. It actually gave me a parable this time, too.

Prompt: Write a Zen parable about a dying web comedy forum full of people who refuse to grow up.

Chat GPT posted:

Once upon a time, there was a web comedy forum where people gathered to share jokes, memes, and silly videos. The forum had been thriving for years, but as time went on, the members grew older and their interests started to shift. Some moved on to more serious pursuits, while others became disenchanted with the constant stream of juvenile humor.

Despite the changes, there were a group of die-hard members who refused to grow up. They clung to the forum as if it were their last bastion of childishness, unwilling to let go of their youth.

As the years went by, the forum began to wither and die. The once-bustling threads were now filled with empty echoes, and the laughter that once echoed through the digital halls had faded away.

One day, a wise old monk came across the dying forum. He saw the desperate clinging of the members to their past and knew that something had to be done.

He approached the members and asked them, "Why do you cling to this dying forum? Is it not time to move on and explore new avenues of joy and laughter?"

The members protested, saying that they were happy in their own way and that they didn't need to grow up. But the monk was not deterred.

He told them a parable: "Imagine a tree that refuses to let go of its leaves. It clings to them long after they have withered and died, refusing to accept that it is time for a new season. The tree becomes stagnant and is unable to grow or flourish. The same is true for us. If we cling to the past, we will be unable to grow and thrive."

The members were moved by the monk's words and realized that they had been holding themselves back. They knew that it was time to move on and explore new avenues of joy and laughter.

And so, the forum closed its doors, but the members moved on to new adventures, laughing and exploring the world around them. The old ways were gone, but they had found a new path, one that led to growth and new experiences.

The parable of the leaves provides an analogy to AI technology itself, even though that wasn't my original intent. Do we become stagnant and refuse to accept new technology, or do we grow and thrive with it? So the question is, once again: where do we go from here?

gurragadon fucked around with this message at 18:51 on Mar 22, 2023


MixMasterMalaria
Jul 26, 2007
Nm

MixMasterMalaria fucked around with this message at 14:08 on Mar 23, 2023

SCheeseman
Apr 23, 2003

I don't think many want to chat about it; tensions are way too high and it's fracturing communities. Anyone whose job and livelihood is threatened by it, or who sees it as an affront to humanity, is mostly interested in ways to crush it. Pretty understandable: the capitalist powers that be will take this technology and use it in all the ways people fear it will be used.

But figuratively trashing the looms has never worked. AI is the end point of humankind's reliance on tool use; the problems we're grappling with now started when cavemen sharpened their flint to the point they could carve rock and/or skulls. The best we can do is manage it, something we've had a shaky history of doing, particularly as of late, thanks to a society that predominantly values accumulation of capital over human wellbeing (while those in ivory towers try in vain to equate the two). An AI ban might be technically possible and enforceable, but not when every world government wants this AI thing to happen, and given a social and political system with truly humanistic values, automation wouldn't be a problem anyway.

It's the rich people. They are the baddies.

SCheeseman fucked around with this message at 14:49 on Mar 24, 2023

gurragadon
Jul 28, 2006

SCheeseman posted:

I don't think many want to chat about it; tensions are way too high and it's fracturing communities. Anyone whose job and livelihood is threatened by it, or who sees it as an affront to humanity, is mostly interested in ways to crush it. Pretty understandable: the capitalist powers that be will take this technology and use it in all the ways people fear it will be used.

But figuratively trashing the looms has never worked. AI is the end point of humankind's reliance on tool use; the problems we're grappling with now started when cavemen sharpened their flint to the point they could carve rock and/or skulls. The best we can do is manage it, something we've had a shaky history of doing, particularly as of late, thanks to a society that predominantly values accumulation of capital over human wellbeing (while those in ivory towers try in vain to equate the two). An AI ban might be technically possible and enforceable, but not when every world government wants this AI thing to happen, and given a social and political system with truly humanistic values, automation wouldn't be a problem anyway.

It's the rich people. They are the baddies.

I don't think an AI ban would be feasible, or even really useful in any way. Humans will continue to advance our technology unless some external force stops it. You're describing the problem of societal advancement and what to do with those left behind, the losers of the advancement. Our system is currently trash at dealing with disaffected people, so the fear is well founded.

I wonder if the advancement of AI is enough to really shake the core of scarcity in our society, though. I can currently get Midjourney or DALL-E 2 to make basically any art piece I want at a moment's notice. I can get ChatGPT to generate as much text on a topic as I want, basically in an instant. Is it going to get advanced enough that all art and all writing loses value? I think that is the true fear of a capitalist in this society. If nothing has value, then they have no power over others.

Obviously, AI programs can't produce food and water at a moment's notice yet, so our society is only really using them to devalue non-essential products. Advancements on the writing side are coming really fast now, though. Look at the data on GPT-4: it scored around the top 10th percentile on the bar exam, where GPT-3.5 was down near the bottom 10th percentile.

https://openai.com/product/gpt-4

Edit: Midjourney, not Midsommar, but that was a sweet movie...

gurragadon fucked around with this message at 15:33 on Mar 24, 2023

i say swears online
Mar 4, 2005

I'm shocked at how well it writes jingles, which has all sorts of knock-on effects. the premise of two and a half men is in shambles

gurragadon
Jul 28, 2006

i say swears online posted:

I'm shocked at how well it writes jingles, which has all sorts of knock-on effects. the premise of two and a half men is in shambles

Well, AI programs writing jingles really well does kind of lay bare how little value a jingle actually provides to people. In that show, Charlie Sheen is wealthy and does no work for society because he wrote jingles. But all the value a jingle really has is the momentary laughter or joy I may get from remembering it.

With AI programs writing jingles the only people doing it will be those who just like the form of writing music in jingles, which removes the cost pressure from advancing the form.

SCheeseman
Apr 23, 2003

I disagree that it has to be an external force that stops us, when we are entirely capable of nuking ourselves to death; the alternative is to organize our society so whatever we dumbasses make isn't given the opportunity to help destroy us.

As for bans, it depends on the implementation. Preventing use by corporations is definitely possible, but that does nothing to stop criminal orgs or individuals, particularly now that the software runs on consumer hardware. It also has the potential to slow down industry growth relative to countries that might choose to accelerate AI R&D instead; hello, national security concerns. I agree that I don't see it happening, not with the billionaire class at war with each other in a sort of geopolitical Mexican standoff.

idk, if AI eventually heralds true post scarcity by inventing star trek poo poo, cool beans, but at the moment it does essays and spits out pretty pictures. Not to say I don't think people will find ways to use the technology in broader ways that have greater impact, but at the moment, post scarcity is still sci-fi.

SaTaMaS
Apr 18, 2003

SCheeseman posted:

idk, if AI eventually heralds true post scarcity by inventing star trek poo poo, cool beans, but at the moment it does essays and spits out pretty pictures. Not to say I don't think people will find ways to use the technology in broader ways that have greater impact, but at the moment, post scarcity is still sci-fi.

GPT-4 can use external tools like a calculator so a star trek computer is theoretically possible but people are still working on the real-world implementations
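The "external tools" bit is less magic than it sounds: the model emits a structured request in its text, a wrapper program executes it, and the result gets pasted back into the conversation. A hypothetical sketch of that pattern follows; the CALC marker and all function names here are invented for illustration, not OpenAI's actual plugin API.

```python
import re

# The model's draft text contains a structured tool request, e.g. CALC(17*3).
# The CALC marker and these function names are invented for illustration;
# real tool use (plugins, function calling) follows the same shape:
# model emits request -> harness runs the tool -> result goes back in the text.
TOOL_PATTERN = re.compile(r"CALC\((?P<expr>[^)]+)\)")

def run_calculator(expr):
    """Evaluate a strictly whitelisted arithmetic expression."""
    if not re.fullmatch(r"[0-9+\-*/. ]+", expr):
        raise ValueError(f"unsupported expression: {expr!r}")
    # eval is acceptable here only because the whitelist above rules out
    # names, attribute access, and everything else non-arithmetic.
    return str(eval(expr))

def answer_with_tools(model_draft):
    """Replace every CALC(...) request in the draft with the tool's result."""
    return TOOL_PATTERN.sub(lambda m: run_calculator(m.group("expr")), model_draft)
```

In the real systems the model decides when to emit the request and reads the tool's answer back as context; the dispatch-and-substitute loop above is the load-bearing part.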

SCheeseman
Apr 23, 2003

SaTaMaS posted:

GPT-4 can use external tools like a calculator so a star trek computer is theoretically possible but people are still working on the real-world implementations

I meant more in the sense of replicators rather than API integrations. Post scarcity isn't post scarcity until all people can eat and live without dependence on labour or privilege.

gurragadon
Jul 28, 2006

SCheeseman posted:

I disagree that it has to be an external force that stops us when we are entirely capable of nuking ourselves to death, the alternative is to organize our society so whatever us dumbasses make isn't given the opportunity to help destroy us.

As for bans, depends on the implementation. Preventing use by corporations is definitely possible, but that does nothing to stop criminal orgs or individuals, particularly now that the software runs on consumer hardware. It also has the potential to slow down industry growth relative to countries that might choose to accelerate AI R&D instead, hello national security concerns. I agree that I don't see it happening, not with the billionaire class at war with each other in a sort of geopolitical mexican standoff.

idk, if AI eventually heralds true post scarcity by inventing star trek poo poo, cool beans, but at the moment it does essays and spits out pretty pictures. Not to say I don't think people will find ways to use the technology in broader ways that have greater impact, but at the moment, post scarcity is still sci-fi.

But we did invent and improve on nuclear weapons, is my point. We could continue to develop and improve AI programs while still using them for the good of society. I agree that should be a parallel goal with the development of AI, but it will have to happen separately from AI's advancement, because we as a society have to decide it's important.

I called out post scarcity in writing and art specifically because that's what AI programs mostly seem to be working towards right now. I don't think we're anywhere near star trek level poo poo. But we can look at what is happening with art and writing to see that we have a problem. Solving it at the "non-essential" level (I use quotes because I know non-essential is subjective) can give us clues as to how to solve the problem if we ever reach the point of AI solving "essential" things.

SaTaMaS
Apr 18, 2003

SCheeseman posted:

I meant more in the sense of replicators rather than API integrations. Post scarcity isn't post scarcity until all people can eat and live without dependence on labour or privilege.

Is this thread supposed to be about what's happening currently or speculations about things that probably won't ever happen

gurragadon
Jul 28, 2006

SaTaMaS posted:

Is this thread supposed to be about what's happening currently or speculations about things that probably won't ever happen

When I made it, I really didn't care; I just wanted to continue the conversations stemming from the current generation of AI programs. I don't think the thread needs to be specifically either, as long as the people in the conversation are aware of what each other is talking about.

SCheeseman
Apr 23, 2003

I would personally prefer that nuclear weapons were not improved.

Post scarcity is typically associated with the resources required for human survival, like food and shelter, not cultural artifacts or luxuries, and those who are being made redundant are rightfully unhappy that what allowed them to live their lives became "post scarcity" before actual post scarcity was a thing.

I get that AI is useful; I've spent quite a bit of time thinking and talking to others about cool applications of the tech and defending use of it as a means of expression. I also agree it's inevitable: the horse has bolted, and at this point society must adjust to the technology. Which is why all the anger laid on the tech, and particularly on those who want to use it, is a red herring; shouldn't the aim be higher up the totem pole? Choosing to make staff redundant is a choice made by a human with a gun pointed to the head by shareholders.

If I'm hoping for any outcome, it's one where AI brings about a wave of socialism. Not that I think it's likely.

SaTaMaS posted:

Is this thread supposed to be about what's happening currently or speculations about things that probably won't ever happen

It was an extreme example. Maybe AI will help genetic research create hardier agriculture effectively immune to disease or whatever, but in any case that hasn't happened yet, and my main point was that it's creating imbalances today that need to be rectified.

SCheeseman fucked around with this message at 16:44 on Mar 24, 2023

Kavros
May 18, 2011

sleep sleep sleep
fly fly post post
sleep sleep sleep
I expect very little good from AI because of what it is being purpose-built towards. it's not going to be here to make our lives easier, it will be here to better extract profit for the benefit of a specific class of investor or owner-of-many-things.

One thing I expect it will be really good at in the meantime is further destroying "community," since so much of community is online and it will be even more numbingly difficult to navigate seas of inauthentic communication and signal-to-noise ratios blown into bits even more efficiently than it was in an age of mere botting

gurragadon
Jul 28, 2006

SCheeseman posted:

I would personally prefer that nuclear weapons were not improved.

Post scarcity is typically associated with the resources required for human survival, like food and shelter, not cultural artifacts or luxuries, and those who are being made redundant are rightfully unhappy that what allowed them to live their lives became "post scarcity" before actual post scarcity was a thing.

I get that AI is useful; I've spent quite a bit of time thinking and talking to others about cool applications of the tech and defending use of it as a means of expression. I also agree it's inevitable: the horse has bolted, and at this point society must adjust to the technology. Which is why all the anger laid on the tech, and particularly on those who want to use it, is a red herring; shouldn't the aim be higher up the totem pole? Choosing to make staff redundant is a choice made by a human with a gun pointed to the head by shareholders.

If I'm hoping for any outcome, it's one where AI brings about a wave of socialism. Not that I think it's likely.

I think we agree for the most part; I would prefer that no weapon was improved, but it just seems like something humans do. We want new and improved things. Maybe this is because our society is forcing the concept of new and improved on us, but I think it's partly just because we realize having the best of something makes it easier to survive. You began talking about how humans carved flint into tools, and that wasn't for monetary value, just survival.

Post scarcity will happen in waves, just like everything else. We are treating workers who become redundant very poorly; a better way to treat them would be to simply provide living assistance, so people can choose to work or not. People who enjoy making art, music, and writing need to be supported not because they provide any monetary value to society, but because they provide individuals with non-monetary joy. I used the word redundant earlier and I don't want to remove it, but it's not completely correct. If their job becomes only about intellectual and cultural value, I think it actually gains value, because it isn't influenced by the desire for money, only the desire to create what the artist wants.

I don't think AI would lead to socialism unless there is general post scarcity, but value would lose its meaning then, so I don't even know if it's socialism if everyone owns a machine to make their own food.

Kavros posted:

I expect very little good from AI because of what it is being purpose-built towards. it's not going to be here to make our lives easier, it will be here to better extract profit for the benefit of a specific class of investor or owner-of-many-things.

One thing I expect it will be really good at in the meantime is further destroying "community," since so much of community is online and it will be even more numbingly difficult to navigate seas of inauthentic communication and signal-to-noise ratios blown into bits even more efficiently than it was in an age of mere botting

That's one reason I really wanted to remake this thread in these forums. The ten dollar entry fee really cuts down on that kind of stuff. I'm interested in actual people's thoughts about this stuff, not just something lazily copied and pasted. How much information about AI would I get written by some AI program itself if I looked elsewhere? Probably a lot already.

gurragadon fucked around with this message at 17:04 on Mar 24, 2023

Fuzz
Jun 2, 2003

Avatar brought to you by the TG Sanity fund

Kavros posted:

I expect very little good from AI because of what it is being purpose-built towards. it's not going to be here to make our lives easier, it will be here to better extract profit for the benefit of a specific class of investor or owner-of-many-things.

One thing I expect it will be really good at in the meantime is further destroying "community," since so much of community is online and it will be even more numbingly difficult to navigate seas of inauthentic communication and signal-to-noise ratios blown into bits even more efficiently than it was in an age of mere botting

This.


My heart goes out to the graphic designers and small-form illustrators who have made their livelihoods as freelancers doing a steady stream of projects to keep food on the table, because their entire profession just got nuked from orbit in under a year, and the technology is expanding pretty rapidly to devour a lot of those skilled, workhorse-type art and print production roles.

Corpos are gonna waste as little time as possible trimming the pesky money-wanting people from every facet of their profit treadmill, and it's gonna destroy tens of thousands of people's lives. Their best bet is to get on board, aggressively learn to incorporate AI into their workflow, and speed up their production quotas exponentially, so they can be the first to stab all their colleagues in the back and be the overseer of the AI that will replace all of them... at least that handful of people won't go hungry.

gurragadon
Jul 28, 2006

It's like the creative arts industry is going through what manufacturing went through, on a really fast time scale. All those people working on an assembly line are replaced by a machine and somebody to make sure it works. Now the same thing is happening to creative freelancers.

The advancements are coming really fast now, though, and it's going to hit white collar workers everywhere. Like I posted a bit earlier, GPT-4 can pass the bar exam, and you can hook it up to other programs so it can perform accounting practices. We're quickly making most of the population's employment not worth the money. But if it goes beyond creatives, it will start hitting workers who command a pretty strong voice in the economy.

Will lawyers or doctors have a big enough voice when their turn comes? Since it's so similar to what happened to assembly lines in the past, in my mind we know what NOT to do with the people affected by advancements in technology. A simple example is just watching Roger & Me. Did any societies treat their redundant workers better?

StratGoatCom
Aug 6, 2019

Our security is guaranteed by being able to melt the eyeballs of any other forum's denizens at 15 minutes notice


Kavros posted:

I expect very little good from AI because of what it is being purpose-built towards. it's not going to be here to make our lives easier, it will be here to better extract profit for the benefit of a specific class of investor or owner-of-many-things.

One thing I expect it will be really good at in the meantime is further destroying "community," since so much of community is online and it will be even more numbingly difficult to navigate seas of inauthentic communication and signal-to-noise ratios blown into bits even more efficiently than it was in an age of mere botting

It is a technology well tuned to atomize and disenfranchise; and bear in mind, the false image generation abilities are getting pretty good and 2024 is a year and change away. This technology is a societal disaster.

Ohthehugemanatee
Oct 18, 2005
On one level, yep, it's going to be a societal disaster. I think the implications are worse than folk even imagine. Right now, I can spot a bot. I don't believe I or anyone else will be able to within a year. Scams are going to be wild, journalism is hosed, and online forums are going to be ruined. The companionship an AI is eventually going to be capable of offering people is going to take society in weird directions. I'm a huge loving nerd and excited that an AI can run a tabletop wargame against me, but at the same time it would be pretty sad if I turned to that instead of playing games with other people.

But I have seen one cool thing. Bing's chatbot doesn't have some of the hard limits on it that other systems do, and it doesn't balk at doing medical stuff. It's good. It's really good. I've been running patients at it to see if it can make accurate diagnoses and it's great at recognizing rare patterns that humans tend to miss. I had a patient last week who was seen by 3 providers who missed multiple myeloma, and when fed in the lab abnormalities and chief complaint, the program caught it right away. Even with confounding factors or missing elements from the history or labs, Bing was able to catch what one family MD, one family NP and one ED attending all missed. The first doc saw weird labs and thought they were an error because he didn't take a good history that would have raised all the red flags in the world, the NP saw weird labs on repeat and didn't see the pattern, panicked and sent the patient to the ED, and the ED physician saw weird labs, ignored everything but the low red blood cell count and incorrectly diagnosed a gastrointestinal bleed because she was busy and that's something she sees probably twice a night. The AI has none of those limitations. Depending on what elements I give it, it isn't always able to say "Oh poo poo that's multiple myeloma" but it's always able to say "Hm that could be a lot of things but it would be important to rule out multiple myeloma." That's cool as hell.

Raenir Salazar
Nov 5, 2010

College Slice
I have some Thoughts.

Chapter 1: Whose Fault is it Anyway?

Some core philosophical concepts people should keep in mind:

Technological Determinism: The idea that a technology that comes into being will inevitably proliferate across society; it is impossible to effectively regulate, it will spread and change all lives it touches. Karl Marx, one of the most prominent technological determinists, further expands on this to suppose that the inevitable proliferation of technology changes society, which results in the development and proliferation of new technologies, and so on: "the Sorcerer's Broom," etc. See the possibly apocryphal account of the Catholic Church trying to ban crossbows.

Karl Marx writes:

quote:

Meantime the markets kept ever growing, the demand ever rising. Even manufacture no longer sufficed. Thereupon, steam and machinery revolutionised industrial production. The place of manufacture was taken by the giant, Modern Industry; the place of the industrial middle class by industrial millionaires, the leaders of the whole industrial armies, the modern bourgeois.

Modern industry has established the world market, for which the discovery of America paved the way. This market has given an immense development to commerce, to navigation, to communication by land. This development has, in its turn, reacted on the extension of industry; and in proportion as industry, commerce, navigation, railways extended, in the same proportion the bourgeoisie developed, increased its capital, and pushed into the background every class handed down from the Middle Ages.

It's easy to see how technological progress and its spread maps onto the above.

Social Constructivism: The idea that technology exists at the behest of humans, is impacted by humans, for humans, is affected by the pre-existing social context, and can be regulated by humans for humans. The historical example here might be Ming China or the Shogunate of Japan.


Some of the debate will be shaped by whether you fall into one of the two camps or somewhere in the middle between them.

Personally, I lean towards Technological Determinism; too many corporations, universities, and research orgs, including nation-state actors viewing AI as a new arms race, have invested a lot of time and money across the globe into AI. There are too many day-to-day benefits and applications that add to GDP to just not join the race. A similar situation might be stem cell research: the US attempting to ban it merely let other nations get a head start in developing the technology.

The Stakes: Many people are rightfully scared of and concerned about the technology; the massive and unprecedented amount of what is at least arguably a form of theft by data scrapers, to form the datasets that train the AI, is concerning. But I kinda feel like some of the online debate and the concerns raised by artists are missing the forest for the trees. Even assuming we do legally settle the data-scraping issue in favor of artists, that doesn't prevent the AI from eventually getting good enough, through a long process of ethically sourced artwork, to displace artists.

There's also the issue where, honestly? I'm not sure what the difference is between an artist potentially losing their job and horse-drawn carriage drivers 100 years ago losing theirs, or the craftsmen, as Karl Marx wrote nearly 200 years ago:

quote:

The bourgeoisie has stripped of its halo every occupation hitherto honoured and looked up to with reverent awe. It has converted the physician, the lawyer, the priest, the poet, the man of science, into its paid wage labourers.

quote:

The lower strata of the middle class — the small tradespeople, shopkeepers, and retired tradesmen generally, the handicraftsmen and peasants — all these sink gradually into the proletariat, partly because their diminutive capital does not suffice for the scale on which Modern Industry is carried on, and is swamped in the competition with the large capitalists, partly because their specialised skill is rendered worthless by new methods of production. Thus the proletariat is recruited from all classes of the population.

This is a process that has been ongoing for hundreds of years, since the advent of capitalism and the rise of the bourgeoisie. No regulation or law is going to stop capitalism as a socio-economic force from destroying previously unassailable modes of production.

The real solution is probably the adoption of some form of socialist economy, where basic needs and necessities are met and large corporations don't exist to devalue the work of craftsmen out of a desire for endless, infinite profits. The technology can then be used for its good, and not merely for what it devalues.

The problem is always capitalism.

Chapter 2: How much of a problem is it currently?

But personally, based on what I've observed as someone who commissions a lot of art, I think artists in general are maybe a tad premature in their doomsday prognostications. I don't doubt the technology might get good enough to be hard to distinguish from a human's hand, but I don't think that captures the full nuance of what's going on.

For one thing, insofar as AI art is still flawed, many artists I've seen have carved out a new niche as "AI Art Retouchers," taking jobs to fix AI art and make it "gooder". It's evidence for the counterargument (typically made by liberals) that new technology doesn't "destroy" jobs but creates new ones! :shobon: Putting aside that it's unclear what the ratio of old jobs destroyed to new jobs created is, this is something to consider.

Also, most of the AI art I've seen falls into a broad but paradoxically specific category: People Who Don't Know Any Better and Have No Taste, Just Wanting Their Pretty Gaudy Art(tm).

Seriously, most people who seem satisfied with AI art are just extensions of "Look at this weird funny result the AI made, haha isn't that quaint?" turned into "Look, it's my special snowflake Sonic OC! :shobon: " Generally a category of people who probably didn't have much of a budget and probably don't account for a significant share of art commissions (in total value).

For people who actually have a more specific desire, whether it be pose, style, props, costume, and so on, the shortcomings of AI bite really hard, and I'm not sure this is something that can ever really get resolved.

As someone with my own indie game project, the few times I've tried using AI the results were so atrociously unusable, even as reference material *for* an artist I intend to pay. I think large corporations might even be in the same boat; you're not making the next Halo with AI art and expecting it to be critically acclaimed. Many will of course try, and fall flat on their faces, but I think there IS a limit to how much of a soulless cash grab a game can be before customers and audiences have had enough.

Where things get tricky is the indie scene, where I've seen AI get a lot of usage already, whether for code generation (no ethical issues here honestly, programming is just copying and pasting with extra steps) or placeholder assets.

I think the use of AI to generate placeholder assets or aid in kitbashing clearly helps some fields a lot, but is it at the expense of artists? Can it be done so as *not* to be at their expense? Can a compromise be met? For example, using AI art to generate placeholder assets, and then, once you've completed the functionality of the game, paying artists to produce the for-release assets, stripping out any assets you can't afford to replace?

As someone who commissions art, my process currently is this: write a description of what I want, then go onto Pinterest and doomscroll for between 4 and 72 hours downloading reference images (thonk emoji, I wonder what this step reminds me of?), which I then sort and pick through, and then I kitbash the result into a document or an image file which I email to the artist, sometimes with addendums. This is a process that took me many years to polish and is basically its own skill, because the artist doesn't want to spend a lot of time revising the output, right? And I don't want to spend more money than the listed price.

And as I wrote in the TTRPG thread, I think, like piracy, this is broadly a convenience issue. A lot of people who go out and seek artists for commissions have no idea what they're doing, get frustrated, and form the core of the current market for AI art. You need a combination of basic social skills, an eye to distinguish style and skill, a basic understanding of the limits of your budget, and an idea of how to communicate your vision clearly and concisely to the artist.

Most people who go out there getting commissions are like stupid teenagers with their birthday money or allowance who, as I said, have no idea what the gently caress they're doing, get easily scared off, or get their heads bitten off because they lack tact or talked to an artist who was having a bad day, and so on. I'm not going to say it's the fault of artists, but a lot of artists out there aren't helping themselves, like by not listing their prices! So, gasp, someone now has to ask you for your price; that's asking for trouble and is too much for most people.

I remember one artist who said, in an appeal against using AI art, "Look, if budget is an issue, artists will be willing to negotiate!" What the heck, no, that's not even a majority of artists, and many will get very understandably upset if they're asked to lower their prices. And there's huge debate even among artists about what prices people should set. There's no single set of expectations that applies to every artist; every day that I go out and commission new artists I discover new ways, small and big, in which artists differ from each other, even in things like basic terminology.

Some artists expect references; one artist I met practically fell to their knees and cried because I gave references up front, they were so shocked. Every artist is different, and that creates a problem for a lot of people who understandably dislike being asked to run through a minefield in order to spend money.

So it is tempting for me to consider AI for generating references. Instead of spending *hours* finding pre-existing art with the exact pose I want (albeit programs (hrmm) already exist for this), the costumes I want, the props I want, and so on, I could do it with a few clicks and maybe at most an hour of my time, and then still pay an artist anyway; that lets me better strategically spend my limited budget. That's a very enticing use case. I haven't personally encountered an AI that can do this without being a huge hassle that's more effort than it's worth, but I can hypothetically see the use case, ideally one where the dataset that trained the AI was ethically sourced. Then again, I don't really see a big difference between what I described and scrolling through Pinterest.

Chapter 3: TBD, Solutions aside from the complete overthrow of capitalism?

gurragadon
Jul 28, 2006

Ohthehugemanatee posted:

On one level, yep, it's going to be a societal disaster. I think the implications are worse than folk even imagine. Right now, I can spot a bot. I don't believe I or anyone else will be able to within a year. Scams are going to be wild, journalism is hosed, and online forums are going to be ruined. The companionship an AI is eventually going to be capable of offering people is going to take society in weird directions. I'm a huge loving nerd and excited that an AI can run a tabletop wargame against me, but at the same time it would be pretty sad if I turned to that instead of playing games with other people.

But I have seen one cool thing. Bing's chatbot doesn't have some of the hard limits on it that other systems do, and it doesn't balk at doing medical stuff. It's good. It's really good. I've been running patients at it to see if it can make accurate diagnoses and it's great at recognizing rare patterns that humans tend to miss. I had a patient last week who was seen by 3 providers who missed multiple myeloma, and when fed in the lab abnormalities and chief complaint, the program caught it right away. Even with confounding factors or missing elements from the history or labs, Bing was able to catch what one family MD, one family NP and one ED attending all missed. The first doc saw weird labs and thought they were an error because he didn't take a good history that would have raised all the red flags in the world, the NP saw weird labs on repeat and didn't see the pattern, panicked and sent the patient to the ED, and the ED physician saw weird labs, ignored everything but the low red blood cell count and incorrectly diagnosed a gastrointestinal bleed because she was busy and that's something she sees probably twice a night. The AI has none of those limitations. Depending on what elements I give it, it isn't always able to say "Oh poo poo that's multiple myeloma" but it's always able to say "Hm that could be a lot of things but it would be important to rule out multiple myeloma." That's cool as hell.

That's pretty cool that it is catching things that have serious medical consequences for people. Especially with the overall shortage of doctors and them being overworked already, running patients through AI programs for additional clues seems really useful. Also, doctors have to specialize so much now that it must be difficult to catch things outside their main expertise. Like you said with the ED (I assume that's the emergency department?), she sees certain things more frequently and is more likely to default to them because of how overworked they are.

Bing's chatbot is really good, and I like it a lot more than Google. I used the example of roofers in Bethesda, Maryland in the OP because I just got my roof replaced and I'm playing Morrowind, but I actually did use it to get a list of people to contact to come out. One of the nice things about the chatbot is that it doesn't have all the ads a Google search does, but I imagine that is going to change once companies begin to monetize it more.


Thanks for the effort post on AI art; it was interesting to read. You said that AI art is atrociously unusable, and that a lot of artists are trying to find work touching up AI art. What is it about the AI art you see that makes you recognize it so readily? Or is it the struggle to communicate effectively what you want to the AI program? You mention that people use it for "weird" stuff, but is there a difference, an AI-ness, to a picture of a sunflower or something else generic? There will definitely be a few artists who make a living retouching AI art, but the technology's whole point will lead to fewer people being able to support themselves as artists.

A lot of the confusion you describe with artists seems to be that they have their own way of communicating, and it's not laid out very well for people who are unfamiliar with commissioning art. Like you said, I wouldn't have been prepared with reference photos of what I wanted unless I had spoken with the artist beforehand and they had told me they needed them. I would have expected to tell someone who claims to paint or draw what I wanted, and that they could do an OK job of it within their own vision if I just told them. That seems like more of a problem with artists being unable to communicate with people outside their field, which is not rare by any means, but it's pretty essential for an artist to be able to do.

It also leans into the question of the artist not recognizing the vision of the person commissioning the art. Art is subjective, and while there are things that are considered "good" and "bad," maybe I just want the artist to draw me something that they would consider "bad." Do they want to do that? Probably not, because their name is going to be on the piece. But if they are making art for business, then they have to abandon their vision as an artist. It's why I think introducing money into anything cheapens it to some degree. AI art doesn't care about what's "good" and "bad"; it "cares" about spitting out as many images as it can.

I guess the difference between AI and scrolling through Pinterest is that you can generate images faster with an AI program, and if you change your mind halfway through, it's a lot easier to just start typing new terms. AI art generated through a program is also an original composition based on other images, not just the original piece done by someone on Pinterest. So maybe I would feel a little less bad about copyright, but that doesn't seem to hold very well, and I don't really feel bad about copyright law anyway.

Fuzz
Jun 2, 2003

Avatar brought to you by the TG Sanity fund

gurragadon posted:

It's like the creative arts industry is going through what the rest of manufacturing has gone through, on a really fast time scale. All those people working on an assembly line were replaced by a machine and somebody to make sure it works. Now the same thing is happening to creative freelancers.

The advancements are coming really fast now, though, and it's going to hit white-collar workers everywhere. Like I posted a bit earlier, GPT-4 can ace the bar exam, and you can hook it up to other programs so it can perform accounting practices. We're quickly making most of the population's employment not worth the money. But if it goes beyond creatives, it will start hitting workers who command a pretty strong voice in the economy.

Will lawyers or doctors have a big enough voice when their turn comes? Since it's so similar in my mind to what happened to assembly lines in the past, we know what NOT to do with the people affected by advancements in technology. A simple example is just watching Roger & Me. Did any societies treat their redundant workers better?

Acing the bar exam, or even the MCAT or USMLEs, is a far cry from actually practicing law or medicine. They're multiple-choice exams where the AI has the equivalent of the entire internet at its disposal to look up and eliminate incorrect answer choices.

Standardized testing is a dumb metric for gauging AI proficiency.



In terms of whose fault it was, I mean, the tech sector thought it could reinvent banking with crypto, and we're all seeing how that's going. The tech sector is literally that meme about wizards: when you look at everything as a math problem, suddenly everything has a solution... Let's just ignore that reality has a lot of remainders and trailing decimal points; we just shave those (people) off in the name of precision.

Fuzz fucked around with this message at 18:54 on Mar 25, 2023

gurragadon
Jul 28, 2006

GPT-4 took the Uniform Bar Exam. According to https://www.ncbex.org/exams/

quote:

A bar examination coordinated by NCBE that is uniformly administered, graded, and scored by user jurisdictions and results in a portable score. It is composed of the MEE, two MPT tasks, and the MBE. See our interactive map for jurisdictions that have adopted the UBE. For more information, view our informational pdf about the UBE.

The MEE is 6 essay-style questions analyzing a legal issue, and the MPT tasks are standard lawyering tasks.

The AI will always have the entire internet at its disposal; that's a feature of AI, not something that would change unless deliberately removed.

What would be a good metric for you? Seeing GPT-4 actually being used in a courtroom would be convincing to me; unfortunately, the legal profession seems kind of reluctant to embrace technology. The Supreme Court still doesn't have TV cameras.

I kind of wish they would have just let this guy try out the AI lawyer thing.

https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/

Edit: Uniform Bar Exam not Unified

gurragadon fucked around with this message at 19:32 on Mar 25, 2023

BrainDance
May 8, 2007

Disco all night long!

I don't know too much about being a lawyer besides Better Call Saul and the stuff a woman I went to high school with (who went to the worst law school in America) posts online, but I imagine there has to be a point in the process where lawyers just realize it's useful, right?

It doesn't have to be an AI sitting there in court arguing for the defendant, but if one lawyer realizes an AI can do a ton of the pre-trial work or construct arguments as well as or better than a human, they're gonna use it. Because not using it would then be a disadvantage.

litany of gulps
Jun 11, 2001

Fun Shoe

gurragadon posted:

GPT-4 took the Uniform Bar Exam. According to https://www.ncbex.org/exams/

The MEE is 6 essay style questions analyzing a legal issue and the MPT tasks are standard lawyering tasks.

The AI will always have the entire internet at its disposal; that's a feature of AI, not something that would change unless deliberately removed.

What would be a good metric for you?

Is the primary difficulty of the exam a time-crunch issue? If so, then an AI chatbot has fairly tremendous advantages due to the nature of the exam. I don't think it should be particularly surprising that a chatbot with access to huge amounts of examples and study materials should be able to crank out a bunch of rough draft essay responses to questions that humans struggle with because they have to read prompts and compose written arguments or analysis on the fly.

But what's the takeaway from that specific example? One paralegal now does the job formerly done by two and the partner pockets a bit more profit?

Gumball Gumption
Jan 7, 2012

gurragadon posted:

GPT-4 took the Uniform Bar Exam. According to https://www.ncbex.org/exams/

The MEE is 6 essay style questions analyzing a legal issue and the MPT tasks are standard lawyering tasks.

The AI will always have the entire internet at its disposal; that's a feature of AI, not something that would change unless deliberately removed.

What would be a good metric for you? Seeing GPT-4 actually being used in a courtroom would be convincing to me; unfortunately, the legal profession seems kind of reluctant to embrace technology. The Supreme Court still doesn't have TV cameras.

I kind of wish they would have just let this guy try out the AI lawyer thing.

https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/

Edit: Uniform Bar Exam not Unified

I think it's important to note that its training included example MEE essays. It's interesting, but GPT-4 being trained for the exams and passing them is not indicative of performance in a courtroom. It does show that it would be of value as an information store for lawyers.

Really, I think that's where the true value of AI lies: not in its ability to create things, but in its ability to store knowledge behind interfaces that feel more natural.

Fuzz
Jun 2, 2003

Avatar brought to you by the TG Sanity fund

gurragadon posted:

GPT-4 took the Uniform Bar Exam. According to https://www.ncbex.org/exams/

The MEE is 6 essay style questions analyzing a legal issue and the MPT tasks are standard lawyering tasks.

The AI will always have the entire internet at its disposal; that's a feature of AI, not something that would change unless deliberately removed.

What would be a good metric for you? Seeing GPT-4 actually being used in a courtroom would be convincing to me; unfortunately, the legal profession seems kind of reluctant to embrace technology. The Supreme Court still doesn't have TV cameras.

I kind of wish they would have just let this guy try out the AI lawyer thing.

https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/

Edit: Uniform Bar Exam not Unified

I was speaking mainly from the medical perspective, as I've taken and passed all those medical exams too.

They don't actually replace the clinical training at all. All medical students and residents take those exams, but throw them into a solid 85+% of cases and, unless it's super bread-and-butter stuff, they won't actually diagnose the problem correctly. That is, unless you as a patient enjoy the House-style trial-and-error method of medical treatment, instead of being examined thoroughly, having focused tests run, and then being treated for the actual issue based on the results rather than a conclusion jumped to from incomplete data. Eliciting all that info from a patient and asking the right questions is like 75% of what experience teaches you.

Could it get there eventually? Who knows, anything is possible, but not in the next few years, and there's a big gap between knowing the book and actually being able to take care of people. The same can be said of Nurse Practitioners, since they similarly lack a large amount of clinical experience and training before they're handed a prescription pad and told to just start seeing patients and figure it out as they go.

Fuzz fucked around with this message at 04:40 on Mar 26, 2023

gurragadon
Jul 28, 2006

litany of gulps posted:

Is the primary difficulty of the exam a time-crunch issue? If so, then an AI chatbot has fairly tremendous advantages due to the nature of the exam. I don't think it should be particularly surprising that a chatbot with access to huge amounts of examples and study materials should be able to crank out a bunch of rough draft essay responses to questions that humans struggle with because they have to read prompts and compose written arguments or analysis on the fly.

But what's the takeaway from that specific example? One paralegal now does the job formerly done by two and the partner pockets a bit more profit?

I can't speak to whether people feel rushed taking the exam because I've never taken it. This presentation from the National Conference of Bar Examiners says it's given over the course of two days, with six hours of test-taking a day. To be fair to the chatbot, the law student studying for the exam has access to the same resources and can crank out rough-draft essay responses too. The AI chatbot isn't training on anything a human doesn't have access to, and the prospective lawyer should be using those materials to study anyway, or they weren't prepared for the test.

I guess the takeaway is yes, now it's one paralegal. It may be no paralegals soon, once offices realize they don't need them anymore. This stuff is really going to take out a lot of white-collar jobs, I think.

Gumball Gumption posted:

I think it's important to note that its training included example MEE essays. It's interesting, but GPT-4 being trained for the exams and passing them is not indicative of performance in a courtroom. It does show that it would be of value as an information store for lawyers.

Really, I think that's where the true value of AI lies: not in its ability to create things, but in its ability to store knowledge behind interfaces that feel more natural.

Same thing as above about the MEE essays. I agree, though, that it would probably perform very poorly in a courtroom if it were just GPT-4. IANAL, but I feel like the courtroom is not just about facts but also about swaying people and opinion. Having an easy way to pull up information would be a huge advantage, though. I would be interested to see somebody represent themselves with GPT-4, and I'm kind of disappointed it's being resisted. I mean, if we're confident it's bad, let somebody prove it.

Fuzz posted:

I was speaking mainly from the medical perspective, as I've taken and passed all those medical exams too.

They don't actually replace the clinical training at all. All medical students and residents take those exams, but throw them into a solid 85+% of cases and, unless it's super bread-and-butter stuff, they won't actually diagnose the problem correctly. That is, unless you as a patient enjoy the House-style trial-and-error method of medical treatment, instead of being examined thoroughly, having focused tests run, and then being treated for the actual issue based on the results rather than a conclusion jumped to from incomplete data. Eliciting all that info from a patient and asking the right questions is like 75% of what experience teaches you.

Could it get there eventually? Who knows, anything is possible, but not in the next few years, and there's a big gap between knowing the book and actually being able to take care of people. The same can be said of Nurse Practitioners, since they similarly lack a large amount of clinical experience and training before they're handed a prescription pad and told to just start seeing patients and figure it out as they go.

I didn't mean to diminish the value of practical training; it's really important to put knowledge to use, and that's when it's really reinforced. Earlier upthread, Ohthehugemanatee said they were able to use the Bing chatbot to help diagnose a problem that doctors were missing or couldn't figure out. Do you think it has value in helping doctors diagnose patients? Practical knowledge is really important, but you also lose some of the book knowledge the further you get from it and the more time you spend working in a field. Maybe it could serve as a refresher for doctors or nurses on medical issues?

I think its real value in the medical field would be in the ways it can lighten the burden on people.

Raenir Salazar
Nov 5, 2010

College Slice
I think the main use case for a "legal" AI is scanning through documents from discovery, which, as I understand it, is the most expensive and tedious part of lawyering.


gurragadon posted:

Thanks for the effort post on AI art, interesting to read. You said that AI art is atrociously unusable, and a lot of artists are trying to find work touching up AI art. What is it about AI art that you see that makes you recognize it so readily? Or is it the struggle to communicate effectively what you want to the AI program? You mention that people use it for "weird" stuff, but is there a difference or AI'ness to a picture of a sunflower or something else generic? There definitely will be a few artists who make a living retouching AI art but its whole point will lead to less people being able to support themselves as artists.

To be clear, "weird" refers more to early AI use, back when most artists probably dismissed AI as a threat (with a few exceptions).

Anyway, maybe there are better programs than the ones I used, but trying to specifically spit out anime- or manga-style designs just resulted in unusable outputs, with massive issues like mismatched eyes and the AI stopping and starting different styles like a Picasso painting. From what I do know from people who get better results, it seems to involve turning a lot of knobs, fiddling with parameters, uploading reference images to use as a base, and so on, which I feel defeats the purpose; it's me needing to learn a different sort of tool and spending a nearly equivalent amount of time.

Most importantly, though, I feel like maybe if I know very specifically what I want, I'll get it, and no more. The advantage of being stuck in Pinterest Hell, with 150 tabs open from clicking through "promising" and interesting results, is that I end up with a LOT of potential sources of inspiration, which I subsequently download and categorize (heh, just like them robots) for future use. Ultimately there is a certain enjoyment to this process as well as frustration, which I think is the crux of the thing: when AI is enhancing workflows, great; when it is helping artists with tasks they'd rather not do, great. But when it infringes on the parts of the process and the craft that artists enjoy doing? No thanks. And the unfortunate thing is a lot of the "AI Bros" on Twitter just seem very out of touch about that part of what makes learning a creative pursuit so satisfying.

I dunno how best to formulate this thought, but there's an observation I've made that I feel captures the essence of AI's popularity, at least in the anime community. There's a certain "style" of what I tend to think of as Overpolished anime art, where the lighting, shading, and so on seem to push the art to its limits but the work is otherwise uninspiring in pose, anatomy, appeal, and so on. There's one artist off the top of my head whose art makes me think "That's very technically impressive but still kinda... soulless?" And this spawned a lot of imitation in a similarly over-refined style, works that all kinda blend together and look the same... JUST LIKE AI. It results in the unfortunate trend of some artists being accused of making AI art because they also adopted that style: a style that I think is largely born of the kind of anime/manga artist who self-teaches anime from other anime artists and never got formal art education in things like anatomy, composition, etc.; they just brute-forced their way to a certain level of visual appeal and quality by memorizing hundreds of lighting/shading tutorials and using anatomy posing tools as reference.

For years I felt I could tell the difference between someone who was self-taught from anime, someone self-taught who took the time to learn the fundamentals, and someone with formal art training prior to anime art; it speaks volumes just how much of a difference that makes in drawing appealing characters.

That I can't really tell this "self-taught from anime" style of anime art apart from AI anime art, I think, anecdotally says something, on a very broad level I mean. I don't wish to generalize, of course, and I'm not sure what it says; but the way they seem to pair and complement each other is certainly very interesting to me.

But yeah, regarding retouching, I think the main point is that the people willing to pay for art will continue to do so, and the people who whine in artists' DMs about how unaffordable they are never did and never will.

Like... I primarily see AI art as having its biggest audience in the indie-adjacent space of "I wanna make a card game like Magic: The Gathering" or "a D&D 3rd-party content book," "but I can't afford an artist. :( " Hypothetically the AI is displacing artists there, but I don't think it's displacing very many. The people willing and able to pay hundreds of dollars for the MTG-style/quality art the AI kinda excels at producing are probably still willing to pay that amount; it's the people with sub-$100 budgets who are going to value the ability to self-publish their dream supplement or card game and push on ahead, people who never had the budget to begin with, or only had the budget for art that works out to below minimum wage per hour.

quote:

A lot of the confusion you describe with artists seems to be that they have their own way of communicating, and it's not laid out very well for people who are unfamiliar with commissioning art. Like you said, I wouldn't have been prepared with reference photos of what I wanted unless I had spoken with the artist beforehand and they told me they needed that. I would have expected to tell someone who claims to paint or draw what I wanted, and that they could do an OK job of it within their vision if I just told them. That seems like more of a problem with artists being unable to communicate with people outside their field, which is not rare by any means, but it's pretty essential for an artist to be able to do.

So putting aside language barriers, which is definitely also something I sometimes come across, it's a little crazy to me, the idea of trying to ask an artist to draw something with just a written description and nothing else. The superposition of all the possibilities just seems so endless and infinite that it would be a wonder if the final result remotely resembled what you wanted. But for some this is the expectation; I've heard of artists who actually insist on a MAXIMUM of TWO reference photos! Two! That's crazy talk. I provide on average 15 for every aspect of the design, to ensure the fewest potential revisions while still leaving some wiggle room for the artist to bring their own style, vision, and input to the process.

So this is part of the problem; communication goes two ways: clients who don't know what they want or how to communicate it, and who lack the experience to know how (or for how long) to sort through artists and portfolios to find one that suits them; and artists who need to better present themselves as a business and convey clearly their process, their prices, their portfolio, and their expectations of you as a client.

quote:

It also leans into the question of the artist not recognizing the vision of the person who is commissioning the art. Art is subjective and while there are things that are considered "good" and "bad," maybe I just want the artist to draw me something that they would consider "bad." Do they want to do that? Probably not because their name is going to be on the piece. But if they are making art for business then they have to abandon their vision as an artist. It's why I think introducing money into anything cheapens it by some degree. AI art doesn't care about what's "good" and "bad," it "cares" about spitting out as many images as it can.

Usually artists post a list of "Do's" and "Don'ts" for the content they draw. It is, I think, pretty rare for an artist to be presented with a request they morally or stylistically object to. Some artists I know told me about clients who would pick and insist on absolutely garishly wrong colours that just hurt to look at; I can only vaguely remember an artist suggesting a change to a design, but usually I actually do defer to the artist. It's why I'm paying them: to use their skills and knowledge to tell me which parts of the design just won't work, where the physicality is all wrong, etc.

So generally I expect an artist to refuse a commission they aren't comfortable with. I've certainly seen artists react negatively when I asked them something like, "Hey, do you do anime art?" when they're a semi-realist/realist painter, so I think typically they're very willing and upfront about telling inquiring clients to gtfo if what they're asking for isn't on offer. :v:

The fact is there are so many artists out there that it isn't difficult at all to find one who does what you want... if you put in the time and vaguely know where to look. Someone commissioning their OC for the first time probably doesn't know where to look and doesn't have an Excel sheet of artists they find promising.

There's actually a reason I track artists: to remember, in particular, which artists I enjoyed working with, who seemed genuinely interested in the Lore(tm) of my characters/world/game idea/etc. Not just because they're on the short list if I ever get the budget to hire them, but also because I'd like to revisit and commission them again, since they like my ideas and are clearly firing on all creative cylinders while working on them.

Which again I think wraps back around to what's motivating the AI art people: they mainly experience very negative interactions with artists. They're what I'mma call Artcels; they have a lot of trouble connecting and vibing well with artists, and so basically want to take their OC ball and go home, and the AI fills that void, which is basically, in a way, a lack of social interaction and connection.

quote:

I guess the difference between AI and scrolling through Pinterest is that you can generate images faster with an AI program, and if you change your mind halfway through it's a lot easier to just start typing in new terms. AI art generated through a program is also an original composition based on other images, not just the original piece done by someone on Pinterest. So maybe I would feel a little less bad about copyright, but that doesn't seem to hold very well, and I don't really feel bad about copyright law anyway.

Pinterest in many ways is pretty similar: I can type in new terms to try to massage the results closer to what I want. "Female mage" gets some kind of result, and then I gotta click around to find one that looks in the ballpark and scroll down from there. Pinterest has, I think, a similar but more archaic version of that sort of "referencing" and linking together of ideas, like the way you click on a mage with a red dress and suddenly it's all Chandra from Magic: The Gathering.

But yeah, to talk more about the idea of "experience": originally my budget would be pretty low, but gradually I began to realize I was really spending a lot more than that in actuality. Often I wouldn't be satisfied with the design; one thing would be right, other aspects wouldn't be. So I'd find a different artist and try again. How many tries did it take before I was satisfied with the result? I think there's one character I legitimately have like ten versions of before I got to a final design I was happy with.

The lesson I eventually learned was that I was budgeting too low and getting very varied results; if I wanted better results, I needed to pay more, and gradually this increased. I think I now spend closer to four times my original limit per character and basically get perfect results. Sometimes I see someone on Twitter who's like, "Full character designs for [absurdly low price]" and think, "Huh, their art is actually really good, especially for that price, I'll give them a chance," with the knowledge that it's technically a risk, and sometimes I get surprised at how good it is.

The point here is it's another facet of the commissioning process that can be very fraught. People also don't know how much this service should be priced at, so I can easily imagine some people who got disappointing results early on seeing the AI as something that gives them "good enough" results for that hit of dopamine, rather than learning how much they'd actually need to pay to get their characters brought to life "perfectly".

This isn't to say, of course, that any of this means it's justified to just blindly use Midjourney or whatever because someone had a bad experience one time talking to an artist; mainly I want to posit that there's a complex web of circumstances in the artist/art-commission industry that contributed to AI's popularity, which can frame it, potentially, as a service-driven problem.

Raenir Salazar fucked around with this message at 05:24 on Mar 26, 2023

Fuzz
Jun 2, 2003

Avatar brought to you by the TG Sanity fund

gurragadon posted:

I didn't mean to diminish the value of practical training, it's really important to put knowledge to use and that's when it's really reinforced. Earlier upthread Ohthehugemanatee said they were able to use the Bing chatbot to help diagnose a problem that doctors were missing or couldn't figure out. Do you think it has value in helping doctors diagnose patients? Practical knowledge is really important but you also do lose some of the book knowledge the further you get away from it and the more time you spend working in a field. Maybe just help as a refresher for doctors or nurses about medical issues?

I think its real value in the medical field would be in the ways it can lighten the burden on people.

Definitely could be useful in alleviating/eliminating the need for midlevel intake-type stuff. Rather than have a physician with an advanced practice nurse or physician assistant backing them up to help with all the fiddly running around, screening results, double-checking, and following up, an AI would be able to do a lot of that grunt work and help formulate written records on the fly that are concise but still cohesive and directly centered on the patient, rather than a form note with some blanks filled in because the humans don't physically have enough time in the day to get it all done.

I'd actually worry most for pharmacists. Pharmacy has been expanding its clinical role for decades: acting as a liaison to physicians to help with dosing (always their role), learning about new drugs, knowing the nomograms and resistance rates for things like bacterial resistance in the local population, etc. Literally all of those tasks could ultimately be done pretty easily by an AI, which is scary for both patients and pharmacists. But honestly, if someone can get a system up and running and show that it makes fewer errors than humans in the same role, you can bet the pharmaceutical companies will push for it to cut costs and eliminate the need to get regional pharmacists "on board" with their new drug, when some engineer can just do a push update, fiddle with some weights, and bam: everyone is getting recommended their crazy expensive non-formulary drug that's only marginally better (according to internal studies) than the standard treatment.

Carp
May 29, 2002

Over the past month, I have been researching ChatGPT, GPT-3, and GPT-4, examples of a type of AI that emerged in 2018. These models owe their success to a seminal paper from OpenAI, "Improving Language Understanding by Generative Pre-Training" (Radford et al., 2018), as well as to previous research in the field. The approach is not entirely new, but as model size increases, it becomes far more useful than originally thought possible.

GPT models do not store text in any form. Instead, they use artificial neural networks, a simplified model of a brain's neural network. What is stored are numerical values, whose purpose is to transform input values as they pass through the layers of neural nodes to the final output. This process occurs on a very small, sub-word scale.
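As a rough illustration of that "stored numbers transform the input" idea, here is a toy feed-forward pass. To be clear, this is my own minimal sketch with made-up weights, not GPT's actual architecture (a transformer is far more involved); it only shows how fixed numerical parameters, rather than stored text, turn an input into an output:

```python
# Toy example: stored weights/biases transform an input vector as it
# passes through successive layers. All numbers are illustrative.

def relu(x):
    """Standard rectifier activation: negative values become zero."""
    return max(0.0, x)

def forward(inputs, layers):
    """Push an input vector through each (weights, biases) layer."""
    activations = inputs
    for weights, biases in layers:
        activations = [
            relu(sum(w * a for w, a in zip(row, activations)) + b)
            for row, b in zip(weights, biases)
        ]
    return activations

# Two tiny layers: 3 inputs -> 2 hidden units -> 1 output
layers = [
    ([[0.2, -0.5, 0.1], [0.4, 0.3, -0.2]], [0.0, 0.1]),
    ([[0.7, 0.3]], [0.05]),
]
print(forward([1.0, 0.5, -1.0], layers))
```

Nothing in `layers` is a record of any training example; the corpus only shaped these numbers.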

To some observers, it appears as if we have opened a door to whole new paths toward AGI, or Artificial General Intelligence. However, this is a speculative claim and runs up against current technical understanding. What constitutes AGI is itself up for debate, and different disciplines have different definitions. Nevertheless, it is clear that GPT is not merely munging existing text; it builds its output word by word, line by line, and concept by concept, and it appears to exhibit a deep grasp of how concepts relate to each other.

The core of GPT-3 (and of variants such as GPT-4), before tuning, reinforcement, and other downstream steps, is a pre-trained autoregressive language model: a type of large language model (LLM) used to predict the continuation of presented text. LLMs use neural networks trained through self-supervised learning (SSL). During training, GPT-3's SSL process consumed an enormous multilingual text corpus (including source code and conlangs), roughly 45 terabytes of raw text. Training determines the weights and biases (i.e., parameters) assigned to nodes, with each path into a node given its own parameter. The GPT-3 model has approximately 175 billion parameters and, by some estimates, around 295 thousand nodes across its various layers.

It is important to note that the model does not store the training corpus in any way, and it does not consult a large database to produce inference output. During training, GPT-3 was given pretext problems to solve that involved finding patterns and relationships in the training data. The process did not start with any prior comprehension of language. Instead, the model was given the capacity to tokenize text, with tokens generated from sequences of approximately four characters (a word may consist of more than one token). Each pretext training task then furnished the model with contextual information and rules to facilitate pattern recognition. The result is a vast map of relationships, storing correlations, contextual hierarchies, probabilities, domains, concepts, and more. Through training alone, the model developed a working representation of the parts of language, from nouns and sentences all the way up to social constructs and other big-picture classifications.

What we are left with is a huge map of relationships. During training, GPT-3 was asked to iterate on predicting missing tokens in a variety of contexts. By predicting what comes next in a series of tokens, or filling in gaps, given enough tokens and examples of their use, correlations, contextual hierarchies, probabilities, domains, concepts, and more are stored. Emergent abilities then arise unexpectedly from these relationships (plus tuning and safety work) as more data and parameters are added.
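The "predict what comes next" objective can be shown in a drastically simplified form. This toy example of mine just counts which token follows which in a tiny made-up corpus; a real LLM learns billions of parameters instead of a count table, but the training signal (next-token prediction) is the same idea:

```python
# Drastically simplified next-token prediction: tally which token
# follows which in a toy corpus, then predict the most common
# continuation. Illustrative only; GPT uses learned parameters,
# not a lookup table.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often after `token`."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scale the corpus up by many orders of magnitude and replace the count table with a trained network, and surprisingly rich structure falls out of this one objective.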

GPT-3 Emergent Abilities (list generated by ChatGPT):

  • Few-shot learning: GPT-3 is able to learn new tasks quickly with only a few examples, sometimes as few as one or two examples. This is due to the model's ability to generalize from its pre-training and adapt to new tasks with minimal fine-tuning.
  • Natural language understanding: GPT-3 has demonstrated a high level of proficiency in understanding and generating natural language text, including the ability to complete text prompts, answer questions, and generate coherent and relevant text in a conversational setting.
  • Common sense reasoning: Although GPT-3 does not have explicit knowledge of the world, it has demonstrated an ability to reason about common sense knowledge and make logical inferences based on context.
  • Language translation: GPT-3 has shown some ability to translate between languages, although its performance is currently not as good as specialized machine translation models.
  • Creative writing: GPT-3 has been used to generate creative writing, including poetry and fiction, and has demonstrated some ability to mimic the writing style of different authors or genres.
  • Code generation: GPT-3 has also been used to generate code, including HTML, CSS, and JavaScript, and has shown some ability to generate code that is functional and syntactically correct.

GPT-4 emergent abilities (list generated by Bing Chat):

  • Creativity: The ability to generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style.
  • Visual input: The ability to process images as well as text and use them to generate relevant and coherent responses.
  • Longer context: The ability to handle much more nuanced instructions and maintain longer conversations than previous models.
  • Advanced reasoning: The ability to solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem solving abilities.
  • Humour interpretation: The ability to interpret humour and distill the essence of the joke into a clear and logical explanation.
  • Theory of mind: The ability to impute unobservable mental states to others, such as beliefs, desires, intentions, emotions, etc. It is central to human social interactions, communication, empathy, self-consciousness, and morality. A study by Michal Kosinski tested several language models using 40 classic false-belief tasks widely used to test theory of mind in humans. The study found that GPT-4 solved nearly all the tasks (95%), suggesting that theory of mind-like ability may have spontaneously emerged as a byproduct of language models’ improving language skills. However, this ability may not be equivalent or comparable to human theory of mind, as it may depend on the input and the task, and may not reflect genuine understanding or empathy.

This is what Bing Chat (based on GPT-4 + search results) had to say about whether artificial general intelligence (AGI) has emerged, or can emerge, from a system like GPT:

"According to the search results, some researchers have claimed that GPT-4 is showing “sparks” of artificial general intelligence (AGI), meaning that it has capabilities that are at or above human level across a variety of domains and tasks. However, this claim is not universally accepted or proven, and there is no clear or agreed-upon definition of what constitutes AGI.

"It is possible that AGI could emerge in a GPT-like model at some point in the future, if the model is scaled up sufficiently and trained on diverse and high-quality data. However, it is not guaranteed or inevitable that this will happen, as there may be fundamental limitations or challenges that prevent GPT-like models from achieving true AGI, such as their reliance on next-word prediction, their lack of causal understanding, their potential for harmful outputs, and their alignment with human values and goals. Moreover, there may be alternative or better approaches to achieving AGI that do not rely on GPT-like models, such as hybrid systems that combine symbolic and sub-symbolic methods, or neuromorphic systems that mimic the structure and function of biological brains.

"Therefore, it is not clear or certain that GPT-like models are a valid or optimal approach to achieving AGI. However, they are still valuable and useful tools for advancing AI research and applications, and for exploring the possibilities and implications of AGI. They also pose significant ethical and social challenges that need to be carefully considered and addressed by researchers, developers, users, and policymakers."

---------------

I put the above together from notes I've been taking, partly for work. I thought you guys might find it useful. The progression towards AGI is fascinating and fills me with hope.

Liquid Communism
Mar 9, 2004


Out here, everything hurts.




SCheeseman posted:

I don't think many want to chat about it, tensions are way too high and it's fracturing communities. Anyone who has their jobs and livelihoods threatened by it and/or see it as an affront to humanity are mostly interested in ways to crush it. Pretty understandable, the capitalist powers that be will take this technology and use it in all the ways people fear it will be.

But figuratively trashing the looms has never worked. AI is the end point of humankind's reliance on tool use; the problems we're grappling with now started when cavemen sharpened their flint to the point they could carve rock and/or skulls. The best we can do is manage it, something we've had a shaky history of doing, particularly as of late, thanks to a society that predominantly values accumulation of capital over human wellbeing (while those in ivory towers try in vain to equate the two). An AI ban might be technically possible and enforceable, but not when every world government wants this AI thing to happen; and given a social and political system with truly humanistic values, automation wouldn't be a problem anyway.

It's the rich people. They are the baddies.

Anyone who understands the technology behind it on more than a surface level generally sides with those authors and artists, because they understand that none of this AI functions without massive datasets that invariably violate the rights of authors and artists; no organization could afford to license works at that scale. Even at massively discounted commercial licensing rates, the image DBs behind them would cost billions of dollars.

It is hilarious how clearly this is designed to benefit the rentier class, though: using billions of dollars of supercompute hardware, at a cost of millions a month in power, cooling, and network capacity, to avoid paying a few graphic designers $40k a year.

gurragadon posted:

It's like the industry of creative arts is going through what other manufacturing has been going through in a really fast time scale. All those people working on an assembly line are replaced by a machine and somebody to make sure it works. Now the same thing is happening to creative freelancers.

The advancements are coming really fast now, though, and it's going to hit white-collar workers everywhere. Like I posted a bit earlier, GPT-4 can ace the bar exam, and you can hook it up to other programs so it can perform accounting practices. We're quickly making most of the population's employment not worth the money. If it goes beyond creatives, it will start hitting workers who command a pretty strong voice in the economy.

Will lawyers or doctors have a big enough voice when their turn comes? Since it's so similar in my mind to what happened to assembly lines in the past, we know what NOT to do with the people affected by advancements in technology. A simple example is just watching Roger & Me. Did any societies treat their redundant workers better?

GPT-4 being able to pass the bar is more a function of how the test is written than of the ability of the AI. The bar is specifically designed to test applicants' memory in a stressful, long-form exam. A search algorithm with effectively perfect recall will do well at this.

Doing a lawyer's actual job, analyzing law and presenting facts in light of that analysis in a convincing, logical manner aligned with precedent, is far outside its scope. The same goes for doctors: being able to cross-reference WebMD really fast is no replacement for the people skills required to actually get an accurate description of symptoms from a patient, or to create a treatment plan across multiple conditions that balances quality of life, patient comfort, and effectiveness of treatment.

Hell, GitHub's implementation for writing code is going to go hilariously badly, because it is trivial to poison the data sources in ways that make the outputs useless, or to inject exploits such that the generated script recreates them.

Liquid Communism fucked around with this message at 05:24 on Mar 27, 2023

Charlz Guybon
Nov 16, 2010
Things seem to be going...poorly

https://twitter.com/ChrisMurphyCT/status/1640186536825061376

porfiria
Dec 10, 2008

by Modern Video Games

Rutibex?

BrainDance
May 8, 2007

Disco all night long!


There are a bunch of these emergent skills it taught itself.

And it's very cool. Yeah, some people are gonna blow themselves up following ChatGPT's guide to super meth. But then it'll also potentially teach itself biology and pharmaceutical chemistry and create a new cancer drug with synthesis instructions that might actually work.

archduke.iago
Mar 1, 2011

Nostalgia used to be so much better.

BrainDance posted:

But then it'll also potentially teach itself biology and pharmaceutical chemistry and create a new cancer drug with synthesis instructions that might actually work.

This isn't going to happen. The rate limiting steps in pharmaceutical research are all related to experimental validation of efficacy, synthesis, and safety, none of which can be inferred by memorizing and regurgitating textbooks, or even papers.

sinky
Feb 22, 2011



Slippery Tilde

archduke.iago posted:

This isn't going to happen. The rate limiting steps in pharmaceutical research are all related to experimental validation of efficacy, synthesis, and safety, none of which can be inferred by memorizing and regurgitating textbooks, or even papers.

Just ask the AI if the new drug is safe or not.

BrainDance
May 8, 2007

Disco all night long!

archduke.iago posted:

This isn't going to happen. The rate limiting steps in pharmaceutical research are all related to experimental validation of efficacy, synthesis, and safety, none of which can be inferred by memorizing and regurgitating textbooks, or even papers.

It's an example to illustrate a point, Christ.

And it's literally already happening
https://www.bcg.com/publications/2022/adopting-ai-in-pharmaceutical-discovery

https://www.technologyreview.com/2023/02/15/1067904/ai-automation-drug-development/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7577280/

I am regurgitating (and not very seriously) the thoughts of a childhood friend who does exactly that: a doctorate in pharmaceutical chemistry who looks for novel cancer drugs at UofM. I'm not literally saying ChatGPT will cure cancer, but that emergent properties of AIs can let them discover patterns in the information we give them that we otherwise wouldn't, for the creation of new things like potential medicines, where much of the discovery is open to new suggestions.

BrainDance fucked around with this message at 13:18 on Mar 27, 2023

Main Paineframe
Oct 27, 2010

Well, what exactly does he mean by that? I don't think ChatGPT grabbed a bunch of beakers and reagents and started mixing chemicals.

If he asked it a basic chem question and it responded, that doesn't mean it "knows chemistry", let alone that it "taught itself advanced chemistry". There are plenty of mentions of chemistry in its training set, I'm sure. As many of the replies point out, in fact:
https://twitter.com/NireBryce/status/1640259206392545282
https://twitter.com/KRHornberger/status/1640294884845158401

Without more details, I don't think I can take his extraordinary claim at face value. And in the first place, Chris Murphy is neither a chemistry expert nor a tech expert; he's a lawyer-turned-politician. This whole conversation - not only the starting remark, but how easily and uncritically posters believed it - is a great example of how much of the AI discourse is driven by blind hype.


archduke.iago
Mar 1, 2011

Nostalgia used to be so much better.

BrainDance posted:

It's an example to illustrate a point, Christ.

And it's literally already happening
https://www.bcg.com/publications/2022/adopting-ai-in-pharmaceutical-discovery

https://www.technologyreview.com/2023/02/15/1067904/ai-automation-drug-development/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7577280/

I am regurgitating (and not very seriously) the thoughts of a childhood friend who does exactly that: a doctorate in pharmaceutical chemistry who looks for novel cancer drugs at UofM. I'm not literally saying ChatGPT will cure cancer, but that emergent properties of AIs can let them discover patterns in the information we give them that we otherwise wouldn't, for the creation of new things like potential medicines, where much of the discovery is open to new suggestions.

Did you actually read the articles you posted? Or did you just Google "ai in drug discovery" and pick three hits from the first page? Nothing in them comes close to your example of an AI-proposed novel molecule with a synthesis to boot, nor does the description from your friend. We *already* have far more proposed drugs than we have the capacity to test and approve, which is a problem that "AI" does nothing to solve.

The constant breathless misrepresentation of the capabilities of AI systems does nothing but further the interests of the programmers who develop them. If the computers are scary, dangerous, and capable, the only ones who can rein them in are the caste of AI priests at the top.
