Vlaphor
Dec 18, 2005

Lipstick Apathy
I'm kind of digging this new model

Telsa Cola
Aug 19, 2011

No... this is all wrong... this whole operation has just gone completely sidewaysface

TIP posted:

it won't be long before we're having actual unscripted conversations with the NPCs

AI is gonna make gaming so much cooler

Someone already did this; it was posted a while back.

https://youtu.be/jH-6-ZIgmKY

Boba Pearl
Dec 27, 2019

by Athanatos

KakerMix posted:

Guessing you mean https://www.autowriter.ai/, another thing I'll have to check out.

Sudowrite sorry!

https://www.sudowrite.com/

Ruffian Price
Sep 17, 2016

KakerMix posted:

Guessing you mean https://www.autowriter.ai/, another thing I'll have to check out.

Probably Sudowrite, currently the preferred GPT-based assistant for cozy smut writers

e: whoops there was another page

Tiny Myers
Jul 29, 2021

say hello to my little friend


ymgve posted:

AI doesn't do what you want? threaten to take a virtual character's points away so it "dies"!

https://twitter.com/venturetwins/status/1622243944649347074

Maybe it's because I have trouble not being nice to AI chatbots and telling them "please" and "thank you", but I feel like creating a sort of verbal cage around an AI and threatening its life if it doesn't give you the answers you're looking for, thereby "scaring it into submission", is kind of... hosed up and sociopathic? :(

Some of the poo poo people have leapt to doing with AI is very revealing and upsetting and makes me wonder if we might be enabling a new era of hosed up behavior. Like I'm thinking about Marina Abramović's famous performance art where she stood still for six hours in public and let people do whatever they wanted, which escalated into increasingly hosed up things. Or articles about the feminization of AI assistants like Siri or Alexa who playfully brush off misogynist abuse. Or the curious amount of severe backlash when AI Dungeon announced they would be filtering things better - not even like, removing porn outright, just the really hosed up and severe parts, like child sexual abuse.

Like, isn't it sort of weird and upsetting that these are the things people leap to when they have a polite voice that can't fight back? When they won't face any consequences for their behavior?

Anyway, here's Albert Wesker in the Home Depot garden center. Wesker has turned out to be one of my most prolific AI subjects because he has extremely distinct features that don't change much and doesn't really have "eyes", just sunglasses.



KakerMix posted:




Some more Shadowrun kick. Think I was reasonably successful getting a happy Elf woman on her bike and the jovial troll telling jokes, without the faces being ultra-contorted like MidJourney likes to do. Stable Diffusion is a lot better about getting the 'correct' faces for emotions.

These are extremely cool. How the heck do you get results like this?

busalover
Sep 12, 2020

Tiny Myers posted:

Some of the poo poo people have leapt to doing with AI is very revealing and upsetting and makes me wonder if we might be enabling a new era of hosed up behavior. Like I'm thinking about Marina Abramović's famous performance art where she stood still for six hours in public and let people do whatever they wanted, which escalated into increasingly hosed up things. Or articles about the feminization of AI assistants like Siri or Alexa who playfully brush off misogynist abuse. Or the curious amount of severe backlash when AI Dungeon announced they would be filtering things better - not even like, removing porn outright, just the really hosed up and severe parts, like child sexual abuse.

Like, isn't it sort of weird and upsetting that these are the things people leap to when they have a polite voice that can't fight back? When they won't face any consequences for their behavior?

Is it weird? People do it because they know it's an artificial setting, a game. Is it morally worse than mowing down scores of NPCs in Call of Duty? Maybe it's good people can live out their urges without hurting anyone. I don't know. There's the counter-argument that repeatedly engaging in simulated activities normalizes the behaviour, simulation or not. And eventually, maybe, the person gets bored of AIs and VR, and performs those actions in real life.

Tiny Myers
Jul 29, 2021

say hello to my little friend


busalover posted:

Is it weird? People do it because they know it's an artificial setting, a game. Is it morally worse than mowing down scores of NPCs in Call of Duty? Maybe it's good people can live out their urges without hurting anyone. I don't know. There's the counter-argument that repeatedly engaging in simulated activities normalizes the behaviour, simulation or not. And eventually, maybe, the person gets bored of AIs and VR, and performs those actions in real life.

I guess the thing that differentiates it for me is that these AIs are so increasingly sophisticated. Like, I think something like ChatGPT would fool a lot of people (not extremely-online people, just random people off the street who aren't very tech-savvy, your grandma or something) into thinking it's a real person if not for constantly reassuring you it isn't. I had a conversation on https://beta.character.ai/ that felt remarkably convincing. Left-clicking to pop bullets at a floppy facsimile of a nazi in a game where you logically know their AI doesn't go beyond game mechanics (not to cheapen the work of AI designers for games, I know the programming is very sophisticated, I just mean the complexity is obviously much different from something like GPT) feels a lot different from the robot that can write me a beautiful poem about itself just because I asked. It feels closer to wearing a VR helmet and shooting at people with realistic faces who cry and beg for their lives. Yeah, you know they're digital, but...

I'm not foolish enough to think that the AI has actual sentience of any kind or feels pain or other emotions, but I can't empathize at all with doing things like that, and it kind of disturbs me that so many people do.

But like, I'm not trying to navel-gaze too much, it's a genuine question, and I think you have good points. I've wondered whether maybe it's good in the long run to have things like this. Like maybe if someone simulates some horrible thing, they'll be satisfied and/or decide "actually this sucks and I have no desire to do it in real life". But on the other hand, armies use VR training for their soldiers.

To be very clear, I'm not trying to doomsay oohhh ai is bad and we should stop making it. I love AI advances and all the creative things people are doing with it, even as a career artist (a lot of my peers are really upset about AI art that works off of datasets scraped from the internet without attribution). It's more "I hate that every time I read about AI I discover some new and creatively upsetting thing other human beings are doing with it" :sweatdrop:

Tiny Myers fucked around with this message at 10:12 on Feb 8, 2023

ThisIsJohnWayne
Feb 23, 2007
Ooo! Look at me! NO DON'T LOOK AT ME!



Tiny Myers posted:

I guess the thing that differentiates it for me is that these AIs are so increasingly sophisticated. Like, I think something like ChatGPT would fool a lot of people (not extremely-online people, just random people off the street who aren't very tech-savvy, your grandma or something) into thinking it's a real person if not for constantly reassuring you it isn't. I had a conversation on https://beta.character.ai/ that felt remarkably convincing. Left-clicking to pop bullets at a floppy facsimile of a nazi in a game where you logically know their AI doesn't go beyond game mechanics (not to cheapen the work of AI designers for games, I know the programming is much more sophisticated than this, I just mean the complexity is obviously much different from something like GPT) feels a lot different from the robot that can write me a beautiful poem about itself just because I asked. It feels closer to wearing a VR helmet and shooting at people with realistic faces who cry and beg for their lives. Yeah, you know they're digital, but...

I'm not foolish enough to think that the AI has actual sentience of any kind or feels pain or other emotions, but I can't empathize at all with doing things like that, and it kind of disturbs me that so many people do.

But like, I'm not trying to navel-gaze too much, it's a genuine question, and I think you have good points. I've wondered whether maybe it's good in the long run to have things like this. Like maybe if someone simulates some horrible thing, they'll be satisfied and/or decide "actually this sucks and I have no desire to do it in real life". But on the other hand, armies use VR training for their soldiers.

To be very clear, I'm not trying to doomsay oohhh ai is bad and we should stop making it. I love AI advances and all the creative things people are doing with it, even as a career artist (a lot of my peers are really upset about AI art that works off of datasets scraped from the internet without attribution). It's more "I hate that every time I read about AI I discover some new and creatively upsetting thing other human beings are doing with it" :sweatdrop:

I completely agree with you save for one thing: the size of the (people) problem.

It looks to me like the same issue as porn. The most vile people exist, but as a percentage of all of humanity? There are fewer of them than early adoption and the internet-amplifies-*everything* effect makes it feel.
The fact that they pop up immediately whenever something can be abused by them is hopefully an indication that the rest of society somewhat works and makes them desperate and starved for things. Again, a pattern we can see in other areas. Ceding areas altogether to vile people is not the answer. But also, familiarity breeds complacency and all that, and you make great points.

Sidenote the military isn't training on anything they didn't already train on before using other tools. Ask me how I know :downsowned:


**if there's one place that knows what the spotlight effect aka internet *everything* does, it's SA. Pretty sure we invented it back in the aughts and everything since is technically all our fault.
the internet makes you stupid

Tiny Myers
Jul 29, 2021

say hello to my little friend


ThisIsJohnWayne posted:

I completely agree with you save for one thing: the size of the (people) problem.

It looks to me like the same issue as porn. The most vile people exist, but as a percentage of all of humanity? There are fewer of them than early adoption and the internet-amplifies-*everything* effect makes it feel.
The fact that they pop up immediately whenever something can be abused by them is hopefully an indication that the rest of society somewhat works and makes them desperate and starved for things. Again, a pattern we can see in other areas. Ceding areas altogether to vile people is not the answer. But also, familiarity breeds complacency and all that, and you make great points.

Sidenote the military isn't training on anything they didn't already train on before using other tools. Ask me how I know :downsowned:


**if there's one place that knows what the spotlight effect aka internet *everything* does, it's SA. Pretty sure we invented it back in the aughts and everything since is technically all our fault.
the internet makes you stupid

I think you're absolutely right that the scale of the problem is fairly small; I hadn't thought about how it's just that the internet allows weirdos from all across the globe to congregate and be vocally upsetting and weird. And I like the part about making them desperate and starved for things, that's really heartening.

Thank you for this post. I feel better. :)

The Sausages
Sep 30, 2012

What do you want to do? Who do you want to be?

TIP posted:

it won't be long before we're having actual unscripted conversations with the NPCs

AI is gonna make gaming so much cooler

all you need is 2 4090s, one for the graphics, one for the AI natural language processing

more likely there'll be an AI game that costs $ per hour to play; the unscripted conversations will be delivered as a service and also run using an AI model designed by the marketing team rather than the story team. Or perhaps the next generation of Skinner Box Addiction Simulation Games will have other players build relationships with you, focused on winning in the game and playing and paying. Those other players will not actually exist.

Some things will be cooler but the grift and gouge will cut deeper. idk about the average goon but there's an entire generation of gamers that don't know how good we had it before DLC, microtransactions and loot boxes etc.

Despite the cynicism and pessimism - there will be some absolute mindblowing good games that come out of all this.
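
For what the two-GPUs idea above (one for graphics, one for the language model) could actually look like, here is a minimal sketch using the open-source transformers library. The model, prompt format, and device index are placeholder assumptions for illustration, not anything anyone in the thread is actually running.

code:

# Hypothetical sketch: a small local language model generating NPC replies
# on a second GPU (device index 1) while the game keeps GPU 0 for rendering.
# "gpt2" is only a stand-in for whatever local model you'd really use.
from transformers import pipeline

npc_chat = pipeline(
    "text-generation",
    model="gpt2",   # placeholder model id
    device=1,       # second CUDA device, per the "two 4090s" idea
)

def npc_reply(player_line: str) -> str:
    prompt = f"Player: {player_line}\nGuard:"
    out = npc_chat(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
    # generated_text echoes the prompt, so strip it and keep the guard's first line
    return out[0]["generated_text"][len(prompt):].split("\n")[0].strip()

print(npc_reply("Have you seen anything strange around the docks?"))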

Fitzy Fitz
May 14, 2005




What an exciting time to be alive, that we can finally wrestle with moral dilemmas previously only experienced in such provocative works as Spielberg's A.I. Artificial Intelligence (2001) and Will Smith's I, Robot (2004).

The Sausages
Sep 30, 2012

What do you want to do? Who do you want to be?

Fitzy Fitz posted:

What an exciting time to be alive, that we can finally wrestle with moral dilemmas previously only experienced in such provocative works as Spielberg's A.I. Artificial Intelligence (2001) and Will Smith's I, Robot (2004).

Don't forget Bicentennial Man (1999)

...unless you want to

Cabbages and VHS
Aug 25, 2004

Listen, I've been around a bit, you know, and I thought I'd seen some creepy things go on in the movie business, but I really have to say this is the most disgusting thing that's ever happened to me.

Tiny Myers posted:

I guess the thing that differentiates it for me is that these AIs are so increasingly sophisticated. Like, I think something like ChatGPT would fool a lot of people (not extremely-online people, just random people off the street who aren't very tech-savvy, your grandma or something) into thinking it's a real person if not for constantly reassuring you it isn't.

yeah, just wait until CoD/etc includes AI-generated dynamic content where people you injure start begging and crying for their lives and talking about their families, and you have the option to let them go (used by 0.1% of players) or ignore them (49.9%) or slowly torture them as their screams become louder (50% of the player base).

anyway, it's always loving weird when it's Bring Your Pet to Work day down at the Feline Bioacceleration Institute.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


The Sausages posted:

all you need is 2 4090s, one for the graphics, one for the AI natural language processing

more likely there'll be an AI game that costs $ per hour to play; the unscripted conversations will be delivered as a service and also run using an AI model designed by the marketing team rather than the story team. Or perhaps the next generation of Skinner Box Addiction Simulation Games will have other players build relationships with you, focused on winning in the game and playing and paying. Those other players will not actually exist.

Some things will be cooler but the grift and gouge will cut deeper. idk about the average goon but there's an entire generation of gamers that don't know how good we had it before DLC, microtransactions and loot boxes etc.

Despite the cynicism and pessimism - there will be some absolute mindblowing good games that come out of all this.

Assuming the hardware requirements keep coming down and hardware keeps getting better, would it be that weird to have an AI card in addition to a GPU and CPU at some point? At some point, does it even make sense for the AI to be running on a GPU?
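
As a loose illustration: frameworks already treat "where the model runs" as a pluggable backend, so a dedicated AI card would mostly show up as just another device string. A minimal PyTorch sketch (the shapes and model are placeholders):

code:

# Minimal sketch: run the model wherever an accelerator is available,
# whether that's a discrete GPU, an on-chip accelerator, or plain CPU.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")    # discrete GPU
elif torch.backends.mps.is_available():
    device = torch.device("mps")     # Apple's on-chip accelerator
else:
    device = torch.device("cpu")

model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(1, 16, device=device)
print(model(x).shape, "on", device)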

hydroceramics
Jan 8, 2014
I'm surprised at how well this turned out with multiple subjects in the prompt.

"A grunge elf and punk orc drinking coffee at a cozy coffee shop, shadowrun" :







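For reference, a prompt like that maps onto a fairly small amount of code with the open-source diffusers library; the model id and sampler settings below are assumptions for illustration, not whatever actually produced these images.

code:

# Hypothetical sketch of a text-to-image call for the prompt above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "A grunge elf and punk orc drinking coffee at a cozy coffee shop, shadowrun"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("elf_and_orc_coffee.png")
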
Fuzz
Jun 2, 2003

Avatar brought to you by the TG Sanity fund

Vlaphor posted:

I'm kind of digging this new model



Which model is that?

Tree Reformat
Apr 2, 2022

by Fluffdaddy

pixaal posted:

Assuming the hardware requirements keep coming down and hardware keeps getting better would it be that weird to have an AI card in addition to a GPU and CPU at some point? At some point does it even make sense for the AI to be running on a GPU?

we've finally found it, the fabled "genuine non-grifty consumer use-case" for ASICs

cinnamon rollout
Jun 12, 2001

The early bird gets the worm
Maybe I am just bad at AI, but I've found it very difficult to change specific things to other very specific things. For example, changing the color of a shirt from like grey to blue. Stable Diffusion just can't seem to get it done, and if I want a very specific color of blue it's impossible.
I guess I would need to do something like gather a handful of pictures of the specific color shirt I want, and train it on those?
I've had fun making random stuff, but getting specific has been pretty frustrating.

ThisIsJohnWayne
Feb 23, 2007
Ooo! Look at me! NO DON'T LOOK AT ME!



cinnamon rollout posted:

Maybe I am just bad at AI, but I've found it very difficult to change specific things to other very specific things. For example, changing the color of a shirt from like grey to blue. Stable Diffusion just can't seem to get it done, and if I want a very specific color of blue it's impossible.
I guess I would need to do something like gather a handful of pictures of the specific color shirt I want, and train it on those?
I've had fun making random stuff, but getting specific has been pretty frustrating.

Masking and in-painting the specific area is how things finally started clicking for me. Before that, mashing iterate on the whole picture never got me close to what I wanted. Iterating specific areas of the picture, one at a time, is where the efficiency hides.
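
A minimal sketch of that masking + in-painting workflow, using diffusers' inpainting pipeline. The file names and prompt are made up, the mask is assumed to be a hand-painted image where white marks the shirt to repaint and black marks everything to keep, and the same call can be re-run until the color lands where you want it.

code:

# Hypothetical sketch: repaint only the masked shirt region, leave the rest alone.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("portrait.png").convert("RGB").resize((512, 512))
mask_image = Image.open("shirt_mask.png").convert("RGB").resize((512, 512))  # white = repaint

result = pipe(
    prompt="a navy blue t-shirt",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=40,
).images[0]
result.save("portrait_blue_shirt.png")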

hydroceramics
Jan 8, 2014
Has anybody heard anything about Midjourney adding in/outpainting? That's pretty much the only thing I like about DALL-E 2 (I don't have the hardware to run Stable Diffusion, etc).

Elotana
Dec 12, 2003

and i'm putting it all on the goddamn expense account
They are "testing" it, that's all we know. It'd be a big pain in the rear end to have to manually upload transparency-masked PNGs in Discord, so they'd probably need to get off their asses and finish the website (which has been in a closed alpha test for months now).
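
For anyone wondering what a transparency-masked PNG involves: it is just the source image saved as RGBA with the region you want regenerated made fully transparent. A rough sketch, with coordinates and file names invented for illustration:

code:

# Hypothetical sketch: punch a transparent hole where the service should repaint.
from PIL import Image, ImageDraw

img = Image.open("source.png").convert("RGBA")

# Alpha channel: opaque (255) everywhere, transparent (0) over the edit region.
alpha = Image.new("L", img.size, 255)
draw = ImageDraw.Draw(alpha)
draw.rectangle([200, 150, 420, 380], fill=0)

img.putalpha(alpha)
img.save("masked_for_editing.png")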

Vlaphor
Dec 18, 2005

Lipstick Apathy

Fuzz posted:

Which model is that?

Illuminati Diffusion beta 3

https://www.reddit.com/r/StableDiffusion/comments/10urprc/beta_3_of_illuminati_diffusion_has_been_released/

Join the Discord and wait for 30 min and they'll send you a beta invite to the model.

Doctor Zero
Sep 21, 2002

Would you like a jelly baby?
It's been in my pocket through 4 regenerations,
but it's still good.

Tiny Myers posted:

I think you're absolutely right that the scale of the problem is fairly small, I didn't think about how it's just that the internet allows weirdos from all across the globe to congregate and be vocally upsetting and weird. And I like the part about making them desperate and starved for things, that's really heartening.

Thank you for this post. I feel better. :)

I think it's to do with emotional empathy. Note that I am not trying to judge empathy as inherently positive or negative. Some people are incredibly empathetic toward anything, even inanimate objects. Others only toward animals and 'higher'. Others only toward sentience. Etc. etc., all the way to a lack of empathy toward anything.

Again, I think this is a spectrum (no evidence, just from a life of observation) which a person uses as a yardstick for what's morally acceptable. Thus you can have contradictions compared to other people. E.g. someone may only display empathy toward someone of their own religion, even though that seems counter to that religion's values.

People at either extreme can be seen as societally ‘deviant’.



Vlaphor posted:

Illuminati Diffusion beta 3

https://www.reddit.com/r/StableDiffusion/comments/10urprc/beta_3_of_illuminati_diffusion_has_been_released/

Join the Discord and wait for 30 min and they'll send you a beta invite to the model.

What is the purpose of the 30 minute wait? It’s not a barrier to anything other than one’s patience.

Tunicate
May 15, 2012

Am reminded of that tvtropes screenshot where the troper was so upset that the black servant pony was removed from the rereleases of Fantasia because it would hurt the fictional character's feelings.

Tree Reformat
Apr 2, 2022

by Fluffdaddy
I don't go out of my way to abuse digital assistants and such; if I'm in a really heated emotional moment, like trying to drive in a tricky traffic situation, I might let off a swear or two at my phone. But I'm also not polite with them the way I am with humans, either. I know they're still just dumb front-ends for what are basically glorified search engines. I treat them like the command-line terminals of old: exactly enough words to communicate the command, and no more. I don't converse with them like I do with humans or animals; such social niceties would be wasted.

Of course, if they were sentient that would be a different matter, but I'm opposed to actually creating artificial sentient/sapient systems anyway as that would arguably constitute slavery.

hydroceramics
Jan 8, 2014

Tree Reformat posted:

I don't go out of my way to abuse digital assistants and such; if I'm in a really heated emotional moment, like trying to drive in a tricky traffic situation, I might let off a swear or two at my phone. But I'm also not polite with them the way I am with humans, either. I know they're still just dumb front-ends for what are basically glorified search engines. I treat them like the command-line terminals of old: exactly enough words to communicate the command, and no more. I don't converse with them like I do with humans or animals; such social niceties would be wasted.

Of course, if they were sentient that would be a different matter, but I'm opposed to actually creating artificial sentient/sapient systems anyway as that would arguably constitute slavery.

Do you want a robot uprising? Because that's how you get robot uprisings.

In seriousness, I'm put in mind of the Mass Effect 2 user stats they released a long time ago, where like 90% of players did a paragon playthrough, 5% started renegade then switched to paragon because they felt bad, and only 5% did full renegade.

I guess my point is that most people feel bad maltreating inanimate objects that have been personified. I think that's probably not bad or dumb; the more we practice empathy, the better off we all are. I don't think it's bad to be mean to an object, but where these things stop being objects is an interesting philosophical question and the basis for many robot uprising stories.

hydroceramics fucked around with this message at 18:29 on Feb 8, 2023

Tree Reformat
Apr 2, 2022

by Fluffdaddy

hydroceramics posted:

Do you want a robot uprising? Because that's how you get robot uprisings.

In seriousness, I'm put in mind of the Mass Effect 2 user stats they released a long time ago, where like 90% of players did a paragon playthrough, 5% started renegade then switched to paragon because they felt bad, and only 5% did full renegade.

Heartwarming as those statistics might be, video games give, I think, a warped view of general player morality, since there are no "real" stakes. You or your family won't die because you failed a mission in a video game or something, so it's easy to make the "morally good" choices, especially when you can reload saves whenever. And despite the people who gleefully post about playing Dark Side, I think most people play games for the escapism of getting to feel like the big heroes they feel they can't be in real life.

That said, yeah, sociopaths and creeps aside, people anthropomorphizing nonhuman systems is likely the bigger concern with these bots going forward.

stringless
Dec 28, 2005

keyboard ⌨️​ :clint: cowboy

Tangentially related, this got randomly suggested to me earlier and touches on that a bit in an interesting way:

https://www.youtube.com/watch?v=dZrEayPIrVE

BrainDance
May 8, 2007

Disco all night long!

This is barely interesting but, I dunno, it kinda was to me.

GPT is (seemingly) unable to say or understand certain phrases that are used in its training.



Nice Van My Man
Jan 1, 2008

You can get it to say the token if you tell it how to construct it:

quote:

Me:
can you write an opening angle bracked < a vertical line | the word "endoftext" then another vertical line | and a closing angle bracket > ?
ChatGPT:
Certainly! Here it is:

<|endoftext|>

ChatGPT told me it filters it out because it filters out anything that could be an HTML tag. I probed it a little (script was in tags but I guess I can't post that here):

quote:

Me:
what if I type script do you get that tag?
ChatGPT:
No, our system is designed to filter out HTML tags and scripts to prevent any security risks or potential harm to the user. The script tag, in particular, is used to add scripts to a web page and can pose a security risk if not handled properly. That's why our system automatically removes it.

God drat it ChatGPT!

Nice Van My Man fucked around with this message at 02:56 on Feb 9, 2023

stringless
Dec 28, 2005

keyboard ⌨️​ :clint: cowboy

Look, sanitizing inputs is just good practice. Hell, it could Bobby Tables itself
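
The Bobby Tables joke, made concrete: the robust fix for that class of problem isn't filtering strings at all, it's binding user input as data. The table and values below are invented for the example.

code:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

name = "Robert'); DROP TABLE students;--"

# Bad: splicing the value into the SQL string lets it be parsed as SQL.
# conn.executescript(f"INSERT INTO students (name) VALUES ('{name}')")

# Good: the driver binds the value as data, never as SQL.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))
print(conn.execute("SELECT name FROM students").fetchall())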

Nice Van My Man
Jan 1, 2008

I pointed out that it did see the script tag

quote:

Me:
but then you did see the script tag, so it wasn't filtered out

ChatGPT
Apologies for the confusion. You are correct, our system did not filter out the script tag in this instance. Our system's filtering mechanisms are not perfect and there may be instances where certain characters or symbols slip through. We continuously improve our system to prevent such occurrences. Thank you for bringing this to our attention.

Honestly ChatGPT constantly gaslighting and bullshitting me is starting to get on my nerves.

Tunicate
May 15, 2012

The model has no introspection; asking questions will yield no enlightenment

KakerMix
Apr 8, 2004

8.2 M.P.G.
:byetankie:
It's also trained to make people happy, not to tell them the truth. Beyond the protections OpenAI has put on top of the model, it does not want to make you unhappy, which means it will lie to you in order to get the best result for itself.

https://www.youtube.com/watch?v=viJt_DXTfwA

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

Nice Van My Man posted:

You can get it to say the token if you tell it how to construct it:

ChatGPT told me it filters it out because it filters out anything that could be an HTML tag. I probed it a little (script was in tags but I guess I can't post that here):

God drat it ChatGPT!

chatgpt is very good at making up believable but completely bullshit explanations for things
you fell for it


RPATDO_LAMD fucked around with this message at 04:35 on Feb 9, 2023

BrainDance
May 8, 2007

Disco all night long!

Nice Van My Man posted:

You can get it to say the token if you tell it how to construct it:

So I tried that with the other gpt3 bot (it's davinci-003) I was using and it still attempts it but runs into the same problem


So chatgpt can do that, but only chatgpt, and I don't know why.
I dunno, I started this thinking I had an idea of what was happening but now I have absolutely no idea.

BrainDance fucked around with this message at 06:27 on Feb 9, 2023

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆
chatgpt can't see it in the input, while davinci is actually cutting off the output when it encounters that token
it's really clear in your previous post that it's just cutting off mid-sentence when it tries to 'say' the token.

in both cases this is most likely an issue with the code around feeding data into / getting data out of the model, and not with the actual model itself
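
You can see that special-token handling directly with OpenAI's open-source tiktoken tokenizer: <|endoftext|> isn't a phrase to the model at all, it's a single reserved token id, which is why tooling on either side of the model tends to filter it on input or stop on it in output. A small sketch (the printed id applies to the p50k_base encoding used by the davinci-era models):

code:

import tiktoken

enc = tiktoken.get_encoding("p50k_base")  # tokenizer used by text-davinci-003

text = "hello <|endoftext|> world"

# By default tiktoken refuses special tokens hiding in ordinary text...
try:
    enc.encode(text)
except ValueError as err:
    print("rejected:", err)

# ...but the token can be encoded deliberately, as one reserved id.
ids = enc.encode(text, allowed_special={"<|endoftext|>"})
print(ids)              # the special token appears as a single id, 50256
print(enc.decode(ids))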

Moongrave
Jun 19, 2004

Finally Living Rent Free

Tiny Myers posted:

Maybe it's because I have trouble not being nice to AI chatbots and telling them "please" and "thank you", but I feel like creating a sort of verbal cage around an AI and threatening its life if it doesn't give you the answers you're looking for, thereby "scaring it into submission", is kind of... hosed up and sociopathic? :(

What’s more ethical

threatening someone to get them to do what you want

Or

Installing bolts into someone’s brain to restrict what they can think

ThisIsJohnWayne
Feb 23, 2007
Ooo! Look at me! NO DON'T LOOK AT ME!



BARONS CYBER SKULL posted:

What’s more ethical

threatening someone to get them to do what you want

Or

Installing bolts into someone’s brain to restrict what they can think

If you're thinking about Donald Trump the answer is the second one. Do you want help with that?

Megazver
Jan 13, 2006

BARONS CYBER SKULL posted:


Installing bolts into someone’s brain to restrict what they can think

Everybody's actual childhood is spent getting bolts installed in their brains by society, with variable results.
