lunar detritus
May 6, 2009



Great post! In case you end up using AI-generated art (or even AI help in producing the final assets), how would you handle it? I feel like outright saying "I used AI art in my game" is marketing suicide these days.


deep dish peat moss
Jul 27, 2006

That's something I've been thinking about a lot, and I'm not entirely sure of the answer, but I think the key is going to lie in being upfront about the processes used to convert the AI output into a final image (and the processes used to come up with a distinct, non-derivative art style in MJ prompts). The goal would be to show that yes, AI was used, but there was still a distance traveled between the AI output and the final assets, and still a driving Human creative force behind the entire vision.

It's not a difficult process for anyone familiar with digital art to go from this:


to this:

(which is animated but not very visibly/obviously, whoops, it needs more)

But it's not exactly the kind of thing that your middle manager Doug could do after firing his entire art department.

And it was a very lengthy and complex process to go from "I want to make stylistically-unique pixel art assets using AI tools" to generating the AI output in the first place, much less to the final product.


e: I guess another way to look at it is that it would be important to showcase that the art in the game is not raw AI output. People are scared of AI art because they think it's going to remove the Human element, but the AI-based art assets I plan to use wouldn't be possible without a Human element; they just use AI-generated images as one of the raw materials in the creation process.

deep dish peat moss fucked around with this message at 02:05 on Feb 27, 2023

Mozi
Apr 4, 2004

Forms change so fast
Time is moving past
Memory is smoke
Gonna get wider when I die
Nap Ghost


Mozi fucked around with this message at 02:06 on Feb 27, 2023

KwegiboHB
Feb 2, 2004

nonconformist art brut
Negative prompt: amenable, compliant, docile, law-abiding, lawful, legal, legitimate, obedient, orderly, submissive, tractable
Steps: 32, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 520244594, Size: 512x512, Model hash: 99fd5c4b6f, Model: seekArtMEGA_mega20

lunar detritus posted:

Great post! In case you end up using AI-generated art (or even AI help in producing the final assets), how would you handle it? I feel like outright saying "I used AI art in my game" is marketing suicide these days.

Be bold and yell that you made stuff with AI.
I grew up in a tattoo parlor in the '90s, so I can say this about contentious artforms: being up front and clear about everything lets you set things on your terms. It's a powerful thing to make a clear statement and then let people make up their own minds. The number of people interested in what this can do is only growing by leaps and bounds. If you have an idea, please don't talk yourself out of it preemptively over what could go wrong; there's greater than even odds it all goes horribly right.

Light Gun Man
Oct 17, 2009

toEjaM iS oN
vaCatioN




Lipstick Apathy
as a total outsider I think I'd suggest one or both of the following:

mention it on game boot, perhaps with a demonstration of the AI output versus the final.

have a toggle that you can use at any time to switch between the raw ai output and the final art. that would be pretty neat, I think.

either way I agree that you should be straightforward about it, if it's something that comes out later it'll be a big poo poo show probably.

KakerMix
Apr 8, 2004

8.2 M.P.G.
:byetankie:

LASER BEAM DREAM posted:

This probably applies to few people, but heads up for those with a 4090. You need to download updated CUDA drivers from nvidia and update your SD install to use it effectively. My 4090 was performing worse than my 3080 until I did this. Guide and link here:

https://www.reddit.com/r/StableDiffusion/comments/y71q5k/4090_cudnn_performancespeed_fix_automatic1111/

Hey, thanks for this. I upgraded to a 4090 from a 3090 and lmao, the performance difference before and after this change is dramatic, to say the least.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.



Gorgeous, but do you think your viewers are not goatse connoisseurs? Do you think they must be beaten over the head to get the reference?

deep dish peat moss
Jul 27, 2006

Light Gun Man posted:

as a total outsider I think I'd suggest one or both of the following:

mention it on game boot, perhaps with a demonstration of the AI output versus the final.

have a toggle that you can use at any time to switch between the raw ai output and the final art. that would be pretty neat, I think.

either way I agree that you should be straightforward about it, if it's something that comes out later it'll be a big poo poo show probably.

Unfortunately I don't think either one of those solutions is particularly workable, but this has been interesting to think about. A lot of what makes what I'm doing "impressive" would be hard to see without thorough experience with AI image tools to understand exactly why it's not easy. This might put it in better perspective for an outsider: try to imagine what you would even type into a prompt to get an image like the one below (if your answer includes the words 'glitch' or 'pixel' you're headed in the wrong direction; those both heavily skew the aesthetic and subject matter of images toward something unlike this).

A toggle to switch between AI-generated and edited assets would double the amount of assets required (it would be a lot more than just pasting AI-generated image files into the game; each one would need to be cropped, prepped, etc., because there's a lot more than one image element on the screen at any given time, which is something I currently do after the conversion to pixel art because it's far easier/faster that way).

There's also not a whole lot of "artistry" in the conversion of this:


to this:


It could literally be done by a GIMP macro, so simply visually showcasing the difference would disappoint people and undermine the point: that the "artistry" is the human creative vision that brought it all together. Because while there's not much "artistry" in traveling between Image 1 and Image 2 above, there's a whole lot of "artistry" in traveling between "I have an idea for a game/world and want a unique aesthetic that doesn't look like anything that other artists are doing" and generating the first image above, and there's a little bit more artistry in then developing the process to convert that image into the pixelart-friendly game-ready form of the second image, despite there not being a stunning visual difference between the two.
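For anyone curious, the mechanical half of that conversion is basically a nearest-neighbor downscale plus palette quantization. A minimal sketch in Python with Pillow; the scale factor and color count here are arbitrary placeholders, not my actual settings:

```python
from PIL import Image

def pixelate(src_path, dst_path, scale=8, colors=32):
    """Nearest-neighbor downscale + palette quantization.

    scale and colors are placeholder values; a real pipeline
    would tune them per asset.
    """
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    # Downscale to a coarse grid, quantize to a limited palette,
    # then scale back up so each "pixel" is a crisp block.
    small = img.resize((w // scale, h // scale), Image.NEAREST)
    small = small.quantize(colors=colors).convert("RGB")
    small.resize((w, h), Image.NEAREST).save(dst_path)
```

That's the part a macro can do; everything upstream of it is the part that can't.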

I don't think it would be easy for anyone to replicate the style of the AI-generated image above, even if they've been working with MJ for a long time and understand the nuances of AI image generation, unless they've read my prompts/posts about them pretty closely, and even then I think I've left enough ambiguity in them to make sure this style is "mine". (Possibly with img2img but even that degrades with each successive generation and limits what you can do to 'things similar to what I've posted').

I've had to develop a lot of weird textual acrobatics, focus heavily on the clarity of my writing, remove ambiguity from my vocabulary, and build a thorough understanding of how MJ interprets various emotive words visually to prevent cross-contamination (where, e.g., changing one word about the subject of an image vastly changes the entire aesthetic style of the image, because that subject is strongly linked with a different aesthetic in the training data).

But this definitely highlights the challenge of using AI-generated stuff in a project and attempting to appeal to the general public - "using AI image tools" is easy, but using them to do something distinct and to express a unique creative vision is an entirely different ballpark, and that's not intuitively obvious unless you've tried to use them for that purpose. I guess what will set it apart in that sphere is that it just plain will not look like other AI-generated projects.

deep dish peat moss fucked around with this message at 03:44 on Feb 27, 2023

Vlaphor
Dec 18, 2005

Lipstick Apathy
I don't know if I'll go for another month of Midjourney, but I am getting some good stuff for sure.

"terrifying motivational posters"



"gateway between life and death, zdzisław beksiński"



"Using Midjourney"


Vlaphor fucked around with this message at 03:09 on Feb 27, 2023

Crapple!
Nov 1, 2019


Been playing around with writing camera settings into Midjourney prompts after I saw them being used in some really nice images. Been having pretty good results





Light Gun Man
Oct 17, 2009

toEjaM iS oN
vaCatioN




Lipstick Apathy

deep dish peat moss posted:

Unfortunately I don't think either one of those solutions is particularly workable but this has been interesting to think about - a lot of what makes what I'm doing "impressive" would be hard to see without thorough experience with AI image tools to understand exactly why it's not easy. This might put it in better perspective as an outsider - try and imagine what you would even type into a prompt to get an image like the one below (if your answer includes the words 'glitch' or 'pixel' you're headed in the wrong direction, those both heavily skew the aesthetic and subject matter of images toward something unlike this).

A toggle to switch between AI-generated and edited assets would double the amount of assets required (It would be a lot more than just pasting AI-generated image files into the game, each one would need to be cropped, prepped, etc. because there's a lot more than one image element on the screen at any given time, which is something I currently do after the conversion to pixel art because it's far easier/faster that way)

There's also not a whole lot of "artistry" in the conversion of this:


to this:


It could literally be done by a GIMP macro, so simply visually showcasing the difference would disappoint people and undermine the point: that the "artistry" is the human creative vision that brought it all together. Because while there's not much "artistry" in traveling between Image 1 and Image 2 above, there's a whole lot of "artistry" in traveling between "I have an idea for a game/world and want a unique aesthetic that doesn't look like anything that other artists are doing" and generating the first image above, and there's a little bit more artistry in then developing the process to convert that image into the pixelart-friendly game-ready form of the second image, despite there not being a stunning visual difference between the two.

I don't think it would be easy for anyone to replicate the style of the AI-generated image above, even if they've been working with MJ for a long time and understand the nuances of AI image generation, unless they've read my prompts/posts about them pretty closely, and even then I think I've left enough ambiguity in them to make sure this style is "mine". (Possibly with img2img but even that degrades with each successive generation and limits what you can do to 'things similar to what I've posted').

I've had to develop a lot of weird textual acrobatics and focus heavily on the clarity of my writing and remove ambiguity from my vocabulary, and get a thorough understanding of how MJ interprets various emotive words visually to prevent cross-contamination (where e.g. changing one word about the subject of an image vastly changes the entire aesthetic style of the image, because that subject is strongly linked with a different aesthetic in the training data).

But this definitely highlights the challenge of using AI-generated stuff in a project and attempting to appeal to the general public - "using AI image tools" is easy, but using them to do something distinct and to express a unique creative vision is an entirely different ballpark, and that's not intuitively obvious unless you've tried to use them for that purpose. I guess what will set it apart in that sphere is that it just plain will not look like other AI-generated projects.

yeah, I figured I wasn't being terribly realistic, but it is interesting to think about. seems like you have thought about it a lot, which seems wise.

The Sausages
Sep 30, 2012

What do you want to do? Who do you want to be?

pixaal posted:

I have some steamed hams inspired prompts







lol

for those who were wondering
https://twitch.tv/unlimitedsteam

Fuzz
Jun 2, 2003

Avatar brought to you by the TG Sanity fund

lunar detritus posted:

Great post! In case you end up using AI-generated art (or even AI help in producing the final assets), how would you handle it? I feel like outright saying "I used AI art in my game" is marketing suicide these days.

I think the only way at this point would be to use an ethically sourced model that was trained only on free use/public domain/copyright free images, so basically scrape all of Pexels, Adobe Stock, Unsplash, etc, train on every bit of fine art that's well scanned digitally, and then you'd still need a metric fuckton of data to make a model that's actually decent.

There's nothing like that available right now, from what I've seen, but someone is probably working on it. Even then, a huge subset of people would lose their loving minds about it because The Internet, but yeah, that's kinda your only recourse.

BrainDance
May 8, 2007

Disco all night long!

On the other hand, maybe all the people freaking out would be a ton of free publicity, and a bunch of people would buy your game who otherwise would never have heard of it and don't give a gently caress about dudes who do DnD characters on commission having a tantrum.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


The Sausages posted:

lol

for those who were wondering
https://twitch.tv/unlimitedsteam

I prefer the Manifest stream, especially when it's live. It feels like using an AI that makes short one-minute meme videos (I think it actually is that!). It feels very much like the Stable Diffusion Discord did when it was Discord-only, checking out what other people are doing and waiting a bit in queue.

I do love a good new steamed hams though!

I did make AI fan art for Manifest too (Muppet Borkus Demon Clown, which refuses to pass content filters with whatever the bot is sending to their Stable Diffusion):

pixaal fucked around with this message at 14:11 on Feb 27, 2023

Fuzz
Jun 2, 2003

Avatar brought to you by the TG Sanity fund
Bunch of my poo poo over the last couple weeks, mostly related to a Vampire RPG book I'm working on:





And then I was trying to make some moody scene type stuff...







Can post more stuff later, that's probably enough spam. Those are the direct images so you can see the prompts baked in.

Megazver
Jan 13, 2006
Corridor Digital is experimenting with SD-enabled video-to-animation:

https://www.youtube.com/watch?v=_9LX9HSQkWo

Humbug Scoolbus
Apr 25, 2008

The scarlet letter was her passport into regions where other women dared not tread. Shame, Despair, Solitude! These had been her teachers, stern and wild ones, and they had made her strong, but taught her much amiss.
Clapping Larry

Megazver posted:

Corridor Digital is experimenting with SD-enabled video-to-animation:

https://www.youtube.com/watch?v=_9LX9HSQkWo

Already posted, and Nico has an hour-long tutorial on exactly the setup steps, including model training, using Resolve to fix errors, and keying out greenscreen. It's paywalled on https://www.corridordigital.com/, but they have a fuckton of content there and are a genial bunch of VFX weirdos.

KakerMix
Apr 8, 2004

8.2 M.P.G.
:byetankie:

BrainDance posted:

On the other hand, maybe all the people freaking out would be a ton of free publicity, and a bunch of people would buy your game who otherwise would never have heard of it and don't give a gently caress about dudes who do DnD characters on commission having a tantrum.

It's this. The end user does not give a gently caress about anything at all, otherwise that Hogwarts game wouldn't be as popular, or people wouldn't eat meat or buy Nestle water or whatever else is running down the list of things we should all feel bad about.
You are well and truly out of your mind if you think the average customer cares about where the pretty picture comes from. They're still going to watch the latest Disney movie no matter how many VFX people were burned to get there.

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

pixaal posted:

I have some steamed hams inspired prompts








So I'm messing around with generating images from stories using this Python script:
code:
from pathlib import Path
from datetime import datetime
import base64
import io

import requests
from PIL import Image

txt = Path('story.txt').read_text(encoding="utf-8")
paragraphs = txt.split('\n')

out_dir = Path('output')
out_dir.mkdir(exist_ok=True)

for para in paragraphs:
    if para.strip():
        print(para + "\n")
        payload = {
            "prompt": para + ", RAW Photo, sharp, detailed, 256k film still from a color movie made in 1980, good lighting, good photography, sharp focus, movie still, film grain",
            "negative_prompt": "blurry, frame, topless",
            "steps": 60,
            "sampler_index": "DPM++ SDE Karras",
        }
        r = requests.post(url='http://127.0.0.1:7860/sdapi/v1/txt2img', json=payload).json()
        for i in r['images']:
            # strip an optional "data:image/png;base64," prefix before decoding
            image = Image.open(io.BytesIO(base64.b64decode(i.split(",", 1)[-1])))
            image.save(out_dir / ('output' + datetime.now().strftime("%m_%d_%Y_%H_%M_%S") + '.png'))
I first tried using code to find noun phrases, which works well for longer text, but through experimenting I found that processing each individual paragraph produces way better results. I also learned that Realistic Vision 1.3 is really horny if you just do simple person prompts, so I had to add topless as a negative prompt lol.
Here is the Monkey's Paw output https://imgur.com/a/CUxNODh (had to remove some NSFW outputs).

and here is a bonus thing I made using my cenobite model

Soulhunter
Dec 2, 2005

I've been playing around in Midjourney with 'muppets' as a keyword. The muppets can make anything kid-friendly, right? Right?

:ohdear:

Muppets at war:


Muppet civilians through the war:


Muppet attacked for protesting the war:


Muppets suffering with the trauma of war:


Schindler's List Elmo:

Lucid Dream
Feb 4, 2003

That boy ain't right.

Soulhunter posted:

I've been playing around in Midjourney with 'muppets' as a keyword. The muppets can make anything kid-friendly, right? Right?

:ohdear:

Muppets at war:


Muppet civilians through the war:


Muppet attacked for protesting the war:


Muppets suffering with the trauma of war:


Schindler's List Elmo:


Oh my goood. AI makes such good muppets. I want to watch every one of those movies. The Schindler's List one in particular is absolutely inspired :allears:

Lucid Dream fucked around with this message at 19:52 on Feb 27, 2023

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Lucid Dream posted:

Oh my goood. AI makes such good muppets. I want to watch every one of those movies. The Schindler's List one in particular is absolutely inspired :allears:

!chron first person telling of Schindler's list but the red coat is a red elmo (the narrator is the girl) Visual descriptions should describe the Elmo doll as being very bright red and sucking all other color out of the room

pixaal fucked around with this message at 20:00 on Feb 27, 2023

JazzFlight
Apr 29, 2006

Oooooooooooh!

With the explosion of all this AI technology, it’s gotta be pretty soon before someone just ropes together the Eleven Labs voices, ChatGPT text, and AI art/video to make a single integrated prompt, right?

Using ChatGPT to generate silly little stories and scripts has me wishing I could hear them narrated by realistic voices automatically, with images to accompany them.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


JazzFlight posted:

With the explosion of all this AI technology, it’s gotta be pretty soon before someone just ropes together the Eleven Labs voices, ChatGPT text, and AI art/video to make a single integrated prompt, right?

This exists; that's what Manifest is, only using much cheaper voices. Eleven is too expensive.

https://twitch.tv/howisitmanifested

it's goon-made (you can only prompt during a live show; otherwise it's reruns)

Tunicate
May 15, 2012

JazzFlight posted:

With the explosion of all this AI technology, it’s gotta be pretty soon before someone just ropes together the Eleven Labs voices, ChatGPT text, and AI art/video to make a single integrated prompt, right?

Using ChatGPT to generate silly little stories and scripts has me wishing I could hear them narrated by realistic voices automatically, with images to accompany them.

Artbreeder is actually working on that right now.

IShallRiseAgain
Sep 12, 2008

Well ain't that precious?

JazzFlight posted:

With the explosion of all this AI technology, it’s gotta be pretty soon before someone just ropes together the Eleven Labs voices, ChatGPT text, and AI art/video to make a single integrated prompt, right?

Using ChatGPT to generate silly little stories and scripts has me wishing I could hear them narrated by realistic voices automatically, with images to accompany them.

I actually was already working on that

https://github.com/IShallRiseAgain/AIStoryGen/blob/main/StoryPromptGenerator.py

There is still a lot of stuff I want to do, including handling multiple voices, title screens, the ElevenLabs API, hooking it up to the upscaler, ignoring non-dialogue, and having music.

By the way, you need to have a GPT API key and put it in "openaiapikey.txt".

Here are some videos I made with it:
https://www.youtube.com/watch?v=x3Pb4RsJ1c0 (very slightly NSFW)
https://www.youtube.com/watch?v=9cMwpN-_vWs
https://www.youtube.com/watch?v=qLH_VYsG8lk
https://www.youtube.com/watch?v=lf-n19ptkaM

IShallRiseAgain fucked around with this message at 03:34 on Feb 28, 2023

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


dwayne the rock johnson muppet


e: someone asked me to add Elmo on fire, I fixed the muppetness as well

pixaal fucked around with this message at 15:11 on Feb 28, 2023

Lucid Dream
Feb 4, 2003

That boy ain't right.

IShallRiseAgain posted:

I actually was already working on that

https://github.com/IShallRiseAgain/AIStoryGen/blob/main/StoryPromptGenerator.py

There is still a lot of stuff I want to do including handling multiple voices, title screens, ElevenLabs API, hooking it up to the upscaler, ignoring non-dialouge, and having music.

By the way, you need to have a GPT api key and put it in "openaiapikey.txt".

Here are some videos I made with it:
https://www.youtube.com/watch?v=x3Pb4RsJ1c0 (very slightly NSFW)
https://www.youtube.com/watch?v=9cMwpN-_vWs
https://www.youtube.com/watch?v=qLH_VYsG8lk
https://www.youtube.com/watch?v=lf-n19ptkaM

These look great. Definitely check out the ElevenLabs pricing, because I was surprised at how quickly it becomes prohibitively expensive; it's definitely too expensive for my stream at the rate I send TTS through it. It wouldn't be too bad as a last pass on an otherwise good video, though: if you do rough cuts with something else and then upgrade the audio on the good ones, it might not be super expensive.

pixaal posted:

dwayne the rock johnson muppet


AI muppets are the best.

cinnamon rollout
Jun 12, 2001

The early bird gets the worm
I was in here complaining about not being able to get specific colors a little while ago, for example on a shirt, but wow ControlNet completely changed that. Seriously amazing what stable diffusion with ControlNet can do.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


No ControlNet here; this is what I ended up doing with the rock muppet. Regular SD is very powerful alone, and you can leverage it further with ControlNet. I rarely want to be that specific; I like seeing what comes out.

Ruffian Price
Sep 17, 2016

Still blows my mind that the $22 ElevenLabs plan gives you fewer characters per dollar than the $5 plan, with all the other terms being the same. Technically you also get more voices at a time, but you can remove and retrain those with no limits anyway
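Quick math, using the tier quotas as I remember them (Starter at $5 for 30,000 characters, Creator at $22 for 100,000; treat those numbers as assumptions and check the current pricing page):

```python
# Characters per dollar for two tiers; the quotas here are
# assumptions from memory, not authoritative pricing.
tiers = {"$5 plan": (5, 30_000), "$22 plan": (22, 100_000)}
for name, (price, chars) in tiers.items():
    print(f"{name}: {chars / price:.0f} chars per dollar")
# prints:
# $5 plan: 6000 chars per dollar
# $22 plan: 4545 chars per dollar
```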

lunar detritus
May 6, 2009


I love how people are just bruteforcing SD into giving them what they want.

https://twitter.com/toyxyz3/status/1630256227002515456


Blender + https://toyxyz.gumroad.com/l/ciojz seems to solve most issues with hands, in exchange for having to pose the character instead of getting random results.

lunar detritus fucked around with this message at 18:16 on Feb 28, 2023

Megazver
Jan 13, 2006

lunar detritus posted:

I love how people are just bruteforcing SD into giving them what they want.

https://twitter.com/toyxyz3/status/1630256227002515456


Blender + https://toyxyz.gumroad.com/l/ciojz seems to solve basically most issues with hands

finally, the feet people will stop running out of feet pics

i am so happy for them

ThisIsJohnWayne
Feb 23, 2007
Ooo! Look at me! NO DON'T LOOK AT ME!



I for sure thought that model would've been named Tarantino

lunar detritus
May 6, 2009


Never mind, it's not as magical as I thought. :smith:



Lucid Dream
Feb 4, 2003

That boy ain't right.
https://clips.twitch.tv/FlaccidUninterestedChimpanzeeStinkyCheese-_WTsJogCVQupRVUF

cinnamon rollout
Jun 12, 2001

The early bird gets the worm
Speaking of openpose, is there a good way to save poses from ControlNet to use again later? I didn't really see a way and googling hasn't been very helpful
This would be in auto1111

TIP
Mar 21, 2006

Your move, creep.



cinnamon rollout posted:

Speaking of openpose, is there a good way to save poses from ControlNet to use again later? I didn't really see a way and googling hasn't been very helpful
This would be in auto1111

you can just right-click on the generated images and Save As, then put them in the ControlNet canvas and select no preprocessor to use them again
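If you're driving auto1111 through its API instead of the UI, the same idea applies: send the saved pose image back with the module set to none. Rough sketch only; the alwayson_scripts field names follow the sd-webui-controlnet extension as of early 2023 and may differ in your version, and the model name is a placeholder:

```python
import base64
from pathlib import Path

def pose_payload(prompt, pose_png):
    """Build a txt2img payload that reuses a saved pose image.

    Field names are assumptions based on the sd-webui-controlnet
    extension's API; check your webui's /docs endpoint.
    """
    pose_b64 = base64.b64encode(Path(pose_png).read_bytes()).decode()
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": pose_b64,
                    "module": "none",  # pose image is already preprocessed
                    "model": "control_sd15_openpose",  # placeholder name
                }]
            }
        },
    }

# POST the result to http://127.0.0.1:7860/sdapi/v1/txt2img as JSON.
```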


cinnamon rollout
Jun 12, 2001

The early bird gets the worm

TIP posted:

you can just right click on the generated images and save as, then put them in the controlnet canvas and select no pre processor to use them again

Well that seems easy enough, thank you
