Great post! In the case you end up using AI-generated art (or even AI help in producing the final assets), how would you handle it? I feel like outright saying "I used AI art in my game" is marketing suicide these days.
|
|
# ? Feb 27, 2023 01:42 |
|
That's something I've been thinking about a lot and I'm not entirely sure of the answer, but I think the key is going to lie in being upfront about the processes used to convert the AI output into a final image (and the processes used to come up with a distinct non-derivative artstyle in MJ prompts). The goal would be to show that yes, AI was used, but there was still a distance traveled between the AI output and the final assets, and there was still a driving Human creative force behind the entire vision.

It's not a difficult process for anyone familiar with digital art to go from this: to this: (which is animated but not very visibly/obviously, whoops, it needs more) But it's not exactly the kind of thing that your middle manager Doug could do after firing his entire art department. And it was a very lengthy and complex process to go from "I want to make stylistically-unique pixel art assets using AI tools" to generating the AI output in the first place, much less to the final product.

e: I guess another way to look at it is that it would be important to showcase that the art in the game is not raw AI output. The reason people are scared of AI Art is because they think it's going to remove the Human element, but the AI-based art assets I plan to use wouldn't be possible without a Human element, they just use AI-generated images as one of the raw materials in the creation process.

deep dish peat moss fucked around with this message at 02:05 on Feb 27, 2023 |
# ? Feb 27, 2023 01:56 |
|
Mozi fucked around with this message at 02:06 on Feb 27, 2023 |
# ? Feb 27, 2023 01:59 |
|
lunar detritus posted:Great post! In the case you end up using AI-generated art (or even AI help in producing the final assets), how would you handle it? I feel like outright saying "I used AI art in my game" is marketing suicide these days.

Be bold and yell that you made stuff with AI. I grew up in a tattoo parlor in the '90s, and I can say this about contentious artforms: being up front and clear about everything lets you set things on your terms. It's a powerful thing to make a clear statement and then let people make up their own minds. The number of people interested in what this can do is only growing by leaps and bounds. If you have an idea, please don't talk yourself out of it preemptively over what could go wrong; there's greater than even odds it all goes horribly right.
|
# ? Feb 27, 2023 02:09 |
|
as a total outsider I think I'd suggest one or both of the following: mention it on game boot, perhaps with a demonstration of an AI version versus the final. have a toggle that you can use at any time to switch between the raw AI output and the final art. that would be pretty neat, I think. either way I agree that you should be straightforward about it; if it's something that comes out later it'll be a big poo poo show probably.
|
# ? Feb 27, 2023 02:22 |
|
LASER BEAM DREAM posted:This probably applies to few people, but heads up for those with a 4090. You need to download updated CUDA drivers from nvidia and update your SD install to use it effectively. My 4090 was performing worse than my 3080 until I did this. Guide and link here:

Hey, thanks for this. I upgraded to a 4090 from a 3090 and lmao, the performance difference before and after this change is dramatic, to say the least.
|
# ? Feb 27, 2023 02:24 |
|
Gorgeous, but do you think your viewers are not goatse connoisseurs? Do you think they must be beaten over the head to get the reference?
|
# ? Feb 27, 2023 02:27 |
|
Light Gun Man posted:as a total outsider I think I'd suggest one or both of the following:

Unfortunately I don't think either one of those solutions is particularly workable but this has been interesting to think about - a lot of what makes what I'm doing "impressive" would be hard to see without thorough experience with AI image tools to understand exactly why it's not easy. This might put it in better perspective as an outsider - try and imagine what you would even type into a prompt to get an image like the one below (if your answer includes the words 'glitch' or 'pixel' you're headed in the wrong direction, those both heavily skew the aesthetic and subject matter of images toward something unlike this).

A toggle to switch between AI-generated and edited assets would double the amount of assets required (it would be a lot more than just pasting AI-generated image files into the game; each one would need to be cropped, prepped, etc. because there's a lot more than one image element on the screen at any given time, which is something I currently do after the conversion to pixel art because it's far easier/faster that way).

There's also not a whole lot of "artistry" in the conversion of this: to this: It could literally be done by a GIMP macro, so simply visually showcasing the difference would disappoint people and undermine the point: that the "artistry" is the human creative vision that brought it all together. Because while there's not much "artistry" in traveling between Image 1 and Image 2 above, there's a whole lot of "artistry" in traveling between "I have an idea for a game/world and want a unique aesthetic that doesn't look like anything that other artists are doing" and generating the first image above, and there's a little bit more artistry in then developing the process to convert that image into the pixelart-friendly game-ready form of the second image, despite there not being a stunning visual difference between the two.

I don't think it would be easy for anyone to replicate the style of the AI-generated image above, even if they've been working with MJ for a long time and understand the nuances of AI image generation, unless they've read my prompts/posts about them pretty closely, and even then I think I've left enough ambiguity in them to make sure this style is "mine". (Possibly with img2img, but even that degrades with each successive generation and limits what you can do to 'things similar to what I've posted'.) I've had to develop a lot of weird textual acrobatics, focus heavily on the clarity of my writing, remove ambiguity from my vocabulary, and get a thorough understanding of how MJ interprets various emotive words visually to prevent cross-contamination (where e.g. changing one word about the subject of an image vastly changes the entire aesthetic style of the image, because that subject is strongly linked with a different aesthetic in the training data).

But this definitely highlights the challenge of using AI-generated stuff in a project and attempting to appeal to the general public - "using AI image tools" is easy, but using them to do something distinct and to express a unique creative vision is an entirely different ballpark, and that's not intuitively obvious unless you've tried to use them for that purpose. I guess what will set it apart in that sphere is that it just plain will not look like other AI-generated projects.

deep dish peat moss fucked around with this message at 03:44 on Feb 27, 2023 |
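The "GIMP macro" step described above - mechanically converting a generated image into a pixel-art-friendly form - can be sketched in a few lines. This is not the poster's actual pipeline; it is a minimal illustration of the general idea (nearest-neighbor downscaling plus snapping to a fixed palette), with made-up scale factor and palette values.

```python
# Minimal sketch of a pixel-art conversion pass: nearest-neighbor downscale,
# then snap every pixel to the closest color in a small fixed palette.
# The factor and palette here are placeholder assumptions, not real settings.

def downscale_nearest(pixels, factor):
    """Downscale a 2D grid of (r, g, b) tuples by sampling every Nth pixel."""
    return [row[::factor] for row in pixels[::factor]]

def snap_to_palette(pixels, palette):
    """Replace each pixel with the nearest palette color (squared RGB distance)."""
    def nearest(px):
        return min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(px, c)))
    return [[nearest(px) for px in row] for row in pixels]

def to_pixel_art(pixels, factor, palette):
    return snap_to_palette(downscale_nearest(pixels, factor), palette)

if __name__ == "__main__":
    # Tiny 4x4 "image", scaled down 2x onto a two-color palette
    src = [[(10, 10, 10), (240, 240, 240)] * 2 for _ in range(4)]
    art = to_pixel_art(src, 2, palette=[(0, 0, 0), (255, 255, 255)])
    print(art)  # 2x2 grid of palette colors
```

The point of the sketch is how little "artistry" lives in this step: it is a deterministic transform, which is exactly why showcasing only the before/after of this stage would undersell the creative work that happened earlier.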
# ? Feb 27, 2023 02:52 |
|
I don't know if I'll go for another month of Midjourney, but I am getting some good stuff for sure. "terrifying motivational posters" "gateway between life and death, zdzisław beksiński" "Using Midjourney" Vlaphor fucked around with this message at 03:09 on Feb 27, 2023 |
# ? Feb 27, 2023 02:59 |
|
Been playing around with writing camera settings into midjourney prompts after I saw them being used in some really nice images. Been having pretty good results.
|
# ? Feb 27, 2023 06:35 |
|
deep dish peat moss posted:Unfortunately I don't think either one of those solutions is particularly workable but this has been interesting to think about - a lot of what makes what I'm doing "impressive" would be hard to see without thorough experience with AI image tools to understand exactly why it's not easy. This might put it in better perspective as an outsider - try and imagine what you would even type into a prompt to get an image like the one below (if your answer includes the words 'glitch' or 'pixel' you're headed in the wrong direction, those both heavily skew the aesthetic and subject matter of images toward something unlike this).

yeah, I figured I wasn't being terribly realistic, but it is interesting to think about. seems like you have thought about it a lot, which seems wise.
|
# ? Feb 27, 2023 10:56 |
|
pixaal posted:I have a steamed hams inspired prompts for those who were wondering https://twitch.tv/unlimitedsteam
|
# ? Feb 27, 2023 11:41 |
|
lunar detritus posted:Great post! In the case you end up using AI-generated art (or even AI help in producing the final assets), how would you handle it? I feel like outright saying "I used AI art in my game" is marketing suicide these days. I think the only way at this point would be to use an ethically sourced model that was trained only on free use/public domain/copyright free images, so basically scrape all of Pexels, Adobe Stock, Unsplash, etc, train on every bit of fine art that's well scanned digitally, and then you'd still need a metric fuckton of data to make a model that's actually decent. There's nothing like that available right now, from what I've seen, but someone is probably working on it. Even then, a huge subset of people would lose their loving minds about it because The Internet, but yeah, that's kinda your only recourse.
|
# ? Feb 27, 2023 12:58 |
|
On the other hand maybe all the people freaking out would be a ton of free publicity and a bunch of people buy your game who otherwise would have never heard of it and don't give a gently caress about dudes who do DnD characters on commission having a tantrum.
|
# ? Feb 27, 2023 14:05 |
|
The Sausages posted:lol I prefer the Manifest stream especially when it's live it feels like using an AI that makes short 1 minute meme videos I think it actually is that! It feels very much like Stable Diffusion Discord did when it was Discord only with checking out what other people are doing and a bit of waiting in queue. I do love a good new steamed hams though! I did make AI fan art for manifest too: (Muppet Borkus Demon Clown which refuses to pass content filters with whatever the bot is sending to their stable diffusion) pixaal fucked around with this message at 14:11 on Feb 27, 2023 |
# ? Feb 27, 2023 14:07 |
|
Bunch of my poo poo over the last couple weeks, mostly related to a Vampire RPG book I'm working on: And then I was trying to make some moody scene type stuff... Can post more stuff later, that's probably enough spam. Those are the direct images so you can see the prompts baked in.
|
# ? Feb 27, 2023 14:41 |
|
Corridor Digital is experimenting with SD-enabled video-to-animation: https://www.youtube.com/watch?v=_9LX9HSQkWo
|
# ? Feb 27, 2023 14:45 |
|
Megazver posted:Corridor Digital is experimenting with SD-enabled video-to-animation: Already posted, and Nico has an hour-long tutorial on exactly the setup steps, including model training and how to use Resolve to fix errors and also keying out greenscreen. It's paywalled on https://www.corridordigital.com/, but they have a fuckton of content there and are a genial bunch of VFX weirdos.
|
# ? Feb 27, 2023 16:12 |
|
BrainDance posted:On the other hand maybe all the people freaking out would be a ton of free publicity and a bunch of people buy your game who otherwise would have never heard of it and don't give a gently caress about dudes who do DnD characters on commission having a tantrum. It's this. The end user does not give a gently caress about anything at all otherwise that Hogwarts game wouldn't be as popular or people wouldn't eat meat or buy Nestle water or whatever running down the list of things we should all feel bad about. You are well and truly out of your mind if you think average customer cares about where pretty picture comes from. They're still going to watch the latest Disney movie no matter how many VFX people were burned to get there.
|
# ? Feb 27, 2023 16:36 |
|
pixaal posted:I have a steamed hams inspired prompts So I'm messing around with generating images from stories using this python script code:
Here is the Monkey's Paw output https://imgur.com/a/CUxNODh (had to remove some NSFW outputs). and here is a bonus thing I made using my cenobite model
|
# ? Feb 27, 2023 16:49 |
|
I've been playing around in Midjourney with 'muppets' as a keyword. The muppets can make anything kid-friendly, right? Right? Muppets at war: Muppet civilians through the war: Muppet attacked for protesting the war: Muppets suffering with the trauma of war: Schindler's List Elmo:
|
# ? Feb 27, 2023 16:59 |
|
Soulhunter posted:I've been playing around in Midjourney with 'muppets' as a keyword. The muppets can make anything kid-friendly, right? Right? Oh my goood. AI makes such good muppets. I want to watch every one of those movies. The Schindler's List one in particular is absolutely inspired Lucid Dream fucked around with this message at 19:52 on Feb 27, 2023 |
# ? Feb 27, 2023 19:50 |
|
Lucid Dream posted:Oh my goood. AI makes such good muppets. I want to watch every one of those movies. The Schindler's List one in particular is absolutely inspired !chron first person telling of Schindler's list but the red coat is a red elmo (the narrator is the girl) Visual descriptions should describe the Elmo doll as being very bright red and sucking all other color out of the room pixaal fucked around with this message at 20:00 on Feb 27, 2023 |
# ? Feb 27, 2023 19:55 |
|
With the explosion of all this AI technology, it’s gotta be pretty soon before someone just ropes together the Eleven Labs voices, ChatGPT text, and AI art/video to make a single integrated prompt, right? Using ChatGPT to generate silly little stories and scripts has me wishing I could hear them narrated by realistic voices automatically, with images to accompany them.
|
# ? Feb 27, 2023 20:01 |
|
JazzFlight posted:With the explosion of all this AI technology, it’s gotta be pretty soon before someone just ropes together the Eleven Labs voices, ChatGPT text, and AI art/video to make a single integrated prompt, right?

This exists; that's what Manifest is, only using much cheaper voices (Eleven is too expensive): https://twitch.tv/howisitmanifested It's goon-made (you can only prompt during a live show, otherwise it's reruns).
|
# ? Feb 27, 2023 20:02 |
|
JazzFlight posted:With the explosion of all this AI technology, it’s gotta be pretty soon before someone just ropes together the Eleven Labs voices, ChatGPT text, and AI art/video to make a single integrated prompt, right? Artbreeder is actually working on that right now.
|
# ? Feb 28, 2023 00:00 |
|
JazzFlight posted:With the explosion of all this AI technology, it’s gotta be pretty soon before someone just ropes together the Eleven Labs voices, ChatGPT text, and AI art/video to make a single integrated prompt, right?

I actually was already working on that https://github.com/IShallRiseAgain/AIStoryGen/blob/main/StoryPromptGenerator.py There is still a lot of stuff I want to do, including handling multiple voices, title screens, the ElevenLabs API, hooking it up to the upscaler, ignoring non-dialogue, and having music. By the way, you need to have a GPT API key and put it in "openaiapikey.txt". Here are some videos I made with it: https://www.youtube.com/watch?v=x3Pb4RsJ1c0 (very slightly NSFW) https://www.youtube.com/watch?v=9cMwpN-_vWs https://www.youtube.com/watch?v=qLH_VYsG8lk https://www.youtube.com/watch?v=lf-n19ptkaM

IShallRiseAgain fucked around with this message at 03:34 on Feb 28, 2023 |
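The actual script lives at the GitHub link above. As a rough illustration of the general shape such a story-to-video pipeline takes (this is not that code), the three generation steps can be modeled as injected callables so the orchestration logic stands on its own:

```python
# Sketch of a "story -> images + narration" pipeline skeleton. The three
# callables are stand-ins you would replace with real API calls (an LLM for
# visual prompts, an image model, a TTS service); names here are hypothetical.

def split_into_scenes(story):
    """Naive scene splitter: one scene per sentence."""
    return [s.strip() for s in story.split(".") if s.strip()]

def build_storyboard(story, make_prompt, make_image, make_audio):
    """Turn a story into an ordered list of scene records."""
    board = []
    for i, scene in enumerate(split_into_scenes(story)):
        prompt = make_prompt(scene)       # e.g. ask an LLM for a visual description
        board.append({
            "index": i,
            "text": scene,
            "image": make_image(prompt),  # e.g. a Stable Diffusion call
            "audio": make_audio(scene),   # e.g. a TTS call
        })
    return board

if __name__ == "__main__":
    # Stub generators so the sketch runs without any API keys
    board = build_storyboard(
        "The paw twitched. The wish was granted.",
        make_prompt=lambda s: f"illustration of: {s}",
        make_image=lambda p: f"<image for '{p}'>",
        make_audio=lambda s: f"<narration of '{s}'>",
    )
    for scene in board:
        print(scene["index"], scene["text"])
```

Keeping the generators pluggable is also how you'd swap in a cheaper TTS voice for rough cuts and only pay for an expensive one on the final pass, as suggested downthread.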
# ? Feb 28, 2023 02:48 |
|
dwayne the rock johnson muppet e: someone asked me to add Elmo on fire, I fixed the muppetness as well pixaal fucked around with this message at 15:11 on Feb 28, 2023 |
# ? Feb 28, 2023 14:22 |
|
IShallRiseAgain posted:I actually was already working on that These look great. Definitely check out the elevenlabs pricing, because I was surprised at how quickly it becomes prohibitively expensive. Definitely too expensive for my stream, at the rate I send TTS through it. It wouldn’t be too bad as a last pass on an otherwise good video though, so if you do rough cuts with something else and then upgrade the audio on good ones then it might not be super expensive. pixaal posted:dwayne the rock johnson muppet AI muppets are the best.
|
# ? Feb 28, 2023 14:37 |
|
I was in here complaining about not being able to get specific colors a little while ago, for example on a shirt, but wow ControlNet completely changed that. Seriously amazing what stable diffusion with ControlNet can do.
|
# ? Feb 28, 2023 16:23 |
|
No control net here, this is what I ended up doing with the rock muppet - Regular SD is very powerful alone and you can leverage that further with control net. I rarely want to be that specific; I like seeing what comes out.
|
# ? Feb 28, 2023 16:53 |
|
Still blows my mind that the $22 ElevenLabs plan gives you fewer characters per dollar than the $5 plan, with all the other terms being the same. Technically you also get more voices at a time, but you can remove and retrain those with no limits anyway
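The per-dollar claim is easy to sanity-check. The character quotas below are assumptions (roughly what ElevenLabs advertised around early 2023), not authoritative figures:

```python
# Back-of-envelope check of the characters-per-dollar math. The quotas are
# assumed placeholder values, not official ElevenLabs numbers.
plans = {
    "starter_$5": (5, 30_000),    # (price in dollars, assumed character quota)
    "creator_$22": (22, 100_000),
}

for name, (price, chars) in plans.items():
    print(f"{name}: {chars / price:.0f} characters per dollar")
```

With these assumed quotas the $5 plan works out to 6000 characters per dollar versus roughly 4545 on the $22 plan, so the bigger plan really can buy fewer characters per dollar even though everything else about it is the same or better.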
|
# ? Feb 28, 2023 18:02 |
I love how people are just bruteforcing SD into giving them what they want. https://twitter.com/toyxyz3/status/1630256227002515456 Blender + https://toyxyz.gumroad.com/l/ciojz seems to solve basically most issues with hands in exchange for having to pose the character instead of getting random results. lunar detritus fucked around with this message at 18:16 on Feb 28, 2023 |
|
# ? Feb 28, 2023 18:11 |
|
lunar detritus posted:I love how people are just bruteforcing SD into giving them what they want. finally, the feet people will stop running out of feet pics i am so happy for them
|
# ? Feb 28, 2023 18:17 |
|
I for sure thought that model would've been named Tarantino
|
# ? Feb 28, 2023 18:18 |
Never mind, it's not as magical as I thought.
|
|
# ? Feb 28, 2023 19:04 |
|
https://clips.twitch.tv/FlaccidUninterestedChimpanzeeStinkyCheese-_WTsJogCVQupRVUF
|
# ? Feb 28, 2023 20:06 |
|
Speaking of openpose, is there a good way to save poses from ControlNet to use again later? I didn't really see a way and googling hasn't been very helpful. This would be in auto1111.
|
# ? Feb 28, 2023 21:21 |
|
cinnamon rollout posted:Speaking of openpose, is there a good way to save poses from ControlNet to use again later? I didn't really see a way and googling hasn't been very helpful

you can just right click on the generated images and save as, then put them in the ControlNet canvas and select no preprocessor to use them again
|
# ? Feb 28, 2023 21:27 |
|
TIP posted:you can just right click on the generated images and save as, then put them in the controlnet canvas and select no pre processor to use them again Well that seems easy enough, thank you
|
# ? Feb 28, 2023 21:33 |