|
Leperflesh posted:I tried using ChatGPT a week or so ago to do some improvisational roleplaying and while it insisted on reminding me constantly that we were just doing a hypothetical, it definitely remembered what we were talking about throughout the conversation. And the conversation is still available to continue, if I want. To me, ChatGPT seems so eager to try to give the human what they want that it ends up contradicting itself or changing its assumptions on the fly. It’s actually really annoying in role-playing terms.
|
# ? Jan 30, 2023 19:19 |
|
|
Doctor Zero posted:To me, ChatGPT seems so eager to try to give the human what they want that it ends up contradicting itself or changing its assumptions on the fly. It’s actually really annoying in role-playing terms. Yeah, it's obviously really primitive in what it's doing with this format right now, and disagreeing with the user shouldn't always be seen as a failure mode. But it is "remembering" at least in some respect, I think by re-crawling/indexing what's already in that chat when it generates new output.
|
# ? Jan 30, 2023 21:04 |
|
Leperflesh posted:Yeah, it's obviously really primitive in what it's doing with this format right now, and disagreeing with the user shouldn't always be seen as a failure mode. But it is "remembering" at least in some respect, I think by re-crawling/indexing what's already in that chat when it generates new output. OpenAI wants a corporate-friendly and completely toothless AI. They don't want an AI bot to generate anything that can be vaguely considered controversial. IShallRiseAgain fucked around with this message at 21:20 on Jan 30, 2023 |
# ? Jan 30, 2023 21:16 |
|
I didn’t read the whole thing, but I found it interesting that it was concerned about the energy requirements of faster-than-light travel even when just pretending.
|
# ? Jan 30, 2023 22:03 |
|
I have found that ChatGPT is terrible at generating specific original ideas. It can be modestly useful for high-level brainstorming, but that's about it. What it does do well is banging out a first draft when you give it specific, narrow parameters. You can't ask it to make a coherent lore book on its own, but you can ask it to, say, write a description of a fictional city with properties X, Y, and Z.
|
# ? Jan 30, 2023 22:45 |
|
Leperflesh posted:But it is "remembering" at least in some respect, I think by re-crawling/indexing what's already in that chat when it generates new output.
Yeah, it's at least somewhat aware, compared to gpt-j or whatever, which is just all new every prompt. But I think (can't know, because proprietary) its "memory" probably works in a hacky way, and I wish we could see exactly how. What I mean is, take SD: making videos from SD and keeping the frames consistent is hard because it doesn't remember what it generated a frame before the next frame. But Deforum manages to keep some kind of consistency with it, not from the model really remembering what it did, but by feeding that last frame into img2img and basing the next frame off that, over and over. It's memory of what it did but it's also not, if that makes sense, and I suspect ChatGPT has some similar pseudo-memory too. Ideally what we'd have, I think, is something like an embedding. An extra addendum file used by the model to remember what it's said and what you've told it, so it can reference it whenever it's needed, or be swapped out when you need it to be aware of different things. At least that's just what I hope for, but what absolutely has to happen first is someone needs to figure out a way to get a large language model to run without insane amounts of VRAM. It happened with image gen and SD, so maybe it will happen with language models too if we get lucky.
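The Deforum-style pseudo-memory described here can be sketched in a few lines. `img2img` below is a hypothetical stand-in for a real Stable Diffusion call, with a "frame" reduced to a list of numbers, so this only illustrates the feedback loop, not actual generation:

```python
# Sketch of Deforum-style pseudo-memory: each frame is seeded with the last
# one, so consistency comes from the input image, not from the model actually
# remembering anything.

def img2img(prev_frame, prompt, denoise):
    # Placeholder: nudge each value toward a prompt-derived target by `denoise`.
    target = (len(prompt) % 255) / 255.0
    return [p + (target - p) * denoise for p in prev_frame]

def render_animation(first_frame, prompt, n_frames, denoise=0.3):
    frames = [first_frame]
    for _ in range(n_frames - 1):
        # Feed the last output back in; this IS the "memory".
        frames.append(img2img(frames[-1], prompt, denoise))
    return frames

frames = render_animation([0.0, 0.5, 1.0], "neon city at night", n_frames=4)
```

With a low denoise each frame only drifts slightly from the one before it, which is the whole trick: the chain looks coherent even though every call starts from scratch.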
|
# ? Jan 31, 2023 02:52 |
BrainDance posted:Yeah it's at least somewhat aware, compared to gpt-j or whatever which is just all new every prompt. But I think (can't know because, proprietary) it's probably "memory" in a hacky way though, but I wish we could see exactly how.
|
|
# ? Jan 31, 2023 03:01 |
|
BrainDance posted:Ideally what we'd have, I think, is something like an embedding. An extra addendum file used by the model to remember what it's said and what you've told it so it can reference it whenever it's needed, or swapped out when you need it to be aware of different things. This was exactly what AI Dungeon added in one of their updates over two years ago. The user has to manually update the memory field, though.
|
# ? Jan 31, 2023 06:13 |
|
Fuschia tude posted:This was exactly what AI Dungeon added in one of their updates over two years ago. The user has to manually update the memory field, though. How do they manage it? Same deal as ChatGPT, just feeding it all back into the AI with the new prompt?
|
# ? Jan 31, 2023 13:35 |
|
BrainDance posted:How do they manage it, same deal as chatGPT and just feeding it all back into the AI with the new prompt? Yup. I think every prompt is basically starting the AI from scratch, feeding it the contents of the "history" box along with the previous X words of back-and-forth exchange for context.
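That replay scheme is easy to sketch: every turn, re-send the memory field plus as much recent history as fits. The word budget and trimming rule below are illustrative assumptions, not AI Dungeon's actual implementation:

```python
# Minimal sketch of stateless "memory": the model sees the memory field plus
# as many of the most recent exchanges as fit the budget, every single turn.

def build_prompt(memory, history, new_input, max_words=300):
    kept = []
    budget = max_words - len(memory.split()) - len(new_input.split())
    # Walk backwards so the most recent exchanges win the word budget.
    for line in reversed(history):
        words = len(line.split())
        if words > budget:
            break
        kept.append(line)
        budget -= words
    return "\n".join([memory] + list(reversed(kept)) + [new_input])
```

Nothing persists between calls; the illusion of memory is entirely in what gets stuffed back into the prompt.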
|
# ? Feb 2, 2023 06:11 |
|
Shadowrun is my most favorite of all PnP role-playing universes and this whole AI art deal has finally allowed me to make it real. A combination of MidJourney to 'block' out everything, then I bring it into Stable Diffusion and Krita and with my drawing tablet draw all over it, using Stable Diffusion as some uber content-aware brush. I've spent the last two weeks working out a workflow using it all, getting really precise with inpainting and trying my best to eliminate the AI tells as best I can, the biggest one being the hands. Not even close to where I want to be, but I think I'm making progress to, uh, something. I adore these tools for letting my imagination bring things into reality quicker than ever before. I'm never satisfied with one-shot generations as there is always something I notice that I want to reach in and fix. Being able to generate within Krita is so, so powerful and compelling to me.
|
# ? Feb 6, 2023 06:17 |
|
KakerMix posted:Shadowrun is my most favorite of all PnP role-playing universes and this whole AI art deal has finally allowed me to make it real How dare you put artists out of work. Really though, I'd love to see more detail on your workflow. Those are amazing!
|
# ? Feb 8, 2023 17:12 |
|
Doctor Zero posted:How dare you put artists out of work. I don't have a like, teachable good flow yet, all this stuff is bleeding edge and nobody knows what they are doing, especially me. I do have a background in art and especially photo editing, which I'd say is the nearest analog. The analogy of the content-aware tool works really well for this.

I should mention that the Krita plugin presents as new layers every time you ask it for a new image. If you ask for a new txt2img, you get that over whatever image is underneath, ignoring the picture below. If you ask it to img2img, it will take whatever is visible in your image and render a new one based off that with whatever settings you have set. Inpainting works the same way as img2img, but will only render the masked area. This allows you to paint directly on the image (or an entirely new layer) then render a new image on top of those, informed by whatever is underneath.

Here is what I've had good luck with on these Shadowrun images. Tools: MidJourney account, though I don't always use it; each of these Shadowrun ones did start out in MidJourney. Automatic1111 with a big pile of models, for these a combination of Protogen Infinity, Analog Diffusion, and various others. Krita Auto SD Paint Extension, which allows Stable Diffusion to be run inside of Krita itself.

I've got a pretty powerful computer with a 3090 and a whole heap of RAM as well. I'd like to upgrade to a 4090 or, depending on how far I go into this, maybe a non-videogame render card with even more VRAM. I've also got an XP-Pen Artist 24 Pro along with two other monitors. I'll have Krita with the plugin on the drawing tablet, with Auto1111's interface in a different window on another monitor.
You can use both at different times to generate using Stable Diffusion, you just have to take turns and also share the model; if you switch the model back and forth within Krita or the Auto1111 interface, it switches it for the other as well. However, Auto1111 recently added an option to keep models in memory, and I'll do 4 at once. This means that switching between models is very quick. It ALSO means that I never use just one model, no reason to. Different models are better at different things, switching as needed. With how precise inpainting can get I don't find an absolute need for inpainting-specific models, however I do want to explore that and see if they help avoid seams when I have to cut across a seamless expanse. They might be a lot better at that aspect, I'll have to see.

I'll have Auto1111 on the one monitor because sometimes I want to generate something outside of Krita quickly, then just pull the generation into Krita by dragging and dropping directly in. Having two work surfaces like this helps me work. Likewise the other monitor can be used for other stuff. More monitors is always better, pretty much.

I'll go into MidJourney and just kinda generate some images and see if a vague idea comes about. I did the more somber ones first, which is cool, but I wanted more emotion. I thought a more relaxed setting with a runner team during downtime laughing with each other would be cool, so I tried 'telling jokes' and built a prompt around that. Then I thought about a troll telling jokes and altered the prompt toward that. Lucky for me I checked to see if MidJourney knew what a 'Shadowrun troll' is, since for Shadowrun a troll is just a big human with maybe tusks and maybe horns, but is normal human skin colors. MidJourney for some reason has the trolls as more goblin-like and blue, probably because trolls are classically more like that in other universes. Still, I think I can work with this.
Prompt: 'shadowrun troll' Right, let's try this prompt then: Year 1985, Shadowrun, cyberpunk 2077, intimate conversation, troll metahuman telling jokes, laughing, humor, smiling, joyous, dynamic camera angle, film grain, movie shot, ektachrome 100 photo

Lucky for me the first roll I did, after I learned it vaguely knew what I meant with 'shadowrun' and 'troll', yielded this output. The first image seemed like a good direction to go, kinda big monstrous sorta guy, blue sure, but that's easy enough to change in Krita. Click upscale and see what we get. Cool, I can see a future with this image. I'm sure most people can't easily tell, but I can spot a MidJourney image; it has a style all its own.

Into Auto1111 and make a few passes first to de-MidJourney the style. img2img, denoising at a low setting of 0.3 with Protogen, a few passes to see what comes out. The trick is to make slow changes but multiple passes so you don't radically alter the base image yet steer the results where you want to go. It's going to be a dramatic change from MidJourney to Stable Diffusion anyway. Changed the prompt to this and looped it into itself two or three times: intimate conversation, troll telling jokes, laughing, humor, smiling, joyous, cyberpunk, 1985, ektachrome 100 movie still

I work on the troll first. I want a horn so I swipe a picture of some plastic horns, trim them out, flip and warp them around and place it over the guy's head in a way that might work. I also gotta un-blue the guy so I mask the face and gently caress with the colors and get it reasonably closer to human skin tones AND draw in the pointed ear. You can see that the horn isn't perfectly trimmed and there is a bunch of outline noise, and that the top of the ear is literally me using the airbrush tool, swiping altered colors from the bottom of the ear and just vaguely suggesting a pointed ear.
I then use the airbrush tool and mask over the face and inpaint the mask (using neon green so I can see the mask easier, you can use whatever color you'd like though) with this new, altered prompt: dark skinned native american troll with horns and pointed ears, telling jokes, laughing, humor, smiling, joyous, cyberpunk, 1985, ektachrome 100 movie still

Awesome result, but what the gently caress, the ear isn't pointed at all! I redraw the ear like last time with the airbrush tool, sampling the colors as needed, and mask ONLY it, changing the prompt to: pointed ears on a dark skinned native american troll with horns and pointed ears, telling jokes, laughing, humor, smiling, joyous, cyberpunk, 1985, ektachrome 100 movie still

Good, looks great. You might notice that his mouth is hosed up with a weird bit of flesh, or the outline around where the mask is. These are things I will take care of when the rest of the image is done and I'm doing the last bit of tweaking to finalize the whole image. Learned my lesson with the ear.

I do this sort of thing to all sorts of bits of the image. The troll's hands holding the bottle, which is me painting out whatever stuff the guy is holding and mashing a bit of hand together, cloning the bottle in the center of the image and roughly putting it in his hand, masking it and changing the prompt to compensate. Here is what I did to the woman's hand holding the cup: I then started working on her overall but came back to the hand and tweaked it later. I still wasn't fully aware I needed to work in broad strokes first, but the hand was not in her main mask so it was OK.

Speaking of the lady, mask her off and change the prompt yet again. I iterate a few times till I get a result I like. New prompt: gorgeous techno disco queen, telling jokes, laughing, humor, smiling, joyous, cyberpunk, 1985, ektachrome 100 movie still

Last one is THE one. It's her smile, it feels more full, more joyous and real. She's not being polite, she can't help but crack up.
Combined with her body's stance, it's like she's heaving with a deep laugh. Again with the masking, I'll attempt to tackle that towards the end of messing with the image. The rest of the work is variations on that: masking out things, changing the prompt, refining things down. It still does have that thing where you'll be working towards something then go "NOPE!" and just back right out of the last 40 minutes of work you put in to go a completely different direction. This is how the plate of food ended up on the table; I got rid of all the extra cups and the hosed up hands. The plate of food was a stock image of wings. For getting rid of the tell-tale signs of masking I will mask like normal, but not bind it with the selection tool. Finally I'll switch to analog_diffusion, then strip the prompt down to the original prompt, set denoising to 0.1 and run it through a couple times to add in some nice noise and help even out everything so it looks more like an 80s movie.

Not entirely happy with this specific image, but it was the first one where I did a combination of a lot of things to make it. Lots of errant halos still, the hands are hosed up, if less so than normal AI stuff. I doubt I'll come back to this image, but it's all in a single Krita file with all its layers at least!

Big takeaways are:

1. Like drawing, work towards details. If I did that going in I wouldn't have wasted my time with drawing that ear twice. Work in broad strokes and 'layer' up as you finish parts.

2. Smaller the better. If you can work in smaller areas within Krita you get the whole resolution for just the area you are working on. This means that two major AI tells for me, car wheels and people's faces, are rendered 'full res' when you draw a bounding box around your mask and inpaint only within there. The overall resolution doesn't change but you render that detail into the small area, like super sampling. Maybe it IS super sampling, idk.
The exception to this is when I'm cleaning up errant seam lines from masking. Those I'll paint on the image directly to obfuscate the seam behind some brush strokes, then inpaint without the bounding box (thus giving a lower resolution result, because it's doing the whole image even though the result is just the mask) and adjust the layer opacity as needed. Easy!

3. Using actual pictures is better than drawing details yourself. This is because the AI wants noise, and that's built in to any image you take off the internet. In the Native American troll guy above the horn ended up turning into some sort of fuzzy talisman-like horn thing, but that's cool by me because it gives character that I like. It worked that I had a picture, even of cheap plastic horns, because the noise was in the image so Stable Diffusion could use it in a realistic way. If I drew a flat-shaded horn it wouldn't work as well, because it's trying to put detail where there isn't any. A worse image with more noise is better than a well-drawn, flat one when it comes to realistic photo-like images. You can almost completely ignore watermarks too, as that is noise that will certainly get lost in the first pass, or if needed you can roughly paint them out using the surrounding colors and the AI is smart enough to know what you are going for, depending on the prompt.

4. Leave the overall bit of your prompt in place and change as needed. I'd keep the prompt the same, but add whatever I was inpainting on the front for that specific inpaint. Pointed ears, holding a coffee cup, etc.

5. It is easier to have a vague idea about the emotion of whatever image you want, rather than being specific. In making these I'm not saying to myself 'Ok I am going to have a troll on the left and then a woman on the right and another person just out of frame and...' instead it's 'a troll telling a joke and everyone legit enjoying it would be cool' and see what comes up.
Working with what the system provides makes it a lot easier to build a cohesive image. The hands are hosed up, but there are fingers there for the woman, so I used those and built her hand in a way that didn't immediately make my brain go "AI HANDS" and notice it to a fault.

6. SAVE YOUR PROMPTS. You can alleviate this a bit if you keep your generations and can toss them back into the 'PNG INFO' tab in Auto1111, but the prompts are NOT saved within Krita at all, or in the file itself. I've taken to just putting the prompts I use in a .txt file where I keep the main Krita file.

KakerMix fucked around with this message at 06:26 on Feb 9, 2023 |
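The low-denoise, multiple-pass loop in this workflow can also be scripted against Automatic1111's web API (the UI launched with --api). The endpoint path and payload keys below follow A1111's API as commonly documented, but treat them as assumptions and check your local /docs page; the HTTP call itself is injected so the loop is just a sketch:

```python
# Rough sketch of a multi-pass img2img refinement loop for Automatic1111.

def make_payload(image_b64, prompt, denoise=0.3):
    return {
        "init_images": [image_b64],       # base64-encoded source image
        "prompt": prompt,
        "denoising_strength": denoise,    # low value = gentle change per pass
        "steps": 30,
    }

def refine(image_b64, prompt, passes=3, post=None):
    # `post` is whatever function performs the HTTP call; injected so the
    # looping logic works without a running server.
    for _ in range(passes):
        image_b64 = post(make_payload(image_b64, prompt))
    return image_b64

# A real `post` might look like this (hypothetical local server address):
# import requests
# def a1111_post(payload):
#     r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
#     return r.json()["images"][0]
```

Each pass feeds the previous output back in, which is the scripted version of "slow changes but multiple passes".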
# ? Feb 9, 2023 06:11 |
|
KakerMix posted:cool poo poo Those bottles aren't symmetrical though
|
# ? Feb 9, 2023 06:31 |
|
drat well I started by like, hmm I should look into this tool, and this next tool, and... but where I wound up is "this person already has a lot of skills I don't" so I'm just spectating now. But it's extremely cool to see someone engage with the technology and apply artistic choices in a conscious way to produce a result. I see this as not much different from an artist I know who works in collage - any kid can cut things out of magazines and make a collage, but raising it to an art form means using the ephemera the world gives you and then applying a combination of choices (artistic vision, whatever you want to call it) and technique (skill with the tools). Only by having both can you wind up with good art. I'm really impressed, Kaker, and I hope you keep at it.
|
# ? Feb 9, 2023 19:59 |
|
KakerMix posted:Goddamn son Getting caught up with this thread. This is really useful, thanks for posting.
|
# ? Feb 9, 2023 22:06 |
|
Thanks for that huge post. This is very interesting!
|
# ? Feb 9, 2023 23:17 |
|
Leperflesh posted:drat well I started by like, hmm I should look into this tool, and this next tool, and... Tbh I think it sounds like in a way OP's process makes things more like writing where, imo, with enough edits anyone can produce perfectly serviceable writing. If you followed the same process perhaps it would take you more iterations than OP but I doubt it's as out of reach as you might be feeling.
|
# ? Feb 10, 2023 01:28 |
|
Appreciate the kind words, thanks. I would hope that anyone who is even remotely interested in diving in goes ahead and does it. At worst you spend some time exploring the tech, at best you feel pretty good exploring your own head. reignonyourparade posted:Tbh I think it sounds like in a way OP's process makes things more like writing where, imo, with enough edits anyone can produce perfectly serviceable writing. If you followed the same process perhaps it would take you more iterations than OP but I doubt it's as out of reach as you might be feeling. It's very much exactly like this, yeah. I can't draw anything presentable at all on my own without an extreme time commitment, but I'm decently good at writing and have a grasp on visuals and things. It really seems like I've been waiting for these AI systems my whole life; if I were a kid I'd be losing my poo poo that this tech is real. I suppose I am, just now I'm approaching 40 years old rather than being in high school. It's just a combination of having an idea of what looks good enough, being able to convince the system to give you what you want through actual writing, and editing, or as I think of it, refining. Most of what people are making with MidJourney or Stable Diffusion is, in my opinion, trash garbage that needs only the slightest touch, the most remote little push, to really elevate it.
|
# ? Feb 10, 2023 04:26 |
|
KakerMix posted:Most of what people are making with MidJourney or Stable Diffusion is, in my opinion, trash garbage that needs only the slightest touch, the most remote little push to really elevate it. Now you're thinking like a true artist.
|
# ? Feb 10, 2023 04:29 |
quote:please write a sea shanty about a rabbit person that was tied to a ballista bolt and launched at a blue whale, two verses per chorus quote:(Chorus) stringless fucked around with this message at 07:53 on Feb 12, 2023 |
|
# ? Feb 12, 2023 07:50 |
|
Gonna run some Mausritter in a month or so and I'm trying to figure out how to generate art that's similar to the art in the book. MJ can do generic cute Redwall-ish mice just fine, so I'll have something to use, but it'd be nice to figure out how to do something more similar. Anyone have any ideas how to a) describe that particular cartoony style and b) that type of, uh, coarse pencil shading?
|
# ? Feb 24, 2023 16:32 |
|
You could try just using the artist name or the work. I wanted to generate a Wall Street Journal style dotted portrait style picture so that’s exactly what I typed. Took a couple of tries, but it got close enough.
|
# ? Feb 24, 2023 20:59 |
|
Anyone got any ideas as to the best way to go about generating battlemaps/building maps? I mainly want to generate the general idea to work from as reference for making the map myself in CSP, but making the general basic layout of a map, whether it's a building, town, or some random outdoor terrain, is a little harder to conceptualize beyond vague ideas.
|
# ? Feb 24, 2023 21:29 |
|
Does anybody know a hoard of DnD character backstories hosted anywhere that is also archived by archive.org? I don't really wanna rip massive amounts of text from anybody, because that's a dick move, but archive.org is explicitly OK with you scraping stuff from them. I want to use it to finetune GPT-Neo to make a model specifically for creating backstories. Or if you can think of anything else DnD related that I could possibly get a lot of text of that might work well with a GPT-Neo model, I'm looking for ideas too. Kinda in a "train everything" mood now. Raenir Salazar posted:Anyone got any ideas as to how's the best way to go about generating battlemaps/building maps? I mainly want to generate the general idea to work off of as reference to make myself in CSP but making the general basic layout of a map, whether it's a building, town, or some random outdoor terrain is a little harder to conceptualize beyond vague ideas. There was a push for this a little while ago and some experimental dreambooth models for it in SD, but it never really worked out well. I'll ask around, but from what I remember it all kind of stalled after that. Here were some of the experiments https://huggingface.co/VTTRPGResources/BattlemapExperiments/tree/main but you likely won't get much from them. The prompt triggers were AfternoonMapAi and DnDAiBattlemap I think. Or try in just normal SD 1.5 something like: bird's-eye straight-down shot from a drone, battlemap floorplan photo of *** But I think one of the reasons it stalled was because Midjourney just kind of does it well already; you'll probably get the best results that way. I'm not sure what the Midjourney prompts to get a good battlemap are, though.
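If a backstory trove does turn up, the data prep for a GPT-Neo fine-tune is mostly concatenation: one text file with examples joined by the GPT-2/Neo end-of-text token, the shape Hugging Face's causal-LM training scripts expect. A minimal sketch (the sample strings are made up):

```python
# Join scraped backstories into a single training corpus, separated by the
# GPT-2/GPT-Neo end-of-text token so the model learns where one story ends.
EOT = "<|endoftext|>"

def to_training_text(backstories):
    cleaned = [s.strip() for s in backstories if s.strip()]
    return EOT.join(cleaned) + EOT

corpus = to_training_text([
    "Born in the slums of Waterdeep...",
    "  A disgraced paladin seeking redemption.  ",
    "",   # empty entries from scraping get dropped
])
```

The resulting file can then be pointed at a standard causal language modeling fine-tuning script; the separator is what lets generation stop cleanly at the end of a backstory.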
|
# ? Feb 25, 2023 02:17 |
|
I've been playing with ChatGPT to make Delta Green adventures. I tried five prompts. I am impressed with how it could add twists or NPC characters in a certain writer's style. I think ChatGPT sees 'Delta Green' as 'generic supernatural horror plots', though, and just chooses one type of plot from that genre and runs with it. However, if you add more to the prompt, the output gets more interesting. The Delta Green adventures and their prompts are here: https://pastebin.com/mLeCjWqz. I'll post the two most interesting ones. quote:Make me a short adventure in the style of the Delta Green roleplaying game shotgun scenarios. Add a twist to the plot in the style of O. Henry the American writer. Include an NPC with a personality in the style of Cormac McCarthy's stories. quote:Make me a short adventure in the style of the Delta Green roleplaying game as if it was written by the creators of the tv show The Office. Limit humor in the story. Add an antagonist from Ian Fleming's stories who hates the supernatural. Add a twist to the plot. So apparently ChatGPT does fairly well when it comes to style mashups. Out of curiosity I asked it to make a wargame scenario for Necromunda. It did pretty well. quote:write for me a Necromunda wargame scenario I did not expect a scenario twist or special rules. This scenario actually seems very playable. To do some comparison I asked ChatGPT to generate some Dark Sun adventures in a new session. I was surprised by the fact that the adventures were much more detailed than the Delta Green ones. Perhaps ChatGPT doesn't really know what Delta Green and its themes are, at least at the time of this writing. quote:write me an adventure in the world of Dark Sun featuring a Thri-kreen character. Add a psionic battle Also I found out the following: quote:Create for me a halfling Dark Sun character who is a cannibal and worships the element of Fire I also wanted to know if it understood what psionics were in Dark Sun. It did pretty well.
quote:describe for me a psionic battle between a Telepath and a Pyrokinetic in Dark Sun Then I tried literature mashups with Dark Sun. quote:Write for me a short story set in the Dark Sun world as if Joseph Heller was writing it. Include two protagonists at odds with each other.
|
# ? Feb 26, 2023 17:57 |
|
After some research into ChatGPT prompts that would give me more specific results, I asked it some more questions about making Dark Sun adventures. The following is in one thread. quote:prompt: I want you to act as a storyteller. You will come up with entertaining stories that are engaging, imaginative and captivating for the audience. I want you to write a story in the world of Dark Sun with multiple details from the Dark Sun world book which has the potential to capture people’s attention and imagination. My first request is “I need an interesting adventure story for a Game Master in Dungeons and Dragons.” Doesn't seem bad, but didn't knock my socks off. I tried another. quote:prompt: My second request is "I need an interesting adventure for a group of players who are low level in Dark Sun". Include details for a psionics user and a mul gladiator. Make the adventure have a challenging combat. So ChatGPT added an unexpected Templar attack. Better. It created a plot but not the specific details I was looking for. quote:prompt: What type of treasure would a gladiator find in the above adventure? Alright, now we are getting somewhere. The gladiator's net and suit of armor show that ChatGPT understands the world of Dark Sun and what items would be appropriate to it. Then I tried to get more specific. quote:prompt: What type of treasure would a psionics user find in the adventure? It suggests exotic materials the treasure is made out of for the world of Dark Sun. The Chitin and the Sunstone impressed me because they are very thematic with Dark Sun's world. Also, the three items generated had magical abilities that make sense with their item type (the Obsidian Crystal giving resistance to fire, for example).
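The prompts above all share a shape: a task, a setting, some writer styles, and extra constraints. A tiny builder like this (purely illustrative, nothing ChatGPT-specific) makes it easy to vary the mashups systematically:

```python
# Assemble a style-mashup prompt from its parts, matching the pattern used
# in the experiments above.

def mashup_prompt(setting, task, styles, extras=()):
    parts = ["Make me {} in the world of {}.".format(task, setting)]
    for s in styles:
        parts.append("Write it in the style of {}.".format(s))
    parts.extend(extras)
    return " ".join(parts)

p = mashup_prompt(
    "Dark Sun", "a short adventure",
    ["O. Henry", "Cormac McCarthy"],
    ["Add a twist to the plot.", "Include a psionic battle."],
)
```

Sweeping over lists of settings and writers then becomes a couple of nested loops, which is handy for comparing how well the model knows each setting.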
|
# ? Feb 28, 2023 18:36 |
|
https://www.reddit.com/r/Pathfinder2e/comments/11faxjl/paizo_announces_ai_policy_for_itself_and/ Paizo announces that it prohibits use of AI art both in its own books and in Community Program published ones. I am torn on this. On one hand, I definitely want the publishers who can actually afford to commission real human artists to continue to do so and welcome pledges in that regard. On the other hand, if you're just some dude who's publishing his homebrew adventure, it's the choice between AI art and "well, I guess it's just a Word document with some stock art, then". I find that disappointing.
|
# ? Mar 1, 2023 23:15 |
|
The legal hazard of publishing AI art (as opposed to using it for noncommercial purposes) could be significant. Even leaving aside the ethical arguments, if you are contemplating publishing something for money, or even just for cred, you might want to not put AI art in it. Unrelated: the most recent Last Week Tonight with John Oliver focused its main story on AI art and it's not a bad watch, even if it glosses over some of the technical details a bit. Leperflesh fucked around with this message at 23:53 on Mar 1, 2023 |
# ? Mar 1, 2023 23:51 |
|
Megazver posted:https://www.reddit.com/r/Pathfinder2e/comments/11faxjl/paizo_announces_ai_policy_for_itself_and/ The pathfinder subreddits have all gone full authoritarian too; I'm watching regular, non-heated conversation on the subject being blanket nuked for even questioning the move. Rule-breaking posts that defend the change are fine, but for anyone who questions it, you can just see all their posts being deleted.
|
# ? Mar 1, 2023 23:54 |
|
I don't understand how this doesn't lead to people just lying?
|
# ? Mar 1, 2023 23:55 |
|
Boba Pearl posted:I don't understand how this doesn't lead to people just lying? If Paizo says "you're not allowed to put AI generated art in stuff" and then people lie about it, that absolves Paizo of legal responsibility. It's cover. Again they may actually also be taking a moral stand, but I feel like the legal cover part is the important one that their lawyers may have been involved in. Actually it's also moral cover. If someone says "hey, such-and-such content on your site contains images that were clearly an AI ripping off my art/my client's art" there's less chance of a brigade of outraged internet people going "woah, Paizo supports plagiarism!" if Paizo can point to their policy and say "no we don't, we specifically said this isn't allowed, this is one person's fault and of course we're taking down the art immediately" etc. Leperflesh fucked around with this message at 00:04 on Mar 2, 2023 |
# ? Mar 2, 2023 00:01 |
|
Boba Pearl posted:I don't understand how this doesn't lead to people just lying? It absolutely does. But some people will grasp at any opportunity to enact their own little version of McCarthyism I guess. When the rules are arbitrary and entirely opaque you can just use your opinions in place of actual rules. I just saw a guy get banned because he doesn't post in the sub often enough to be entitled to an opinion? (According to the guy, I can't verify that) I worry that we are going to see people being forced to prove they created the art going forward, it's already started really, and since the rules are, again, arbitrary and opaque, it's going to be abused. Leperflesh posted:If Paizo says "you're not allowed to put AI generated art in stuff" and then people lie about it, that absolves Paizo of legal responsibility. It's cover. That's fair imho. I fully expect companies to follow suit, and reasonably so really. I'm more worried about the translation into community guidelines personally. Cousin Todd fucked around with this message at 00:08 on Mar 2, 2023 |
# ? Mar 2, 2023 00:05 |
|
Megazver posted:On the other hand, if you're just some dude who's publishing his homebrew adventure, it's the choice between AI art and "well, I guess it's just a Word document with some stock art, then". I find that disappointing. Which is exactly what happened before AI art, so I'm not seeing the problem. Yes, AI art lowers the floor for creation. But it also doesn't mean everyone gets to do everything. It's up to the publisher, just as it's always been.
|
# ? Mar 2, 2023 01:00 |
|
Lowering the floor is part of the problem if you're a publisher. I think everyone kinda had to do something, just to head off the submission spam side. And sure, if you have enough sophistication with the tools you can generate images that don't have tells, but submitting them wouldn't really be an ethical grey area at all, just wrong. If Paizo doesn't want it, that's kinda it there. Raenir Salazar posted:Anyone got any ideas as to how's the best way to go about generating battlemaps/building maps? I mainly want to generate the general idea to work off of as reference to make myself in CSP but making the general basic layout of a map, whether it's a building, town, or some random outdoor terrain is a little harder to conceptualize beyond vague ideas. I find this set to be pretty good. Then you pop it down as a base layer in something like Dungeondraft/Wonderdraft and go to town.
|
# ? Mar 2, 2023 16:22 |
|
Yeah, I think it's understandable to ban AI art until, at a minimum, there's a better means of ascertaining the sourcing of the training dataset. The thing is, I don't think it's quite as stark a choice of either "AI art" or "stock photos". I think with enough effort you could kitbash something, or find affordable artists. It's just much harder and admittedly takes a lot of effort and false starts. Like, you could use AI art to create a prototype for your images and then figure out free but high-effort ways of reproducing them. Blender + free/cheap 3D models (from Gumroad/FlippedNormals etc.) + post-processing effects might get you part of the way there? Also gpose in FFXIV with shaders like GShade and so on. I think the creative effort is transformative enough that, with processing and photo editing, it should pass legal muster to let you publish the results as photos? The distinction between AI art and using 3D editing software and 3D map generation tools to create a backdrop that you screenshot with post-processing effects (especially if you can get access to a depth buffer) isn't very large to my mind, as you're still using "tools", but the latter at least might get people close to what they want and be publishable? Maybe someone needs to start cranking out tutorials and guides. Buffer posted:Lowering the floor is part of the problem if you're a publisher. Thanks! I think the main thing an AI battlemap generator would be good for, though, is specifying features not within the scope of more traditional procedural map generators: "make me a cave system with a tower somewhere", see what happens, and if I like the result, keep it. If I had that idea now, I could generate a cave map with one of the above tools, kitbash a tower base room into it somewhere, and fiddle with it, but that's a lot more emotional and creative labour for a map that might not get used. Raenir Salazar fucked around with this message at 17:07 on Mar 2, 2023
# ? Mar 2, 2023 17:02 |
|
Megazver posted:I am torn on this. On one hand, I definitely want the publishers who can actually afford to commission real human artists to continue to do so and welcome pledges in that regard. On the other hand, if you're just some dude who's publishing his homebrew adventure, it's the choice between AI art and "well, I guess it's just a Word document with some stock art, then". I find that disappointing. Yeah, as somebody who was finally getting back to their adventure (which I do as a total hobby) and who is a perfectionist about that kind of thing, AI art was a godsend. Now it's shut off from me. Oh well, I'll just wait a year and find somebody who will agree to be credited for it, and then we're back on track. I honestly think Paizo's pretty smart to give themselves cover, but eventually there are going to be laws and this will all shake out just fine. KakerMix, that is a very cool tutorial and process. You should absolutely make a YouTube montage and post it with a "Shadowrun as a Dark 80s Movie" title; those are getting around 100k views, and if you monetize it you'll get a little money in your pocket to buy yourself a tablet or some other gear to improve your workflow. It's also kind of wild to me that ChatGPT "knows" about Dark Sun. It's doing a pretty good job of it.
|
# ? Mar 2, 2023 18:18 |
|
https://www.drivethrurpg.com/product/428080/Hoic-Haco Pretty sure the copy is also ChatGPT, lol.
|
# ? Mar 2, 2023 22:15 |
|
Megazver posted:https://www.drivethrurpg.com/product/428080/Hoic-Haco You gonna pony up the $0 and write a review?
|
# ? Mar 2, 2023 22:23 |
|
I have better things to do, tbh. That description, though, and what's in the preview, scream ChatGPT, imho. I only saw it because it's number 12 in DTRPG's bestsellers list atm. Hopefully no one actually paid the $20.
|
# ? Mar 2, 2023 22:28 |