|
|
|
|
|
EVIL Gibson posted: More Danzig content using the leaked novel AI that is really good at making anime/manga

So I'm assuming it's been confirmed that this isn't a big box of viruses? The .ckpt files are just made with the Python pickle() method (I think?), and those can unpack to pretty much anything. I'd ask for a link, but I guess this counts as
|
# ? Oct 14, 2022 03:40 |
|
It felt more like a "robin hood" leak to me than someone trying to build a new cryptocurrency mining botnet. I have no doubt that even the open source anime models will reach the same level of fidelity soon enough.
|
# ? Oct 14, 2022 04:15 |
|
BoldFace posted: It felt more like a "robin hood" leak to me than someone trying to build a new cryptocurrency mining botnet. I have no doubt that even the open source anime models will reach the same level of fidelity soon enough.

It's not a paid-for product, and it's based on open-source material. It was like a secret project made by goons from 4chan who wanted to make the hentai/furry/mlp porn poo poo, and they had a loooot of that material to help train it, both SFW and NSFW. There's an SFW version so you don't need to worry if you have kids or whatever.
|
# ? Oct 14, 2022 04:52 |
|
Variations of: A cinematic shot of a rocky desert landscape with a towering castlevania gothic temple in the background, (horseback warrior), might and magic, insane detail, 8k, artstation, witcher 4

I use negative prompts of: render, cartoon, drawing, illustration
|
# ? Oct 14, 2022 05:07 |
|
Wow, those are really good. Midjourney or Stable Diffusion?
|
# ? Oct 14, 2022 05:40 |
|
Brutal Garcon posted: So I'm assuming it's been confirmed that this isn't a big box of viruses? The .ckpt files are just made with the Python pickle() method (I think?) and that can unpack to pretty much anything

It's pretty easy to find atm if you just google for a magnet link, but yeah, ckpt files can basically contain arbitrary python code, so you should trust them about as much as you trust a random .exe file off the internet.

content: disintegrating (kaiju), sharp teeth, action pose, key visual, fire breath, pyrotechnics, composite art, iridescent, colorful, by leonid afremov and alphonse mucha

disintegrating (kaiju), sharp teeth, action pose, key visual, breathing lightning, pyrotechnics, composite art, iridescent, colorful, by leonid afremov

I just wanted to do something with the disintegrating keyword since I saw it in this thread. "Key visual" is great btw, it comes up with framing that looks like a movie poster or something. (Now that I think of it, "movie poster" probably works too...) Gotta make sure to have stuff like text/watermark/CAPTCHA in the negative prompt though, since otherwise it loves to put text and titles on.

AARD VARKMAN posted: Anyone else been making phone/pc wallpapers using tiling? Would love to see

That's actually a great idea, I'm gonna have to try it out. It'll drive me crazy that 512 doesn't evenly divide 1080 though...

RPATDO_LAMD fucked around with this message at 05:50 on Oct 14, 2022 |
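Since the thread keeps coming back to the pickle risk, here's a minimal stdlib-only sketch of why unpickling an untrusted .ckpt really is as dangerous as running a random .exe. The `Payload` class and the harmless `str.upper` stand-in are mine for illustration; a malicious checkpoint would put something like `os.system` in the same slot.

```python
import pickle

class Payload:
    def __reduce__(self):
        # pickle stores "call this callable with these args" and runs it
        # during loading. A booby-trapped .ckpt could put os.system here;
        # this demo uses the harmless str.upper instead.
        return (str.upper, ("arbitrary code ran during unpickling",))

blob = pickle.dumps(Payload())  # what a malicious checkpoint would contain
result = pickle.loads(blob)     # the attacker's call executes right here
```

Formats that store only raw tensors (e.g. safetensors) exist precisely to avoid this class of problem.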
# ? Oct 14, 2022 05:48 |
|
chiefstormcloud posted: Hey man, great art. #3 of the bottom 4 would look satisfying. The road bordering the mountain so pleasantly like that.

Hey, thanks! I'm working on a bigger one now, I'll post it when I'm done.

AARD VARKMAN posted: Anyone else been making phone/pc wallpapers using tiling? Would love to see

I'll post some of mine later, but I just wanted to add a link to this neat tool for anyone reading; you can drag these images into it to see them seamlessly textured at various different sizes: https://www.pycheung.com/checker/
|
# ? Oct 14, 2022 06:15 |
|
deep dish peat moss posted: I'll post some of mine later but I just wanted to add a link to this neat tool for anyone reading, you can drag these images into it to see them seamlessly textured at various different sizes:

neat, thanks for sharing
|
# ? Oct 14, 2022 06:20 |
|
WhiteHowler posted: Wow, those are really good. Midjourney or Stable Diffusion?

Stable Diffusion
|
# ? Oct 14, 2022 06:30 |
|
I accidentally clicked generate without a prompt and got this.

The img2img interrogate function for that image gave me this prompt: "a painting of a child and a bear with a mirror on their head and a mirror on their head, by David Choe"

Applying that prompt to the original:
|
# ? Oct 14, 2022 06:47 |
What is the minimum resolution? Can you force it down to like half? A quarter? 32 pixels wide? 8x8?
|
|
# ? Oct 14, 2022 07:48 |
|
Squatch Ambassador posted:
holds your hand on the CFG number slider and moves it down to 7. This is a tool, and misuse leads the model to huff massive amounts of rubber cement, inhale whippets, and smoke rocks.
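For anyone wondering what the CFG slider actually controls: at each step the model makes two noise predictions, one with your prompt and one without, and classifier-free guidance extrapolates between them. A scalar toy sketch (real predictions are large tensors, and the helper name is mine, not from any library):

```python
def cfg_combine(uncond: float, cond: float, scale: float) -> float:
    # Classifier-free guidance: push the unconditional prediction
    # toward (and past) the prompt-conditioned one by `scale`.
    return uncond + scale * (cond - uncond)

# scale 1.0 just returns the conditioned prediction; scale 7 amplifies
# the prompt's pull 7x, and very high values over-amplify it, which is
# where the fried "rubber cement" look comes from.
```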
|
# ? Oct 14, 2022 07:53 |
SniperWoreConverse posted:What is the minimum resolution? Can you force it down to like half? A quarter? 32 pixels wide? 8x8?
|
|
# ? Oct 14, 2022 07:59 |
|
I tested it with SD in August, and though I don't remember the breaking point, iirc by 128x128 it was complete junk.
|
# ? Oct 14, 2022 08:03 |
Now that I've loaded it up, can confirm Automatic1111's interface goes as low as 64x64
|
|
# ? Oct 14, 2022 09:13 |
|
WhiteHowler posted: Other than using the Highres Fix setting in the Automatic1111 repo, I don't know of any. The AI was trained on square images, and when generating something with another aspect ratio, it can "lose track" of its subject and think that it hasn't yet drawn the thing it was asked for.

If you want the highres fix to be more consistent, turn down the denoiser slider. The lower it is, the more it'll just try to make tweaks to the existing image. The default value of 0.7 is way too high; at that value it'll often just pave over the image with a mostly fresh high-res one that it'll botch, because the AI is bad at high resolutions. 0.2 to 0.3 is usually pretty good. 0.4 if you're feeling frisky.

Dr. Video Games 0031 fucked around with this message at 13:15 on Oct 14, 2022 |
# ? Oct 14, 2022 13:12 |
|
Dr. Video Games 0031 posted: If you want the highres fix to be more consistent, turn down the denoiser slider. The lower it is, the more it'll just try to make tweaks to the existing image. The default value of 0.7 is way too high, and at that value it'll often just pave over the image with a mostly fresh high-res one that it'll botch because the AI is bad at high resolutions. 0.2 to 0.3 is usually pretty good. 0.4 if you're feeling frisky.

For generating new images, I just keep "Highres fix" enabled all the time, and with denoising at 0.7 it usually works just fine. Yesterday I was trying really hard to make Kristen from the show Evil and failed, but out of 80 images not once did it try to draw multiple people, and only once did she have an extra leg.

The biggest image I managed in 8GB: 1152x768, with Highres fix and 0.7 denoising, and again without any weirdness.

Any idea how to get it to apply the impressionist style to the subject as well, though? I'm guessing it's never seen an impressionist Miata, but if it can make a cave painting Mona Lisa, this should be possible too. Cranking up the weight on "impressionist" doesn't seem to do it. I'm sure I could just use a photoshop filter, but c'mon.

Not the best in terms of composition, but at this point it takes a minute per image so I'm not getting as many experiments done.

mobby_6kl fucked around with this message at 14:06 on Oct 14, 2022 |
# ? Oct 14, 2022 13:39 |
|
mobby_6kl posted: You're talking about upscaling though, right? That's where I find I need to turn down denoising to avoid getting an entirely different image. Or I suppose resizing and filling an image, as you still want to retain the look of the original part.

Highres fix does upscaling. It generates a low-res image and basically automatically upscales it using the same prompt, is my understanding. That's why, if you look at the command prompt, you'll see it go through two generation processes.
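That matches the two passes visible in the console. Here's a rough sketch of the flow, where `txt2img`, `upscale`, and `img2img` are hypothetical stand-ins I wrote so the sketch runs; the real calls live inside the webui and take many more parameters:

```python
# Hypothetical stand-ins for the webui's internals, for illustration only.
def txt2img(prompt, w, h):
    return {"prompt": prompt, "size": (w, h)}

def upscale(img, w, h):
    return {**img, "size": (w, h)}

def img2img(prompt, img, strength):
    return {**img, "strength": strength}

def highres_fix(prompt, target_w, target_h, denoise=0.7):
    low = txt2img(prompt, 512, 512)                # pass 1: square base image
    big = upscale(low, target_w, target_h)         # plain upscale in between
    return img2img(prompt, big, strength=denoise)  # pass 2: re-denoise at size

example = highres_fix("rocky desert landscape", 1152, 768)
```

The denoiser slider discussed above is the `strength` of that second img2img pass.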
|
# ? Oct 14, 2022 13:54 |
|
I assumed 64x64 would be the minimum, since the resolution for SD has to scale in increments of 64. That's not just automatic1111.
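If you ever need to snap an arbitrary wallpaper dimension to something SD will accept, a tiny helper like this does it, assuming the 64-pixel granularity above (the function is my own, not from any repo):

```python
def snap_to_multiple(value: int, step: int = 64) -> int:
    # Round a requested dimension to the nearest multiple of `step`,
    # never going below one step (the apparent 64x64 floor).
    return max(step, round(value / step) * step)
```

So a 1080-pixel edge snaps to 1088, which you'd then crop or scale down afterwards; hence the "512 doesn't evenly divide 1080" annoyance earlier in the thread.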
|
# ? Oct 14, 2022 14:22 |
|
Dr. Video Games 0031 posted: Highres fix does upscaling. It generates a low res image and basically automatically upscales it using the same prompt, is my understanding. That's why if you look at the command prompt, you'll see it go through two generation processes.

But still, IMO it doesn't really matter as long as you keep it always enabled. You just see the final result, so whatever the intermediate step was isn't really important, just like all the different layers in the neural net.

Same prompt, seed, etc. Without fix: With fix:

It is a completely different image, but

BrainDance posted: I assumed 64x64 would be the minimum since the resolution for SD has to scale in increments of 64.
|
# ? Oct 14, 2022 14:30 |
|
mobby_6kl posted: Oh, ok, thanks. Haven't looked into how it actually works under the hood.

Try comparing a 512x512 image with no highres fix to a 1024x1024 image with highres fix with the same seed (edit: or any other higher resolution of the same aspect ratio), because that's what's getting upscaled. You can see that they'll be almost identical at 0.05 denoiser, quite similar but higher quality at 0.3 or so, and then the likelihood of messy results starts creeping in the higher you go. At 1.0, it's basically just starting over from scratch. Not saying it never works at higher denoiser levels, because it obviously does, but the results are more consistent at lower levels in my experience. But keep doing whatever's working for you, I guess, because these AI models can be super finicky and a config that gets results with one prompt may be a mess with another.

edit: though to be fair, if you're using lower resolutions than 1024x1024, you can get away with using higher denoiser values.

Dr. Video Games 0031 fucked around with this message at 14:42 on Oct 14, 2022 |
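One concrete way to think about the denoiser slider: in diffusers-style img2img (my reading of how the second pass works, not something documented in this thread), strength decides what fraction of the sampling schedule is actually re-run on the upscaled image:

```python
def img2img_steps(num_inference_steps: int, strength: float) -> int:
    # strength 0.0 leaves the input essentially untouched; 1.0 re-runs
    # the full schedule, i.e. starts over from (almost) pure noise.
    return min(int(num_inference_steps * strength), num_inference_steps)
```

At 50 steps, 0.3 re-runs 15 of them (a light touch-up) while 0.7 re-runs 35, which is why the high-strength result can drift into a noticeably different image.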
# ? Oct 14, 2022 14:38 |
|
I haven't tried 1024x1024 or above because of VRAM, but I get what you're saying. I tested it with 768x768 here:

Original 512
768, 0.1 denoising
0.2
0.4
0.7

The last one definitely messed up the proportions of the car a bit and got rid of a lot of details, so I think it's fair to say that 0.4 is overall much better. Buuut. I took the best image out of the batch for that experiment. Another one looked like this:

And despite the changes to the background, I think 0.7 fixed the weird car.

Ahhh, and I managed to squeeze in a 1024 image with 7.9GB used. This is 0.4:

And 0.7:

So I don't know if it's more likely to mess up a good image or improve a poor original. In any case, thanks for bringing it up and explaining, I'll keep it in mind and experiment more with it in the future.
|
# ? Oct 14, 2022 15:57 |
|
deep dish peat moss posted: I'll post some of mine later but I just wanted to add a link to this neat tool for anyone reading, you can drag these images into it to see them seamlessly textured at various different sizes:

Thanks! Will try this later on some more. A feature/tool of SD to do img2img tiling would be sick for this, specifically being able to have it generate something that "tiles with" an input image. Would be nice combined with the current generation for some more complex/varying backgrounds. Not sure how the current tiling feature works, though, to say how hard it'd be to code. Might take a look at the source later.
|
# ? Oct 14, 2022 16:02 |
|
EVIL Gibson posted: holds your hand on the CFG number slider and moves it down to 7

Those were with CFG at 7, 40 steps, and denoise between 0.5 and 0.75. I think the craziness is coming from the abstract source image plus David Choe's art having a surreal and dreamlike style. This is what it looks like with CFG above 7:
|
# ? Oct 14, 2022 16:36 |
|
One of my favorites. https://labs.openai.com/s/CIfENoLzO70zOEAARtFF7x4J
|
# ? Oct 14, 2022 17:15 |
|
some more tileables
|
# ? Oct 14, 2022 20:30 |
|
I like the vibe of these, what's the prompt?
|
# ? Oct 14, 2022 21:40 |
|
Lord Hydronium posted: I like the vibe of these, what's the prompt?

A King Crimson lyric with a style prompt I am in love with:

Earth stream and tree encircled by sea, elegant, digital painting, artstation, concept art, matte, sharp focus, illustration, art by josan gonzalez
Negative prompt: (((deformed))), [blurry], bad anatomy, disfigured, poorly drawn face, mutation, mutated, (extra_limb), (ugly), (poorly drawn hands), messy drawing, text, CAPTCHA
Steps: 50, Sampler: Euler a, CFG scale: 9, Size: 512x512, Model hash: 7460a6fa
|
# ? Oct 14, 2022 22:12 |
|
It is all happening so fast! https://twitter.com/ScottieFoxTTV/status/1579903471943569410
|
# ? Oct 14, 2022 22:57 |
|
unzin posted:
Will be obvious, but try to guess what I changed.

Prompt: Both used "A cinematic shot of a rocky desert landscape with a towering castlevania gothic temple in the background, (Link on horseback), might and magic, insane detail, 8k, artstation, ((breath of the wild))"
|
# ? Oct 14, 2022 22:59 |
|
Hi. Hello friends. I have some AI art questions!!!

My PC's processor is an AMD Ryzen 5 2600 and its video card is a Radeon RX 580. I take it that I'm completely hosed if I'm wanting to do some img2img AI art on my PC?

A friend sent me this link to try to set things up: https://rentry.org/voldy

They also gave me the info for some specific models. Unfortunately, they didn't realize I'm not on NVidia, so that process doesn't work for me.

I did come across this guide: https://rentry.org/ayymd-stable-diffustion-v1_4-guide

But it takes like 5 minutes to produce a single image, and it only goes via a text prompt. It doesn't seem like there is any way to integrate models or do img2img.

I'm very new to all this AI art stuff, so maybe I'm just speaking gibberish right now. All I know is that they were able to do some cool stuff with my FFXIV character and I want to be able to do that.

vvvvvvvvvvvvvvvvvvvvv It was him!!! HE planted this evil seed within me.

Mordiceius fucked around with this message at 23:12 on Oct 14, 2022 |
# ? Oct 14, 2022 23:09 |
|
so, i've been learning the NAI model
|
# ? Oct 14, 2022 23:10 |
|
AARD VARKMAN posted:a King Crimson lyric with a style prompt I am in love with:
|
# ? Oct 14, 2022 23:12 |
|
Mordiceius posted: I did come across this guide - https://rentry.org/ayymd-stable-diffustion-v1_4-guide But it takes like 5 minutes to produce a single image and it only goes via a text prompt. It doesn't seem like there is any way to integrate models or do img2img.

I was going to say I was sure automatic1111 had an AMD installer, but that seems to be Linux-only, and they refer Windows users to that guide you posted.
|
# ? Oct 14, 2022 23:23 |
|
an actual frog posted: Oh my, five minutes?! Was that just for a basic 512x512 image or were you generating something much larger? With how many iterations and retries a good image can take, 5 mins is untenable.

Yeah. Basically, I have to edit the prompt into a file and then execute the file via the command prompt, and it takes like 3-5 minutes to make an image, during which my computer acts like it is dying.
|
# ? Oct 14, 2022 23:25 |
Try https://rentry.org/sdamd
|
|
# ? Oct 15, 2022 00:18 |
|
an actual frog posted: Oh my, five minutes?! Was that just for a basic 512x512 image or were you generating something much larger? With how many iterations and retries a good image can take, 5 mins is untenable.

5 minutes sounds like the speed you'd get if you were doing CPU-only generation (which IS possible). That guide is specifically for Linux, as it depends on AMD ROCm, which is not compatible with Windows.

Mordiceius posted: Yeah. Basically, I have to edit the prompt into a file and then execute the file via command prompt and it takes like 3-5 min to make an image - during which, my computer acts like it is dying.

Your best option is probably using somebody else's computer. DreamStudio and novelai both work, starting at I think 10 bux per month(?), but if you want a more cost-effective but more technical option, you could also rent an Nvidia A4000 GPU from RunPod for only 32 cents per hour of use time. This is a specialized GPU built for workstations and compute/cuda loads instead of for gaming, and it should be faster than basically any consumer GPU. If you go to their templates page, they even have a built-in AUTOMATIC1111 preset, so you should be able to get it up and running pretty easily with little setup/configuration.

e: Also check out midjourney, which also does paid generations and has a different model with a much different training set. You can also play around with free services like this one at dezgo.

RPATDO_LAMD fucked around with this message at 00:54 on Oct 15, 2022 |
# ? Oct 15, 2022 00:46 |
RPATDO_LAMD posted:That guide is specifically for linux, as it depends on AMD ROCm which is not compatible with Windows. It has a docker section that should work in any OS.
|
|
# ? Oct 15, 2022 00:53 |
|
|
|
RPATDO_LAMD posted: e: Also check out midjourney which also does paid generations and has a different model with a much different training set.

I am not sure that Midjourney uses a different training set. In an interview with Emad (the main SD guy), he mentioned that MJ's renderer is just the current stock Stable Diffusion model, but they do a lot of pre- and post-processing. He wouldn't go into details, since their platform is proprietary. I suspect that MJ injects extra keywords into your prompts.

Here's a set of keywords someone on Reddit posted that tends to give very MJ-ish outputs from Stable Diffusion:

Splash art, light dust, magnificent, theme park, medium shot, details, sharp focus, elegant, highly detailed, illustration, by jordan grimmer and greg rutkowski and ocellus and alphonse mucha and wlop, intricate, beautiful, triadic contrast colors, trending artstation, pixiv, digital art

I've run several prompts through it, and the output is close but not exact. I feel like a paid service would be hesitant to automatically inject the names of living artists into their prompts. Even without the "style of" part, this still gives fairly consistent MJ-ish output.
|
# ? Oct 15, 2022 01:05 |